Gate at RST line length of 79 chars

With this patch, the RST files have no line longer than 79 chars -
as discussed on the docs mailing list - and we
can gate on it. Previously this limit was 100 chars.

Change-Id: I23f550db81e9264649d0444f5f1ba1be0d6d343d
Andreas Jaeger 2015-06-20 20:20:23 +02:00
parent 5eb4cf310d
commit 0a4e814f50
28 changed files with 515 additions and 443 deletions
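The check being gated on is simple enough to sketch. The snippet below is illustrative only, not the actual gate job (OpenStack's docs jobs use tooling such as doc8 for RST style checks; the function name here is made up):

```python
# Minimal sketch of a 79-char line-length gate for RST files
# (illustrative; not the actual OpenStack gate job).

def find_long_lines(text, limit=79):
    """Return (line_number, length) pairs for lines exceeding `limit`."""
    return [(n, len(line))
            for n, line in enumerate(text.splitlines(), start=1)
            if len(line) > limit]

if __name__ == "__main__":
    sample = "short line\n" + "x" * 100 + "\nanother short line"
    print(find_long_lines(sample))  # -> [(2, 100)]
```

A CI job would run this over every ``*.rst`` file and fail when the returned list is non-empty.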

View File

@@ -289,8 +289,9 @@ including ensuring a set of instances run on different compute nodes for
service resiliency or on the same node for high performance
inter-instance communications.
Administrative users can specify which compute node their instances
run on. To do this, specify the ``--availability-zone
AVAILABILITY_ZONE:COMPUTE_HOST`` parameter.
.. |Base image state with no running instances| image:: ../../common/figures/instance-life-1.png
.. |Instance creation from image and runtime state| image:: ../../common/figures/instance-life-2.png

View File

@@ -165,9 +165,9 @@ in ``self.logger``, has these new methods:
"proxy-server.Container", or "proxy-server.Object" as soon as the
Controller object is determined and instantiated for the request.
- ``update_stats(self, metric, amount, sample_rate=1)`` Increments
the supplied meter by the given amount. This is used when you need
to add or subtract more than one from a counter, like incrementing
"suffix.hashes" by the number of computed hashes in the object
replicator.
@@ -177,11 +177,12 @@ in ``self.logger``, has these new methods:
- ``decrement(self, metric, sample_rate=1)`` Lowers the given counter
meter by one.
- ``timing(self, metric, timing_ms, sample_rate=1)`` Records that the
given meter took the supplied number of milliseconds.
- ``timing_since(self, metric, orig_time, sample_rate=1)``
Convenience method to record a timing meter whose value is "now"
minus an existing timestamp.
Note that these logging methods may safely be called anywhere you have a
logger object. If StatsD logging has not been configured, the methods
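As an illustration of the method semantics listed above, a local stand-in might look like the following. This is a hypothetical sketch, not Swift's implementation: the real methods emit StatsD metrics over the network, while this one just records values in dictionaries.

```python
import time


class StatsLoggerSketch:
    """Illustrative stand-in for the StatsD helper methods described
    above; records values locally instead of emitting them to StatsD."""

    def __init__(self):
        self.counters = {}
        self.timings = {}

    def update_stats(self, metric, amount, sample_rate=1):
        # Add or subtract more than one from a counter.
        self.counters[metric] = self.counters.get(metric, 0) + amount

    def increment(self, metric, sample_rate=1):
        self.update_stats(metric, 1, sample_rate)

    def decrement(self, metric, sample_rate=1):
        self.update_stats(metric, -1, sample_rate)

    def timing(self, metric, timing_ms, sample_rate=1):
        # Record that the given meter took timing_ms milliseconds.
        self.timings.setdefault(metric, []).append(timing_ms)

    def timing_since(self, metric, orig_time, sample_rate=1):
        # Timing meter whose value is "now" minus an existing timestamp.
        self.timing(metric, (time.time() - orig_time) * 1000, sample_rate)


logger = StatsLoggerSketch()
logger.update_stats("suffix.hashes", 17)   # e.g. hashes computed in one pass
logger.increment("errors")
logger.timing("object.GET.timing", 12.5)
```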

View File

@@ -2,71 +2,62 @@
Features and benefits
=====================
.. list-table::
:header-rows: 1
:widths: 10 40
* - Features
- Benefits
* - Leverages commodity hardware
- No lock-in, lower price/GB.
* - HDD/node failure agnostic
- Self-healing, reliable, data redundancy protects from failures.
* - Unlimited storage
- Large and flat namespace, highly scalable read/write access,
able to serve content directly from storage system.
* - Multi-dimensional scalability
- Scale-out architecture: Scale vertically and
horizontally-distributed storage. Backs up and archives large
amounts of data with linear performance.
* - Account/container/object structure
- No nesting, not a traditional file system: Optimized for scale,
it scales to multiple petabytes and billions of objects.
* - Built-in replication 3✕ + data redundancy (compared with 2✕ on
RAID)
- A configurable number of accounts, containers and object copies
for high availability.
* - Easily add capacity (unlike RAID resize)
- Elastic data scaling with ease.
* - No central database
- Higher performance, no bottlenecks.
* - RAID not required
- Handle many small, random reads and writes efficiently.
* - Built-in management utilities
- Account management: Create, add, verify, and delete users;
Container management: Upload, download, and verify; Monitoring:
Capacity, host, network, log trawling, and cluster health.
* - Drive auditing
- Detect drive failures preempting data corruption.
* - Expiring objects
- Users can set an expiration time or a TTL on an object to
control access.
* - Direct object access
- Enable direct browser access to content, such as for a control
panel.
* - Realtime visibility into client requests
- Know what users are requesting.
* - Supports S3 API
- Utilize tools that were designed for the popular S3 API.
* - Restrict containers per account
- Limit access to control usage by user.
* - Support for NetApp, Nexenta, SolidFire
- Unified support for block volumes using a variety of storage
systems.
* - Snapshot and backup API for block volumes
- Data protection and recovery for VM data.
* - Standalone volume API available
- Separate endpoint and API for integration with other compute
systems.
* - Integration with Compute
- Fully integrated with Compute for attaching block volumes and
reporting on usage.

View File

@@ -14,7 +14,8 @@ created image:
**To configure tenant-specific image locations**
#. Configure swift as your ``default_store`` in the
:file:`glance-api.conf` file.
#. Set these configuration options in the :file:`glance-api.conf` file:
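For the first step, the relevant fragment of :file:`glance-api.conf` might look like the sketch below. The section name shown here is an assumption that varies across releases, so verify it against your release's configuration reference:

```ini
# Hypothetical glance-api.conf fragment; section placement varies
# by release.
[glance_store]
default_store = swift
```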

View File

@@ -70,8 +70,8 @@ This example creates a my-new-volume volume based on an image.
| nova | available |
+------+-----------+
#. Create a volume with 8 gibibytes (GiB) of space, and specify the
availability zone and image::
$ cinder create 8 --display-name my-new-volume --image-id 397e713c-b95b-4186-ad46-6126863ea0a9 --availability-zone nova

View File

@@ -361,8 +361,8 @@ The following steps involve compute node 1.
#. Security group rules (6) on the tunnel bridge ``qbr`` handle firewalling
and state tracking for the packet.
#. The tunnel bridge ``qbr`` forwards the packet to the ``tap``
interface (7) on instance 1.
#. For VLAN tenant networks:
@@ -435,8 +435,8 @@ The following steps involve compute node 1:
bridge ``qbr``. The packet contains destination MAC address *TG1*
because the destination resides on another network.
#. Security group rules (2) on the tunnel bridge ``qbr`` handle
state tracking for the packet.
#. The tunnel bridge ``qbr`` forwards the packet to the logical tunnel
interface ``vxlan-sid`` (3) where *sid* contains the tenant network
@@ -473,9 +473,10 @@ The following steps involve the network node.
#. The logical tunnel interface ``vxlan-sid`` forwards the packet to the
tunnel bridge ``qbr``.
#. The tunnel bridge ``qbr`` forwards the packet to the ``qr-1``
interface (5) in the router namespace ``qrouter``. The ``qr-1``
interface contains the tenant network 1 gateway IP address
*TG1*.
#. For VLAN tenant networks:

View File

@@ -148,7 +148,8 @@ The compute nodes contain the following network components:
Packet flow
~~~~~~~~~~~
During normal operation, packet flow with HA routers mirrors the
legacy scenario with Linux bridge.
Case 1: HA failover operation
-----------------------------

View File

@@ -356,8 +356,8 @@ The following steps involve compute node 1:
bridge ``qbr``. The packet contains destination MAC address *I2*
because the destination resides on the same network.
#. Security group rules (2) on the provider bridge ``qbr`` handle
state tracking for the packet.
#. The Linux bridge ``qbr`` forwards the packet to the Open vSwitch
integration bridge ``br-int``.

View File

@@ -335,8 +335,8 @@ The following steps involve compute node 1:
bridge ``qbr``. The packet contains destination MAC address *I2*
because the destination resides on the same network.
#. Security group rules (2) on the provider bridge ``qbr`` handle
state tracking for the packet.
#. The provider bridge ``qbr`` forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying

View File

@@ -8,35 +8,37 @@ Ethernet
Ethernet is a networking protocol, specified by the IEEE 802.3 standard. Most
wired network interface cards (NICs) communicate using Ethernet.
In the `OSI model`_ of networking protocols, Ethernet occupies the
second layer, which is known as the data link layer. When discussing
Ethernet, you will often hear terms such as *local network*, *layer
2*, *L2*, *link layer* and *data link layer*.
In an Ethernet network, the hosts connected to the network communicate
by exchanging *frames*, which is the Ethernet terminology for packets.
Every host on an Ethernet network is uniquely identified by an address
called the media access control (MAC) address. In particular, in an
OpenStack environment, every virtual machine instance has a unique MAC
address, which is different from the MAC address of the compute host.
A MAC address has 48 bits and is typically represented as a
hexadecimal string, such as ``08:00:27:b9:88:74``. The MAC address is
hard-coded into the NIC by the manufacturer, although modern NICs
allow you to change the MAC address programmatically. In Linux, you can
retrieve the MAC address of a NIC using the ``ip`` command::
$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 08:00:27:b9:88:74 brd ff:ff:ff:ff:ff:ff
Conceptually, you can think of an Ethernet network as a single bus
that each of the network hosts connects to. In early implementations,
an Ethernet network consisted of a single coaxial cable that hosts
would tap into to connect to the network. Modern Ethernet networks do
not use this approach, and instead each network host connects directly
to a network device called a *switch*. Still, this conceptual model is
useful, and in network diagrams (including those generated by the
OpenStack dashboard) an Ethernet network is often depicted as if it
was a single bus. You'll sometimes hear an Ethernet network referred
to as a *layer 2 segment*.
In an Ethernet network, every host on the network can send a frame directly to
every other host. An Ethernet network also supports broadcasts, so
@@ -46,27 +48,30 @@ are two notable protocols that use Ethernet broadcasts. Because Ethernet
networks support broadcasts, you will sometimes hear an Ethernet network
referred to as a *broadcast domain*.
When a NIC receives an Ethernet frame, by default the NIC checks to
see if the destination MAC address matches the address of the NIC (or
the broadcast address), and the Ethernet frame is discarded if the MAC
address does not match. For a compute host, this behavior is
undesirable because the frame may be intended for one of the
instances. NICs can be configured for *promiscuous mode*, where they
pass all Ethernet frames to the operating system, even if the MAC
address does not match. Compute hosts should always have the
appropriate NICs configured for promiscuous mode.
As mentioned earlier, modern Ethernet networks use switches to
interconnect the network hosts. A switch is a box of networking
hardware with a large number of ports that forwards Ethernet frames
from one connected host to another. When hosts first send frames over
the switch, the switch doesn't know which MAC address is associated
with which port. If an Ethernet frame is destined for an unknown MAC
address, the switch broadcasts the frame to all ports. The switch
learns which MAC addresses are at which ports by observing the
traffic. Once
it knows which MAC address is associated with a port, it can send
Ethernet frames to the correct port instead of broadcasting. The
switch maintains the mappings of MAC addresses to switch ports in a
table called a *forwarding table* or *forwarding information base*
(FIB). Switches can be daisy-chained together, and the resulting
connection of switches and hosts behaves like a single network.
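The learning behaviour described above can be modelled in a few lines. This is a toy sketch of a forwarding table, not how real switch hardware works:

```python
class LearningSwitchSketch:
    """Toy model of the MAC-learning switch described above."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = ports
        self.fib = {}  # forwarding table: MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports a frame is forwarded out of."""
        # Learn which port the source MAC lives on by observing traffic.
        self.fib[src_mac] = in_port
        if dst_mac != self.BROADCAST and dst_mac in self.fib:
            return [self.fib[dst_mac]]  # known destination: one port
        # Unknown destination or broadcast: flood all other ports.
        return [p for p in self.ports if p != in_port]


switch = LearningSwitchSketch(ports=[1, 2, 3])
# First frame: destination unknown, so the frame is flooded.
print(switch.receive(1, "08:00:27:b9:88:74", "fc:99:47:49:d4:a0"))  # [2, 3]
# Reply: the switch has learned that 08:00:27:b9:88:74 is on port 1.
print(switch.receive(2, "fc:99:47:49:d4:a0", "08:00:27:b9:88:74"))  # [1]
```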
.. _OSI model: http://en.wikipedia.org/wiki/OSI_model
@@ -74,32 +79,35 @@ VLANs
~~~~~
VLAN is a networking technology that enables a single switch to act as
if it was multiple independent switches. Specifically, two hosts that
are connected to the same switch but on different VLANs do not see
each other's traffic. OpenStack is able to take advantage of VLANs to
isolate the traffic of different tenants, even if the tenants happen
to have instances running on the same compute host. Each VLAN has an
associated numerical ID, between 1 and 4095. We say "VLAN 15" to refer
to the VLAN with numerical ID of 15.
To understand how VLANs work, let's consider VLAN applications in a
traditional IT environment, where physical hosts are attached to a
physical switch, and no virtualization is involved. Imagine a scenario
where you want three isolated networks, but you only have a single
physical switch. The network administrator would choose three VLAN
IDs, say, 10, 11, and 12, and would configure the switch to associate
switchports with VLAN IDs. For example, switchport 2 might be
associated with VLAN 10, switchport 3 might be associated with VLAN
11, and so forth. When a switchport is configured for a specific VLAN,
it is called an *access port*. The switch is responsible for ensuring
that the network traffic is isolated across the VLANs.
Now consider the scenario that all of the switchports in the first
switch become occupied, and so the organization buys a second switch
and connects it to the first switch to expand the available number of
switchports. The second switch is also configured to support VLAN IDs
10, 11, and 12. Now imagine host A connected to switch 1 on a port
configured for VLAN ID 10 sends an Ethernet frame intended for host B
connected to switch 2 on a port configured for VLAN ID 10. When switch
1 forwards the Ethernet frame to switch 2, it must communicate that
the frame is associated with VLAN ID 10.
If two switches are to be connected together, and the switches are configured
for VLANs, then the switchports used for cross-connecting the switches must be
@@ -157,11 +165,12 @@ identifier to all zeros to make reference to a subnet. For example, if
a host's IP address is ``10.10.53.24/16``, then we would say the
subnet is ``10.10.0.0/16``.
To understand how ARP translates IP addresses to MAC addresses,
consider the following example. Assume host *A* has an IP address of
``192.168.1.5/24`` and a MAC address of ``fc:99:47:49:d4:a0``, and
wants to send a packet to host *B* with an IP address of
``192.168.1.7``. Note that the network number is the same for both
hosts, so host *A* is able to send frames directly to host *B*.
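The "same network number" check in this example can be reproduced with Python's standard ``ipaddress`` module. This sketches only the subnet comparison, not ARP itself:

```python
import ipaddress

# Host A checks whether host B shares its network number before
# sending a frame directly (the precondition for the ARP exchange).
host_a = ipaddress.ip_interface("192.168.1.5/24")
host_b = ipaddress.ip_address("192.168.1.7")

print(host_a.network)            # 192.168.1.0/24: host bits zeroed
print(host_b in host_a.network)  # True: A can frame B directly
```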
The first time host *A* attempts to communicate with host *B*, the
destination MAC address is not known. Host *A* makes an ARP request to
@@ -208,14 +217,15 @@ Protocol (:term:`DHCP`) to dynamically obtain IP addresses. A DHCP
server hands out the IP addresses to network hosts, which are the DHCP
clients.
DHCP clients locate the DHCP server by sending a UDP_ packet from port
68 to address ``255.255.255.255`` on port 67. Address
``255.255.255.255`` is the local network broadcast address: all hosts
on the local network see the UDP packets sent to this address.
However, such packets are not forwarded to other networks.
Consequently, the DHCP server must be on the same local network as the
client, or the server will not receive the broadcast. The DHCP server
responds by sending a UDP packet from port 67 to port 68 on the
client. The exchange looks like this:
1. The client sends a discover ("I'm a client at MAC address
``08:00:27:b9:88:74``, I need an IP address")
@@ -248,22 +258,22 @@ protocol were carried out for the instance in question.
IP
~~
The Internet Protocol (IP) specifies how to route packets between
hosts that are connected to different local networks. IP relies on
special network hosts called *routers* or *gateways*. A router is a
host that is connected to at least two local networks and can forward
IP packets from one local network to another. A router has multiple IP
addresses: one for each of the networks it is connected to.
In the OSI model of networking protocols, IP occupies the third layer,
which is known as the network layer. When discussing IP, you will
often hear terms such as *layer 3*, *L3*, and *network layer*.
A host sending a packet to an IP address consults its *routing table*
to determine which machine on the local network(s) the packet should
be sent to. The routing table maintains a list of the subnets
associated with each local network that the host is directly connected
to, as well as a list of routers that are on these local networks.
On a Linux machine, any of the following commands displays the routing table::
@@ -279,25 +289,25 @@ Here is an example of output from ``ip route show``::
192.168.27.0/24 dev eth1 proto kernel scope link src 192.168.27.100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
Line 1 of the output specifies the location of the default route,
which is the effective routing rule if none of the other rules match.
The router associated with the default route (``10.0.2.2`` in the
example above) is sometimes referred to as the *default gateway*. A
DHCP_ server typically transmits the IP address of the default gateway
to the DHCP client along with the client's IP address and a netmask.
Line 2 of the output specifies that IPs in the 10.0.2.0/24 subnet are on the
local network associated with the network interface eth0.
Line 3 of the output specifies that IPs in the 192.168.27.0/24 subnet
are on the local network associated with the network interface eth1.
Line 4 of the output specifies that IPs in the 192.168.122.0/24 subnet are on
local network associated with the network interface virbr0.
The output of the ``route -n`` and ``netstat -rn`` commands is
formatted in a slightly different way. This example shows how the same
routes would be formatted using these commands::
$ route -n
Kernel IP routing table
@@ -308,8 +318,8 @@ using these commands::
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
The ``ip route get`` command outputs the route for a destination IP address.
From the above example, destination IP address 10.0.2.14 is on the
local network of eth0 and would be sent directly::
$ ip route get 10.0.2.14
10.0.2.14 dev eth0 src 10.0.2.15
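The lookup these commands expose can be sketched as a longest-prefix match over the example routes above. This is illustrative only; the kernel's implementation is quite different:

```python
import ipaddress

# The example routing table: the most specific (longest-prefix)
# matching entry wins; 0.0.0.0/0 is the default route.
routes = [
    ("0.0.0.0/0",        "via 10.0.2.2 dev eth0"),
    ("10.0.2.0/24",      "dev eth0"),
    ("192.168.27.0/24",  "dev eth1"),
    ("192.168.122.0/24", "dev virbr0"),
]


def lookup(dest):
    """Return the next hop for dest via longest-prefix match."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in routes
               if addr in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]


print(lookup("10.0.2.14"))      # dev eth0: on the local network
print(lookup("93.184.216.34"))  # via 10.0.2.2 dev eth0: default route
```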
@@ -357,21 +367,24 @@ applications. A TCP port is associated with a number in the range 1-65535, and
only one application on a host can be associated with a TCP port at a time, a
restriction that is enforced by the operating system.
A TCP server is said to *listen* on a port. For example, an SSH server
typically listens on port 22. For a client to connect to a server
using TCP, the client must know both the IP address of a server's host
and the server's TCP port.
The operating system of the TCP client application automatically
assigns a port number to the client. The client owns this port number
until the TCP connection is terminated, after which time the operating
system reclaims the port number. These types of ports are referred to
as *ephemeral ports*.
IANA maintains a `registry of port numbers`_ for many TCP-based
services, as well as services that use other layer 4 protocols that
employ ports. Registering a TCP port number is not required, but
registering a port number is helpful to avoid collisions with other
services. See `Appendix B. Firewalls and default ports`_ of the
`OpenStack Configuration Reference`_ for the default TCP ports used by
various services involved in an OpenStack deployment.
.. _registry of port numbers: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml
.. _Appendix B. Firewalls and default ports: http://docs.openstack.org/kilo/config-reference/content/firewalls-default-ports.html
simply, *sockets*. The sockets API exposes a *stream oriented* interface for
writing TCP applications: from the perspective of a programmer, sending data
over a TCP connection is similar to writing a stream of bytes to a file. It is
the responsibility of the operating system's TCP/IP implementation to break up
the stream of data into IP packets. The operating system is also
responsible for automatically retransmitting dropped packets, and for
handling flow control to ensure that transmitted data does not overrun
the sender's data buffers, receiver's data buffers, and network
capacity. Finally, the operating system is responsible for
re-assembling the packets in the correct order into a stream of data
on the receiver's side. Because TCP detects and retransmits lost
packets, it is said to be a *reliable* protocol.
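
The stream-oriented nature of the API can be seen in a minimal echo
exchange over loopback: the sender writes a run of bytes, the receiver
reads bytes back, and neither side ever sees an IP packet. This is an
illustrative sketch, not OpenStack code:

```python
import socket
import threading

def echo_once(server):
    # Read the whole byte stream until the peer closes its write side,
    # then write it back; packetization is invisible at this layer.
    conn, _ = server.accept()
    data = b""
    while chunk := conn.recv(1024):
        data += chunk
    conn.sendall(data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_once, args=(server,)).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"a stream of bytes")  # like writing to a file
client.shutdown(socket.SHUT_WR)       # signal end-of-stream to the echoer
reply = b""
while chunk := client.recv(1024):
    reply += chunk
client.close()
server.close()
print(reply)  # b'a stream of bytes'
```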
The *User Datagram Protocol* (UDP) is another layer 4 protocol that is
the basis of several well-known networking protocols. UDP is a
*connectionless* protocol: two applications that communicate over UDP
do not need to establish a connection before exchanging data. UDP is
also an *unreliable* protocol. The operating system does not attempt
to retransmit or even detect lost UDP packets. The operating system
also does not provide any guarantee that the receiving application
sees the UDP packets in the same order that they were sent in.
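
The connectionless model is equally visible in code: a UDP sender emits
a single datagram with no handshake and no established connection. A
loopback sketch (addresses illustrative; loopback delivery is reliable
in practice, which a real network does not guarantee):

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # no listen(), no connections
receiver.settimeout(5.0)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fire and forget", addr)  # one datagram, no handshake

data, peer = receiver.recvfrom(1024)     # datagram boundaries preserved
sender.close()
receiver.close()
print(data)  # b'fire and forget'
```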
UDP, like TCP, uses the notion of ports to distinguish between different
applications running on the same system. Note, however, that operating systems
DHCP_, the Domain Name System (DNS), the Network Time Protocol (NTP), and
:ref:`VXLAN` are examples of UDP-based protocols used in OpenStack deployments.
UDP has support for one-to-many communication: sending a single packet
to multiple hosts. An application can broadcast a UDP packet to all of
the network hosts on a local network by setting the receiver IP
address as the special IP broadcast address ``255.255.255.255``. An
application can also send a UDP packet to a set of receivers using *IP
multicast*. The intended receiver applications join a multicast group
by binding a UDP socket to a special IP address that is one of the
valid multicast group addresses. The receiving hosts do not have to be
on the same local network as the sender, but the intervening routers
must be configured to support IP multicast routing. VXLAN is an
example of a UDP-based protocol that uses IP multicast.
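
Broadcast is an explicit opt-in at the socket level: by default the OS
rejects a ``sendto`` aimed at ``255.255.255.255``. This sketch shows
only the flag being enabled (actually transmitting a broadcast depends
on the local network, so it is omitted here):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Broadcast is off by default; sending to 255.255.255.255 would fail
# with "Permission denied" until the application opts in:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
sock.close()
print(bool(enabled))  # True
```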
The *Internet Control Message Protocol* (ICMP) is a protocol used for sending
control messages over an IP network. For example, a router that receives an IP
packet may send an ICMP packet back to the source if there is no route in the
router's routing table that corresponds to the destination address
(ICMP code 1, destination host unreachable) or if the IP packet is too
large for the router to handle (ICMP code 4, fragmentation required
and "don't fragment" flag is set).
The *ping* and *mtr* Linux command-line tools are two examples of network
utilities that use ICMP.
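
Every ICMP message carries a 16-bit ones'-complement checksum (RFC
1071). The sketch below builds an ICMP echo-request header in Python
and verifies the checksum property; actually sending it would require a
raw socket and elevated privileges, so the identifier, sequence number,
and payload are illustrative only:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# ICMP echo request: type 8, code 0, checksum, identifier, sequence.
header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1) + b"payload!"
cksum = inet_checksum(header)                # computed with field zeroed
packet = struct.pack("!BBHHH", 8, 0, cksum, 0x1234, 1) + b"payload!"

# A correctly checksummed ICMP packet re-checks to zero:
print(inet_checksum(packet))  # 0
```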
destination addresses in the headers of an IP packet while the packet is
in transit. In general, the sender and receiver applications are not aware that
the IP packets are being manipulated.
NAT is often implemented by routers, and so we will refer to the host
performing NAT as a *NAT router*. However, in OpenStack deployments it
is typically Linux servers that implement the NAT functionality, not
hardware routers. These servers use the iptables_ software package to
implement the NAT functionality.
There are multiple variations of NAT, and here we describe three kinds
commonly found in OpenStack deployments.
SNAT solves this problem by modifying the source IP address to an IP address
that is routable on the public Internet. There are different variations of
SNAT; in the form that OpenStack deployments use, a NAT router on the path
between the sender and receiver replaces the packet's source IP
address with the router's public IP address. The router also modifies
the source TCP or UDP port to another value, and the router maintains
a record of the sender's true IP address and port, as well as the
modified IP address and port.
When the router receives a packet with the matching IP address and port, it
translates these back to the private IP address and port, and forwards the
packet along.
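
The bookkeeping a SNAT router performs can be sketched as a pair of
lookup tables. This is a toy model, not iptables or OpenStack code; the
public address and port range are illustrative:

```python
import itertools

PUBLIC_IP = "203.0.113.10"  # illustrative router address (TEST-NET-3)

class SnatTable:
    """Toy SNAT: map (private ip, port) <-> (public ip, port)."""
    def __init__(self):
        self._ports = itertools.count(40000)  # next free public port
        self._out = {}                        # (priv ip, port) -> pub port
        self._in = {}                         # pub port -> (priv ip, port)

    def outbound(self, src_ip, src_port):
        # Rewrite an outgoing packet's source; remember the mapping.
        key = (src_ip, src_port)
        if key not in self._out:
            port = next(self._ports)
            self._out[key] = port
            self._in[port] = key
        return PUBLIC_IP, self._out[key]

    def inbound(self, dst_port):
        # Reply packet: translate back to the true private sender.
        return self._in[dst_port]

nat = SnatTable()
pub = nat.outbound("192.168.1.5", 51515)
print(pub)                  # ('203.0.113.10', 40000)
print(nat.inbound(pub[1]))  # ('192.168.1.5', 51515)
```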
Because the NAT router modifies ports as well as IP addresses, this
form of SNAT is sometimes referred to as *Port Address Translation*
(PAT). It is also sometimes referred to as *NAT overload*.
OpenStack uses SNAT to enable applications running inside of instances to
connect out to the public Internet.
DNAT
~~~~
In *Destination Network Address Translation* (DNAT), the NAT router
modifies the IP address of the destination in IP packet headers.
OpenStack uses DNAT to route packets from instances to the OpenStack
metadata service. Applications running inside of instances access the
OpenStack metadata service by making HTTP GET requests to a web server
with IP address 169.254.169.254. In an OpenStack deployment, there is
no host with this IP address. Instead, OpenStack uses DNAT to change
the destination IP of these packets so they reach the network
interface that a metadata service is listening on.
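
The DNAT direction can be sketched as a simple rule lookup on the
destination. The backend address here is hypothetical (8775 is the
conventional nova metadata API port, but the actual target depends on
the deployment):

```python
METADATA_VIP = "169.254.169.254"
# Hypothetical backend where the metadata service actually listens:
DNAT_RULES = {(METADATA_VIP, 80): ("10.0.0.1", 8775)}

def dnat(dst_ip, dst_port):
    """Rewrite a destination if a DNAT rule matches; else pass through."""
    return DNAT_RULES.get((dst_ip, dst_port), (dst_ip, dst_port))

print(dnat("169.254.169.254", 80))  # rewritten to the backend
print(dnat("93.184.216.34", 443))   # unchanged
```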
One-to-one NAT
~~~~~~~~~~~~~~
In *one-to-one NAT*, the NAT router maintains a one-to-one mapping
between private IP addresses and public IP addresses. OpenStack uses
one-to-one NAT to implement floating IP addresses.
=============================================================
Migrate legacy nova-network to OpenStack Networking (neutron)
=============================================================
Two networking models exist in OpenStack. The first is called legacy networking
(nova-network) and it is a sub-process embedded in the Compute project (nova).
This model has some limitations, such as creating complex network
topologies, extending its back-end implementation to vendor-specific
technologies, and providing tenant-specific networking elements. These
limitations are the main reasons the OpenStack Networking (neutron)
model was created.
This section describes the process of migrating clouds based on the
legacy networking model to the OpenStack Networking model. This
process requires additional changes to both compute and networking to
support the migration. This document describes the overall process and
the features required in both Networking and Compute.
The current process as designed is a minimally viable migration with
the goal of deprecating and then removing legacy networking. Both the
Compute and Networking teams agree that a "one-button" migration
process from legacy networking to OpenStack Networking (neutron) is
not an essential requirement for the deprecation and removal of
legacy networking at a future date. This section includes a process
and tools which are designed to solve a simple use case migration.
Users are encouraged to take these tools, test them, provide feedback,
and then expand on the feature set to suit their own deployments;
deployers that refrain from participating in this process intending to
wait for a path that better suits their use case are likely to be
disappointed.
Impact and limitations
~~~~~~~~~~~~~~~~~~~~~~
The migration process from the legacy nova-network networking service
to OpenStack Networking (neutron) has some limitations and impacts on
the operational state of the cloud. It is critical to understand them
in order to decide whether or not this process is acceptable for your
cloud and all users.
Management impact
-----------------
The Networking REST API is publicly read-only until after the
migration is complete. During the migration, Networking REST API is
read-write only to nova-api, and changes to Networking are only
allowed via nova-api.
The Compute REST API is available throughout the entire process,
although there is a brief period where it is made read-only during a
database migration. The Networking REST API will need to expose (to
nova-api) all details necessary for reconstructing the information
previously held in the legacy networking database.
Compute needs a per-hypervisor "has_transitioned" boolean change in
the data model to be used during the migration process. This flag is
no longer required once the process is complete.
Operations impact
-----------------
In order to support a wide range of deployment options, the migration
process described here requires a rolling restart of hypervisors. The
rate and timing of specific hypervisor restarts is under the control
of the operator.
The migration may be paused, even for an extended period of time (for
example, while testing or investigating issues) with some hypervisors
on legacy networking and some on Networking, and Compute API remains
fully functional. Individual hypervisors may be rolled back to legacy
networking during this stage of the migration, although this requires
an additional restart.
In order to support the widest range of deployer needs, the process
described here is easy to automate but is not already automated.
Deployers should expect to perform multiple manual steps or write some
simple scripts in order to perform this migration.
Performance impact
------------------
During the migration, nova-network API calls will go through an
additional internal conversion to Networking calls. This will have
different and likely poorer performance characteristics compared with
either the pre-migration or post-migration APIs.
Migration process overview
~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Start neutron-server in intended final config, except with REST API
restricted to read-write only by nova-api.
#. Make the Compute REST API read-only.
#. Run a DB dump/restore tool that creates Networking data structures
representing current legacy networking config.
#. Enable a nova-api proxy that recreates internal Compute objects
from Networking information
(via the Networking REST API).
#. Make Compute REST API read-write again. This means legacy
networking DB is now unused, new changes are now stored in the
Networking DB, and no rollback is possible from here without losing
those new changes.
.. note::
At this moment the Networking DB is the source of truth, but
nova-api is the only public read-write API.
Next, you'll need to migrate each hypervisor. To do that, follow these steps:
#. Disable the hypervisor. This would be a good time to live migrate
or evacuate the compute node, if supported.
#. Disable nova-compute.
#. Enable the Networking agent.
#. Set the "has_transitioned" flag in the Compute hypervisor database/config.
#. Reboot the hypervisor (or run "smart" live transition tool if available).
#. Re-enable the hypervisor.
At this point, all compute nodes have been migrated, but they are
still using the nova-api API and Compute gateways. Finally, enable
OpenStack Networking by following these steps:
#. Bring up the Networking (l3) nodes. The new routers will have
identical MAC+IPs as old Compute gateways so some sort of immediate
cutover is possible, except for stateful connection issues such as
NAT.
#. Make the Networking API read-write and disable legacy networking.
Migration Completed!
libvirt network implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, libvirt's networking functionality is enabled, and libvirt
creates a network when the system boots. To implement this network,
libvirt leverages some of the same technologies that OpenStack Network
does. In particular, libvirt uses:
* Linux bridging for implementing a layer 2 network
* dnsmasq for providing IP addresses to virtual machines using DHCP
How to disable libvirt networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Although OpenStack does not make use of libvirt's networking, this
networking will not interfere with OpenStack's behavior, and can be
safely left enabled. However, libvirt's networking can be a nuisance
when debugging OpenStack networking issues. Because libvirt creates an
additional bridge, dnsmasq process, and iptables ruleset, these may
distract an operator engaged in network troubleshooting.
Unless you need to start up virtual machines using libvirt directly, you can
safely disable libvirt's network.
To deactivate the libvirt network named *default*::
# virsh net-destroy default
Deactivating the network will remove the ``virbr0`` bridge, terminate
the dnsmasq process, and remove the iptables rules.
To prevent the network from automatically starting on boot::
Manage IP addresses
===================
Each instance has a private, fixed IP address (assigned when launched)
and can also have a public, or floating, address. Private IP addresses
are used for communication between instances, and public addresses are
used for communication with networks outside the cloud, including the
Internet.
- By default, both administrative and end users can associate floating IP
addresses with projects and instances. You can change user permissions for
floating IP addresses are created by default in OpenStack Networking.
As an administrator using legacy networking (``nova-network``), you
can use the following bulk commands to list, create, and delete ranges
of floating IP addresses. These addresses can then be associated with
instances by end users.
List addresses for all projects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Evacuate instances
==================
If a cloud compute node fails due to a hardware malfunction or another
reason, you can evacuate instances to make them available again. You
can optionally include the target host on the :command:`evacuate`
command. If you omit the host, the scheduler determines the target
host.
To preserve user data on server disk, you must configure shared
storage on the target host. Also, you must validate that the current
VM host is down; otherwise, the evacuation fails with an error.
#. To list hosts and find a different host for the evacuated instance, run::
$ nova host-list
#. Evacuate the instance. You can pass the instance password to the
command by using the :option:`--password PWD` option. If you do not
specify a password, one is generated and printed after the command
finishes successfully. The following command evacuates a server
without shared storage from a host that is down to the specified
HOST_B::
$ nova evacuate EVACUATED_SERVER_NAME HOST_B
The instance is booted from a new disk, but preserves its
configuration including its ID, name, uid, IP address, and so on.
The command returns a password::
+-----------+--------------+
| Property | Value |
| adminPass | kRAJpErnT4xZ |
+-----------+--------------+
#. To preserve the user disk data on the evacuated server, deploy
OpenStack Compute with a shared file system. To configure your
system, see `Configure migrations
<http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html>`_
in OpenStack Cloud Administrator Guide. In the following example,
the password remains unchanged::
$ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
**Compute quota descriptions**
.. list-table::
:header-rows: 1
:widths: 10 40
* - Quota name
- Description
* - cores
- Number of instance cores (VCPUs) allowed per tenant.
* - fixed-ips
- Number of fixed IP addresses allowed per tenant. This number
must be equal to or greater than the number of allowed
instances.
* - floating-ips
- Number of floating IP addresses allowed per tenant.
* - injected-file-content-bytes
- Number of content bytes allowed per injected file.
* - injected-file-path-bytes
- Length of injected file path.
* - injected-files
- Number of injected files allowed per tenant.
* - instances
- Number of instances allowed per tenant.
* - key-pairs
- Number of key pairs allowed per user.
* - metadata-items
- Number of metadata items allowed per instance.
* - ram
- Megabytes of instance ram allowed per tenant.
* - security-groups
- Number of security groups per tenant.
* - security-group-rules
- Number of rules per security group.
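
The constraint stated above for ``fixed-ips`` (it must be equal to or
greater than the number of allowed instances) can be checked
mechanically before applying a quota set. A sketch with illustrative
values; the ``-1`` convention for "unlimited" mirrors nova's quota CLI
but is an assumption here, not taken from this guide:

```python
def check_quotas(quotas: dict) -> list:
    """Return human-readable problems found in a Compute quota set."""
    problems = []
    fixed = quotas.get("fixed-ips", -1)
    if fixed != -1 and fixed < quotas.get("instances", 0):
        problems.append("fixed-ips must be >= instances")
    for name, value in quotas.items():
        if value < -1:  # -1 is assumed to mean "unlimited"
            problems.append("%s: %d is not a valid quota value" % (name, value))
    return problems

quotas = {"instances": 10, "cores": 20, "fixed-ips": 10, "ram": 51200}
print(check_quotas(quotas))  # []
```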
View and update Compute quotas for a tenant (project)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To manage host aggregates
- To manage hosts, locate the host aggregate that you want to edit
in the table. Click :guilabel:`More` and select :guilabel:`Manage Hosts`.
In the :guilabel:`Add/Remove Hosts to Aggregate` dialog box,
click **+** to assign a host to an aggregate. Click **-** to
remove a host that is assigned to an aggregate.
- To delete host aggregates, locate the host aggregate that you want
to edit in the table. Click :guilabel:`More` and select
As an administrative user, you can view information for OpenStack services.
* :guilabel:`Default Quotas`:
Displays the quotas that have been configured for the cluster.
* :guilabel:`Availability Zones`: Displays the availability zones
that have been configured for the cluster. It is only available
when multiple availability zones have been defined.
* :guilabel:`Host Aggregates`: Displays the host aggregates that
have been defined for the cluster. It is only available when
multiple host aggregates have been defined.
View cloud usage statistics
===========================
The Telemetry module provides user-level usage data for
OpenStack-based clouds, which can be used for customer billing, system
monitoring, or alerts. Data can be collected by notifications sent by
existing OpenStack components (for example, usage events emitted from
Compute) or by polling the infrastructure (for example, libvirt).
.. note::
You can only view metering statistics on the dashboard (available only to administrators).
You can only view metering statistics on the dashboard (available
only to administrators).
The Telemetry service must be set up and administered through the
:command:`ceilometer` command-line interface (CLI).
For basic administration information, refer to the "Measure Cloud Resources"
chapter in the `OpenStack End User Guide <http://docs.openstack.org/user-guide/>`_.
For basic administration information, refer to the "Measure Cloud
Resources" chapter in the `OpenStack End User Guide
<http://docs.openstack.org/user-guide/>`_.
.. _dashboard-view-resource-stats:
@ -29,15 +32,17 @@ View resource statistics
* :guilabel:`Global Disk Usage` tab to view disk usage per tenant (project).
* :guilabel:`Global Network Traffic Usage` tab to view ingress or egress usage
per tenant (project).
* :guilabel:`Global Network Traffic Usage` tab to view ingress or
egress usage per tenant (project).
* :guilabel:`Global Object Storage Usage` tab to view incoming and outgoing
storage bytes per tenant (project).
* :guilabel:`Global Network Usage` tab to view duration and creation requests for
networks, subnets, routers, ports, and floating IPs, per tenant (project).
* :guilabel:`Global Network Usage` tab to view duration and
creation requests for networks, subnets, routers, ports, and
floating IPs, per tenant (project).
* :guilabel:`Stats` tab to view a multi-series line chart with user-defined
meters. You group by project, define the value type (min, max, avg, or sum),
and specify the time period (or even use a calendar to define a date range).
* :guilabel:`Stats` tab to view a multi-series line chart with
user-defined meters. You group by project, define the value type
(min, max, avg, or sum), and specify the time period (or even use
a calendar to define a date range).

View File

@ -59,8 +59,8 @@ Alarm
The comparison operator compares a selected meter statistic against
an evaluation window of configurable length into the recent past.
This example uses the :command:`heat` client to create an auto-scaling stack and the
:command:`ceilometer` client to measure resources.
This example uses the :command:`heat` client to create an auto-scaling
stack and the :command:`ceilometer` client to measure resources.
#. Create an auto-scaling stack by running the following command.
The :option:`-f` option specifies the name of the stack template

View File

@ -236,7 +236,8 @@ Create ports
| f7a08fe4-e7... | | fa:16:3e:97:e0:fc | {"subnet_id"... ..."ip_address": "192.168.2.40"}|
+----------------+------+-------------------+-------------------------------------------------+
``--fixed-ips ip_address=192.168.2.2 ip_address=192.168.2.40`` is one unknown option.
``--fixed-ips ip_address=192.168.2.2 ip_address=192.168.2.40`` is one
unknown option.
**How to find unknown options**
The unknown options can be easily found by watching the output of

View File

@ -63,11 +63,12 @@ Before you can launch an instance, gather the following parameters:
make it accessible from outside the cloud. See
:doc:`cli_manage_ip_addresses`.
After you gather the parameters that you need to launch an instance, you
can launch it from an image_ or a :ref:`volume`. You can launch an instance directly
from one of the available OpenStack images or from an image that you have
copied to a persistent volume. The OpenStack Image service provides a
pool of images that are accessible to members of different projects.
After you gather the parameters that you need to launch an instance,
you can launch it from an image_ or a :ref:`volume`. You can launch an
instance directly from one of the available OpenStack images or from
an image that you have copied to a persistent volume. The OpenStack
Image service provides a pool of images that are accessible to members
of different projects.
Gather parameters to launch an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -54,8 +54,8 @@ Create a snapshot of the instance
$ nova image-create --poll myInstance myInstanceSnapshot
Instance snapshotting... 50% complete
#. Use the :command:`nova image-list` command to check the status until the status is
``ACTIVE``::
#. Use the :command:`nova image-list` command to check the status
until the status is ``ACTIVE``::
$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+

View File

@ -157,8 +157,8 @@ copy of the image on the compute node where the instance starts.
The instance starts on a compute node in the cloud.
The :guilabel:`Instances` tab shows the instance's name, its private and public IP
addresses, size, status, task, and power state.
The :guilabel:`Instances` tab shows the instance's name, its private
and public IP addresses, size, status, task, and power state.
If you did not provide a key pair, security groups, or rules, users can
access the instance only from inside the cloud through VNC. Even pinging

View File

@ -66,8 +66,8 @@ The following template defines the :file:`my_nova.yaml` file as value for the
properties:
key_name: my_key
The :code:`key_name` argument of the ``my_nova.yaml`` template gets its value from
the :code:`key_name` property of the new template.
The :code:`key_name` argument of the ``my_nova.yaml`` template gets
its value from the :code:`key_name` property of the new template.
.. note::

View File

@ -177,12 +177,14 @@ data, and metadata is derived from any associated
Signals and wait conditions
---------------------------
Often it is necessary to pause further creation of stack resources until the
boot configuration script has notified that it has reached a certain state.
This is usually either to notify that a service is now active, or to pass out
some generated data which is needed by another resource. The resources
:hotref:`OS::Heat::WaitCondition` and :hotref:`OS::Heat::SwiftSignal` both perform
this function using different techniques and tradeoffs.
Often it is necessary to pause further creation of stack resources
until the boot configuration script has notified that it has reached a
certain state. This is usually either to notify that a service is now
active, or to pass out some generated data which is needed by another
resource. The resources :hotref:`OS::Heat::WaitCondition` and
:hotref:`OS::Heat::SwiftSignal` both perform this function using
different techniques and tradeoffs.
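As an illustrative sketch (resource names and the timeout value are hypothetical), a wait condition paired with its handle might be declared as:

```yaml
resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      # Fail the stack if no signal arrives within 10 minutes
      timeout: 600
      count: 1
```

Other resources can then depend on ``wait_condition`` so that they are not created until the boot script signals the handle.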
:hotref:`OS::Heat::WaitCondition` is implemented as a call to the
`Orchestration API`_ resource signal. The token is created using credentials
@ -529,10 +531,11 @@ The `Custom image script`_ already includes the ``heat-config-script`` element
so the built image will already have the ability to configure using shell
scripts.
Config inputs are mapped to shell environment variables. The script can
communicate outputs to heat by writing to the :file:`$heat_outputs_path.{output name}`
file. See the following example for a script
which expects inputs ``foo``, ``bar`` and generates an output ``result``.
Config inputs are mapped to shell environment variables. The script
can communicate outputs to heat by writing to the
:file:`$heat_outputs_path.{output name}` file. See the following
example for a script which expects inputs ``foo``, ``bar`` and
generates an output ``result``.
.. code-block:: yaml
:linenos:
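A sketch of such a config resource (the script body and values are illustrative) could look like the following; the script reads its inputs from environment variables and writes its output through ``$heat_outputs_path``:

```yaml
resources:
  config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
      - name: foo
      - name: bar
      outputs:
      - name: result
      config: |
        #!/bin/sh
        # Inputs "foo" and "bar" arrive as environment variables
        echo "got foo=$foo bar=$bar"
        # Write the value heat collects as the "result" output
        echo -n "combined: $foo$bar" > $heat_outputs_path.result
```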

View File

@ -250,8 +250,10 @@ with the syntax for each type.
length
++++++
The :code:`length` constraint applies to parameters of type ``string``. It defines
a lower and upper limit for the length of the string value.
The :code:`length` constraint applies to parameters of type
``string``. It defines a lower and upper limit for the length of the
string value.
The syntax of the :code:`length` constraint is:
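As an illustrative sketch (the parameter name is hypothetical), a length constraint on a string parameter might be declared as:

```yaml
parameters:
  user_name:
    type: string
    constraints:
      # Reject values shorter than 6 or longer than 8 characters
      - length: { min: 6, max: 8 }
        description: User name must be between 6 and 8 characters
```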
@ -264,8 +266,10 @@ upper limit. However, at least one of ``min`` or ``max`` must be specified.
range
+++++
The :code:`range` constraint applies to parameters of type ``number``. It defines a
lower and upper limit for the numeric value of the parameter.
The :code:`range` constraint applies to parameters of type ``number``.
It defines a lower and upper limit for the numeric value of the
parameter.
The syntax of the :code:`range` constraint is:
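For illustration (the parameter name is hypothetical), a range constraint on a number parameter might read:

```yaml
parameters:
  retry_count:
    type: number
    constraints:
      # Accept only numeric values from 0 through 10
      - range: { min: 0, max: 10 }
        description: Retry count must be between 0 and 10
```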
@ -285,10 +289,11 @@ following range constraint would allow for all numeric values between 0 and 10:
allowed_values
++++++++++++++
The :code:`allowed_values` constraint applies to parameters of type ``string`` or
``number``. It specifies a set of possible values for a parameter. At
deployment time, the user-provided value for the respective parameter must
match one of the elements of the list.
The :code:`allowed_values` constraint applies to parameters of type
``string`` or ``number``. It specifies a set of possible values for a
parameter. At deployment time, the user-provided value for the
respective parameter must match one of the elements of the list.
The syntax of the :code:`allowed_values` constraint is:
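As a sketch (the flavor names are illustrative), an allowed_values constraint might be declared as:

```yaml
parameters:
  instance_type:
    type: string
    constraints:
      # The deployer must pick one of these exact values
      - allowed_values: [ m1.small, m1.medium, m1.large ]
        description: instance_type must be a supported flavor
```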
@ -323,9 +328,10 @@ For example:
allowed_pattern
+++++++++++++++
The :code:`allowed_pattern` constraint applies to parameters of type ``string``.
It specifies a regular expression against which a user-provided parameter value
must evaluate at deployment.
The :code:`allowed_pattern` constraint applies to parameters of type
``string``. It specifies a regular expression against which a
user-provided parameter value must evaluate at deployment.
The syntax of the :code:`allowed_pattern` constraint is:
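For illustration (the parameter name is hypothetical), an allowed_pattern constraint might read:

```yaml
parameters:
  user_name:
    type: string
    constraints:
      # Value must start with an uppercase letter, then alphanumerics
      - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
        description: User name must start with an uppercase character
```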
@ -760,7 +766,9 @@ For example:
list_join
---------
The :code:`list_join` function joins a list of strings with the given delimiter.
The :code:`list_join` function joins a list of strings with the given
delimiter.
The syntax of the :code:`list_join` function is:
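A minimal usage sketch (the output name is hypothetical): joining the list ``['one', 'two', 'and three']`` with the delimiter ``', '``:

```yaml
outputs:
  joined:
    value:
      list_join: [', ', ['one', 'two', 'and three']]
```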
@ -780,7 +788,9 @@ This resolves to the string ``one, two, and three``.
resource_facade
---------------
The :code:`resource_facade` function retrieves data in a parent provider template.
The :code:`resource_facade` function retrieves data in a parent
provider template.
A provider template provides a custom definition of a resource, called its
facade. For more information about custom templates, see :ref:`composition`.
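As a hedged sketch of its use inside a provider template (the resource name is hypothetical, and this assumes the template wants to inherit the deletion policy set on its facade):

```yaml
resources:
  server:
    type: OS::Nova::Server
    # Take the deletion policy from the parent (facade) resource
    deletion_policy: { resource_facade: deletion_policy }
```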
@ -838,8 +848,8 @@ application:
params:
host: { get_attr: [ my_instance, first_address ] }
The following examples show the use of the :code:`str_replace` function to build an
instance initialization script:
The following examples show the use of the :code:`str_replace`
function to build an instance initialization script:
.. code-block:: yaml
:linenos:
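A sketch of such an initialization script (resource names, template text, and parameters are illustrative):

```yaml
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      # flavor, image, and other properties omitted for brevity
      user_data:
        str_replace:
          # $db_ip in the template is substituted at stack creation
          template: |
            #!/bin/bash
            echo "Connecting to database at $db_ip" > /tmp/init.log
          params:
            $db_ip: { get_attr: [ my_db, first_address ] }
```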

View File

@ -94,12 +94,12 @@ commands = {toxinidir}/tools/generatepot-rst.sh {posargs}
[doc8]
# Settings for doc8:
# Ignore target directories
ignore-path = doc/*/target,doc/*/build,doc/common-rst/glossary.rst
ignore-path = doc/*/target,doc/*/build*,doc/common-rst/glossary.rst
# File extensions to use
extensions = .rst,.txt
# Maximum line length should be 79 but we have some overlong lines.
# Let's not get far more in.
max-line-length = 100
max-line-length = 79
# Disable some doc8 checks:
# D000: Check RST validity (cannot handle lineos directive)
ignore = D000