Layer edits

Use "layer-2" instead of "layer 2" if it is used as an adjective like
"layer-2 network".

Change-Id: Ie3361e08d7781c1c799c6953026d81c2325a83c3
Andreas Jaeger
2014-08-09 23:23:26 +02:00
parent 056398f678
commit bae2eb60d3
10 changed files with 60 additions and 60 deletions

View File

@@ -150,7 +150,7 @@
</section>
<section xml:id="section_adv_cfg_l3_agent">
<title>L3 agent</title>
<para>You can run an L3 agent that enables layer 3 forwarding and floating IP support.</para>
<para>You can run an L3 agent that enables layer-3 forwarding and floating IP support.</para>
<para>The node that runs the L3 agent should run:</para>
<screen><userinput>neutron-l3-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>L3_CONFIG_FILE</replaceable></userinput></screen>
<para>You must configure a driver that matches the plug-in that runs on the service. This
@@ -304,7 +304,7 @@ external_network_bridge = br-ex-2</programlisting>
</section>
<section xml:id="section_adv_cfg_l3_metering_agent">
<title>L3 metering agent</title>
<para>You can run an L3 metering agent that enables layer 3 traffic metering. In general,
<para>You can run an L3 metering agent that enables layer-3 traffic metering. In general,
you should launch the metering agent on all nodes that run the L3 agent:</para>
<screen><userinput>neutron-metering-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>L3_METERING_CONFIG_FILE</replaceable></userinput></screen>
<para>You must configure a driver that matches the plug-in that runs on the service. The

View File

@@ -135,13 +135,13 @@
(neutron), has tremendous implications and will have
a huge impact on the architecture and design of the cloud
network infrastructure.</para>
<para>The legacy networking (nova-network) service is primarily a layer 2 networking
<para>The legacy networking (nova-network) service is primarily a layer-2 networking
service which has two main modes in which it will function.
The difference between the two modes in legacy networking pertain
to whether or not legacy networking uses VLANs. When using
legacy networking in a flat network mode, all network hardware
nodes and devices throughout the cloud are connected to a
single layer 2 network segment which provides access to
single layer-2 network segment which provides access to
application data.</para>
<para>When the network devices in the cloud support segmentation
using VLANs, legacy networking can operate in the second mode. In

View File

@@ -55,11 +55,11 @@
be needed.</para>
<para>Depending on the selected design, Networking itself may not
even support the required
<glossterm baseform="Layer-3 network">layer 3 network</glossterm>
<glossterm baseform="Layer-3 network">layer-3 network</glossterm>
functionality. If it
is necessary or advantageous to use the provider networking
mode of Networking without running the layer 3 agent, then an
external router will be required to provide layer 3
mode of Networking without running the layer-3 agent, then an
external router will be required to provide layer-3
connectivity to outside systems.</para>
<para>Interaction with orchestration services is inevitable in
larger-scale deployments. The Orchestration module is capable of allocating
@@ -125,7 +125,7 @@
<glossterm baseform="Layer-2 network">layer 2</glossterm>
with a provider network
configuration. For example, it may be necessary to implement
HSRP to terminate layer 3 connectivity.</para>
HSRP to terminate layer-3 connectivity.</para>
<para>Depending on the workload, overlay networks may or may not
be a recommended configuration. Where application network
connections are small, short lived or bursty, running a
@@ -145,7 +145,7 @@
mesh overlay network, while some network monitoring tools or
storage replication workloads will have performance issues
with throughput or excessive broadcast traffic.</para>
<para>A design decision that many overlook is a choice of layer 3
<para>A design decision that many overlook is a choice of layer-3
protocols. While OpenStack was initially built with only IPv4
support, Networking now supports IPv6 and dual-stacked networks.
Note that, as of the icehouse release, this only includes
@@ -164,7 +164,7 @@
routing within the cloud including network equipment, hardware
nodes, and instances. Some workloads will perform well with
nothing more than static routes and default gateways
configured at the layer 3 termination point. In most cases
configured at the layer-3 termination point. In most cases
this will suffice, however some cases require the addition of
at least one type of dynamic routing protocol if not multiple
protocols. Having a form of interior gateway protocol (IGP)
@@ -185,7 +185,7 @@
an optional design consideration and more a design warning
that MTU must be at least large enough to handle normal
traffic, plus any overhead from an overlay network, and the
desired layer 3 protocol. Adding externally built tunnels will
desired layer-3 protocol. Adding externally built tunnels will
further lessen the MTU packet size making it imperative to pay
attention to the fully calculated MTU as some systems may be
configured to ignore or drop path MTU discovery
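As a rough worked example of the MTU point (figures approximate and assumed for illustration): a GRE or VXLAN overlay consumes several dozen bytes of each frame (around 50 bytes in the VXLAN case), so instances on a 1500-byte physical network need an MTU of roughly 1450, and any additional externally built tunnel reduces it further. A common workaround of this era was to advertise the smaller MTU to instances through DHCP, for example with a dnsmasq option such as:

dhcp-option-force=26,1450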

View File

@@ -27,9 +27,9 @@
<para>Since sessions must remain until closing, the routing and
switching architecture is designed for high availability.
Switches are meshed to each hypervisor and to each other, and
also provide an MLAG implementation to ensure layer 2
also provide an MLAG implementation to ensure layer-2
connectivity does not fail. Routers are configured with VRRP
and fully meshed with switches to ensure layer 3 connectivity.
and fully meshed with switches to ensure layer-3 connectivity.
Since GRE is used as an overlay network, Networking is installed
and configured to use the Open vSwitch agent in GRE tunnel
mode. This ensures all devices can reach all other devices and
@@ -94,7 +94,7 @@
<section xml:id="overlay-networks"><title>Overlay networks</title>
<para>OpenStack Networking using the Open vSwitch GRE tunnel mode
was included in the design to provide overlay functionality.
In this case, the layer 3 external routers will be in a pair
In this case, the layer-3 external routers will be in a pair
with VRRP and switches should be paired with an implementation
of MLAG running to ensure that there is no loss of
connectivity with the upstream routing infrastructure.</para></section>
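The GRE tunnel mode described here corresponds roughly to an ML2/Open vSwitch configuration along these lines (a sketch under assumed values; the tunnel ID range and local IP are placeholders):

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = 192.0.2.10
enable_tunneling = True

[agent]
tunnel_types = gre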

View File

@@ -7,32 +7,32 @@
<?dbhtml stop-chunking?>
<title>Technical considerations</title>
<para>Designing an OpenStack network architecture involves a
combination of layer 2 and layer 3 considerations. Layer 2
combination of layer-2 and layer-3 considerations. Layer-2
decisions involve those made at the data-link layer, such as
the decision to use Ethernet versus Token Ring. Layer 3
involve those made about the protocol layer and the point at
which IP comes into the picture. As an example, a completely
internal OpenStack network can exist at layer 2 and ignore
layer 3 however, in order for any traffic to go outside of
that cloud, to another network, or to the Internet, a layer 3
that cloud, to another network, or to the Internet, a layer-3
router or switch must be involved.</para>
<para>The past few years have seen two competing trends in
networking. There has been a trend towards building data
center network architectures based on layer 2 networking and
center network architectures based on layer-2 networking and
simultaneously another network architecture approach is to
treat the cloud environment essentially as a miniature version
of the Internet. This represents a radically different
approach to the network architecture from what is currently
installed in the staging environment because the Internet is
based entirely on layer 3 routing rather than layer 2
based entirely on layer-3 routing rather than layer-2
switching.</para>
<para>In the data center context, there are advantages of
designing the network on layer 2 protocols rather than layer
designing the network on layer-2 protocols rather than layer
3. In spite of the difficulties of using a bridge to perform
the network role of a router, many vendors, customers, and
service providers are attracted to the idea of using Ethernet
in as many parts of their networks as possible. The benefits
of selecting a layer 2 design are:</para>
of selecting a layer-2 design are:</para>
<itemizedlist>
<listitem>
<para>Ethernet frames contain all the essentials for
@@ -42,7 +42,7 @@
</listitem>
<listitem>
<para>Ethernet frames can carry any kind of packet.
Networking at layer 2 is independent of the layer 3
Networking at layer 2 is independent of the layer-3
protocol.</para>
</listitem>
<listitem>
@@ -68,11 +68,11 @@
of the benefits of Ethernet can be realized on the network.
Though it is not a substitute for IP networking, networking at
layer 2 can be a powerful adjunct to IP networking.</para>
<para>The basic reasoning behind using layer 2 Ethernet over layer
3 IP networks is the speed, the reduced overhead of the IP
<para>The basic reasoning behind using layer-2 Ethernet over layer-3
IP networks is the speed, the reduced overhead of the IP
hierarchy, and the lack of requirement to keep track of IP
address configuration as systems are moved around. Whereas the
simplicity of layer 2 protocols might work well in a data
simplicity of layer-2 protocols might work well in a data
center with hundreds of physical machines, cloud data centers
have the additional burden of needing to keep track of all
virtual machine addresses and networks. In these data centers,
@@ -93,9 +93,9 @@
addresses as well as MAC addresses.</para>
</important>
<section xml:id="layer-2-arch-limitations">
<title>Layer 2 architecture limitations</title>
<title>Layer-2 architecture limitations</title>
<para>Outside of the traditional data center the limitations of
layer 2 network architectures become more obvious.</para>
layer-2 network architectures become more obvious.</para>
<itemizedlist>
<listitem>
<para>Number of VLANs is limited to 4096.</para>
@@ -105,7 +105,7 @@
limited.</para>
</listitem>
<listitem>
<para>The need to maintain a set of layer 4 devices to
<para>The need to maintain a set of layer-4 devices to
handle traffic control must be accommodated.</para>
</listitem>
<listitem>
@@ -121,7 +121,7 @@
<para>
Configuring <glossterm
baseform="Address Resolution Protocol (ARP)">ARP</glossterm>
is considered complicated on large layer 2 networks.</para>
is considered complicated on large layer-2 networks.</para>
</listitem>
<listitem>
<para>All network devices need to be aware of all MACs,
@@ -141,8 +141,8 @@
or shape the traffic, and network troubleshooting is very
difficult. One reason for this difficulty is network devices
have no IP addresses. As a result, there is no reasonable way
to check network delay in a layer 2 network.</para>
<para>On large layer 2 networks, configuring ARP learning can also
to check network delay in a layer-2 network.</para>
<para>On large layer-2 networks, configuring ARP learning can also
be complicated. The setting for the MAC address timer on
switches is critical and, if set incorrectly, can cause
significant performance problems. As an example, the Cisco
@@ -151,22 +151,22 @@
be a significant problem. In this case, the network
information maintained in the switches could be out of sync
with the new location of the instance.</para>
<para>In a layer 2 network, all devices are aware of all MACs,
<para>In a layer-2 network, all devices are aware of all MACs,
even those that belong to instances. The network state
information in the backbone changes whenever an instance is
started or stopped. As a result there is far too much churn in
the MAC tables on the backbone switches.</para></section>
<section xml:id="layer-3-arch-advantages">
<title>Layer 3 architecture advantages</title>
<title>Layer-3 architecture advantages</title>
<para>In the layer 3 case, there is no churn in the routing tables
due to instances starting and stopping. The only time there
would be a routing state change would be in the case of a Top
of Rack (ToR) switch failure or a link failure in the backbone
itself. Other advantages of using a layer 3 architecture
itself. Other advantages of using a layer-3 architecture
include:</para>
<itemizedlist>
<listitem>
<para>Layer 3 networks provide the same level of
<para>Layer-3 networks provide the same level of
resiliency and scalability as the Internet.</para>
</listitem>
<listitem>
@@ -191,14 +191,14 @@
example ICMP, to monitor and manage traffic.</para>
</listitem>
<listitem>
<para>Layer 3 architectures allow for the use of Quality
<para>Layer-3 architectures allow for the use of Quality
of Service (QoS) to manage network performance.</para>
</listitem>
</itemizedlist>
<section xml:id="layer-3-arch-limitations">
<title>Layer 3 architecture limitations</title>
<title>Layer-3 architecture limitations</title>
<para>The main limitation of layer 3 is that there is no built-in
isolation mechanism comparable to the VLANs in layer 2
isolation mechanism comparable to the VLANs in layer-2
networks. Furthermore, the hierarchical nature of IP addresses
means that an instance will also be on the same subnet as its
physical host. This means that it cannot be migrated outside
@@ -270,7 +270,7 @@
recommendations can be made:</para>
<itemizedlist>
<listitem>
<para>Layer 3 designs are preferred over layer 2
<para>Layer-3 designs are preferred over layer-2
architectures.</para>
</listitem>
<listitem>
@@ -298,7 +298,7 @@
</listitem>
<listitem>
<para>Use iBGP to flatten the internal traffic on the
layer 3 mesh.</para>
layer-3 mesh.</para>
</listitem>
<listitem>
<para>Determine the most effective configuration for block
@@ -430,18 +430,18 @@
ways. The legacy networking (nova-network) provides a flat DHCP network
with a single broadcast domain. This implementation does not
support tenant isolation networks or advanced plug-ins, but it
is currently the only way to implement a distributed layer 3
is currently the only way to implement a distributed layer-3
agent using the multi_host configuration.
OpenStack Networking (neutron) is the official networking implementation
and provides a pluggable architecture that supports a large
variety of network methods. Some of these include a layer 2
variety of network methods. Some of these include a layer-2
only provider network model, external device plug-ins, or even
OpenFlow controllers.</para>
<para>Networking at large scales becomes a set of boundary
questions. The determination of how large a layer 2 domain
questions. The determination of how large a layer-2 domain
needs to be is based on the amount of nodes within the domain
and the amount of broadcast traffic that passes between
instances. Breaking layer 2 boundaries may require the
instances. Breaking layer-2 boundaries may require the
implementation of overlay networks and tunnels. This decision
is a balancing act between the need for a smaller overhead or
a need for a smaller domain.</para>
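For the multi_host configuration mentioned above, a minimal nova.conf fragment on each compute node might look like this (a sketch assuming FlatDHCP mode; the interface names are assumptions):

network_manager = nova.network.manager.FlatDHCPManager
multi_host = True
flat_interface = eth1
public_interface = eth0

In this mode each compute node also runs its own nova-network (and typically nova-api-metadata) service, which is what makes the layer-3 agent distributed.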

View File

@@ -17,22 +17,22 @@
required resources alters the design of an OpenStack
installation. Installations that rely on overlay networks are
unable to support a routing participant, and may also block
layer 2 listeners.</para>
layer-2 listeners.</para>
</section>
<section xml:id="possible-solutions-specialized-networking">
<title>Possible solutions</title>
<para>Deploying an OpenStack installation using OpenStack Networking with a
provider network will allow direct layer 2 connectivity to an
upstream networking device. This design provides the layer 2
provider network will allow direct layer-2 connectivity to an
upstream networking device. This design provides the layer-2
connectivity required to communicate via Intermediate
System-to-Intermediate System (ISIS) protocol or to pass
packets controlled via an OpenFlow controller. Using the
multiple layer 2 plug-in with an agent such as
multiple layer-2 plug-in with an agent such as
<glossterm>Open vSwitch</glossterm>
would allow a private connection through a VLAN directly to a
specific port in a layer 3 device. This would allow a BGP
specific port in a layer-3 device. This would allow a BGP
point to point link to exist that will join the autonomous
system. Avoid using layer 3 plug-ins as they will divide the
system. Avoid using layer-3 plug-ins as they will divide the
broadcast domain and prevent router adjacencies from
forming.</para>
</section>

View File

@@ -9,7 +9,7 @@
<para>Software-defined networking (SDN) is the separation of the data
plane and control plane. SDN has become a popular method of
managing and controlling packet flows within networks. SDN
uses overlays or directly controlled layer 2 devices to
uses overlays or directly controlled layer-2 devices to
determine flow paths, and as such presents challenges to a
cloud environment. Some designers may wish to run their
controllers within an OpenStack installation. Others may wish
@@ -26,9 +26,9 @@
</section>
<section xml:id="possible-solutions-sdn">
<title>Possible solutions</title>
<para>If an SDN implementation requires layer 2 access because it
<para>If an SDN implementation requires layer-2 access because it
directly manipulates switches, then running an overlay network
or a layer 3 agent may not be advisable. If the controller
or a layer-3 agent may not be advisable. If the controller
resides within an OpenStack installation, it may be necessary
to build an ML2 plug-in and schedule the controller instances
to connect to tenant VLANs that then talk directly to the
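For illustration, hooking an external SDN controller into Networking through an ML2 mechanism driver usually reduces to a plug-in setting of roughly this shape (the controller driver name below is a placeholder, not a real driver):

[ml2]
mechanism_drivers = openvswitch,example_sdn_controller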

View File

@@ -592,8 +592,8 @@
<glossdef>
<para>
The protocol by which layer 3 IP addresses are resolved into
layer 2, link local addresses.
The protocol by which layer-3 IP addresses are resolved into
layer-2 link local addresses.
</para>
</glossdef>
</glossentry>

View File

@@ -64,10 +64,10 @@
your environment.</para>
</listitem>
<listitem>
<para>The network node runs the Networking plug-in, layer 2 agent,
and several layer 3 agents that provision and operate tenant
networks. Layer 2 services include provisioning of virtual
networks and tunnels. Layer 3 services include routing,
<para>The network node runs the Networking plug-in, layer-2 agent,
and several layer-3 agents that provision and operate tenant
networks. Layer-2 services include provisioning of virtual
networks and tunnels. Layer-3 services include routing,
<glossterm baseform="Network Address Translation (NAT)">NAT</glossterm>
, and <glossterm>DHCP</glossterm>. This node also handles
external (internet) connectivity for tenant virtual machines
@@ -77,7 +77,7 @@
<para>The compute node runs the hypervisor portion of Compute,
which operates tenant virtual machines or instances. By default
Compute uses KVM as the hypervisor. The compute node also runs
the Networking plug-in and layer 2 agent which operate tenant
the Networking plug-in and layer-2 agent which operate tenant
networks and implement security groups. You can run more than
one compute node.</para>
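To make the split of responsibilities concrete, the agents involved might be laid out as follows (a sketch; exact service names and placement vary by distribution and plug-in):

# Network node
neutron-openvswitch-agent   # layer-2: virtual networks and tunnels
neutron-l3-agent            # layer-3: routing and NAT
neutron-dhcp-agent          # DHCP for tenant networks
neutron-metadata-agent      # metadata proxy for instances

# Compute node
neutron-openvswitch-agent   # layer-2: tenant networks and security groups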
<para>Optionally, the compute node also runs the Telemetry

View File

@@ -20,7 +20,7 @@
<section xml:id="sec-overview">
<title>Overview</title>
<para>provide layer 2/3 connectivity to instances, handle
<para>provide layer-2/3 connectivity to instances, handle
physical-virtual network transition, handle metadata, etc</para>
</section>