diff --git a/_images/q32_gre_2nic.svg b/_images/q32_gre_2nic.svg
index ceb1c8740..2b6facf6b 100644
--- a/_images/q32_gre_2nic.svg
+++ b/_images/q32_gre_2nic.svg
@@ -1,4 +1,4 @@
-
+
diff --git a/_images/q32_gre_3nic.svg b/_images/q32_gre_3nic.svg
index 094fb5f6a..3a706188c 100644
--- a/_images/q32_gre_3nic.svg
+++ b/_images/q32_gre_3nic.svg
@@ -1,4 +1,4 @@
-
+
diff --git a/_images/q32_gre_4nic.svg b/_images/q32_gre_4nic.svg
index e4dc705fe..8726cb2ac 100644
--- a/_images/q32_gre_4nic.svg
+++ b/_images/q32_gre_4nic.svg
@@ -1,4 +1,4 @@
-
+
diff --git a/_images/q32_vlan_3nic.svg b/_images/q32_vlan_3nic.svg
index d51cff7b7..caa5db965 100644
--- a/_images/q32_vlan_3nic.svg
+++ b/_images/q32_vlan_3nic.svg
@@ -1,4 +1,4 @@
-
+
diff --git a/_images/q32_vlan_4nic.svg b/_images/q32_vlan_4nic.svg
index 74ba11da7..04f635111 100644
--- a/_images/q32_vlan_4nic.svg
+++ b/_images/q32_vlan_4nic.svg
@@ -1,4 +1,4 @@
-
+
diff --git a/pages/reference-architecture/0040-network-setup.rst b/pages/reference-architecture/0040-network-setup.rst
index 5d2a64e4d..f37ee012a 100644
--- a/pages/reference-architecture/0040-network-setup.rst
+++ b/pages/reference-architecture/0040-network-setup.rst
@@ -9,29 +9,9 @@ Network Architecture
.. contents :local:
-The current architecture assumes the presence of 3 NICs, but it can be
-customized for two or 4+ network interfaces. Most servers arebuilt with at least
-two network interfaces. In this case, let's consider a typical example of three
-NIC cards. They're utilized as follows:
-
-**eth0**:
- The internal management network, used for communication with Puppet & Cobbler
-
-**eth1**:
- The public network, and floating IPs assigned to VMs
-
-**eth2**:
- The private network, for communication between OpenStack VMs, and the
- bridge interface (VLANs)
-
-In the multi-host networking mode, you can choose between the FlatDHCPManager
-and VlanManager network managers in OpenStack. The figure below illustrates the
-relevant nodes and networks.
-
-.. image:: /_images/080-networking-diagram_svg.jpg
- :align: center
-
-Lets take a closer look at each network and how its used within the environment.
+For better network performance and manageability, Fuel places different types
+of traffic on separate networks. This section describes how network traffic
+is distributed in an OpenStack cluster.
.. index:: Public Network
@@ -40,18 +20,16 @@ Public Network
This network allows inbound connections to VMs from the outside world (allowing
users to connect to VMs from the Internet). It also allows outbound connections
-from VMs to the outside world.
+from VMs to the outside world. For security reasons, the public network is usually
+isolated from the other networks in the cluster. The word "public" means that these
+addresses can be used to communicate with the cluster and its VMs from outside the cluster.
-For security reasons, the public network is usually isolated from the private
-network and internal (management) network. Typically, it's a single C class
-network from your globally routed or private network range.
-
-To enable Internet access to VMs, the public network provides the address space
+To enable external access to VMs, the public network provides the address space
for the floating IPs assigned to individual VM instances by the project
-administrator. Nova-network or Neutron (formerly Quantum) services can then
-configure this address on the public network interface of the Network controller
-node. Environments based on nova-network use iptables to create a
-Destination NAT from this address to the fixed IP of the corresponding VM
+administrator. Nova Network or Neutron services can then
+configure this address on the public network interface of the Network controller
+node. For example, environments based on Nova Network use iptables to create a
+Destination NAT from this address to the private IP of the corresponding VM
instance through the appropriate virtual bridge interface on the Network
controller node.
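+
+As an illustrative sketch only (the addresses and the exact rule form below are
+assumptions; nova-network manages equivalent rules automatically), such a
+Destination NAT rule looks roughly like this::
+
+    # redirect traffic arriving at floating IP 172.16.0.100
+    # to the fixed (private) IP 10.0.0.5 of the VM instance
+    iptables -t nat -A PREROUTING -d 172.16.0.100/32 -j DNAT --to-destination 10.0.0.5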
@@ -60,7 +38,7 @@ routed address space for VMs. The IP address from the public network that has
been assigned to a compute node is used as the source for the Source NAT
performed for traffic going from VM instances on the compute node to Internet.
-The public network also provides VIPs for Endpoint nodes, which are used to
+The public network also provides Virtual IPs for Endpoint nodes, which are used to
connect to OpenStack services APIs.
.. index:: Internal Network, Management Network
@@ -76,17 +54,36 @@ networks for security reasons.
The internal network can also be used for serving iSCSI protocol exchanges
between Compute and Storage nodes.
-This network usually is a single C class network from your private, non-globally
-routed IP address range.
-
.. index:: Private Network
Private Network
---------------
The private network facilitates communication between each tenant's VMs. Private
-network address spaces are part of the enterprise network address space. Fixed
-IPs of virtual instances are directly accessible from the rest of Enterprise network.
+network address spaces are not part of the enterprise network address space. The fixed
+IPs of virtual instances are not directly accessible from the rest of the enterprise network.
+
+NIC usage
+---------
+
+The current architecture assumes the presence of 3 NICs, but it can be
+customized for 2 or 4+ network interfaces. Most servers are built with at least
+two network interfaces. Let's consider a typical example of three
+NICs. They're utilized as follows:
+
+**eth0**:
+ The internal management network, used for communication with Puppet & Cobbler
+
+**eth1**:
+ The public network, and floating IPs assigned to VMs
+
+**eth2**:
+ The private network, for communication between OpenStack VMs, and the
+ bridge interface (VLANs)
+
+The figure below illustrates the relevant nodes and networks in Neutron VLAN mode.
+
+.. image:: /_images/080-networking-diagram_svg.jpg
+ :align: center
+
-The private network can be segmented into separate isolated VLANs, which are
-managed by nova-network or Neutron (formerly Quantum) services.
diff --git a/pages/reference-architecture/0060-quantum-vs-nova-network.rst b/pages/reference-architecture/0060-quantum-vs-nova-network.rst
index 4d00bdde3..62db54e6b 100644
--- a/pages/reference-architecture/0060-quantum-vs-nova-network.rst
+++ b/pages/reference-architecture/0060-quantum-vs-nova-network.rst
@@ -1,28 +1,8 @@
.. index:: Neutron vs. nova-network, Quantum vs. nova-network
-Neutron vs. nova-network
+Neutron vs. Nova Network
========================
-Neutron (formerly Quantum) is a service which provides Networking-as-a-Service
-functionality in OpenStack. It has a rich tenant-facing API for defining
-network connectivity and addressing in the cloud, and gives operators the
-ability to leverage different networking technologies to power their cloud
-networking.
-
-There are various deployment use cases for Neutron. Fuel supports the most
-common of them, called Per-tenant Routers with Private Networks.
-Each tenant has a virtual Neutron router with one or more private networks,
-which can communicate with the outside world.
-This allows full routing isolation for each tenant private network.
-
-Neutron is not, however, required in order to run an OpenStack environment. If
-you don't need (or want) this added functionality, it's perfectly acceptable to
-continue using nova-network.
-
-In order to deploy Neutron, you need to enable it in the Fuel configuration.
-Fuel sets up Neutron components on each of the controllers to act as a router
-in HA (if deploying in HA mode).
-
Terminology
-----------
@@ -46,26 +26,73 @@ Terminology
Overview
--------
-OpenStack networking with Neutron (Quantum) has some differences from
-Nova-network. Neutron is able to virtualize and manage both layer 2 (logical)
+
+With Fuel you can choose between two network providers: Nova Network and Neutron.
+
+Nova Network
+------------
+
+Nova Network is based on standard Linux bridges and uses the host node's firewall.
+Nova Network was the default network provider in OpenStack before the Grizzly release.
+It can use two network managers: Flat DHCP Manager and VLAN Manager.
+
+* **Flat DHCP Manager** keeps all tenants in one L2 broadcast segment, so it
+  provides no tenant isolation at layer 2. Use this manager if you want a quick
+  and simple cluster.
+
+* **VLAN Manager** uses VLANs to separate tenants from each other.
+
+Because Nova Network uses the host node's kernel for IP routing,
+tenants' IP address spaces cannot overlap.
+Also, you cannot apply restrictions to a tenant's internal traffic at the cluster level.
+All traffic filtering in Nova Network applies at the border between the external network and
+a tenant's internal network.
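+
+For example, selecting a network manager in Nova Network is a configuration
+option (the option names below are from nova.conf; the values are illustrative
+assumptions)::
+
+    # /etc/nova/nova.conf (fragment)
+    network_manager=nova.network.manager.VlanManager
+    vlan_start=100            # first VLAN ID allocated to tenants
+    fixed_range=10.0.0.0/16   # address space split into per-tenant subnets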
+
+Neutron
+-------
+
+Neutron (formerly Quantum) is a service which provides Networking-as-a-Service
+functionality in OpenStack. It has a rich tenant-facing API for defining
+network connectivity and addressing in the cloud, and gives operators the
+ability to leverage different networking technologies to power their cloud
+networking.
+
+There are various deployment use cases for Neutron. Fuel supports the most
+common of them, called Per-tenant Routers with Private Networks.
+Each tenant has a virtual Neutron router with one or more private networks,
+which can communicate with the outside world.
+This allows full routing isolation for each tenant private network.
+
+Neutron is not, however, required in order to run an OpenStack environment. If
+you don't need (or want) this added functionality, it's perfectly acceptable to
+continue using Nova Network.
+
+In order to deploy Neutron, you need to enable it in the Fuel configuration.
+Fuel sets up Neutron components on each of the controllers to act as a router
+in HA (if deploying in HA mode).
+
+OpenStack networking with Neutron has some differences from
+Nova Network. Neutron is able to virtualize and manage both layer 2 (logical)
and layer 3 (network) of the OSI network model, as compared to simple layer 3
-virtualization provided by nova-network. This is the main difference between
+virtualization provided by Nova Network. This is the main difference between
the two networking models for OpenStack. Virtual networks (one or more) can be
created for a single tenant, forming an isolated L2 network called a
"private network". Each private network can support one or many IP subnets.
Private networks can be segmented using two different technologies:
-* **VLAN segmentation** "Private network" traffic is managed by
- Neutron by the use of a dedicated network adapter. This network adapter must be
- attached to a untagged network port. This network segment also must be
- reserved only for Neutron on each host (Computes and Controllers). You should
- not use any other 802.1q VLANs on this network for other purposes.
+* **VLAN segmentation** "Private network" traffic is managed by Neutron
+  through a dedicated network adapter. This network segment must also be
+  reserved for Neutron only on each host (Computes and Controllers).
Additionally, each private network requires its own dedicated VLAN, selected
from a given range configured in Fuel UI.
* **GRE segmentation** In this mode of operation, Neutron does not
require a dedicated network adapter. Neutron builds a mesh of GRE tunnels from
each compute node and controller nodes to every other node. Private networks
- for each tenant make use of this mesh for isolated traffic.
+ for each tenant make use of this mesh for isolated traffic.
+  GRE segmentation is a good choice when you don't have enough free VLAN IDs
+  in your network backbone, or when a single L2 segment cannot be established
+  across all nodes. However, GRE processing consumes significant system
+  resources and reduces network performance.
It is important to note:
@@ -76,7 +103,7 @@ It is important to note:
and public Openstack API.
* Is a best if you place the Private, Admin, Public and Management networks on a
separate NIC to ensure that traffic is separated adequately.
-* Admin and Private networks must be located together on separate NIC from the
+* Admin and Private networks must be located on separate NICs from the
other networks.
A typical network configuration for Neutron with VLAN segmentation might look
@@ -94,45 +121,58 @@ like this:
The most likely configuration for different number NICs on cluster nodes:
-+------+----------------------------------------+----------------------------------------+
-| NICs | VLAN | GRE |
-+======+========================================+========================================+
-| 2 | Not supported | .. image:: /_images/q32_gre_2nic.svg |
-| | | :align: center |
-+------+----------------------------------------+----------------------------------------+
-| 3 | .. image:: /_images/q32_vlan_3nic.svg | .. image:: /_images/q32_gre_3nic.svg |
-| | :align: center | :align: center |
-+------+----------------------------------------+----------------------------------------+
-| 4 | .. image:: /_images/q32_vlan_4nic.svg | .. image:: /_images/q32_gre_4nic.svg |
-| | :align: center | :align: center |
-+------+----------------------------------------+----------------------------------------+
++------+----------------------------------------+-------------------------------------------+
+| NICs | VLAN | GRE |
++======+========================================+===========================================+
+| 2 | Not supported | .. image:: /_images/q32_gre_2nic.svg |
+| | | :align: center |
+| | | :width: 500 |
+| | | :height: 200 |
++------+----------------------------------------+-------------------------------------------+
+| 3 | .. image:: /_images/q32_vlan_3nic.svg | .. image:: /_images/q32_gre_3nic.svg |
+| | :align: center | :align: center |
+| | :width: 500 | :width: 500 |
+| | :height: 250 | :height: 250 |
++------+----------------------------------------+-------------------------------------------+
+| 4 | .. image:: /_images/q32_vlan_4nic.svg | .. image:: /_images/q32_gre_4nic.svg |
+| | :align: center | :align: center |
+| | :width: 500 | :width: 500 |
+| | :height: 300 | :height: 300 |
++------+----------------------------------------+-------------------------------------------+
Known limitations
-----------------
+* To deploy OpenStack using Neutron with GRE segmentation, each node requires at
+  least 2 NICs.
+* To deploy OpenStack using Neutron with VLAN segmentation, each node requires
+  at least 3 NICs.
+
* Neutron will not allocate a floating IP range for your tenants. After each
tenant is created, a floating IP range must be created. Note that this does
not prevent Internet connectivity for a tenant's instances, but it would
prevent them from receiving incoming connections. You, the administrator,
- should assign a floating IP addresses for the tenant. Below are steps you can
+ should assign a floating IP network for the tenant. Below are steps you can
follow to do this:
- | get admin credentials:
- | # source /root/openrc
- | get admin tenant-ID:
- | # keystone tenant-list
+ ::
- +----------------------------------+----------+---------+
- | id | name | enabled |
- +==================================+==========+=========+
- | b796f91df6b84860a7cd474148fb2229 | admin | True |
- +----------------------------------+----------+---------+
- | cba7b0ff68ee4985816ac3585c8e23a9 | services | True |
- +----------------------------------+----------+---------+
-
- | create one floating-ip address for admin tenant:
- | # quantum floatingip-create --tenant-id=b796f91df6b84860a7cd474148fb2229 net04_ext
+    get admin credentials:
+    # source /root/openrc
+    get the admin tenant ID:
+    # keystone tenant-list
+
+    +----------------------------------+----------+---------+
+    | id                               | name     | enabled |
+    +==================================+==========+=========+
+    | b796f91df6b84860a7cd474148fb2229 | admin    | True    |
+    +----------------------------------+----------+---------+
+    | cba7b0ff68ee4985816ac3585c8e23a9 | services | True    |
+    +----------------------------------+----------+---------+
+
+    create a floating IP network for the admin tenant:
+    # quantum floatingip-create --tenant-id=b796f91df6b84860a7cd474148fb2229 net04_ext
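+
+  Once a floating IP has been created, it can be associated with a VM's port
+  (the IDs below are placeholders, not values from this example)::
+
+    # quantum floatingip-associate <floatingip-id> <port-id>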
* You can't combine Private or Admin network with any other networks on one NIC.
* To deploy OpenStack using Neutron with GRE segmentation, each node requires at
diff --git a/pages/reference-architecture/0080-swift-notes.rst b/pages/reference-architecture/0080-swift-notes.rst
index aec1a384a..3fede7797 100644
--- a/pages/reference-architecture/0080-swift-notes.rst
+++ b/pages/reference-architecture/0080-swift-notes.rst
@@ -3,7 +3,7 @@
.. _Swift-and-object-storage-notes:
Object Storage Deployment
--------------------------
+=========================
.. TODO(mihgen): we need to rewrite this and add info about Ceph
Fuel currently supports several scenarios to deploy the object storage:
diff --git a/pages/reference-architecture/0090-savanna.rst b/pages/reference-architecture/0090-savanna.rst
index 9bf6da345..cbe7cfd40 100644
--- a/pages/reference-architecture/0090-savanna.rst
+++ b/pages/reference-architecture/0090-savanna.rst
@@ -3,7 +3,7 @@
.. _savanna-deployment-label:
Savanna Deployment
-------------------
+==================
Savanna is a service for launching Hadoop clusters on OpenStack. It is
designed to be vendor-agnostic and currently supports two distributions: