break install guide into modular files

Change-Id: I4d3331365922f9449e1c3400b93aff4cec84184c
This commit is contained in:
Meg McRoberts
2014-03-31 20:52:34 -07:00
committed by Dmitry Borodaenko
parent 1f6453a72c
commit 161c35465e
53 changed files with 1215 additions and 1197 deletions

View File

@@ -9,7 +9,7 @@
.. include:: /pages/install-guide/0000-intro.rst
.. include:: /pages/install-guide/0010-prerequisites.rst
.. include:: /pages/install-guide/0060-download-fuel.rst
.. include:: /pages/install-guide/networks.rst
.. include:: /pages/install-guide/0070-networks.rst
.. include:: /pages/install-guide/install.rst
.. include:: /pages/install-guide/stop_reset.rst

View File

@@ -12,6 +12,6 @@ Installation Guide
.. include:: /pages/install-guide/0000-intro.rst
.. include:: /pages/install-guide/0010-prerequisites.rst
.. include:: /pages/install-guide/0060-download-fuel.rst
.. include:: /pages/install-guide/networks.rst
.. include:: /pages/install-guide/0070-networks.rst
.. include:: /pages/install-guide/install.rst
.. include:: /pages/install-guide/stop_reset.rst

View File

@@ -1,156 +1,7 @@
.. index:: Introduction
.. _Introduction:
Introduction
================================
This section introduces Fuel for OpenStack and its components.
Introducing Fuel for OpenStack
--------------------------------
Mirantis Fuel for OpenStack is a set of deployment tools that helps you
quickly deploy your cloud environment. Fuel includes scripts that
dramatically facilitate and speed up the process of cloud deployment.
Typically, an OpenStack installation requires that you familiarize yourself
with the installation processes for the OpenStack environment components.
Fuel eliminates the need to study these processes. With Fuel, system
administrators can provision a single OpenStack node, as well as a
clustered cloud, in a matter of minutes.
Deployment Modes
-----------------------------
You can use Fuel for OpenStack to create virtually any OpenStack
configuration. However, Mirantis provides several pre-defined
architectures for your convenience.
The pre-defined architectures include:
* **Multi-node**
The Multi-node environment provides an easy way
to install an entire OpenStack environment without requiring the degree
of increased hardware involved in ensuring high availability.
Mirantis recommends that you use this architecture for testing
purposes.
* **Multi-node HA**
The Multi-node with HA environment is intended for highly available
production deployments. Using Multi-node with HA, you can deploy
additional services, such as Cinder, Neutron, and Ceph.
With Fuel, you can also create your own cloud environment that includes
additional components.
For more information, contact `Mirantis <http://www.mirantis.com/contact/>`_.
.. seealso:: :ref:`Reference Architecture<ref-arch>`
About Fuel and OpenStack Components
-----------------------------------
You can use Fuel to quickly deploy and manage the OpenStack environment.
Fuel includes the following components:
* **Master Node**
The Fuel Master Node is the lifecycle management application for
deploying and managing OpenStack. It sits outside the OpenStack
environment and serves as a control plane for multiple OpenStack
environments.
* **Controller Node**
A controller node that manages the OpenStack environment including
deployment of additional controller and compute nodes, configuring
network settings, and so on. For HA deployments, Mirantis recommends
deploying at least three controller nodes.
* **Compute Node(s)**
A compute node is a server where you run virtual machines and
applications.
* **Storage Node(s)**
Optional component. You can deploy a separate Swift or Ceph storage
node. Mirantis recommends deploying standalone storage nodes for high
availability environments.
Supported Software
------------------
* **Operating Systems**
* CentOS 6.4 (x86_64 architecture only)
* Ubuntu 12.04 (x86_64 architecture only)
* **Puppet (IT automation tool)** 2.7.23
* **MCollective** 2.3.3
* **Cobbler (bare-metal provisioning tool)** 2.2.3
* **OpenStack Core Projects**
* Havana release 2013.2.2
* Nova (OpenStack Compute)
* Swift (OpenStack Object Storage)
* Glance (OpenStack Image Service)
* Keystone (OpenStack Identity)
* Horizon (OpenStack Dashboard)
* Neutron (OpenStack Networking)
* Cinder (OpenStack Block Storage service)
* **OpenStack Core Integrated Projects**
* Havana Release 2013.2.2
* Ceilometer (OpenStack Telemetry)
* Heat (OpenStack Orchestration)
* **OpenStack Related Projects**
* Savanna v0.3
* Murano v0.4.1
* **Hypervisor**
* KVM
* QEMU
* **Open vSwitch** 1.10.2 (CentOS), 1.10.1 (Ubuntu)
* **HA Proxy** 1.4.24
* **Galera** 23.2.2
* **RabbitMQ** 2.8.7
* **Pacemaker** 1.1.10
* **Corosync** 1.4.6
* **Keepalived** 1.2.4
* **Ceph Dumpling** (v0.67.5)
* **MySQL** (v5.5.28)
Fuel Installation Procedures
----------------------------
You must complete the following tasks to use Fuel to deploy OpenStack
clouds:
1. Install the Fuel Master Node on physical or virtual hardware using
the Fuel installation image.
2. Set the other nodes to boot from the network and power them on
so that they are accessible to the Fuel Master node.
3. Assign your desired roles to the discovered nodes using the Fuel
UI or CLI.
Fuel is designed to maintain the OpenStack environment while providing
the flexibility to adapt it to your configuration.
.. image:: /_images/how-it-works.*
:width: 80%
:align: center
.. include:: /pages/install-guide/0000-intro/0000-intro-header.rst
.. include:: /pages/install-guide/0000-intro/0100-fuel-intro.rst
.. include:: /pages/install-guide/0000-intro/0200-deploy-modes.rst
.. include:: /pages/install-guide/0000-intro/0300-nodetypes-intro.rst
.. include:: /pages/install-guide/0000-intro/0400-supported-software-list.rst
.. include:: /pages/install-guide/0000-intro/0500-install-summary.rst

View File

@@ -0,0 +1,7 @@
.. index:: Introduction
.. _Introduction:
Introduction
============
This section introduces Fuel for OpenStack and its components.

View File

@@ -0,0 +1,11 @@
Introducing Fuel for OpenStack
------------------------------
Mirantis Fuel for OpenStack is a set of deployment tools that helps you
quickly deploy your cloud environment. Fuel includes scripts that
dramatically facilitate and speed up the process of cloud deployment.
Typically, an OpenStack installation requires that you familiarize yourself
with the installation processes for the OpenStack environment components.
Fuel eliminates the need to study these processes. With Fuel, system
administrators can provision a single OpenStack node, as well as a
clustered cloud, in a matter of minutes.

View File

@@ -0,0 +1,27 @@
Deployment Modes
-----------------------------
You can use Fuel for OpenStack to create virtually any OpenStack
configuration. However, Mirantis provides several pre-defined
architectures for your convenience.
The pre-defined architectures include:
* **Multi-node**
The Multi-node environment provides an easy way
to install an entire OpenStack environment without requiring the degree
of increased hardware involved in ensuring high availability.
Mirantis recommends that you use this architecture for testing
purposes.
* **Multi-node HA**
The Multi-node with HA environment is intended for highly available
production deployments. Using Multi-node with HA, you can deploy
additional services, such as Cinder, Neutron, and Ceph.
With Fuel, you can also create your own cloud environment that includes
additional components.
For more information, contact `Mirantis <http://www.mirantis.com/contact/>`_.
.. seealso:: :ref:`Reference Architecture<ref-arch>`

View File

@@ -0,0 +1,27 @@
About Fuel and OpenStack Components
-----------------------------------
You can use Fuel to quickly deploy and manage the OpenStack environment.
Fuel includes the following components:
* **Master Node**
The Fuel Master Node is the lifecycle management application for
deploying and managing OpenStack. It sits outside the OpenStack
environment and serves as a control plane for multiple OpenStack
environments.
* **Controller Node**
A controller node that manages the OpenStack environment including
deployment of additional controller and compute nodes, configuring
network settings, and so on. For HA deployments, Mirantis recommends
deploying at least three controller nodes.
* **Compute Node(s)**
A compute node is a server where you run virtual machines and
applications.
* **Storage Node(s)**
Optional component. You can deploy a separate Swift or Ceph storage
node. Mirantis recommends deploying standalone storage nodes for high
availability environments.

View File

@@ -0,0 +1,60 @@
Supported Software
------------------
* **Operating Systems**
* CentOS 6.4 (x86_64 architecture only)
* Ubuntu 12.04 (x86_64 architecture only)
* **Puppet (IT automation tool)** 2.7.23
* **MCollective** 2.3.3
* **Cobbler (bare-metal provisioning tool)** 2.2.3
* **OpenStack Core Projects**
* Havana release 2013.2.2
* Nova (OpenStack Compute)
* Swift (OpenStack Object Storage)
* Glance (OpenStack Image Service)
* Keystone (OpenStack Identity)
* Horizon (OpenStack Dashboard)
* Neutron (OpenStack Networking)
* Cinder (OpenStack Block Storage service)
* **OpenStack Core Integrated Projects**
* Havana Release 2013.2.2
* Ceilometer (OpenStack Telemetry)
* Heat (OpenStack Orchestration)
* **OpenStack Related Projects**
* Savanna v0.3
* Murano v0.4.1
* **Hypervisor**
* KVM
* QEMU
* **Open vSwitch** 1.10.2 (CentOS), 1.10.1 (Ubuntu)
* **HA Proxy** 1.4.24
* **Galera** 23.2.2
* **RabbitMQ** 2.8.7
* **Pacemaker** 1.1.10
* **Corosync** 1.4.6
* **Keepalived** 1.2.4
* **Ceph Dumpling** (v0.67.5)
* **MySQL** (v5.5.28)

View File

@@ -0,0 +1,20 @@
Fuel Installation Procedures
----------------------------
You must complete the following tasks to use Fuel to deploy OpenStack
clouds:
1. Install the Fuel Master Node on physical or virtual hardware using
the Fuel installation image.
2. Set the other nodes to boot from the network and power them on
so that they are accessible to the Fuel Master node.
3. Assign your desired roles to the discovered nodes using the Fuel
UI or CLI.
Fuel is designed to maintain the OpenStack environment while providing
the flexibility to adapt it to your configuration.
.. image:: /_images/how-it-works.*
:width: 80%
:align: center

View File

@@ -1,323 +1,12 @@
.. index:: Prerequisites
.. _Prerequisites:
Prerequisites
===========================
The amount of hardware depends on your deployment requirements.
When you plan your OpenStack environment, consider the following:
* **CPU**
Depends on the number of virtual machines that you plan to deploy
in your cloud environment and the CPU per virtual machine.
* **Memory**
Depends on the amount of RAM assigned per virtual machine and the
controller node.
* **Storage**
Depends on the local drive space per virtual machine, remote volumes
that can be attached to a virtual machine, and object storage.
* **Networking**
Depends on the OpenStack architecture, network bandwidth per virtual
machine, and network storage.
Example of Hardware Requirements Calculation
-----------------------------------------------
When you calculate resources for your OpenStack environment, consider
the resources required for expanding your environment.
The example described in this section presumes that your environment
has the following prerequisites:
* 100 virtual machines
* 2 x Amazon EC2 compute units 2 GHz average
* 16 x Amazon EC2 compute units 16 GHz maximum
.. seealso:: `Fuel Hardware Calculator <https://www.mirantis.com/openstack-services/bom-calculator/>`_
Calculating CPU
----------------
Use the following formula to calculate the number of CPU cores per virtual machine::
max GHz / (number of GHz per core x 1.3 for hyper-threading)
Example::
16 GHz / (2.4 x 1.3) = 5.12
Therefore, you must assign at least 5 CPU cores per virtual machine.
Use the following formula to calculate the total number of CPU cores::
(number of VMs x number of GHz per VM) / number of GHz per core
Example::
(100 VMs * 2 GHz per VM) / 2.4 GHz per core = 84
Therefore, the total number of CPU cores for 100 virtual machines is 84.
Depending on the selected CPU you can calculate the required number of sockets.
Use the following formula::
total number of CPU cores / number of cores per socket
For example, if you use an Intel E5 2650-70 8-core CPU::
84 / 8 = 11
Therefore, you need 11 sockets. To calculate the number of servers required for your deployment, use the following formula::
total number of sockets / number of sockets per server
Round the number of sockets to an even number to get 12 sockets. Use the following formula::
12 / 2 = 6
Therefore, you need 6 dual socket servers.
You can calculate the number of virtual machines per server using the following formula::
number of virtual machines / number of servers
Example::
100 / 6 = 16.6
Therefore, you can deploy 17 virtual machines per server.
Using this calculation, you can add additional servers accounting for 17 virtual machines per server.
The calculation presumes the following conditions:
* No CPU oversubscription
* If you use hyper-threading, count each core as 1.3, not 2.
* The CPU supports the technologies required for your deployment
Calculating Memory
--------------------
Continuing the example from the previous section, we need to determine
how much RAM is required to support 17 VMs per server. Let's assume that
you need an average of 4 GB of RAM per VM, with dynamic allocation for up to
12 GB per VM. If all VMs use their full 12 GB of RAM, each server must have
204 GB of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate
core OS operations as well as RAM for each VM container (not the RAM allocated
to each VM, but the memory the core OS uses to run the VM). The node's OS must
run its own operations, schedule processes, allocate dynamic resources, and
handle network operations, so giving the node itself at least 16 GB of RAM
is not unreasonable.
Considering that the RAM we would consider for servers comes in 4 GB, 8 GB, 16 GB
and 32 GB sticks, we would need a total of 256 GBs of RAM installed per server.
For an average 2-CPU socket server board you get 16-24 RAM slots. To have
256 GBs installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM
needs for up to 17 VMs requiring dynamic allocation up to 12 GBs and to support
all core OS requirements.
You can adjust this calculation based on your needs.
Calculating Storage
--------------------
When it comes to disk space there are several types that you need to consider:
* Ephemeral (the local drive space for a VM)
* Persistent (the remote volumes that can be attached to a VM)
* Object Storage (such as images or other objects)
As far as the local drive space that must reside on the compute nodes, in our
example of 100 VMs we make the following assumptions:
* 150 GB local storage per VM
* 15 TB total of local storage (100 VMs * 150 GB per VM)
* 500 GB of persistent volume storage per VM
* 50 TB total persistent storage
Returning to our already established example, we need to figure out how much
storage to install per server. This storage will service the 17 VMs per server.
If we are assuming 150 GB of storage for each VM's drive container, then we would
need to install about 2.5 TB of storage on the server. Since most servers have
anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on
the server form factor (i.e., 2U vs. 4U), you will need to consider how the storage
will be impacted by the intended use.
If storage impact is not expected to be significant, then you may consider using
unified storage. For this example a single 3 TB drive would provide more than
enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even
consider installing two or three 3 TB drives and configuring a RAID-1 or RAID-5 array
for redundancy. If speed is critical, however, you will likely want to have a
single hardware drive for each VM. In this case you would likely look at a 3U
form factor with 24 slots.
Don't forget that you will also need drive space for the node itself, and don't
forget to order the correct backplane that supports the drive configuration
that meets your needs. Using our example specifications and assuming that speed
is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM
146 GB SAS drives.
Throughput
++++++++++
As for throughput, that's going to depend on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
in the server / VMs per server), but the actual drive IOPS will depend on the
drive technology you choose. For example:
* 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
* 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
* 2.5" 15K (200 IOPS, four 600 GB drive, RAID-10)
* 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
* SSD (40K IOPS, eight 300 GB drive, RAID-10)
* 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
Clearly, SSD gives you the best performance, but the difference in cost between
SSDs and the less costly platter-based solutions is going to be significant, to
say the least. The acceptable cost burden is determined by the balance between
your budget and your performance and redundancy needs. It is also important to
note that the rules for redundancy in a cloud environment are different from those
of a traditional server installation, in that entire servers provide redundancy as
opposed to making a single server instance redundant.
In other words, the weight for redundant components shifts from individual OS
installation to server redundancy. It is far more critical to have redundant
power supplies and hot-swappable CPUs and RAM than to have redundant compute
node storage. If, for example, you have 18 drives installed on a server, with
17 drives allocated one per VM, and one of those drives fails, you simply
replace the drive and push a new node copy. The remaining VMs carry whatever
additional load is present due to the temporary loss of one node.
Remote storage
++++++++++++++
IOPS will also be a factor in determining how you plan to handle persistent
storage. For example, consider these options for laying out your 50 TB of remote
volume space:
* 12 drive storage frame using 3 TB 3.5" drives mirrored
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per frame)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
* 24 drive storage frame using 1 TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per frame)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your environment.
Object storage
++++++++++++++
When it comes to object storage, you will find that you need more space than
you think. For instance, this example specifies 50 TB of object storage.
Object storage uses a default of 3 times the required space for replication,
which means you will need 150 TB. However, to accommodate two hand-off zones,
you will need 5 times the required space, which actually means 250 TB.
The calculations don't end there. You don't ever want to run out of space, so
"full" should really be more like 75% of capacity, which means you will need a
total of 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start
with a happy medium of a multiplier of 4, then acquire more hardware as your
drives begin to fill up. That calculates to 200 TB in our example. So how do
you put that together? If you were to use 3 TB 3.5" drives, you could use a 12
drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB
each, but this is not recommended due to the high impact that a single frame
failure would have on replication and capacity.
Calculating Network
--------------------
Perhaps the most complex part of designing an OpenStack environment is the
networking.
An OpenStack environment can involve multiple networks even beyond the Public,
Private, and Internal networks. Your environment may involve tenant networks,
storage networks, multiple tenant private networks, and so on. Many of these
will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.
In terms of the example network, consider these assumptions:
* 100 Mbits/second per VM
* HA architecture
* Network Storage is not latency sensitive
In order to achieve this, you can use two 1 Gb links per server (2 x 1000
Mbits/second / 17 VMs = 118 Mbits/second).
Using two links also helps with HA. You can also increase throughput and
decrease latency by using two 10 Gb links, bringing the bandwidth per VM to
1 Gb/second, but if you're going to do that, you've got one more factor to
consider.
Scalability and oversubscription
++++++++++++++++++++++++++++++++
It is one of the ironies of networking that 1 Gb Ethernet generally scales
better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly
available. It's possible to aggregate the 1 Gb links in a 48 port switch, so
that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a
10 Gb switch, however, and you have 48 x 10 Gb links down and 4 x 40 Gb links up,
resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great
extent with careful planning. Problems only arise when you are moving between
racks, so plan to create "pods", each of which includes both storage and
compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
Hardware for this example
+++++++++++++++++++++++++
In this example, you are looking at:
* 2 data switches (for HA), each with a minimum of 12 ports for data
(2 x 1 Gb links per server x 6 servers)
* 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port
switches. Also, as your network grows, you will need to consider uplinks and
aggregation switches.
Summary
----------
In general, your best bet is to choose a 2 socket server with a balance in I/O,
CPU, Memory, and Disk that meets your project requirements.
Look for 1U R-class or 2U high-density C-class servers. Some good options
from Dell for compute nodes include:
* Dell PowerEdge R620
* Dell PowerEdge C6220 Rack Server
* Dell PowerEdge R720XD (for high disk or IOPS requirements)
You may also want to consider systems from HP (http://www.hp.com/servers) or
from a smaller systems builder like Aberdeen, a manufacturer that specializes
in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).
.. include:: /pages/install-guide/0010-prerequisites/0000-hdwr-sizing-intro.rst
.. include:: /pages/install-guide/0010-prerequisites/0200-ex-hardware-calculations.rst
.. include:: /pages/install-guide/0010-prerequisites/0300-cpu-hardware-sizing.rst
.. include:: /pages/install-guide/0010-prerequisites/0400-memory-hardware-sizing.rst
.. include:: /pages/install-guide/0010-prerequisites/0500-storage-memory-hardware-sizing.rst
.. include:: /pages/install-guide/0010-prerequisites/0530-throughput-hardware-sizing.rst
.. include:: /pages/install-guide/0010-prerequisites/0545-remote-storage-hardware.rst
.. include:: /pages/install-guide/0010-prerequisites/0547-object-storage-hardware.rst
.. include:: /pages/install-guide/0010-prerequisites/0600-network-hardware-sizing.rst
.. include:: /pages/install-guide/0010-prerequisites/0630-scalability.rst
.. include:: /pages/install-guide/0010-prerequisites/0640-hardware-ex.rst
.. include:: /pages/install-guide/0010-prerequisites/0700-hardware-summary.rst

View File

@@ -0,0 +1,32 @@
.. index:: Prerequisites
.. _Prerequisites:
Prerequisites
=============
The amount of hardware depends on your deployment requirements.
When you plan your OpenStack environment, consider the following:
* **CPU**
Depends on the number of virtual machines that you plan to deploy
in your cloud environment and the CPU per virtual machine.
* **Memory**
Depends on the amount of RAM assigned per virtual machine and the
controller node.
* **Storage**
Depends on the local drive space per virtual machine, remote volumes
that can be attached to a virtual machine, and object storage.
* **Networking**
Depends on the OpenStack architecture, network bandwidth per virtual
machine, and network storage.

View File

@@ -0,0 +1,15 @@
Example of Hardware Requirements Calculation
--------------------------------------------
When you calculate resources for your OpenStack environment, consider
the resources required for expanding your environment.
The example described in this section presumes that your environment
has the following prerequisites:
* 100 virtual machines
* 2 x Amazon EC2 compute units 2 GHz average
* 16 x Amazon EC2 compute units 16 GHz maximum
.. seealso:: `Fuel Hardware Calculator <https://www.mirantis.com/openstack-services/bom-calculator/>`_

View File

@@ -0,0 +1,64 @@
Calculating CPU
----------------
Use the following formula to calculate the number of CPU cores per virtual machine::
max GHz / (number of GHz per core x 1.3 for hyper-threading)
Example::
16 GHz / (2.4 x 1.3) = 5.12
Therefore, you must assign at least 5 CPU cores per virtual machine.
Use the following formula to calculate the total number of CPU cores::
(number of VMs x number of GHz per VM) / number of GHz per core
Example::
(100 VMs * 2 GHz per VM) / 2.4 GHz per core = 84
Therefore, the total number of CPU cores for 100 virtual machines is 84.
Depending on the selected CPU you can calculate the required number of sockets.
Use the following formula::
total number of CPU cores / number of cores per socket
For example, if you use an Intel E5 2650-70 8-core CPU::
84 / 8 = 11
Therefore, you need 11 sockets.
To calculate the number of servers required for your deployment,
use the following formula::
total number of sockets / number of sockets per server
Round the number of sockets to an even number to get 12 sockets.
Use the following formula::
12 / 2 = 6
Therefore, you need 6 dual socket servers.
You can calculate the number of virtual machines per server using the following formula::
number of virtual machines / number of servers
Example::
100 / 6 = 16.6
Therefore, you can deploy 17 virtual machines per server.
Using this calculation, you can add additional servers accounting for 17 virtual machines per server.
The calculation presumes the following conditions:
* No CPU oversubscription
* If you use hyper-threading, count each core as 1.3, not 2.
* The CPU supports the technologies required for your deployment
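The sizing arithmetic above is easy to script. Below is a minimal sketch of the
CPU calculation in Python, using the example figures from this section (the 1.3
hyper-threading factor, 2.4 GHz cores, 8-core CPUs, and dual-socket servers are
assumptions taken from the example, not fixed requirements)::
import math
vms = 100                 # number of virtual machines
avg_ghz_per_vm = 2        # average GHz per VM
max_ghz_per_vm = 16       # maximum GHz per VM
ghz_per_core = 2.4        # GHz per physical core
ht_factor = 1.3           # hyper-threading factor
cores_per_socket = 8      # example 8-core CPU
sockets_per_server = 2    # dual-socket servers
cores_per_vm = max_ghz_per_vm / (ghz_per_core * ht_factor)              # 5.12 -> about 5 cores per VM
total_cores = math.ceil(vms * avg_ghz_per_vm / ghz_per_core)            # 84 cores in total
sockets = math.ceil(total_cores / cores_per_socket)                     # 11 sockets
sockets = math.ceil(sockets / sockets_per_server) * sockets_per_server  # rounded up to 12
servers = sockets // sockets_per_server                                 # 6 dual-socket servers
vms_per_server = math.ceil(vms / servers)                               # 17 VMs per server
print(cores_per_vm, total_cores, sockets, servers, vms_per_server)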

View File

@@ -0,0 +1,24 @@
Calculating Memory
------------------
Continuing the example from the previous section, we need to determine
how much RAM is required to support 17 VMs per server. Let's assume that
you need an average of 4 GB of RAM per VM, with dynamic allocation for up to
12 GB per VM. If all VMs use their full 12 GB of RAM, each server must have
204 GB of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate
core OS operations as well as RAM for each VM container (not the RAM allocated
to each VM, but the memory the core OS uses to run the VM). The node's OS must
run its own operations, schedule processes, allocate dynamic resources, and
handle network operations, so giving the node itself at least 16 GB of RAM
is not unreasonable.
Considering that the RAM we would consider for servers comes in 4 GB, 8 GB, 16 GB
and 32 GB sticks, we would need a total of 256 GBs of RAM installed per server.
For an average 2-CPU socket server board you get 16-24 RAM slots. To have
256 GBs installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM
needs for up to 17 VMs requiring dynamic allocation up to 12 GBs and to support
all core OS requirements.
You can adjust this calculation based on your needs.
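As a quick cross-check of these numbers, here is a minimal sketch in Python (the
17 VMs per server and the 16 GB reserved for the host OS come from the example
above; the 16 GB DIMM size is an assumption)::
import math
vms_per_server = 17
max_ram_per_vm_gb = 12    # dynamic allocation ceiling per VM
host_os_ram_gb = 16       # RAM reserved for the node's own OS
dimm_size_gb = 16         # assumed DIMM size
vm_ram_gb = vms_per_server * max_ram_per_vm_gb    # 204 GB for the VMs alone
needed_gb = vm_ram_gb + host_os_ram_gb            # 220 GB minimum per server
dimms = math.ceil(needed_gb / dimm_size_gb)       # 14 x 16 GB DIMMs = 224 GB
# In practice you round up to a convenient configuration, e.g. sixteen
# 16 GB DIMMs = 256 GB, as described above.
print(vm_ram_gb, needed_gb, dimms)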

View File

@@ -0,0 +1,38 @@
Calculating Storage
--------------------
When it comes to disk space there are several types that you need to consider:
* Ephemeral (the local drive space for a VM)
* Persistent (the remote volumes that can be attached to a VM)
* Object Storage (such as images or other objects)
As far as the local drive space that must reside on the compute nodes, in our
example of 100 VMs we make the following assumptions:
* 150 GB local storage per VM
* 15 TB total of local storage (100 VMs * 150 GB per VM)
* 500 GB of persistent volume storage per VM
* 50 TB total persistent storage
Returning to our already established example, we need to figure out how much
storage to install per server. This storage will service the 17 VMs per server.
If we are assuming 150 GB of storage for each VM's drive container, then we would
need to install about 2.5 TB of storage on the server. Since most servers have
anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on
the server form factor (i.e., 2U vs. 4U), you will need to consider how the storage
will be impacted by the intended use.
If storage impact is not expected to be significant, then you may consider using
unified storage. For this example a single 3 TB drive would provide more than
enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even
consider installing two or three 3 TB drives and configuring a RAID-1 or RAID-5 array
for redundancy. If speed is critical, however, you will likely want to have a
single hardware drive for each VM. In this case you would likely look at a 3U
form factor with 24 slots.
Don't forget that you will also need drive space for the node itself, and don't
forget to order the correct backplane that supports the drive configuration
that meets your needs. Using our example specifications and assuming that speed
is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM
146 GB SAS drives.
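A minimal sketch of the ephemeral and persistent storage arithmetic, using the
figures from this example (150 GB ephemeral and 500 GB persistent per VM)::
vms = 100
vms_per_server = 17
local_gb_per_vm = 150         # ephemeral disk per VM
persistent_gb_per_vm = 500    # remote volume space per VM
total_local_tb = vms * local_gb_per_vm / 1000                    # 15 TB across the environment
local_per_server_tb = vms_per_server * local_gb_per_vm / 1000    # ~2.55 TB per compute node
total_persistent_tb = vms * persistent_gb_per_vm / 1000          # 50 TB of persistent volumes
print(total_local_tb, local_per_server_tb, total_persistent_tb)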

View File

@@ -0,0 +1,35 @@
Throughput
++++++++++
As for throughput, that's going to depend on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
in the server / VMs per server), but the actual drive IOPS will depend on the
drive technology you choose. For example:
* 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
* 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
* 2.5" 15K (200 IOPS, four 600 GB drive, RAID-10)
* 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
* SSD (40K IOPS, eight 300 GB drive, RAID-10)
* 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
Clearly, SSD gives you the best performance, but the difference in cost between
SSDs and the less costly platter-based solutions is going to be significant, to
say the least. The acceptable cost burden is determined by the balance between
your budget and your performance and redundancy needs. It is also important to
note that the rules for redundancy in a cloud environment are different from those
of a traditional server installation, in that entire servers provide redundancy as
opposed to making a single server instance redundant.
In other words, the weight for redundant components shifts from individual OS
installation to server redundancy. It is far more critical to have redundant
power supplies and hot-swappable CPUs and RAM than to have redundant compute
node storage. If, for example, you have 18 drives installed on a server, with
17 drives allocated one per VM, and one of those drives fails, you simply
replace the drive and push a new node copy. The remaining VMs carry whatever
additional load is present due to the temporary loss of one node.
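The per-VM IOPS figures above follow directly from the packing-density formula;
here is a minimal sketch (the read/write split assumes a mirror-style write
penalty of 2, as in the examples)::
def iops_per_vm(drive_iops, drives, vms_per_server, write_penalty=2):
    # packing density: drive IOPS * drives in the server / VMs per server
    read_iops = drive_iops * drives / vms_per_server
    write_iops = read_iops / write_penalty
    return read_iops, write_iops
print(iops_per_vm(100, 2, 17))       # ~12 read / ~6 write   (3.5" mirrored pair)
print(iops_per_vm(200, 4, 17))       # ~48 read / ~24 write  (2.5" 15K, RAID-10)
print(iops_per_vm(40000, 8, 17))     # ~19K read / ~9.5K write (SSD, RAID-10)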

View File

@@ -0,0 +1,23 @@
Remote storage
++++++++++++++
IOPS will also be a factor in determining how you plan to handle persistent
storage. For example, consider these options for laying out your 50 TB of remote
volume space:
* 12 drive storage frame using 3 TB 3.5" drives mirrored
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per frame)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
* 24 drive storage frame using 1 TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per frame)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your environment.
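The same packing-density idea applies per storage frame; a minimal sketch of the
two layouts above (mirroring halves the raw capacity, and write IOPS again assume
a mirror penalty of 2)::
import math
def frame_layout(slots, drive_tb, drive_iops, needed_tb=50, vms=100):
    usable_tb = slots * drive_tb / 2            # mirrored, so half the raw capacity
    frames = math.ceil(needed_tb / usable_tb)   # frames needed for the volume space
    frame_read_iops = slots * drive_iops        # per frame
    per_vm_read = frames * frame_read_iops / vms
    return frames, per_vm_read, per_vm_read / 2
print(frame_layout(12, 3, 100))    # 3 frames, ~36 read / ~18 write IOPS per VM
print(frame_layout(24, 1, 100))    # 5 frames, ~120 read / ~60 write IOPS per VM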

View File

@@ -0,0 +1,21 @@
Object storage
++++++++++++++
When it comes to object storage, you will find that you need more space than
you think. For instance, this example specifies 50 TB of object storage.
Object storage uses a default of 3 times the required space for replication,
which means you will need 150 TB. However, to accommodate two hand-off zones,
you will need 5 times the required space, which actually means 250 TB.
The calculations don't end there. You don't ever want to run out of space, so
"full" should really be more like 75% of capacity, which means you will need a
total of 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start
with a happy medium of a multiplier of 4, then acquire more hardware as your
drives begin to fill up. That calculates to 200 TB in our example. So how do
you put that together? If you were to use 3 TB 3.5" drives, you could use a 12
drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB
each, but this is not recommended due to the high impact that a single frame
failure would have on replication and capacity.

View File

@@ -0,0 +1,25 @@
Calculating Network
--------------------
Perhaps the most complex part of designing an OpenStack environment is the
networking.
An OpenStack environment can involve multiple networks even beyond the Public,
Private, and Internal networks. Your environment may involve tenant networks,
storage networks, multiple tenant private networks, and so on. Many of these
will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.
In terms of the example network, consider these assumptions:
* 100 Mbits/second per VM
* HA architecture
* Network Storage is not latency sensitive
In order to achieve this, you can use two 1 Gb links per server (2 x 1000
Mbits/second / 17 VMs = 118 Mbits/second).
Using two links also helps with HA. You can also increase throughput and
decrease latency by using two 10 Gb links, bringing the bandwidth per VM to
1 Gb/second, but if you're going to do that, you've got one more factor to
consider.
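A minimal sketch of the bandwidth-per-VM estimate above::
vms_per_server = 17
links_per_server = 2
def mbit_per_vm(link_mbit):
    return links_per_server * link_mbit / vms_per_server
print(mbit_per_vm(1000))     # ~118 Mbit/s per VM with two 1 Gb links
print(mbit_per_vm(10000))    # ~1176 Mbit/s (about 1 Gb/s) per VM with two 10 Gb links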

View File

@@ -0,0 +1,14 @@
Scalability and oversubscription
++++++++++++++++++++++++++++++++
It is one of the ironies of networking that 1 Gb Ethernet generally scales
better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly
available. It's possible to aggregate the 1 Gb links in a 48 port switch, so
that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a
10 Gb switch, however, and you have 48 x 10 Gb links down and 4 x 40 Gb links up,
resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great
extent with careful planning. Problems only arise when you are moving between
racks, so plan to create "pods", each of which includes both storage and
compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.

View File

@@ -0,0 +1,13 @@
Hardware for this example
+++++++++++++++++++++++++
In this example, you are looking at:
* 2 data switches (for HA), each with a minimum of 12 ports for data
(2 x 1 Gb links per server x 6 servers)
* 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port
switches. Also, as your network grows, you will need to consider uplinks and
aggregation switches.

View File

@@ -0,0 +1,15 @@
Summary
----------
In general, your best bet is to choose a 2 socket server with a balance in I/O,
CPU, Memory, and Disk that meets your project requirements.
Look for 1U R-class or 2U high-density C-class servers. Some good options
from Dell for compute nodes include:
* Dell PowerEdge R620
* Dell PowerEdge C6220 Rack Server
* Dell PowerEdge R720XD (for high disk or IOPS requirements)
You may also want to consider systems from HP (http://www.hp.com/servers) or
from a smaller systems builder like Aberdeen, a manufacturer that specializes
in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).

View File

@@ -1,44 +1,2 @@
.. raw:: pdf
PageBreak
.. index:: Download Fuel
Before You Download and Install Fuel
====================================
Before downloading and installing Fuel:
- Be sure that your hardware configuration is adequate;
check the `Prerequisites information <http://docs.mirantis.com/fuel/fuel-4.1/install-guide.html#prerequisites>`_.
- Understand the network architecture and define your network configuration;
see `Understanding and Configuring the Network <http://docs.mirantis.com/fuel/fuel-4.1/install-guide.html#understanding-and-configuring-the-network>`_
and `Network Architecture <http://docs.mirantis.com/fuel/fuel-4.0/reference-architecture.html#network-architecture>`_.
Downloading the installable image
=================================
1. Go to the
`Mirantis Software Download Page <http://software.mirantis.com/>`_
and fill out the information to opt in to the Mirantis community.
2. You should receive a response from Mirantis within the hour;
this mail includes credentials you can use to download Fuel.
If you do not receive the acknowledgement from Mirantis,
write to: sw-access@mirantis.com.
Fuel provides the following installation options:
* **ISO image**
Use as a file to install the virtualized deployment;
burn to DVD or upload to your IPMI server to install Fuel on bare metal.
* **Raw sector file (IMG)**
Write to a USB flash drive
that can be used to install Fuel on bare metal.
Both installation images contain the installer for the Fuel Master node.
.. seealso:: `Downloads <http://fuel.mirantis.com/your-downloads/>`_
.. include:: /pages/install-guide/0060-download-fuel/0100-before-download.rst
.. include:: /pages/install-guide/0060-download-fuel/0200-download.rst

View File

@@ -0,0 +1,17 @@
.. raw:: pdf
PageBreak
.. index:: Download Fuel
Before You Download and Install Fuel
====================================
Before downloading and installing Fuel:
- Be sure that your hardware configuration is adequate;
check the `Prerequisites information <http://docs.mirantis.com/fuel/fuel-4.1/install-guide.html#prerequisites>`_.
- Understand the network architecture and define your network configuration;
see `Understanding and Configuring the Network <http://docs.mirantis.com/fuel/fuel-4.1/install-guide.html#understanding-and-configuring-the-network>`_
and `Network Architecture <http://docs.mirantis.com/fuel/fuel-4.0/reference-architecture.html#network-architecture>`_.

View File

@@ -0,0 +1,30 @@
Downloading the installable image
=================================
1. Go to the
`Mirantis Software Download Page <http://software.mirantis.com/>`_
and fill out the information to opt in to the Mirantis community.
2. You should receive a response from Mirantis within the hour;
this mail includes credentials you can use to download Fuel.
If you do not receive the acknowledgement from Mirantis,
write to: sw-access@mirantis.com.
Fuel provides the following installation options:
* **ISO image**
Use as a file to install the virtualized deployment;
burn to DVD or upload to your IPMI server to install Fuel on bare metal.
* **Raw sector file (IMG)**
Write to a USB flash drive
that can be used to install Fuel on bare metal.
Both installation images contain the installer for the Fuel Master node.
.. seealso:: `Downloads <http://fuel.mirantis.com/your-downloads/>`_

View File

@@ -0,0 +1,10 @@
.. include:: /pages/install-guide/0070-networks/0100-understand-config-network.rst
.. include:: /pages/install-guide/0070-networks/0150-flatdhcp-multihost.rst
.. include:: /pages/install-guide/0070-networks/0170-flatdhcp-single-interface.rst
.. include:: /pages/install-guide/0070-networks/0190-vlanmanager.rst
.. include:: /pages/install-guide/0070-networks/0200-fuel-deployment-schema.rst
.. include:: /pages/install-guide/0070-networks/0210-config-network.rst
.. include:: /pages/install-guide/0070-networks/0220-map-logical-to-physical-nic.rst
.. include:: /pages/install-guide/0070-networks/0300-switch-config.rst
.. include:: /pages/install-guide/0070-networks/0350-router.rst
.. include:: /pages/install-guide/0070-networks/0400-access-to-public-net.rst

View File

@@ -0,0 +1,25 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Configuration
.. _fuelui-network:
Understanding and Configuring the Network
=========================================
.. contents::
   :local:
OpenStack environments use several types of network managers: FlatDHCPManager,
VLANManager (Nova Network) and Neutron (formerly Quantum). All configurations
are supported. For more information about how the network managers work, you
can read the following resources:
* `OpenStack Networking FlatManager and FlatDHCPManager
<http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/>`_
* `OpenStack Networking for Scalability and Multi-tenancy with VLANManager
<http://www.mirantis.com/blog/openstack-networking-vlanmanager/>`_
* `Neutron - OpenStack <https://wiki.openstack.org/wiki/Neutron/>`_

View File

@@ -0,0 +1,31 @@
FlatDHCPManager (multi-host scheme)
-----------------------------------
The main idea behind FlatManager is to configure a bridge
(i.e., **br100**) on every Compute node and have one of the machine's physical
interfaces connect to it. Once a virtual machine is launched, its virtual
interface connects to that bridge as well.
The same L2 segment is used for all OpenStack projects, which means that there
is no L2 isolation between virtual hosts, even if they are owned by separate
projects. Additionally, there is only one flat IP pool defined for the entire
environment. For this reason, it is called the *Flat* manager.
The simplest case is shown in the following diagram. Here the *eth1*
interface is used to give network access to virtual machines, while the *eth0*
interface is the management network interface.
.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
:align: center
Fuel deploys OpenStack in FlatDHCP mode with the **multi-host**
feature enabled. Without this feature enabled, network traffic from each VM
would go through the single gateway host, which inevitably creates a single
point of failure. In **multi-host** mode, each Compute node becomes a gateway
for all the VMs running on the host, providing a balanced networking solution.
In this case, if one of the Compute nodes goes down, the rest of the environment
remains operational.
The current version of Fuel uses VLANs, even for the FlatDHCP network
manager. On the Linux host, it is implemented in such a way that it is not
the physical network interface that connects to the bridge, but the
VLAN interface (i.e., *eth0.102*).

View File

@@ -0,0 +1,15 @@
FlatDHCPManager (single-interface scheme)
-----------------------------------------
.. image:: /_images/flatdhcpmanager-sh_scheme.jpg
:width: 100%
:align: center
In order for FlatDHCPManager to work, the designated switch port to which each
Compute node is connected must be configured as a tagged (trunk) port
with the required VLANs allowed (enabled, tagged). Virtual machines will
communicate with each other on L2 even if they are on different Compute nodes.
If the virtual machine sends IP packets to a different network, they will be
routed on the host machine according to the routing table. The default route
will point to the gateway specified on the networks tab in the UI as the
gateway for the Public network.

View File

@@ -0,0 +1,19 @@
.. raw:: pdf
PageBreak
VLANManager
------------
VLANManager mode is more suitable for large scale clouds. The idea behind
this mode is to separate groups of virtual machines owned by different
projects into separate and distinct L2 networks. In VLANManager, this is done
by tagging Ethernet frames with a VLAN dedicated to the given project. This allows virtual machines
inside the given project to communicate with each other and not to see any
traffic from VMs of other projects. Again, like with FlatDHCPManager, switch
ports must be configured as tagged (trunk) ports to allow this scheme to work.
.. image:: /_images/vlanmanager_scheme.jpg
:width: 100%
:align: center

View File

@@ -0,0 +1,14 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Deployment Schema
Fuel Deployment Schema
======================
With VLAN tagging on a physical interface, OpenStack Compute nodes untag the IP
packets and send them to the appropriate VMs. Fuel simplifies the configuration
of VLANManager and adds no known limitations in this particular
networking mode.

View File

@@ -0,0 +1,32 @@
Configuring the network
-----------------------
Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment
accordingly. The diagram below shows an example configuration.
.. image:: /_images/physical-network.png
:width: 100%
:align: center
Fuel operates with the following logical networks:
**Fuel** network
Used only for internal Fuel communications and PXE booting (untagged on the scheme);
**Public** network
Used to give virtual machines access to the outside world, such as the Internet or an office
network (VLAN 101 on the scheme);
**Floating** network
Used to provide access to virtual machines from the outside (shares the L2 interface with
the Public network; in this case it's VLAN 101);
**Management** network
Used for internal OpenStack communications (VLAN 100 on the scheme);
**Storage** network
Used for storage traffic (VLAN 102 on the scheme);
**Fixed** network
One (for flat mode) or more (for VLAN mode) virtual machine
networks (VLANs 103-200 on the scheme).

View File

@@ -0,0 +1,21 @@
Mapping logical networks to physical interfaces on servers
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fuel allows you to use different physical interfaces to handle different
types of traffic. When a node is added to the environment, click at the bottom
line of the node icon. In the detailed information window, click the "Configure
Interfaces" button to open the physical interfaces configuration screen.
.. image:: /_images/network_settings.jpg
:align: center
:width: 100%
On this screen you can drag-and-drop logical networks to physical interfaces
according to your network setup.
All networks are presented on the screen, except the Fuel network.
It runs on the physical interface from which the node was initially PXE booted,
and in the current version it is not possible to map it to any other physical
interface. Also, once the network is configured and OpenStack is deployed,
you may not modify network settings, even to move a logical network to another
physical interface or VLAN number.

View File

@@ -0,0 +1,74 @@
Switch
++++++
Fuel can configure hosts; however, switch configuration is still a manual task.
Unfortunately the set of configuration steps, and even the terminology used,
is different for different vendors, so we will try to provide
vendor-agnostic information on how traffic should flow and leave the
vendor-specific details to you. We will provide an example for a Cisco switch.
First of all, you should configure access ports to allow non-tagged PXE booting
connections from all Slave nodes to the Fuel node. We refer to this network
as the Fuel network.
By default, the Fuel Master node uses the `eth0` interface to serve PXE
requests on this network, but this can be changed :ref:`during installation
<Network_Install>` of the Fuel Master node.
So if that's left unchanged, you have to set the switch port for `eth0` of Fuel
Master node to access mode.
We recommend that you use the `eth0` interfaces of all other nodes for PXE booting
as well. Corresponding ports must also be in access mode.
Taking into account that this is the network for PXE booting, do not mix
this L2 segment with any other network segments. Fuel runs a DHCP
server, and if there is another DHCP server on the same L2 network segment, both the
company's infrastructure and Fuel's will be unable to function properly.
You also need to configure each of the switch's ports connected to nodes as an
"STP Edge port" (or a "spanning-tree port fast trunk", according to Cisco
terminology). If you don't do that, DHCP timeout issues may occur.
As long as the Fuel network is configured, Fuel can operate.
Other networks are required for OpenStack environments, and currently all of
these networks live in VLANs over one or more physical interfaces on a
node. This means that the switch should pass tagged traffic, and untagging is done
on the Linux hosts.
.. note:: For the sake of simplicity, all the VLANs specified on the networks tab of
the Fuel UI should be configured on switch ports, pointing to Slave nodes,
as tagged.
Of course, it is possible to specify as tagged only certain ports for certain
nodes. However, in the current version, all existing networks are automatically
allocated to each node, regardless of its role.
The network check also verifies that tagged traffic passes, even for nodes that do
not require it (for example, Cinder nodes do not need Fixed network traffic).
This is enough to deploy the OpenStack environment. However, from a
practical standpoint, it's still not really usable because there is no
connection to other corporate networks yet. To make that possible, you must
configure uplink port(s).
One of the VLANs may carry the office network. To provide access to the Fuel Master
node from your network, any other free physical network interface on the
Fuel Master node can be used and configured according to your network
rules (static IP or DHCP). The same network segment can be used for
Public and Floating ranges. In this case, you must provide the corresponding
VLAN ID and IP ranges in the UI. One Public IP per node will be used to SNAT
traffic out of the VMs network, and one or more floating addresses per VM
instance will be used to get access to the VM from your network, or
even the global Internet. Making a VM visible from the Internet is similar to
making it visible from the corporate network - corresponding IP ranges and VLAN IDs
must be specified for the Floating and Public networks. One current limitation
of Fuel is that the user must use the same L2 segment for both Public and
Floating networks.
Example configuration for one of the ports on a Cisco switch::
interface GigabitEthernet0/6 # switch port
description s0_eth0 jv # description
switchport trunk encapsulation dot1q # enables VLANs
switchport trunk native vlan 262 # access port, untags VLAN 262
switchport trunk allowed vlan 100,102,104 # 100,102,104 VLANs are passed with tags
switchport mode trunk # To allow more than 1 VLAN on the port
spanning-tree portfast trunk # STP Edge port to skip network loop
# checks (to prevent DHCP timeout issues)
vlan 262,100,102,104 # Might be needed for enabling VLANs

View File

@@ -0,0 +1,16 @@
Router
++++++
To make it possible for VMs to access the outside world, you must have an IP
address set on a router in the Public network. In the examples provided,
that IP is 12.0.0.1 in VLAN 101.
Fuel UI has a special field on the networking tab for the gateway address. As
soon as deployment of OpenStack is started, the network on nodes is
reconfigured to use this gateway IP as the default gateway.
If Floating addresses are from another L3 network, then you have to configure the
IP address (or even multiple IPs if Floating addresses are from more than one L3
network) for them on the router as well.
Otherwise, Floating IPs on nodes will be inaccessible.

View File

@@ -0,0 +1,61 @@
.. _access_to_public_net:
Deployment configuration to access OpenStack API and VMs from host machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Helper scripts for VirtualBox create the network adapters `eth0`, `eth1`, `eth2`,
which are represented on the host machine as `vboxnet0`, `vboxnet1`, `vboxnet2`
respectively, and assign IP addresses to the adapters:
vboxnet0 - 10.20.0.1/24,
vboxnet1 - 172.16.1.1/24,
vboxnet2 - 172.16.0.1/24.
For the demo environment on VirtualBox, the first network adapter is used to run Fuel
network traffic, including PXE discovery.
To access Horizon and the OpenStack RESTful API via the Public network from the host machine,
you must have a route from your host to the Public IP address of the OpenStack Controller.
If access to a VM's Floating IP is required, you also need a route
to that Floating IP on the Compute host, where it is bound to the Public interface.
To make this configuration possible in the VirtualBox demo environment, the Public
network must run untagged. The image below shows the configuration of the
Public and Floating networks that makes this possible.
.. image:: /_images/vbox_public_settings.jpg
:align: center
:width: 100%
By default, the Public and Floating networks run on the first network interface.
You must change this, as shown in the image below. Make sure you change
it on every node.
.. image:: /_images/vbox_node_settings.jpg
:align: center
:width: 100%
If you use the default configuration in the VirtualBox scripts and follow the exact same
settings as in the images above, you should be able to access OpenStack Horizon via the
Public network after the installation.
If you want to enable Internet access for the VMs provisioned by OpenStack, you
have to configure NAT on the host machine so that packets reaching the `vboxnet1` interface
(per the OpenStack settings tab) can find their way out of the host.
For Ubuntu, the following command, executed on the host, can make this happen::
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE
To access VMs managed by OpenStack, you must assign them IP addresses from the
Floating IP range. When the OpenStack environment is deployed and a VM is provisioned there,
you have to associate one of the Floating IP addresses from the pool with this VM,
either in Horizon or via the Nova CLI. By default, OpenStack blocks all traffic to the VM.
To allow connectivity to the VM, you need to configure security groups.
This can be done in Horizon, or from the OpenStack Controller using the following commands::
. /root/openrc
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
IP ranges for the Public and Management networks (172.16.*.*) are defined in the ``config.sh``
script. If the default values don't fit your needs, you are free to change them, but do so
before installing the Fuel Master node.

View File

@@ -1,269 +1,11 @@
.. raw:: pdf
PageBreak
.. index:: Installing Fuel Master Node
Installing Fuel Master Node
===========================
.. contents::
   :local:
Fuel is distributed via both ISO and IMG images. Each contains an installer for
the Fuel Master node. The ISO image is used for CD media devices, iLO (HP) or
similar remote access systems. The IMG file is used for USB memory stick-based
installation.
Once installed, Fuel can be used to deploy and manage OpenStack environments.
It will assign IP addresses to the nodes, perform PXE boot and initial
configuration, and provision OpenStack nodes according to their roles in
the environment.
.. _Install_Bare-Metal:
Bare-Metal Environment
----------------------
To install Fuel on bare-metal hardware, you need to burn the provided ISO to
a writeable DVD or create a bootable USB stick. You would then begin the
installation process by booting from that media, very much like any other OS
install process.
Burning an ISO to optical media is a commonly supported function on all OSes.
On Linux, there are several programs available, such as `Brasero` or `Xfburn`,
two commonly pre-installed desktop applications. There are also
a number of programs for Windows, such as `ImgBurn <http://www.imgburn.com/>`_ and the
open source `InfraRecorder <http://infrarecorder.org/>`_.
Burning an ISO in Mac OS X is quite simple. Open `Disk Utility` from
`Applications > Utilities`, drag the ISO into the disk list on the left side
of the window and select it, insert blank DVD, and click `Burn`. If you prefer
a different utility, check out the open source `Burn
<http://burn-osx.sourceforge.net/Pages/English/home.html>`_.
Writing the ISO to a bootable USB stick, however, is an entirely different
matter. Canonical suggests `PenDriveLinux`, which is a GUI tool for Windows.
On Windows, you can write the installation image with a number of different
utilities. The following list links to some of the more popular ones and they
are all available at no cost:
- `Win32 Disk Imager <http://sourceforge.net/projects/win32diskimager/>`_.
- `ISOtoUSB <http://www.isotousb.com/>`_.
After the installation is complete, you will need to make your bare-metal nodes
available for your OpenStack environment. Attach them to the same L2 network
(broadcast domain) as the Master node, and configure them to automatically
boot via network. The UI will discover them and make them available for
installing OpenStack.
VirtualBox
----------
If you would like to evaluate Fuel on VirtualBox, you can take advantage of the
included set of scripts that create and configure all the required VMs for a
test environment, including the Master node and Slave nodes for OpenStack
itself. It is a simple, single-click installation.
.. note::
These scripts are not supported on Windows directly, but you can still test on
Windows VirtualBox by running scripts on Cygwin or by creating the VMs by yourself.
See :ref:`Install_Manual` for more details.
The requirements for running Fuel on VirtualBox are:
A host machine with Linux, Windows, or Mac OS. We recommend a 64-bit host OS.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04,
Ubuntu 12.10, Fedora 19, OpenSUSE 12.2/12.3, and Windows 7 x64 + Cygwin_x64.
VirtualBox 4.2.16 (or later) is required, along with the extension pack.
Both can be downloaded from `<http://www.virtualbox.org/>`_.
8 GB+ of RAM
Will support 4 VMs for a Multi-node OpenStack installation (1 Master node,
1 Controller node, 1 Compute node, 1 Cinder node) with VM RAM reduced to 1536 MB.
For a dedicated Cinder node, 768 MB of RAM is enough.
or
Will support 5 VMs for a Multi-node with HA OpenStack installation (1 Master
node, 3 combined Controller + Cinder nodes, 1 Compute node) with RAM reduced
to 1280 MB per VM.
Such a RAM amount per node is below the recommended requirements for HA
configurations (2048+ MB per controller) and may lead to unwanted issues.
.. _Install_Automatic:
Automatic Mode
++++++++++++++
When you unpack VirtualBox scripts, you will see the following
important files and folders:
`iso`
This folder needs to contain a single ISO image for Fuel. Once you have
downloaded the ISO from the portal, copy or move it into this directory.
`config.sh`
This file allows you to specify parameters used for automating Fuel
installation. For example, you can select how many virtual nodes to launch,
as well as how much memory, disk, and processing to allocate for each.
`launch.sh`
Once executed, this script will use the ISO image from the ``iso`` directory,
create a VM, mount the image, and automatically install the Fuel Master node.
After installation of the Master node, the script will create Slave nodes for
OpenStack and boot them via PXE from the Master node.
Finally, the script will give you the link to access the Web-based UI for the
Master node so you can start installation of an OpenStack environment.
.. _Install_Manual:
Manual Installation
+++++++++++++++++++
.. note::
The following steps are suitable only for setting up a vanilla OpenStack
environment for evaluation purposes.
If you cannot or would rather not run our helper scripts, you can still run
Fuel on VirtualBox by following these steps.
Master Node Deployment
^^^^^^^^^^^^^^^^^^^^^^
First, create the Master node VM.
1. Configure the host-only interface vboxnet0 in VirtualBox by going to
`File -> Preferences -> Network` and clicking the screwdriver icon.
* IP address: 10.20.0.1
* Network mask: 255.255.255.0
* DHCP Server: disabled
2. Create a VM for the Master node with the following parameters:
* OS Type: Linux
* Version: Ubuntu (64bit)
* RAM: 1536+ MB (2048+ MB recommended)
* HDD: 50 GB with dynamic disk expansion
3. Modify your VM settings:
* Network: Attach `Adapter 1` to `Host-only adapter` ``vboxnet0``
4. Power on the VM in order to start the installation. Choose your Fuel ISO
when prompted to select start-up disk.
5. Wait for the Welcome message with all the information needed to log in to the
Fuel UI.
Adding Slave Nodes
^^^^^^^^^^^^^^^^^^
Next, create Slave nodes where OpenStack needs to be installed.
1. Create 3 or 4 additional VMs, as needed, with the following parameters:
* OS Type: Linux, Version: Ubuntu (64bit)
* RAM: 1536+ MB (2048+ MB recommended)
* HDD: 50+ GB, with dynamic disk expansion
* Network 1: host-only interface vboxnet0, PCnet-FAST III device
2. Set Network as first in the boot order:
.. image:: /_images/vbox-image1.jpg
:align: center
3. Configure two or more network adapters on each VM (in order to use single network
adapter for each VM you should choose "Use VLAN Tagging" later in the Fuel UI):
.. image:: /_images/vbox-image2.jpg
:align: center
4. Open "advanced" collapse, and check following options:
* Promiscuous mode is a "Allow All"
* Adapter type is a "PCnet-FAST III"
* Cable connected is a On
.. _Network_Install:
Changing Network Parameters During Installation
-----------------------------------------------
The console-based Fuel Setup allows you to customize the Fuel Admin (PXE booting)
network. By default it uses ``10.20.0.0/24``, with the Master node at ``10.20.0.2``
and the gateway at ``10.20.0.1``.
In order to do so, press the <TAB> key on the very first installation screen
which says "Welcome to Fuel Installer!" and update the kernel option
``showmenu=no`` to ``showmenu=yes``. Alternatively, you can press a key to
start Fuel Setup during the first boot after installation.
Within Fuel Setup you can configure the following parameters:
* DHCP/Static configuration for each network interface
* Select interface for Fuel Admin network
* Define DHCP pool (bootstrap) and static range (installed nodes)
* Root password
* DNS options
The main function of this tool is to provide a simple way to configure Fuel for
your particular networking environment, while helping to detect errors early
so you need not waste time troubleshooting individual configuration files.
If you use the VirtualBox automated scripts to deploy Fuel, change the
`vm_master_ip` parameter in ``config.sh`` accordingly.
.. image:: /_images/fuel-menu-interfaces.jpg
:align: center
Use the arrow keys to navigate through the tool. Once you have made your
changes, go to Save & Quit.
Changing Network Parameters After Installation
----------------------------------------------
It is possible to run "fuelmenu" from a root shell on Fuel Master node after
deployment to make minor changes to network interfaces, DNS, and gateway. The
PXE settings, however, cannot be changed after deployment as it will lead to
deployment failure.
.. warning::
Once IP settings are set at the boot time for Fuel Master node, they
**should not be changed during the whole lifecycle of Fuel.**
PXE Booting Settings
--------------------
By default, `eth0` on the Fuel Master node serves PXE requests. If you are planning
to use another interface, you can configure this in :ref:`Network_Install`.
If you want to install Fuel on virtual machines, then you need to make sure
that dnsmasq on the Master node is configured to support the PXE client used by
your virtual machines. We enable the *dhcp-no-override* option because without it,
dnsmasq tries to move the ``PXE filename`` and ``PXE servername`` special fields
into DHCP options. Not all PXE implementations recognize those options, and
therefore they will not be able to boot. For example, libvirt in CentOS 6.4
uses the gPXE implementation by default, instead of the more advanced iPXE, and
therefore requires *dhcp-no-override*.
When Master Node Installation is Done
-------------------------------------
Once the Master node is installed, power on all slave nodes and log in to the
Fuel UI. The login prompt on the console of the master node will show you the
URL you need to use. The default address is http://10.20.0.2:8000/
Slave nodes will automatically boot into bootstrap mode (CentOS based Linux
in memory) via PXE and you will see notifications in the user interface about
discovered nodes. At this point, you can create an environment, add nodes into
it, and start configuration.
Networking configuration is the most complicated part, so please read the
networking section of the documentation carefully.
.. include:: /pages/install-guide/install/0100-install-master-header.rst
.. include:: /pages/install-guide/install/0200-install-bare-metal.rst
.. include:: /pages/install-guide/install/0300-install-virtualbox.rst
.. include:: /pages/install-guide/install/0330-install-automatic-virtualbox.rst
.. include:: /pages/install-guide/install/0340-install-manual-virtualbox.rst
.. include:: /pages/install-guide/install/0350-manual-master.rst
.. include:: /pages/install-guide/install/0360-manual-slave.rst
.. include:: /pages/install-guide/install/0400-network-install.rst
.. include:: /pages/install-guide/install/0420-change-net-params.rst
.. include:: /pages/install-guide/install/0500-pxe.rst
.. include:: /pages/install-guide/install/0900-master-finish.rst

View File

@@ -0,0 +1,20 @@
.. raw:: pdf
PageBreak
.. index:: Installing Fuel Master Node
Installing Fuel Master Node
===========================
.. contents::
   :local:
Fuel is distributed via both ISO and IMG images. Each contains an installer for
Fuel Master node. The ISO image is used for CD media devices, iLO (HP) or
similar remote access systems. The IMG file is used for USB memory stick-based
installation.
Once installed, Fuel can be used to deploy and manage OpenStack environments.
It assigns IP addresses to the nodes, performs PXE booting and initial
configuration, and provisions OpenStack nodes according to their roles in
the environment.

View File

@@ -0,0 +1,37 @@
.. _Install_Bare-Metal:
Bare-Metal Environment
----------------------
To install Fuel on bare-metal hardware, you need to burn the provided ISO to
a writeable DVD or create a bootable USB stick. You would then begin the
installation process by booting from that media, very much like any other OS
install process.
Burning an ISO to optical media is a commonly supported function on all OSes.
On Linux, there are several programs available, such as `Brasero` or `Xfburn`,
two commonly pre-installed desktop applications. There are also
a number of options for Windows, such as `ImgBurn <http://www.imgburn.com/>`_ and the
open source `InfraRecorder <http://infrarecorder.org/>`_.
Burning an ISO in Mac OS X is quite simple. Open `Disk Utility` from
`Applications > Utilities`, drag the ISO into the disk list on the left side
of the window and select it, insert blank DVD, and click `Burn`. If you prefer
a different utility, check out the open source `Burn
<http://burn-osx.sourceforge.net/Pages/English/home.html>`_.
Writing the ISO to a bootable USB stick, however, is an entirely different
matter. Canonical suggests `PenDriveLinux`, which is a GUI tool for Windows.
On Windows, you can write the installation image with a number of different
utilities. The following list links to some of the more popular ones and they
are all available at no cost:
- `Win32 Disk Imager <http://sourceforge.net/projects/win32diskimager/>`_.
- `ISOtoUSB <http://www.isotousb.com/>`_.
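On Linux, you can also write the IMG file to a USB stick with ``dd``. A hedged
example follows (the image file name is illustrative); replace ``/dev/sdX`` with
your actual USB device and double-check it, since the target device is overwritten::
sudo dd if=fuel.img of=/dev/sdX bs=4M
sync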
After the installation is complete, you will need to make your bare-metal nodes
available for your OpenStack environment. Attach them to the same L2 network
(broadcast domain) as the Master node, and configure them to automatically
boot via network. The UI will discover them and make them available for
installing OpenStack.

View File

@@ -0,0 +1,35 @@
VirtualBox
----------
If you would like to evaluate Fuel on VirtualBox, you can take advantage of the
included set of scripts that create and configure all the required VMs for a
test environment, including the Master node and Slave nodes for OpenStack
itself. It is a simple, single-click installation.
.. note::
These scripts are not supported on Windows directly, but you can still test on
Windows VirtualBox by running scripts on Cygwin or by creating the VMs by yourself.
See :ref:`Install_Manual` for more details.
The requirements for running Fuel on VirtualBox are:
A host machine with Linux, Windows, or Mac OS. We recommend a 64-bit host OS.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04,
Ubuntu 12.10, Fedora 19, OpenSUSE 12.2/12.3, and Windows 7 x64 + Cygwin_x64.
VirtualBox 4.2.16 (or later) is required, along with the extension pack.
Both can be downloaded from `<http://www.virtualbox.org/>`_.
8 GB+ of RAM
Will support 4 VMs for a Multi-node OpenStack installation (1 Master node,
1 Controller node, 1 Compute node, 1 Cinder node) with VM RAM reduced to 1536 MB.
For a dedicated Cinder node, 768 MB of RAM is enough.
or
Will support 5 VMs for a Multi-node with HA OpenStack installation (1 Master
node, 3 combined Controller + Cinder nodes, 1 Compute node) with RAM reduced
to 1280 MB per VM.
Such a RAM amount per node is below the recommended requirements for HA
configurations (2048+ MB per controller) and may lead to unwanted issues.

View File

@@ -0,0 +1,24 @@
.. _Install_Automatic:
Automatic Mode
++++++++++++++
When you unpack VirtualBox scripts, you will see the following
important files and folders:
`iso`
This folder needs to contain a single ISO image for Fuel. Once you have
downloaded the ISO from the portal, copy or move it into this directory.
`config.sh`
This file allows you to specify parameters used for automating Fuel
installation. For example, you can select how many virtual nodes to launch,
as well as how much memory, disk, and processing to allocate for each.
`launch.sh`
Once executed, this script will use the ISO image from the ``iso`` directory,
create a VM, mount the image, and automatically install the Fuel Master node.
After installation of the Master node, the script will create Slave nodes for
OpenStack and boot them via PXE from the Master node.
Finally, the script will give you the link to access the Web-based UI for the
Master node so you can start installation of an OpenStack environment.
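A typical run of the automatic installation then looks like the following sketch
(the ISO file name is illustrative)::
cp MirantisOpenStack-4.1.iso iso/    # place the downloaded ISO into the iso folder
vi config.sh                         # optionally adjust node count, RAM, disk, and IP ranges
./launch.sh                          # install the Master node and PXE boot the Slave nodes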

View File

@@ -0,0 +1,12 @@
.. _Install_Manual:
Manual Installation
+++++++++++++++++++
.. note::
The following steps are suitable only for setting up a vanilla OpenStack
environment for evaluation purposes.
If you cannot or would rather not run our helper scripts, you can still run
Fuel on VirtualBox by following these steps.

View File

@@ -0,0 +1,28 @@
Master Node Deployment
^^^^^^^^^^^^^^^^^^^^^^
First, create the Master node VM.
1. Configure the host-only interface vboxnet0 in VirtualBox by going to
`File -> Preferences -> Network` and clicking the screwdriver icon.
* IP address: 10.20.0.1
* Network mask: 255.255.255.0
* DHCP Server: disabled
2. Create a VM for the Master node with the following parameters:
* OS Type: Linux
* Version: Ubuntu (64bit)
* RAM: 1536+ MB (2048+ MB recommended)
* HDD: 50 GB with dynamic disk expansion
3. Modify your VM settings:
* Network: Attach `Adapter 1` to `Host-only adapter` ``vboxnet0``
4. Power on the VM in order to start the installation. Choose your Fuel ISO
when prompted to select start-up disk.
5. Wait for the Welcome message with all the information needed to log in to the
Fuel UI.
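If you prefer the command line, the same setup can be scripted with ``VBoxManage``.
A minimal sketch using the values from the steps above (VM, disk, and ISO names are
illustrative)::
VBoxManage hostonlyif create                  # creates vboxnet0 if it does not yet exist
VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.20.0.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname vboxnet0 --disable   # only if a DHCP server exists for it
VBoxManage createvm --name fuel-master --ostype Ubuntu_64 --register
VBoxManage modifyvm fuel-master --memory 2048 --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage createhd --filename fuel-master.vdi --size 51200    # 50 GB, dynamically allocated
VBoxManage storagectl fuel-master --name SATA --add sata
VBoxManage storageattach fuel-master --storagectl SATA --port 0 --device 0 --type hdd --medium fuel-master.vdi
VBoxManage storagectl fuel-master --name IDE --add ide
VBoxManage storageattach fuel-master --storagectl IDE --port 0 --device 0 --type dvddrive --medium /path/to/fuel.iso
VBoxManage startvm fuel-master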

View File

@@ -0,0 +1,28 @@
Adding Slave Nodes
^^^^^^^^^^^^^^^^^^
Next, create Slave nodes where OpenStack needs to be installed.
1. Create 3 or 4 additional VMs, as needed, with the following parameters:
* OS Type: Linux, Version: Ubuntu (64bit)
* RAM: 1536+ MB (2048+ MB recommended)
* HDD: 50+ GB, with dynamic disk expansion
* Network 1: host-only interface vboxnet0, PCnet-FAST III device
2. Set Network as first in the boot order:
.. image:: /_images/vbox-image1.jpg
:align: center
3. Configure two or more network adapters on each VM (in order to use single network
adapter for each VM you should choose "Use VLAN Tagging" later in the Fuel UI):
.. image:: /_images/vbox-image2.jpg
:align: center
4. Open "advanced" collapse, and check following options:
* Promiscuous mode is a "Allow All"
* Adapter type is a "PCnet-FAST III"
* Cable connected is a On
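The equivalent ``VBoxManage`` commands for one Slave node are sketched below; the VM
and disk names are illustrative, and additional adapters (nic2, nic3) can be attached
the same way if you use more than one::
VBoxManage createvm --name fuel-slave-1 --ostype Ubuntu_64 --register
VBoxManage modifyvm fuel-slave-1 --memory 2048 --boot1 net --boot2 disk
VBoxManage modifyvm fuel-slave-1 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nictype1 Am79C973 --nicpromisc1 allow-all --cableconnected1 on
VBoxManage createhd --filename fuel-slave-1.vdi --size 51200
VBoxManage storagectl fuel-slave-1 --name SATA --add sata
VBoxManage storageattach fuel-slave-1 --storagectl SATA --port 0 --device 0 --type hdd --medium fuel-slave-1.vdi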

View File

@@ -0,0 +1,33 @@
.. _Network_Install:
Changing Network Parameters During Installation
-----------------------------------------------
The console-based Fuel Setup allows you to customize the Fuel Admin (PXE booting)
network. By default it uses ``10.20.0.0/24``, with the Master node at ``10.20.0.2``
and the gateway at ``10.20.0.1``.
In order to do so, press the <TAB> key on the very first installation screen
which says "Welcome to Fuel Installer!" and update the kernel option
``showmenu=no`` to ``showmenu=yes``. Alternatively, you can press a key to
start Fuel Setup during the first boot after installation.
Within Fuel Setup you can configure the following parameters:
* DHCP/Static configuration for each network interface
* Select interface for Fuel Admin network
* Define DHCP pool (bootstrap) and static range (installed nodes)
* Root password
* DNS options
The main function of this tool is to provide a simple way to configure Fuel for
your particular networking environment, while helping to detect errors early
so you need not waste time troubleshooting individual configuration files.
If you use the VirtualBox automated scripts to deploy Fuel, change the
`vm_master_ip` parameter in ``config.sh`` accordingly.
.. image:: /_images/fuel-menu-interfaces.jpg
:align: center
Use the arrow keys to navigate through the tool. Once you have made your
changes, go to Save & Quit.

View File

@@ -0,0 +1,12 @@
Changing Network Parameters After Installation
----------------------------------------------
It is possible to run "fuelmenu" from a root shell on Fuel Master node after
deployment to make minor changes to network interfaces, DNS, and gateway. The
PXE settings, however, cannot be changed after deployment as it will lead to
deployment failure.
.. warning::
Once IP settings are set at the boot time for Fuel Master node, they
**should not be changed during the whole lifecycle of Fuel.**

View File

@@ -0,0 +1,14 @@
PXE Booting Settings
--------------------
By default, `eth0` on the Fuel Master node serves PXE requests. If you are planning
to use another interface, you can configure this in :ref:`Network_Install`.
If you want to install Fuel on virtual machines, then you need to make sure
that dnsmasq on the Master node is configured to support the PXE client used by
your virtual machines. We enable the *dhcp-no-override* option because without it,
dnsmasq tries to move the ``PXE filename`` and ``PXE servername`` special fields
into DHCP options. Not all PXE implementations recognize those options, and
therefore they will not be able to boot. For example, libvirt in CentOS 6.4
uses the gPXE implementation by default, instead of the more advanced iPXE, and
therefore requires *dhcp-no-override*.
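To verify the option on the Master node, a hedged example follows; the file location
is an assumption, as releases that manage dnsmasq through cobbler typically keep it in
``/etc/cobbler/dnsmasq.template``::
grep dhcp-no-override /etc/cobbler/dnsmasq.template /etc/dnsmasq.conf 2>/dev/null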

View File

@@ -0,0 +1,14 @@
When Master Node Installation is Done
-------------------------------------
Once the Master node is installed, power on all slave nodes and log in to the
Fuel UI. The login prompt on the console of the master node will show you the
URL you need to use. The default address is http://10.20.0.2:8000/
Slave nodes will automatically boot into bootstrap mode (CentOS based Linux
in memory) via PXE and you will see notifications in the user interface about
discovered nodes. At this point, you can create an environment, add nodes into
it, and start configuration.
Networking configuration is the most complicated part, so please read the
networking section of the documentation carefully.

View File

@@ -1,315 +0,0 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Configuration
.. _fuelui-network:
Understanding and Configuring the Network
=========================================
.. contents::
   :local:
OpenStack environments use several types of network managers: FlatDHCPManager,
VLANManager (Nova Network) and Neutron (formerly Quantum). All configurations
are supported. For more information about how the network managers work, you
can read these two resources:
* `OpenStack Networking FlatManager and FlatDHCPManager
<http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/>`_
* `OpenStack Networking for Scalability and Multi-tenancy with VLANManager
<http://www.mirantis.com/blog/openstack-networking-vlanmanager/>`_
* `Neutron - OpenStack <https://wiki.openstack.org/wiki/Neutron/>`_
FlatDHCPManager (multi-host scheme)
-----------------------------------
The main ingredient of FlatManager is configuring a bridge
(e.g. **br100**) on every Compute node and having one of the machine's physical
interfaces connect to it. Once a virtual machine is launched, its virtual
interface connects to that bridge as well.
The same L2 segment is used for all OpenStack projects, which means that there
is no L2 isolation between virtual hosts, even if they are owned by separate
projects. Additionally, there is only one flat IP pool defined for the entire
environment. For this reason, it is called the *Flat* manager.
The simplest case is shown in the following diagram. Here the *eth1*
interface is used to give network access to virtual machines, while the *eth0*
interface is the management network interface.
.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
:align: center
Fuel deploys OpenStack in FlatDHCP mode with the **multi-host**
feature enabled. Without this feature enabled, network traffic from each VM
would go through the single gateway host, which inevitably creates a single
point of failure. In **multi-host** mode, each Compute node becomes a gateway
for all the VMs running on the host, providing a balanced networking solution.
In this case, if one of the Compute nodes goes down, the rest of the environment
remains operational.
The current version of Fuel uses VLANs, even for the FlatDHCP network
manager. On the Linux host, it is implemented in such a way that it is not
the physical network interface that connects to the bridge, but the
VLAN interface (e.g. *eth0.102*).
FlatDHCPManager (single-interface scheme)
-----------------------------------------
.. image:: /_images/flatdhcpmanager-sh_scheme.jpg
:width: 100%
:align: center
In order for FlatDHCPManager to work, the designated switch port to which each
Compute node is connected needs to be configured as a tagged (trunk) port
with the required VLANs allowed (enabled, tagged). Virtual machines will
communicate with each other on L2 even if they are on different Compute nodes.
If the virtual machine sends IP packets to a different network, they will be
routed on the host machine according to the routing table. The default route
will point to the gateway specified on the networks tab in the UI as the
gateway for the Public network.
.. raw:: pdf
PageBreak
VLANManager
------------
VLANManager mode is more suitable for large-scale clouds. The idea behind
this mode is to separate groups of virtual machines owned by different
projects into separate and distinct L2 networks. In VLANManager, this is done
by tagging Ethernet frames with the VLAN ID assigned to a project. It allows virtual
machines inside a given project to communicate with each other without seeing any
traffic from VMs of other projects. Again, as with FlatDHCPManager, switch
ports must be configured as tagged (trunk) ports to allow this scheme to work.
.. image:: /_images/vlanmanager_scheme.jpg
:width: 100%
:align: center
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Deployment Schema
Fuel Deployment Schema
======================
Via VLAN tagging on a physical interface, OpenStack Compute nodes untag the
packets and send them to the appropriate VMs. Apart from simplifying the
configuration of VLANManager, Fuel adds no known limitations in this particular
networking mode.
Configuring the network
-----------------------
Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment
accordingly. The diagram below shows an example configuration.
.. image:: /_images/physical-network.png
:width: 100%
:align: center
Fuel operates with the following logical networks:
**Fuel** network
Used only for internal Fuel communications and PXE booting (untagged on the scheme);
**Public** network
Used to provide access from virtual machines to the outside world, the Internet, or
the office network (VLAN 101 on the scheme);
**Floating** network
Used to provide access to virtual machines from the outside (shares the L2 interface
with the Public network; in this case it is VLAN 101);
**Management** network
Used for internal OpenStack communications (VLAN 100 on the scheme);
**Storage** network
Used for storage traffic (VLAN 102 on the scheme);
**Fixed** network
One (for flat mode) or more (for VLAN mode) virtual machine
networks (VLANs 103-200 on the scheme).
Mapping logical networks to physical interfaces on servers
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fuel allows you to use different physical interfaces to handle different
types of traffic. When a node is added to the environment, click at the bottom
line of the node icon. In the detailed information window, click the "Configure
Interfaces" button to open the physical interfaces configuration screen.
.. image:: /_images/network_settings.jpg
:align: center
:width: 100%
On this screen you can drag-and-drop logical networks to physical interfaces
according to your network setup.
All networks are presented on the screen, except Fuel.
It runs on the physical interface from which the node was initially PXE booted,
and in the current version it is not possible to map it to any other physical
interface. Also, once the network is configured and OpenStack is deployed,
you may not modify network settings, even to move a logical network to another
physical interface or VLAN number.
Switch
++++++
Fuel can configure hosts; however, switch configuration is still manual work.
Unfortunately, the set of configuration steps, and even the terminology used,
differs from vendor to vendor, so we will try to provide
vendor-agnostic information on how traffic should flow and leave the
vendor-specific details to you. We will provide an example for a Cisco switch.
First of all, you should configure access ports to allow untagged PXE booting
connections from all Slave nodes to the Fuel Master node. We refer to this network
as the Fuel network.
By default, the Fuel Master node uses the `eth0` interface to serve PXE
requests on this network, but this can be changed :ref:`during installation
<Network_Install>` of the Fuel Master node.
So if that's left unchanged, you have to set the switch port for `eth0` of Fuel
Master node to access mode.
We recommend that you use the `eth0` interfaces of all other nodes for PXE booting
as well. Corresponding ports must also be in access mode.
Taking into account that this is the network for PXE booting, do not mix
this L2 segment with any other network segments. Fuel runs a DHCP
server, and if there is another DHCP server on the same L2 network segment, both the
company's infrastructure and Fuel's will be unable to function properly.
You also need to configure each of the switch's ports connected to nodes as an
"STP Edge port" (or a "spanning-tree port fast trunk", according to Cisco
terminology). If you don't do that, DHCP timeout issues may occur.
As long as the Fuel network is configured, Fuel can operate.
Other networks are required for OpenStack environments, and currently all of
these networks live in VLANs over one or more physical interfaces on a
node. This means that the switch should pass tagged traffic, and untagging is done
on the Linux hosts.
.. note:: For the sake of simplicity, all the VLANs specified on the networks tab of
the Fuel UI should be configured on switch ports, pointing to Slave nodes,
as tagged.
Of course, it is possible to configure only certain ports for certain nodes as
tagged. However, in the current version, all existing networks are automatically
allocated to each node, regardless of its role.
The network check will also verify that tagged traffic passes, even if some nodes do
not require it (for example, Cinder nodes do not need Fixed network traffic).
This is enough to deploy the OpenStack environment. However, from a
practical standpoint, it's still not really usable because there is no
connection to other corporate networks yet. To make that possible, you must
configure uplink port(s).
One of the VLANs may carry the office network. To provide access to the Fuel Master
node from your network, any other free physical network interface on the
Fuel Master node can be used and configured according to your network
rules (static IP or DHCP). The same network segment can be used for
Public and Floating ranges. In this case, you must provide the corresponding
VLAN ID and IP ranges in the UI. One Public IP per node will be used to SNAT
traffic out of the VMs network, and one or more floating addresses per VM
instance will be used to get access to the VM from your network, or
even the global Internet. To have a VM visible from the Internet is similar to
having it visible from corporate network - corresponding IP ranges and VLAN IDs
must be specified for the Floating and Public networks. One current limitation
of Fuel is that the user must use the same L2 segment for both Public and
Floating networks.
Example configuration for one of the ports on a Cisco switch::
interface GigabitEthernet0/6 # switch port
description s0_eth0 jv # description
switchport trunk encapsulation dot1q # enables VLANs
switchport trunk native vlan 262 # access port, untags VLAN 262
switchport trunk allowed vlan 100,102,104 # 100,102,104 VLANs are passed with tags
switchport mode trunk # To allow more than 1 VLAN on the port
spanning-tree portfast trunk # STP Edge port to skip network loop
# checks (to prevent DHCP timeout issues)
vlan 262,100,102,104 # Might be needed for enabling VLANs
Router
++++++
To make it possible for VMs to access the outside world, you must have an IP
address set on a router in the Public network. In the examples provided,
that IP is 12.0.0.1 in VLAN 101.
Fuel UI has a special field on the networking tab for the gateway address. As
soon as deployment of OpenStack is started, the network on nodes is
reconfigured to use this gateway IP as the default gateway.
If Floating addresses are from another L3 network, then you have to configure the
IP address (or even multiple IPs if Floating addresses are from more than one L3
network) for them on the router as well.
Otherwise, Floating IPs on nodes will be inaccessible.
.. _access_to_public_net:
Deployment configuration to access OpenStack API and VMs from host machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Helper scripts for VirtualBox create the network adapters `eth0`, `eth1`, `eth2`,
which are represented on the host machine as `vboxnet0`, `vboxnet1`, `vboxnet2`
respectively, and assign the following IP addresses to the adapters:
vboxnet0 - 10.20.0.1/24,
vboxnet1 - 172.16.1.1/24,
vboxnet2 - 172.16.0.1/24.
For the demo environment on VirtualBox, the first network adapter is used to run Fuel
network traffic, including PXE discovery.
To access the Horizon and OpenStack RESTful API via Public network from the host machine,
your host must have a route to the Public IP address on the OpenStack Controller.
Similarly, if you need to reach a Floating IP of a VM, the host must also have a route
to that Floating IP on the Compute host, where it is bound to the Public interface.
To make this configuration possible in the VirtualBox demo environment, run the
Public network untagged. The image below shows a configuration of the
Public and Floating networks that makes this work.
.. image:: /_images/vbox_public_settings.jpg
:align: center
:width: 100%
By default, the Public and Floating networks run on the first network interface.
You need to change this, as shown in the image below. Make sure you change
it on every node.
.. image:: /_images/vbox_node_settings.jpg
:align: center
:width: 100%
If you use the default configuration of the VirtualBox scripts and apply the same
settings shown in the images above, you should be able to access OpenStack Horizon via
the Public network after the installation.
If you want to give Internet access to VMs provisioned by OpenStack, you
have to configure NAT on the host machine. When packets from the VMs reach the `vboxnet1`
interface (as defined on the OpenStack settings tab), they need a route out of the host.
On Ubuntu, the following command, executed on the host, enables this::
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE
To access VMs managed by OpenStack, you need to assign them IP addresses from the
Floating IP range. When the OpenStack environment is deployed and a VM is provisioned there,
you have to associate one of the Floating IP addresses from the pool with this VM,
either in Horizon or via the Nova CLI. By default, OpenStack blocks all traffic to the VM.
To allow connectivity to the VM, you need to configure security groups.
This can be done in Horizon, or from the OpenStack Controller using the following commands::
. /root/openrc
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
IP ranges for the Public and Management networks (172.16.*.*) are defined in the ``config.sh``
script. If the default values do not fit your needs, you are free to change them, but do so
before installing the Fuel Master node.

View File

@@ -1,89 +1,3 @@
.. raw:: pdf
PageBreak
.. index:: Stopping Deployment and Resetting Environment
Stopping Deployment and Resetting Environment
=============================================
.. contents::
   :local:
.. _Stop_Deployment:
Stopping Deployment from Web UI
-------------------------------
After clicking "Deploy changes" and deployment itself starts, a small red
button to the right of the progress bar will appear:
.. image:: /_images/stop_deployment_button.png
:align: center
By clicking this button, you may interrupt the deployment process (in case of any
errors, for example). This leads to one of two possible results:
1. If no nodes have finished deployment (reached "ready" status), all nodes
will be rebooted back to bootstrap. The environment will be reset to its
state right before "Deploy Changes" was pressed. The environment may be
redeployed from scratch. Two things will happen in UI:
* First, all nodes will go offline and will eventually come back
online after reboot. As you cannot deploy an environment that includes
offline nodes, the next deployment should be started only after all nodes
have been successfully discovered and are reported as online in the UI.
* Second, all settings will be unlocked on all tabs and for all nodes, so
that the user may change any setting before starting a new deployment.
This is quite similar to resetting the environment (:ref:`Reset_Environment`).
2. Some nodes are already deployed (usually controllers) and have reached
"ready" status in the UI. In this case, the behavior will be different:
* Only nodes which did not reach "ready" status will be rebooted back to
bootstrap and deployed ones will remain intact.
* Settings will remain locked because they have been already applied to
some nodes. You may reset the environment (:ref:`Reset_Environment`) to
reboot all nodes, unlock all parameters and redeploy an environment
from scratch to apply them again.
.. note::
In Release 4.1, deployment cannot be interrupted during the
provisioning stage. This means that a user can click on "Stop
deployment" while nodes are provisioning, but they will be rebooted
back to bootstrap only when OS installation is complete.
.. index:: Resetting an environment after deployment
.. contents::
   :local:
.. _Reset_Environment:
Resetting environment after deployment
--------------------------------------
Right now the deployment process may be completed in one of three ways
(not including deleting the environment itself):
1) Environment is deployed successfully
2) Deployment failed and environment received an "error" status
3) Deployment was interrupted by clicking "Stop Deployment" button
(:ref:`Stop_Deployment`)
Any of these three possibilities causes the "Reset" button on the
"Actions" tab to become unlocked:
.. image:: /_images/reset_environment_button.png
:align: center
By clicking it, you will reset the whole environment to the same state
as right before the "Deploy Changes" button was clicked for the first time.
* All nodes will go offline and will eventually come back
online after reboot. As you cannot deploy an environment that includes
offline nodes, the next deployment should be started only after all nodes
have been successfully discovered and are reported as online in the UI.
* All settings will be unlocked on all tabs and for all nodes, so
that the user may change any setting before starting a new deployment.
.. include:: /pages/install-guide/stop_reset/0100-stop-deploy-header.rst
.. include:: /pages/install-guide/stop_reset/0200-stop-deploy-ui.rst
.. include:: /pages/install-guide/stop_reset/0500-reset-environment.rst

View File

@@ -0,0 +1,10 @@
.. raw:: pdf
PageBreak
.. index:: Stopping Deployment and Resetting Environment
Stopping Deployment and Resetting Environment
=============================================
.. contents::
   :local:

View File

@@ -0,0 +1,44 @@
.. _Stop_Deployment:
Stopping Deployment from Web UI
-------------------------------
After clicking "Deploy changes" and deployment itself starts, a small red
button to the right of the progress bar will appear:
.. image:: /_images/stop_deployment_button.png
:align: center
By clicking this button, you may interrupt the deployment process (in case of any
errors, for example). This leads to one of two possible results:
1. If no nodes have finished deployment (reached "ready" status), all nodes
will be rebooted back to bootstrap. The environment will be reset to its
state right before "Deploy Changes" was pressed. The environment may be
redeployed from scratch. Two things will happen in UI:
* First, all nodes will go offline and will eventually come back
online after reboot. As you cannot deploy an environment that includes
offline nodes, the next deployment should be started only after all nodes
have been successfully discovered and are reported as online in the UI.
* Second, all settings will be unlocked on all tabs and for all nodes, so
that the user may change any setting before starting a new deployment.
This is quite similar to resetting the environment (:ref:`Reset_Environment`).
2. Some nodes are already deployed (usually controllers) and have reached
"ready" status in the UI. In this case, the behavior will be different:
* Only nodes which did not reach "ready" status will be rebooted back to
bootstrap and deployed ones will remain intact.
* Settings will remain locked because they have been already applied to
some nodes. You may reset the environment (:ref:`Reset_Environment`) to
reboot all nodes, unlock all parameters and redeploy an environment
from scratch to apply them again.
.. note::
In Release 4.1, deployment cannot be interrupted during the
provisioning stage. This means that a user can click on "Stop
deployment" while nodes are provisioning, but they will be rebooted
back to bootstrap only when OS installation is complete.

View File

@@ -0,0 +1,32 @@
.. index:: Resetting an environment after deployment
.. contents::
   :local:
.. _Reset_Environment:
Resetting environment after deployment
--------------------------------------
Right now the deployment process may be completed in one of three ways
(not including deleting the environment itself):
1) Environment is deployed successfully
2) Deployment failed and environment received an "error" status
3) Deployment was interrupted by clicking "Stop Deployment" button
(:ref:`Stop_Deployment`)
Any of these three possibilities causes the "Reset" button on the
"Actions" tab to become unlocked:
.. image:: /_images/reset_environment_button.png
:align: center
By clicking it, you will reset the whole environment to the same state
as right before the "Deploy Changes" button was clicked for the first time.
* All nodes will go offline and will eventually come back
online after reboot. As you cannot deploy an environment that includes
offline nodes, the next deployment should be started only after all nodes
have been successfully discovered and are reported as online in the UI.
* All settings will be unlocked on all tabs and for all nodes, so
that the user may change any setting before starting a new deployment.