merge fix

Matthew Mosesohn 2013-10-09 12:59:05 +04:00
parent 9c7fbb76ba
commit cf1d070e6b
6 changed files with 117 additions and 112 deletions


@@ -12,15 +12,15 @@ Other Questions
**[A]** We are fully committed to providing our customers with working and
stable bits and pieces in order to make successful OpenStack deployments.
Please note that we do not distribute our own version of OpenStack; we rather
provide a plain vanilla distribution. Put simply, there is no vendor lock-in
with Fuel. For your convenience, we maintain repositories containing a
history of OpenStack packages certified to work with our Puppet manifests.
The advantage of this approach is that you can install any OpenStack version
you want (with possible custom bug fixes). Even if you are running Essex,
just use the Puppet manifests which reference OpenStack packages for Essex
from our repository. With each new release we add new OpenStack packages to
our repository and create a new branch with Puppet manifests (which, in
turn, reference these packages) corresponding to each release. With EPEL
this would not be possible, as that repository only keeps the latest version
of OpenStack packages.


@@ -14,16 +14,16 @@ Sizing Hardware for Production Deployment
One of the first questions people ask when planning an OpenStack deployment is
"what kind of hardware do I need?" There is no such thing as a one-size-fits-all
answer, but there are straightforward rules to selecting appropriate hardware
that will suit your needs. The Golden Rule, however, is to always plan
for growth. With the potential for growth in your design, you can move on to
your actual hardware needs.
Many factors contribute to selecting hardware for an OpenStack cluster --
`contact Mirantis <http://www.mirantis.com/contact/>`_ for information on your
specific requirements -- but in general, you will want to consider the following
factors:
* Processing power
* Memory
* Storage
* Networking
@@ -31,8 +31,8 @@ factors:
Your needs in each of these areas are going to determine your overall hardware
requirements.
Processing Power
----------------
In order to calculate how much processing power you need to acquire, you will
need to determine the number of VMs your cloud will support. You must also
@@ -56,7 +56,7 @@ density" of 17 VMs per server (102 VMs / 6 servers).
This process also accommodates growth since you now know what a single server
using this CPU configuration can support. You can add new servers accounting
for 17 VMs each as needed without needing to re-calculate.
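As a minimal sketch of this packing-density arithmetic (the counts are just
the example's numbers, and ``servers_needed`` is a hypothetical helper)::

    import math

    total_vms = 102    # VMs from the worked example
    servers = 6        # servers hosting them
    density = total_vms // servers   # 17 VMs per server

    def servers_needed(target_vms, per_server=density):
        # Each new server of this CPU configuration adds `per_server` VMs.
        return math.ceil(target_vms / per_server)

    print(density)               # 17
    print(servers_needed(170))   # 10 servers for 170 VMs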
You will also need to take into account the following:
@@ -69,22 +69,22 @@ Memory
Continuing to use the example from the previous section, we need to determine
how much RAM will be required to support 17 VMs per server. Let's assume that
you need an average of 4 GB of RAM per VM with dynamic allocation for up to
12 GB for each VM. Calculating that all VMs will be using 12 GB of RAM requires
that each server have 204 GB of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate
core OS operations as well as the RAM necessary for each VM container (not the
RAM allocated to each VM, but the memory the core OS uses to run the VM). The
node's OS must run its own operations, schedule processes, allocate dynamic
resources, and handle network operations, so giving the node itself at least
16 GB of available RAM is within reason.
Considering that server RAM typically comes in 4 GB, 8 GB, 16 GB,
and 32 GB sticks, we would need a total of 256 GB of RAM installed per server.
For an average 2-socket server board you get 16-24 RAM slots. To have
256 GB installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM
needs for up to 17 VMs requiring dynamic allocation up to 12 GB and to support
all core OS requirements.
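A minimal sketch of the same RAM arithmetic (the stick size and the final
rounded configuration follow the example above)::

    import math

    vms_per_server = 17
    peak_gb_per_vm = 12   # dynamic allocation ceiling per VM
    node_os_gb = 16       # headroom for the host OS and VM containers
    required_gb = vms_per_server * peak_gb_per_vm + node_os_gb   # 220 GB

    stick_gb = 16
    sticks = math.ceil(required_gb / stick_gb)   # 14 sticks at minimum
    # Populating full banks, the example rounds this up to sixteen
    # 16 GB sticks, i.e. 256 GB installed per server.
    installed_gb = 16 * stick_gb                 # 256 GB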
You can adjust this calculation based on your needs.
@@ -92,11 +92,12 @@ You can adjust this calculation based on your needs.
Storage Space
-------------
With regard to disk requirements, there are several types of storage that
you need to consider:
* Ephemeral (local disk for a VM)
* Persistent (remote volumes which can be attached to a VM)
* Object Storage (such as images, documents, or other objects)
As for the local drive space that must reside on the compute nodes, in our
example of 100 VMs we make the following assumptions:
@@ -108,19 +109,19 @@ example of 100 VMs we make the following assumptions:
Returning to our already established example, we need to figure out how much
storage to install per server. This storage will service the 17 VMs per server.
If we are assuming 150 GB of storage for each VM's drive container, then we
would need to outfit 2.5 TB of storage on the server. Since most servers have
anywhere from 4 to 32 2.5" disk bays or 2 to 12 3.5" disk bays, depending on
server form factor (i.e., 2U vs. 4U), you will need to consider how the storage
will be impacted by the intended use.
If the storage impact is not expected to be significant, then you may consider
using unified storage. For this example, a single 3 TB drive would provide
more than enough storage for seventeen 150 GB VMs. If speed is not a major
concern, you may even consider installing two or three 3 TB drives and
configuring a RAID-1 or RAID-5 setup for redundancy. If speed is critical,
however, you will likely want to have a single physical disk for each VM. In
this case you would likely look at a 3U form factor with 24 disk bays.
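A minimal sketch of the per-server storage arithmetic (150 GB per VM and the
3 TB drive size come from the example above)::

    vms_per_server = 17
    gb_per_vm = 150   # ephemeral drive container per VM
    required_tb = vms_per_server * gb_per_vm / 1000.0   # ~2.55 TB per server

    drive_tb = 3
    # A single 3 TB drive covers the requirement; add a second or third
    # drive in RAID-1/RAID-5 for redundancy, or use one physical disk
    # per VM (a 24-bay 3U chassis) when speed is critical.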
Don't forget that you will also need drive space for the node itself, and don't
forget to order the correct backplane that supports the drive configuration
@@ -131,7 +132,7 @@ it critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM
Throughput
++++++++++
Regarding throughput, it depends on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
in the server / VMs per server), but the actual drive IOPS will depend on the
drive technology you choose. For example:
@@ -209,20 +210,19 @@ drives begin to fill up. That calculates to 200 TB in our example. So how do
you put that together? If you were to use 3 TB 3.5" drives, you could use a 12
drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB
each, but it's not recommended due to the high cost of failed replication
and capacity issues.
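A minimal sketch of the storage-frame arithmetic (the 200 TB target and the
drive and frame sizes are the example's numbers)::

    import math

    required_tb = 200
    drive_tb = 3
    drives_per_server = 12                             # 12-drive storage frame
    per_server_tb = drive_tb * drives_per_server       # 36 TB per server
    servers = math.ceil(required_tb / per_server_tb)   # 6 servers, 216 TB total

    # A 36-drive frame reaches 108 TB per server with only 2 servers, but
    # then a single failed node takes out half the object store at once.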
Networking
----------
Perhaps the most complex part of designing an OpenStack cluster is networking.
An OpenStack cluster can involve multiple networks even beyond the required
Public, Private, and Internal networks. Your cluster may involve tenant
networks, storage networks, multiple tenant private networks, and so on. Many
of these will be VLANs, and all of them will need to be planned out in advance
to avoid configuration issues.
In terms of the example network, consider these assumptions:
@@ -249,7 +249,7 @@ that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing wi
resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great
extent with sensible planning. Problems only arise when you are moving between
racks, so plan to create "pods", each of which includes both storage and
compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
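A minimal sketch of the oversubscription arithmetic from the switch example
above::

    downlink_gb = 48 * 1   # 48 x 1 Gb access links down to the servers
    uplink_gb = 4 * 10     # 4 x 10 Gb links up to aggregation
    ratio = downlink_gb / uplink_gb   # 1.2:1 oversubscription

    # Size each "pod" (storage plus compute nodes) so its east-west
    # traffic stays inside one non-oversubscribed L2 domain and never
    # crosses the oversubscribed uplinks.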
@@ -264,7 +264,7 @@ In this example, you are looking at:
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48-port
switches. Also, as your network expands, you will need to consider uplinks and
aggregation switches.
Summary


@@ -11,7 +11,7 @@ How Fuel Works
Fuel is a ready-to-install collection of the packages and scripts you need
to create a robust, configurable, vendor-independent OpenStack cloud in your
own environment. Since Fuel 3.1, Fuel Library and Fuel Web have been merged
into a single toolbox with options to use the UI or CLI for management.
A single OpenStack cloud consists of packages from many different open source
@@ -27,33 +27,33 @@ OpenStack-based infrastructure in your own environment.
.. image:: /_images/FuelSimpleDiagram.jpg
:align: center
Fuel takes a simple approach. Rather than installing each of the
components that make up OpenStack directly, it uses a configuration
management system (Puppet) to compile a set of instructions that provides a
configurable, reproducible, and sharable installation process.
In practice, Fuel works as follows:
1. First, set up Fuel Master node using the ISO. This process only needs to
be completed once per installation.
2. Next, discover your virtual or physical nodes and configure your
OpenStack cluster using the Fuel UI.
3. Finally, deploy your OpenStack cluster on discovered nodes. Fuel will
perform all deployment steps for you by applying pre-configured and
pre-integrated Puppet manifests via the Astute orchestration engine.
Fuel is designed to enable you to maintain your cluster while giving you the
flexibility to adapt it to your own business needs and scale.
.. image:: /_images/how-it-works_svg.jpg
:align: center
Fuel comes with several pre-defined deployment configurations, plus
additional options that allow you to adapt your OpenStack deployment to your
environment.
Fuel UI integrates all of the deployment scripts into a unified, web-based
interface that walks administrators through the process of installing and
configuring a fully functional OpenStack environment.


@@ -11,20 +11,20 @@ Deployment Configurations Provided By Fuel
One of the advantages of Fuel is that it comes with a number of pre-built
deployment configurations that you can use to quickly build your own
OpenStack cloud infrastructure. These are widely accepted configurations of
OpenStack, with its constituent components expertly tailored to serve
multipurpose cloud use cases. Fuel provides the ability to create the
following cluster types directly out of the box:
**Simple (non-HA)**: The Simple (non-HA) installation provides an easy way
to install an entire OpenStack cluster without the expense of the extra
hardware needed to ensure high availability.
**Multi-node (HA)**: When you are ready to move to production, the Multi-node
(HA) configuration is a straightforward way to create an OpenStack
cluster that provides high availability. With three controller nodes and the
ability to individually specify services such as Cinder, Neutron (formerly
Quantum), Swift, and Ceph, Fuel provides the following variations of the
Multi-node (HA) configurations:
- **Compact HA**: When you choose this option, Swift will be installed on
@@ -32,8 +32,8 @@ Multi-node (HA) configurations:
for additional Swift servers while still addressing high availability
requirements.
- **Full HA**: This option enables you to install dedicated Cinder or Ceph
nodes, so that you can separate their operations from your controller nodes.
In addition to these configurations, Fuel is designed to be completely
customizable. For assistance on deeper customization options based on the


@@ -9,9 +9,10 @@ Installing Fuel Master Node
.. contents::
   :local:
Fuel is distributed via both ISO and IMG images. Each contains an installer for
Fuel Master node. The ISO image is used for CD media devices, iLO (HP) or
similar remote access systems. The IMG file is used for USB memory stick-based
installation.
Once installed, Fuel can be used to deploy and manage OpenStack clusters. It
will assign IP addresses to the nodes, perform PXE boot and initial
@@ -20,47 +21,49 @@ the cluster.
.. _Install_Bare-Metal:
Bare-Metal Environment
----------------------
To install Fuel on bare-metal hardware, you need to burn the provided ISO to
a writeable DVD or create a bootable USB stick. You would then begin the
installation process by booting from that media, very much like any other OS
install process.
Burning an ISO to optical media is a commonly supported function on all OSes.
On Linux, there are several programs available, such as `Brasero` or `Xfburn`,
two commonly pre-installed desktop applications. There are also
a number for Windows such as `ImgBurn <http://www.imgburn.com/>`_ and the
open source `InfraRecorder <http://infrarecorder.org/>`_.
Burning an ISO in Mac OS X is quite simple. Open `Disk Utility` from
`Applications > Utilities`, drag the ISO into the disk list on the left side
of the window and select it, insert a blank DVD, and click `Burn`. If you
prefer a different utility, check out the open source `Burn
<http://burn-osx.sourceforge.net/Pages/English/home.html>`_.
Installing the ISO to a bootable USB stick, however, is an entirely different
matter. Canonical suggests `PenDriveLinux`, which is a GUI tool for Windows.
On Windows, you can write the installation image with a number of different
utilities. The following list links to some of the more popular ones and they
are all available at no cost:
- `Win32 Disk Imager <http://sourceforge.net/projects/win32diskimager/>`_.
- `ISOtoUSB <http://www.isotousb.com/>`_.
After the installation is complete, you will need to make your bare-metal nodes
available for your OpenStack cluster. Attach them to the same L2 network
(broadcast domain) as the Master node, and configure them to automatically
boot via network. The UI will discover them and make them available for
installing OpenStack.
VirtualBox
----------
If you would like to evaluate Fuel on VirtualBox, you can take advantage of the
included set of scripts that create and configure all the required VMs for a
test environment, including the Master node and Slave nodes for OpenStack
itself. It is a simple, single-click installation.
.. note::
@@ -71,15 +74,17 @@ simple, single-click installation.
The requirements for running Fuel on VirtualBox are:
A host machine with Linux or Mac OS.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04,
Ubuntu 12.10, and Fedora 19.
VirtualBox 4.2.12 (or later) must be installed, along with the extension pack.
Both can be downloaded from `<http://www.virtualbox.org/>`_.
8 GB+ of RAM
Will support 4 VMs for non-HA OpenStack installation (1 Master node,
1 Controller node, 1 Compute node, 1 Cinder node) or
Will support 5 VMs for HA OpenStack installation (1 Master node, 3 Controller
nodes, 1 Compute node)
.. _Install_Automatic: