Update documentation

This commit is contained in:
manashkin 2013-03-12 15:48:22 +04:00
parent a985d05d83
commit 9ca31562fd
11 changed files with 338 additions and 100 deletions

View File

@ -171,7 +171,8 @@ OS Installation
``vi /etc/yum.repos.d/puppet.repo``::
[puppetlabs] name=Puppet Labs Packages
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs

View File

@ -15,7 +15,7 @@ You can download the latest release of Fuel here:
[LINK HERE]
Additionally, you can download a pre-built Puppet Master/Cobbler ISO,
Alternatively, you can download a pre-built Puppet Master/Cobbler ISO,
which will cut down the amount of time you'll need to spend getting
Fuel up and running. You can download the ISO here:
@ -28,14 +28,9 @@ Hardware for a virtual installation
For a virtual installation, you need only a single machine. You can get
by on 8GB of RAM, but 16GB will be better. To actually perform the
installation, you need a way to create Virtual Machines. This guide
assumes that you are using the latest version of VirtualBox (currently
4.2.6), which you can download from
`https://www.virtualbox.org/wiki/Downloads`
assumes that you are using version 4.2.6 of VirtualBox, which you can download from
https://www.virtualbox.org/wiki/Downloads
You'll need to run VirtualBox on a stable host system. Mac OS 10.7.x,
CentOS 6.3, or Ubuntu 12.04 are preferred; results in other operating

View File

@ -131,7 +131,7 @@ hostonly adapters exist and are configured correctly:
After creating these interfaces, reboot VirtualBox to make sure that
DHCP isnt running in the background.
DHCP isn't running in the background.
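Alternatively, you can remove VirtualBox's built-in DHCP servers for these interfaces explicitly. A minimal sketch, assuming the hostonly interfaces are named vboxnet0 through vboxnet2::
VBoxManage list dhcpservers
VBoxManage dhcpserver remove --ifname vboxnet0
VBoxManage dhcpserver remove --ifname vboxnet1
VBoxManage dhcpserver remove --ifname vboxnet2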
@ -158,7 +158,7 @@ sure that you can boot your server from the DVD or USB drive. Once you've booted
Creating fuel-pm on a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The process of creating a virtual machine in VirtualBox depends on
The process of creating a virtual machine to host Fuel in VirtualBox depends on
whether your deployment is purely virtual or consists of a virtual
fuel-pm controlling physical hardware. If your deployment is purely
virtual then Adapter 2 should be a Hostonly adapter attached to

View File

@ -108,12 +108,12 @@ Change the $domain_name to your own domain name. ::
$cobbler_password = 'cobbler'
$pxetimeout = '0'
# Predefined mirror type to use: internal or external (should be removed soon)
$mirror_type = 'external'
# Predefined mirror type to use: custom or default (should be removed soon)
$mirror_type = 'default'
Change the $mirror_type to be external so Fuel knows to request
Change the $mirror_type to be default so Fuel knows to request
resources from Internet sources rather than having to set up your own
internal repositories.

View File

@ -21,7 +21,7 @@ Puppet Master::
cp /etc/puppet/modules/openstack/examples/site_openstack_swift_compact.pp /etc/puppet/manifests/site.pp
cp /etc/puppet/modules/openstack/examples/site_openstack_ha_compact.pp /etc/puppet/manifests/site.pp
@ -343,6 +343,115 @@ The default value is loopback, which tells Swift to use a loopback storage devic
...
Configuring OpenStack to use syslog
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To use the syslog server, adjust the corresponding variables in the "if $use_syslog" clause::
$use_syslog = true
if $use_syslog {
class { "::rsyslog::client":
log_local => true,
log_auth_local => true,
server => '127.0.0.1',
port => '514'
}
}
For remote logging, set:
* server => <syslog server hostname or ip>
* port => <syslog server port>
For local logging, set log_local and log_auth_local to true.
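For example, to send logs to a remote syslog server rather than the local host, only the server (and, if needed, port) values change; the host name below is just a placeholder::
$use_syslog = true
if $use_syslog {
  class { "::rsyslog::client":
    log_local      => true,
    log_auth_local => true,
    server         => 'syslog.your-domain-name.com',
    port           => '514'
  }
}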
Setting the mirror type
^^^^^^^^^^^^^^^^^^^^^^^
To tell Fuel to download packages from external repos provided by Mirantis and your distribution vendors, set the $mirror_type variable to "default"::
...
# If you want to set up a local repository, you will need to manually adjust mirantis_repos.pp,
# though it is NOT recommended.
$mirror_type = 'default'
$enable_test_repo = false
...
Future versions of Fuel will enable you to use your own internal repositories.
Configuring Rate-Limits
^^^^^^^^^^^^^^^^^^^^^^^
OpenStack has predefined limits on various HTTP requests to the nova-compute and cinder services. Sometimes (e.g. for big clouds or test scenarios) these limits are too strict, in which case you can change them to more appropriate values. (See http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-compute-API.html.)
There are two hashes describing these limits: $nova_rate_limits and $cinder_rate_limits. ::
...
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
#Change the following variables if you want. They are measured in requests per minute.
$nova_rate_limits = {
'POST' => 1000,
'POST_SERVERS' => 1000,
'PUT' => 1000, 'GET' => 1000,
'DELETE' => 1000
}
$cinder_rate_limits = {
'POST' => 1000,
'POST_SERVERS' => 1000,
'PUT' => 1000, 'GET' => 1000,
'DELETE' => 1000
}
...
Enabling Horizon HTTPS/SSL mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the $horizon_use_ssl variable, you have the option to decide whether the OpenStack dashboard (Horizon) uses HTTP or HTTPS::
...
# 'custom': require fileserver static mount point [ssl_certs] and hostname based certificate existence
$horizon_use_ssl = false
class compact_controller (
...
This variable accepts the following values:
* 'false': In this mode, the dashboard uses HTTP with no encryption
* 'default': In this mode, the dashboard uses keys supplied with the standard Apache SSL module package
* 'exist': In this case, the dashboard assumes that the domain name-based certificate, or keys, are provisioned in advance. This can be a certificate signed by any authorized provider, such as Symantec/Verisign, Comodo, GoDaddy, and so on. The system looks for the keys in these locations:
For Debian/Ubuntu:
* public `/etc/ssl/certs/domain-name.pem`
* private `/etc/ssl/private/domain-name.key`
For CentOS/RedHat:
* public `/etc/pki/tls/certs/domain-name.crt`
* private `/etc/pki/tls/private/domain-name.key`
* 'custom': This mode requires a static [ssl_certs] mount point on the Puppet fileserver and certificates that exist in advance. To enable this mode, configure the Puppet fileserver by editing /etc/puppet/fileserver.conf to add::
...
[ssl_certs]
path /etc/puppet/templates/ssl
allow *
...
From there, create the appropriate directory::
mkdir -p /etc/puppet/templates/ssl
Add the certificates to this directory. (Reload the puppetmaster service for these changes to take effect.)
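For example (the certificate and key file names below are placeholders for whatever your files are actually called, and restarting the service is one way to reload it)::
cp domain-name.crt domain-name.key /etc/puppet/templates/ssl/
service puppetmaster restart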
Now we just need to make sure that all of our nodes get the proper
values.
@ -448,16 +557,24 @@ specify the individual controllers::
Notice also that each controller has the swift_zone specified, so each
of the three controllers can represent each of the three Swift zones.
<<<<<<< HEAD
=======
In ``openstack/examples/site_openstack_full.pp`` example, the following nodes are specified:
>>>>>>> 5f32c0d... Rename and sync manifests
In the ``openstack/examples/site_openstack_full.pp`` example, the following nodes are specified:
* fuel-controller-01
* fuel-controller-02
* fuel-controller-03
* fuel-compute-[\d+]
* fuel-swift-01
* fuel-swift-02
* fuel-swift-03
* fuel-swiftproxy-[\d+]
* fuel-quantum
Using this architecture, the system includes three stand-alone swift-storage servers, and one or more swift-proxy servers.
In the ``openstack/examples/site_openstack_compact.pp`` example on the other hand, the role of swift-storage and swift-proxy are combined with the controllers.
<<<<<<< HEAD
=======
In ``openstack/examples/site_openstack_compact.pp`` example, the role of swift-storage and swift-proxy combined with controllers.
>>>>>>> 5f32c0d... Rename and sync manifests
One final fix
^^^^^^^^^^^^^
@ -533,69 +650,6 @@ again grep for error messages.
When you see no errors on any of your nodes, your OpenStack cluster is
ready to go.
Configuring OpenStack to use syslog
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* If you want to use syslog server, you need to do the following steps:
Adjust the corresponding variables in "if $use_syslog" clause::
$use_syslog = true
if $use_syslog {
class { "::rsyslog::client":
log_local => true,
log_auth_local => true,
server => '127.0.0.1',
port => '514'
}
}
For remote logging:
server => <syslog server hostname or ip>
port => <syslog server port>
For local logging:
set log_local and log_auth_local to true
Setting the mirror type
^^^^^^^^^^^^^^^^^^^^^^^
To tell Fuel to download packages from external repos provided by Mirantis and your distribution vendors, set the $mirror_type variable to "external"::
...
# If you want to set up a local repository, you will need to manually adjust mirantis_repos.pp,
# though it is NOT recommended.
$mirror_type = 'external'
$enable_test_repo = false
...
Future versions of Fuel will enable you to use your own internal repositories.
Configuring Rate-Limits
^^^^^^^^^^^^^^^^^^^^^^^
Openstack has predefined limits on different HTTP queries for nova-compute and cinder services. Sometimes (e.g. for big clouds or test scenarios) these limits are too strict. (See http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-compute-API.html) In this case you can change them to appropriate values.
There are two hashes describing these limits: $nova_rate_limits and $cinder_rate_limits. ::
$nova_rate_limits = { 'POST' => '10',
'POST_SERVERS' => '50',
'PUT' => 10, 'GET' => 3,
'DELETE' => 100 }
$cinder_rate_limits = { 'POST' => '10',
'POST_SERVERS' => '50',
'PUT' => 10, 'GET' => 3,
'DELETE' => 100 }
Installing Nagios Monitoring using Puppet

View File

@ -1 +1,3 @@
[NEED CONTENT HERE]
One of the advantages of using Fuel is that it makes it easy to set up an OpenStack cluster so that you can feel your way around and get your feet wet. You can easily set up a cluster using test machines, or even virtual machines; when you're ready to do an actual deployment, however, there are a number of things you need to consider.
In this section, you'll find information such as how to size the hardware for your cloud, how to handle large-scale deployments, and how to streamline your maintenance tasks using techniques such as orchestration.

View File

@ -1,4 +1,160 @@
Sizing Hardware
---------------
[CONTENT TO BE ADDED]
One of the first questions that comes to mind when planning an OpenStack deployment is "what kind of hardware do I need?" Finding the answer is rarely simple, but getting some idea is not impossible.
Many factors contribute to decisions regarding hardware for an OpenStack cluster -- contact Mirantis for information on your specific situation -- but in general, you will want to consider the following four areas:
* CPU
* Memory
* Disk
* Networking
Your needs in each of these areas are going to determine your overall hardware requirements.
CPU
---
The basic consideration when it comes to CPU is how many GHz you're going to need. To determine that, think about how many VMs you plan to support, the average speed you plan to provide, and the maximum you plan to provide for a single VM. For example, consider a situation in which you expect:
* 100 VMs
* 2 EC2 compute units (2 GHz) average
* 16 EC2 compute units (16 GHz) max
What does this mean? Well, to make it possible to provide the maximum CPU, you will need at least 7 cores (16 GHz / 2.4 GHz per core) per machine, and at least 84 cores ((100 VMs * 2 GHz per VM) / 2.4 GHz per core) in total.
If you were to choose an 8-core CPU such as the Intel E5-2650 or E5-2670, that means you need 10-11 sockets (84 cores / 8 cores per socket).
All of this means you will need 5-6 dual-socket servers (11 sockets / 2 sockets per server), for a "packing density" of 17 VMs per server (100 VMs / 6 servers).
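Putting the example's own numbers together, the arithmetic looks like this::
100 VMs x 2 GHz per VM            = 200 GHz of CPU capacity
200 GHz / 2.4 GHz per core        = ~84 cores
84 cores / 8 cores per socket     = 10.5  -> 10-11 sockets
11 sockets / 2 sockets per server = 5.5   -> 5-6 servers
100 VMs / 6 servers               = ~17 VMs per server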
You will need to take into account a couple of additional notes:
* This model assumes you are not oversubscribing your CPU.
* If you are considering Hyperthreading, count each core as 1.3, not 2.
* Choose a good value CPU.
Memory
------
The process of determining memory requirements is similar to determining CPU. Start by deciding how much memory will be devoted to each VM. In this example, with 4 GB per VM and a maximum of 32 GB for a single VM, you will need 400 GB of RAM.
For cost reasons, you will want to use 8 GB or smaller DIMMs, so considering 16 - 24 slots per server (or 128 GB at the low end), you will need 4 servers to meet your memory needs.
However, remember that you need 6 servers to meet your CPU requirements, so instead you can go with six 64 GB or 96 GB machines.
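Written out with the example's numbers::
100 VMs x 4 GB per VM                = 400 GB of RAM in total
400 GB / 128 GB per server           = 3.125 -> 4 servers on memory alone
400 GB / 6 servers (from CPU sizing) = ~67 GB -> 64 GB or 96 GB per server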
Again, you do not want to oversubscribe memory.
Disk Space
----------
When it comes to disk space there are several types that you need to consider:
* Ephemeral (the local drive space for a VM)
* Persistent (the remote volumes that can be attached to a VM)
* Object Storage (such as images or other objects)
As far as local drive space that must reside on the compute nodes, in our example of 100 VMs, our assumptions are:
* 50 GB local space per VM
* 5 TB total of local space (100 VMs * 50 GB per VM)
* 500 GB of persistent volume space per VM
* 50 TB total persistent storage
Again, you have 6 servers, so that means you're looking at 0.9 TB per server (5 TB / 6 servers) for local drive space.
Throughput
^^^^^^^^^^
As far as throughput goes, that depends on what kind of storage you choose. In general, you calculate IOPS based on the packing density (drive IOPS * drives in the server / VMs per server), but the actual drive IOPS will depend on the drives themselves. For example:
* 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
* 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
* 2.5" 15K (200 IOPS, 4 600 GB drive, RAID 10)
* 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
* SSD (40K IOPS, 8 300 GB drive, RAID 10)
* 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
Clearly, SSD gives you the best performance, but the difference in cost between that and the lower-end solution is going to be significant, to say the least. You'll need to decide based on your own situation.
Remote storage
^^^^^^^^^^^^^^
IOPS will also be a factor in determining how you decide to handle persistent storage. For example, consider these options for laying out your 50 TB of remote volume space:
* 12 drive storage frame using 3 TB 3.5" drives mirrored
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per frame)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = **36 Read IOPS, 18 Write IOPS per VM**
* 24 drive storage frame using 1TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per frame)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = **120 Read IOPS, 60 Write IOPS per VM**
You can accomplish the same thing with a single 36 drive frame using 3 TB drives, but this becomes a single point of failure in your cluster.
Object storage
^^^^^^^^^^^^^^
When it comes to object storage, you will find that you need more space than you think. For example, this scenario specifies 50 TB of object storage. Easy, right?
Well, no. Object storage uses a default of 3 times the required space for replication, which means you will need 150 TB. However, to accommodate two hand-off zones, you will need 5 times the required space, which means 250 TB.
But the calculations don't end there. You don't ever want to run out of space, so "full" should really be more like 75% of capacity, which means 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start with a happy medium of a multiplier of 4, then acquire more hardware as your drives begin to fill up. That means 200 TB in this example.
So how do you put that together? If you were to use 3 TB 3.5" drives, you could use a 12 drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
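To summarize the object storage arithmetic (the multipliers are the rules of thumb described above)::
50 TB x 3 (replica count)                  = 150 TB
50 TB x 5 (replicas plus 2 hand-off zones) = 250 TB
250 TB / 0.75 ("full" at 75% of capacity)  = ~333 TB (a factor of 6.66)
50 TB x 4 (starting multiplier)            = 200 TB to provision now
6 servers x 12 drives x 3 TB per drive     = 216 TB raw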
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB each, but it's not recommended due to several factors, from the high cost of failure to replication and capacity issues.
Networking
----------
Perhaps the most complex part of designing an OpenStack cluster is the networking. An OpenStack cluster can involve multiple networks even beyond the Public, Private, and Internal networks. Your cluster may involve tenant networks, storage networks, multiple tenant private networks, and so on. Many of these will be VLANs, and all of them will need to be planned out.
In terms of the example network, consider these assumptions:
* 100 Mbits/second per VM
* HA architecture
* Network Storage is not latency sensitive
In order to achieve this, you can use 2 1Gb links per server (2 x 1000 Mbits/second / 17 VMs = 118 Mbits/second). Using 2 links also helps with HA.
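Spelled out, that calculation is::
2 links x 1000 Mbits/second = 2000 Mbits/second per server
2000 Mbits/second / 17 VMs  = ~118 Mbits/second per VM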
You can also increase throughput and decrease latency by using 2 10 Gb links, bringing the bandwidth per VM to 1 Gb/second, but if you're going to do that, you've got one more factor to consider.
Scalability and oversubscription
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is one of the ironies of networking that 1Gb Ethernet generally scales better than 10Gb Ethernet -- at least until 100Gb switches are more commonly available. It's possible to aggregate the 1Gb links in a 48 port switch, so that you have 48 1Gb links down, but 4 10Gb links up. Do the same thing with a 10Gb switch, however, and you have 48 10Gb links down but only 4 40Gb links up, resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning. Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
Hardware for this example
^^^^^^^^^^^^^^^^^^^^^^^^^
In this example, you are looking at:
* 2 data switches (for HA), each with a minimum of 12 ports for data (2 x 1Gb links per server x 6 servers)
* 1 1Gb switch for IPMI (1 port per server x 6 servers)
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port switches. Also, as your network grows, you will need to consider uplinks and aggregation switches.
Summary
-------
In general, your best bet is to choose a large multi-socket server, such as a 2-socket server with a balance of I/O, CPU, memory, and disk. Look for a 1U low-cost R-class or 2U high-density C-class server. Some good alternatives for compute nodes include:
* Dell PowerEdge R620
* Dell PowerEdge C6220 Rack Server
* Dell PowerEdge R720XD (for high disk or IOPS requirements)

View File

@ -1,4 +1,36 @@
Large Scale Deployments
-----------------------
[NEED CONTENT]
When deploying large clusters -- those of 100 nodes or more -- there are two basic bottlenecks:
* Certificate signing requests and Puppet Master/Cobbler capacity
* Downloading of operating systems and other software
Both of these bottlenecks can be mitigated with careful planning.
Certificate signing requests and Puppet Master/Cobbler capacity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When deploying a large cluster, you may find that Puppet Master begins to have difficulty once you exceed 20 or so simultaneous requests. Part of the problem is that the initial process of requesting and signing certificates involves *.tmp files that can create conflicts. To solve this problem, you have two options: reduce the number of simultaneous requests, or increase the number of Puppet Master/Cobbler servers.
Reducing the number of simultaneous requests is a simple matter of staggering Puppet agent runs. Orchestration can provide a convenient way to accomplish this goal. You don't need extreme staggering -- 1 to 5 seconds will do -- but if this method isn't practical, you can increase the number of Puppet Master/Cobbler servers.
If you're simply overwhelming the Puppet Master process and not running into file conflicts, one way to get around this problem is to use Puppet Master with Thin as a backend and nginx as a front end. This configuration will enable you to dynamically scale the number of Puppet Master processes up and down to accommodate load.
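As a rough illustration of that arrangement (the port numbers, host name, and certificate paths below are assumptions made for this sketch, not values shipped with Fuel), the nginx front end might look something like this::
# nginx terminates SSL and balances requests across several
# Thin-backed Puppet Master processes
upstream puppetmaster_thin {
    server 127.0.0.1:18140;
    server 127.0.0.1:18141;
}
server {
    listen                 8140 ssl;
    ssl_certificate        /var/lib/puppet/ssl/certs/fuel-pm.your-domain-name.com.pem;
    ssl_certificate_key    /var/lib/puppet/ssl/private_keys/fuel-pm.your-domain-name.com.pem;
    ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
    ssl_verify_client      optional;
    location / {
        proxy_pass       http://puppetmaster_thin;
        # pass client certificate details through to Puppet Master
        proxy_set_header X-Client-Verify $ssl_client_verify;
        proxy_set_header X-Client-DN     $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer    $ssl_client_i_dn;
    }
}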
You can find sample configuration files for nginx and puppetmasterd at [CONTENT NEEDED HERE].
You can also increase the number of servers by creating a cluster of servers behind round robin DNS or a load balancer such as HAProxy. You will also need to ensure that these nodes are kept in sync. For Cobbler, that means a combination of the ``cobbler replicate`` command, XMLRPC for metadata, and rsync for profiles and distributions. Similarly, Puppet Master and PuppetDB can be kept in sync with a combination of rsync (for modules, manifests, and SSL data) and database replication.
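For example, a secondary node could be kept in sync with commands along these lines (the host name is a placeholder for your primary fuel-pm node)::
# pull distros, profiles, and systems from the primary Cobbler node
cobbler replicate --master=fuel-pm.your-domain-name.com --distros='*' --profiles='*' --systems='*'
# mirror Puppet modules and manifests from the primary
rsync -az fuel-pm.your-domain-name.com:/etc/puppet/modules/   /etc/puppet/modules/
rsync -az fuel-pm.your-domain-name.com:/etc/puppet/manifests/ /etc/puppet/manifests/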
.. image:: /pages/production-considerations/cobbler-puppet-ha.png
Downloading of operating systems and other software
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Large deployments also suffer from a bottleneck in terms of downloading of software. One way to avoid this problem is the use of multiple 1G interfaces bonded together. You might also want to consider 10G Ethernet, if the rest of your architecture warrants it. (See "Sizing Hardware" for more information on choosing networking equipment.)
Another option is to avoid downloading so much data in the first place, using either apt-cacher, which acts as a repository cache, or a private repository.
To use apt-cacher, the kickstarts Cobbler provides to each node should specify Cobbler's IP address and the apt-cacher port as the proxy server. This will prevent all of the nodes from having to download the software individually.
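For example, for Ubuntu nodes this can be a single preseed line (the IP address below is a placeholder for the Cobbler host, and 3142 is apt-cacher's default port)::
# route installer package downloads through apt-cacher on the Cobbler host
d-i mirror/http/proxy string http://10.20.0.2:3142/
Deployed nodes can keep using the same cache by pointing Acquire::http::Proxy at the same address in their apt configuration.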
Contact Mirantis for information on creating a private repository.

Binary file not shown.


View File

@ -33,8 +33,7 @@ authorization for these transactions are handled by **Keystone**.
OpenStack provides for two different types of storage: block storage
and object storage. Block storage is traditional data storage, with
small, fixed-size blocks that are mapped to locations on storage media. At
its simplest level, OpenStack provides block storage using **nova-
volume**, but it is common to use **Cinder**.
its simplest level, OpenStack provides block storage using **nova-volume**, but it is common to use **Cinder**.
@ -66,7 +65,7 @@ essential services run out of a single server:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1gGNYYayPAPPHgOYi98Dmebry4hP1SOGF2APXWzbnNo8/pub?w=767&h=413
@ -97,8 +96,7 @@ provide; because Swift runs on its own servers, you can reduce the
number of controllers from three (or five, for a full Swift implementation) to one, if desired:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1nVEtfpNLaLV4EBKJQleLxovqMVrDCRT7yFWTYUQASB0/pub?w=767&h=413
@ -116,7 +114,7 @@ nodes:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1xLv4zog19j0MThVGV9gSYa4wh1Ma4MQYsBz-4vE1xvg/pub?w=767&h=413
@ -131,7 +129,7 @@ but avoids the need for a separate Quantum node:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1GYNM5yTJSlZe9nB5SHnlrqyMfVRdVh02OFLwXlz-itc/pub?w=767&h=413
Multi-node (HA) deployment (Standalone)
@ -145,7 +143,7 @@ networking, and controller functionality:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1rJEZi5-l9oemMmrkH5UPjitQQDVGuZQ1KS0pPWTuovY/pub?w=769&h=594
@ -160,7 +158,7 @@ architecture.
Let's take a closer look at the details of this topology.
A closer look at the Multi-node (non-HA) deployment (compact Swift)
A closer look at the Multi-node (HA) deployment (compact Swift)
-------------------------------------------------------------------
In this section, you'll learn more about the Multi-node (HA) Compact
@ -168,8 +166,7 @@ Swift topology and how it achieves high availability in preparation
for installing this cluster in section 3. As you may recall, this
topology looks something like this:
[INSERT DIAGRAM HERE]
.. image:: https://docs.google.com/drawings/d/1xLv4zog19j0MThVGV9gSYa4wh1Ma4MQYsBz-4vE1xvg/pub?w=767&h=413
OpenStack services are interconnected by RESTful HTTP-based APIs and

View File

@ -73,3 +73,4 @@ as user files.
.. image:: https://docs.google.com/drawings/pub?id=1Xd70yy7h5Jq2oBJ12fjnPWP8eNsWilC-ES1ZVTFo0m8&w=777&h=778