renamed cluster to environment, full/compact to multinode HA

This commit is contained in:
Matthew Mosesohn 2013-10-15 14:19:00 +04:00
parent 8cb75eb604
commit 45300e7d95
28 changed files with 338 additions and 285 deletions

Binary file not shown.

Before

Width:  |  Height:  |  Size: 26 KiB

After

Width:  |  Height:  |  Size: 34 KiB

View File

@ -6,9 +6,9 @@ Add or Remove Controller and Compute Nodes Without Downtime
===========================================================
This document will help you become familiar with the process around lifecycle
management of controller and compute nodes within an OpenStack cluster deployed
by Fuel. There are some specific details to note, so reading this document
is highly recommended.
management of controller and compute nodes within an OpenStack environment
deployed by Fuel. There are some specific details to note, so reading this
document is highly recommended.
1. The addition of compute nodes works seamlessly - just specify their
IPs in the `site.pp` file (if needed) and run the puppet agent

View File

@ -51,15 +51,15 @@ need to create it yourself, use this procedure:
HowTo: Redeploy a node from scratch
------------------------------------
Compute and Cinder nodes in an HA configuration and controller in any
configuration cannot be redeployed without completely redeploying the cluster.
However, for a non-HA OpenStack cluster, you can redeploy a Compute or
Cinder node. To do so, follow these steps:
Compute and Cinder nodes can be redeployed in both Multi-node and Multi-node HA
configurations. However, controllers cannot be redeployed without completely
redeploying the environment. To redeploy a Compute or Cinder node, follow these steps:
1. Remove the certificate for the node by executing the command
``puppet cert clean <hostname>`` on Fuel Master node.
2. Reboot the node over the network so it can be picked up by Cobbler.
3. Run the puppet agent on the target node using ``puppet agent --test``.
1. Remove the node from your environment in the Fuel UI
2. Deploy Changes
3. Wait for the host to become available as an unallocated node
4. Add the node to the environment with the same role as before
5. Deploy Changes
.. _Enable_Disable_Galera_autorebuild:
@ -129,7 +129,7 @@ Here you can enter resource-specific commands::
**crm(live)resource# start|restart|stop|cleanup <resource_name>**
These commands allow you torespectively start, stop, and restart resources.
These commands allow you to respectively start, stop, and restart resources.
**cleanup**
@ -246,8 +246,8 @@ when members list is incomplete.
How To Smoke Test HA
--------------------
To test if Quantum HA is working, simply shut down the node hosting, e.g.
Quantum agents (either gracefully or hardly). You should see agents start on
To test if Neutron HA is working, simply shut down the node hosting the
Neutron agents (either gracefully or forcefully). You should see agents start on
the other node::
@ -260,7 +260,7 @@ the other node::
p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp): Started fuel-controller-02
p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3): Started fuel-controller-02
and see corresponding Quantum interfaces on the new Quantum node::
and see corresponding Neutron interfaces on the new Neutron node::
# ip link show

View File

@ -28,34 +28,18 @@ configuration. However, Mirantis provides several pre-defined
architectures for your convenience.
The pre-defined architectures include:
* **Multi-node (non-HA)**
The Multi-node (non-HA) environment provides an easy way
to install an entire OpenStack cluster without requiring the degree
* **Multi-node**
The Multi-node environment provides an easy way
to install an entire OpenStack environment without requiring the degree
of increased hardware involved in ensuring high availability.
Mirantis recommends that you use this architecture for testing
purposes.
* **Multi-node with HA**
* **Multi-node HA**
The Multi-node with HA environment is intended for highly available
production deployments. Using Multi-node with HA you can deploy
additional services, such as Cinder, Neutron, and Ceph.
You can create the following multi-node environments:
* **Compact HA**
The Compact HA installation provides high availability and at
the same time saves on hardware. When you deploy Compact
HA, Fuel uses controller nodes to install Swift. Therefore,
the hardware requirements are reduced by eliminating the need
for additional storage servers while addressing the high
availability requirements.
* **Full HA**
The Full HA installation requires maximum hardware and provides
complete highly available OpenStack deployment. With Full HA, you
can install independent Ceph and Cinder nodes. Using the standalone
Ceph and Cinder servers, you can isolate their operations from
the controller nodes.
With Fuel, you can create your own cloud environment that includes
additional components.

View File

@ -7,8 +7,8 @@ Working hands on with Fuel for OpenStack will help you see how to move
certain features around from the standard installation.
The first step, however, is to commit to a deployment template. A balanced,
compact, and full-featured deployment is the Multi-node (HA) Compact
deployment, so thats what well be using through the rest of this guide.
compact, and full-featured deployment is the Multi-node with HA deployment, so
that is what we'll be using through the rest of this guide.
Production installations require a physical hardware infrastructure, but you
can easily deploy a small simulation cloud on a single physical machine

View File

@ -3,7 +3,7 @@ How installation works
While version 2.0 of Fuel provided the ability to simplify installation of
OpenStack, versions 2.1 and above include orchestration capabilities that
simplify deployment of OpenStack clusters. The deployment process using
simplify deployment of OpenStack environments. The deployment process using
Fuel CLI follows this general procedure:
#. Design your architecture.

View File

@ -228,7 +228,7 @@ volume space:
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per frame
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your cluster.
drives, but this becomes a single point of failure in your environment.
Object storage
++++++++++++++
@ -255,11 +255,11 @@ and capacity issues.
Calculating Network
--------------------
Perhaps the most complex part of designing an OpenStack cluster is the
Perhaps the most complex part of designing an OpenStack environment is the
networking.
An OpenStack cluster can involve multiple networks even beyond the Public,
Private, and Internal networks. Your cluster may involve tenant networks,
An OpenStack environment can involve multiple networks even beyond the Public,
Private, and Internal networks. Your environment may involve tenant networks,
storage networks, multiple tenant private networks, and so on. Many of these
will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.

View File

@ -44,7 +44,7 @@ important decisions:
you are deploying on physical hardware, two of them -- the public network
and the internal, or management network -- must be routable in your
networking infrastructure. The third network is used by the nodes for
inter-node communications. Also, if you intend for your cluster to be
inter-node communications. Also, if you intend for your environment to be
accessible from the Internet, you'll want the public network to be on the
proper network segment. For simplicity in this case, this example assumes
an Internet router at 192.168.0.1. Additionally, a set of private network

View File

@ -18,7 +18,7 @@ that will suit your needs. The Golden Rule, however, is to always plan
for growth. With the potential for growth in your design, you can move onto
your actual hardware needs.
Many factors contribute to selecting hardware for an OpenStack cluster --
Many factors contribute to selecting hardware for an OpenStack environment --
`contact Mirantis <http://www.mirantis.com/contact/>`_ for information on your
specific requirements -- but in general, you will want to consider the following
factors:
@ -187,7 +187,7 @@ volume space:
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per frame
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your cluster.
drives, but this becomes a single point of failure in your environment.
Object storage
++++++++++++++
@ -216,10 +216,11 @@ and capacity issues.
Networking
----------
Perhaps the most complex part of designing an OpenStack cluster is networking.
Perhaps the most complex part of designing an OpenStack environment is
networking.
An OpenStack cluster can involve multiple networks even beyond the required
Public, Private, and Internal networks. Your cluster may involve tenant
An OpenStack environment can involve multiple networks even beyond the required
Public, Private, and Internal networks. Your environment may involve tenant
networks, storage networks, multiple tenant private networks, and so on. Many
of these will be VLANs, and all of them will need to be planned out in advance
to avoid configuration issues.

View File

@ -38,14 +38,14 @@ In practice, Fuel works as follows:
be completed once per installation.
2. Next, discover your virtual or physical nodes and configure your
OpenStack cluster using the Fuel UI.
OpenStack environment using the Fuel UI.
3. Finally, deploy your OpenStack cluster on discovered nodes. Fuel will
3. Finally, deploy your OpenStack environment on discovered nodes. Fuel will
perform all deployment steps for you by applying pre-configured and
pre-integrated Puppet manifests via Astute orchestration engine.
Fuel is designed to enable you to maintain your cluster while giving you the
flexibility to adapt it to your own business needs and scale.
Fuel is designed to enable you to maintain your environment, while giving you
the flexibility to adapt it to your own business needs and scale.
.. image:: /_images/how-it-works_svg.jpg
:align: center

View File

@ -14,26 +14,22 @@ deployment configurations that you can use to quickly build your own
OpenStack cloud infrastructure. These are widely accepted configurations of
OpenStack, with its constituent components expertly tailored to serve
multipurpose cloud use cases. Fuel provides the ability to create the
following cluster types directly out of the box:
following environment types directly out of the box:
**Simple (non-HA)**: The Simple (non-HA) installation provides an easy way
to install an entire OpenStack cluster without requiring the expense of
**Multi-node**: The Multi-node installation provides an easy way
to install an entire OpenStack environment without requiring the expense of
extra hardware required to ensure high availability.
**Multi-node (HA)**: When you are ready to move to production, the Multi-node
(HA) configuration is a straightforward way to create an OpenStack
cluster that provides high availability. With three controller nodes and the
ability to individually specify services such as Cinder, Neutron (formerly
Quantum), Swift, and Ceph, Fuel provides the following variations of the
Multi-node (HA) configurations:
(HA) configuration is a straightforward way to create an OpenStack environment
that provides high availability. With three controller nodes and the
ability to individually assign roles such as Controller, Compute, Cinder,
and Ceph Object Storage Daemon (OSD), Fuel provides the ability to combine
roles to fit your sizing needs.
- **Compact HA**: When you choose this option, Swift will be installed on
your controllers, reducing your hardware requirements by eliminating the need
for additional Swift servers while still addressing high availability
requirements.
.. note::
- **Full HA**: This option enables you to install dedicated Cinder or Ceph
nodes, so that you can separate their operations from your controller nodes.
Controller and Compute roles cannot be combined on the same host.
In addition to these configurations, Fuel is designed to be completely
customizable. For assistance on deeper customization options based on the

View File

@ -46,8 +46,8 @@ and Astute orchestrator passes to the next node in deployment sequence.
.. index:: Deploying Using CLI
Deploying OpenStack Cluster Using CLI
=====================================
Deploying OpenStack Environment Using CLI
=========================================
.. contents :local:

View File

@ -14,10 +14,10 @@ Fuel Master node. The ISO image is used for CD media devices, iLO (HP) or
similar remote access systems. The IMG file is used for USB memory stick-based
installation.
Once installed, Fuel can be used to deploy and manage OpenStack clusters. It
will assign IP addresses to the nodes, perform PXE boot and initial
Once installed, Fuel can be used to deploy and manage OpenStack environments.
It will assign IP addresses to the nodes, perform PXE boot and initial
configuration, and provision OpenStack nodes according to their roles in
the cluster.
the environment.
.. _Install_Bare-Metal:
@ -52,7 +52,7 @@ are all available at no cost:
- `ISOtoUSB <http://www.isotousb.com/>`_.
After the installation is complete, you will need to make your bare-metal nodes
available for your OpenStack cluster. Attach them to the same L2 network
available for your OpenStack environment. Attach them to the same L2 network
(broadcast domain) as the Master node, and configure them to automatically
boot via network. The UI will discover them and make them available for
installing OpenStack.
@ -60,7 +60,7 @@ installing OpenStack.
VirtualBox
----------
.. OpenStack-3.2-ReferenceArchitecture:
.. OpenStack-3.2-ReferenceArchitecture::
If you would like to evaluate Fuel on VirtualBox, you can take advantage of the
included set of scripts that create and configure all the required VMs for a
@ -83,13 +83,13 @@ VirtualBox 4.2.16 (or later) is required, along with the extension pack.
Both can be downloaded from `<http://www.virtualbox.org/>`_.
8 GB+ of RAM
Will support 4 VMs for non-HA OpenStack installation (1 Master node,
Will support 4 VMs for Multi-node OpenStack installation (1 Master node,
1 Controller node, 1 Compute node, 1 Cinder node)
or
Will support 5 VMs for HA OpenStack installation (1 Master node, 3 Controller
+ Cinder nodes, 1 Compute node)
Will support 5 VMs for Multi-node with HA OpenStack installation (1 Master
node, 3 Controller + Cinder nodes, 1 Compute node)
.. _Install_Automatic:
@ -114,7 +114,7 @@ important files and folders:
After installation of the Master node, the script will create Slave nodes for
OpenStack and boot them via PXE from the Master node.
Finally, the script will give you the link to access the Web-based UI for the
Master node so you can start installation of an OpenStack cluster.
Master node so you can start installation of an OpenStack environment.
.. _Install_Manual:
@ -150,7 +150,7 @@ First, create the Master node VM.
* OS Type: Linux
* Version: Red Hat (64bit)
* RAM: 1024+ MB
* HDD: 20 GB (35gb for Red Hat OpenStack) with dynamic disk expansion
* HDD: 50 GB with dynamic disk expansion
3. Modify your VM settings:
@ -221,68 +221,16 @@ changes, go to Save & Quit.
Changing Network Parameters After Installation
----------------------------------------------
It is still possible to configure other interfaces, or add 802.1Q sub-interfaces
to the Master node to be able to access it from your network if required.
It is easy to do via standard network configuration scripts for CentOS. When the
installation is complete, you can modify
``/etc/sysconfig/network-scripts/ifcfg-eth\*`` scripts. For example, if *eth1*
interface is on the L2 network which is planned for PXE booting, and *eth2* is
the interface connected to your office network switch, *eth0* is not in use, then
settings can be the following:
/etc/sysconfig/network-scripts/ifcfg-eth0::
DEVICE=eth0
ONBOOT=no
/etc/sysconfig/network-scripts/ifcfg-eth1::
DEVICE=eth1
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
/etc/sysconfig/network-scripts/ifcfg-eth2::
DEVICE=eth2
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
IPADDR=172.18.0.5
NETMASK=255.255.255.0
It is possible to run "fuelmenu" from a root shell on Fuel Master node after
deployment to make minor changes to network interfaces, DNS, and gateway. The
PXE settings, however, cannot be changed after deployment as it will lead to
deployment failure.
.. warning::
Once IP settings are set at the boot time for Fuel Master node, they
**should not be changed during the whole lifecycle of Fuel.**
After modification of network configuration files, it is necessary to apply the
new configuration::
service network restart
Now you should be able to connect to Fuel UI from your network at
http://172.18.0.5:8000/
Name Resolution (DNS)
---------------------
During Master node installation, by default it is assumed that there is a
recursive DNS service on 10.20.0.1.
If you want Slave nodes to be able to resolve public names, you need to change
this default value to point to an actual DNS service.
To make the change, run the following commands on the Fuel Master node (replace
the IP with your actual DNS server)::
echo "nameserver XXX.XXX.XXX.XXX" > /etc/dnsmasq.upstream
cobbler sync
PXE Booting Settings
--------------------

View File

@ -9,7 +9,7 @@ Understanding and Configuring the Network
.. contents :local:
OpenStack clusters use several types of network managers: FlatDHCPManager,
OpenStack environments use several types of network managers: FlatDHCPManager,
VLANManager (Nova Network) and Neutron (formerly Quantum). All configurations
are supported. For more information about how the network managers work, you
can read these two resources:
@ -30,7 +30,7 @@ interface connects to that bridge as well.
The same L2 segment is used for all OpenStack projects, which means that there
is no L2 isolation between virtual hosts, even if they are owned by separate
projects. Additionally, there is only one flat IP pool defined for the entire
cluster. For this reason, it is called the *Flat* manager.
environment. For this reason, it is called the *Flat* manager.
The simplest case here is as shown on the following diagram. Here the *eth1*
interface is used to give network access to virtual machines, while *eth0*
@ -58,13 +58,14 @@ FlatDHCPManager (single-interface scheme)
.. image:: /_images/flatdhcpmanager-sh_scheme.jpg
:align: center
In order for FlatDHCPManager to work, all switch ports where Compute nodes are
connected must be configured as tagged (trunk) ports with the required VLANs
allowed (enabled, tagged). Virtual machines will communicate with each other
on L2 even if they are on different Compute nodes. If the virtual machine sends
IP packets to a different network, they will be routed on the host machine
according to the routing table. The default route will point to the gateway
specified on the networks tab in the UI as the gateway for the Public network.
In order for FlatDHCPManager to work, one designated switch port where each
Compute node is connected needs to be configured as a tagged (trunk) port
with the required VLANs allowed (enabled, tagged). Virtual machines will
communicate with each other on L2 even if they are on different Compute nodes.
If the virtual machine sends IP packets to a different network, they will be
routed on the host machine according to the routing table. The default route
will point to the gateway specified on the networks tab in the UI as the
gateway for the Public network.
VLANManager
------------
@ -89,9 +90,7 @@ ports must be configured as tagged (trunk) ports to allow this scheme to work.
Fuel Deployment Schema
======================
One of the physical interfaces on each host has to be selected to carry
VM-to-VM traffic (fixed network), and switch ports must be configured to
allow tagged traffic to pass through. OpenStack Computes will untag the IP
Via VLAN tagging on a physical interface, OpenStack Computes will untag the IP
packets and send them to the appropriate VMs. Beyond simplifying the
configuration of VLAN Manager, Fuel adds no limitations of its own in this
particular networking mode.
@ -287,7 +286,7 @@ For Ubuntu, the following command, executed on the host, can make this happen::
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE
To access VMs managed by OpenStack, you need to provide IP addresses from the
Floating IP range. When OpenStack cluster is deployed and VM is provisioned there,
Floating IP range. When the OpenStack environment is deployed and a VM is provisioned there,
you have to associate one of the Floating IP addresses from the pool with this VM,
whether in Horizon or via the Nova CLI. By default, OpenStack blocks all traffic to the VM.
To allow connectivity to the VM, you need to configure security groups.

View File

@ -23,7 +23,7 @@ As you know, OpenStack provides the following basic services:
`nova-compute` controls the life-cycle of these VMs.
**Networking:**
Because an OpenStack cluster (virtually) always includes
Because an OpenStack environment (virtually) always includes
multiple servers, the ability for them to communicate with each other and with
the outside world is crucial. Networking was originally handled by the
`nova-network` service, but it has given way to the newer Neutron (formerly
@ -47,8 +47,7 @@ As you know, OpenStack provides the following basic services:
These services can be combined in many different ways. Out of the box,
Fuel supports the following deployment configurations:
- :ref:`Non-HA Simple <Simple>`
- :ref:`HA Compact <HA_Compact>`
- :ref:`HA Full <HA_Full>`
- :ref:`RHOS Non-HA Simple <RHOS_Simple>`
- :ref:`RHOS HA Compact <RHOS_Compact>`
- :ref:`Multi-node <Multi-node>`
- :ref:`Multi-node with HA <Multi-node_HA>`
- :ref:`RHOS Multi-node <RHOS_Multi-node>`
- :ref:`RHOS Multi-node with HA <RHOS_Multi-node_HA>`

View File

@ -2,14 +2,14 @@
PageBreak
.. index:: Reference Architectures: Non-HA Simple, Non-HA Simple
.. index:: Reference Architectures: Multi-node
.. _Simple:
.. _Multi-node:
Simple (no High Availability) Deployment
Multi-node Deployment
========================================
In a production environment, you will never have a Simple non-HA
In a production environment, you will likely never have a Multi-node
deployment of OpenStack, partly because it forces you to make a number
of compromises as to the number and types of services that you can
deploy. It is, however, extremely useful if you just want to see how

View File

@ -2,12 +2,12 @@
PageBreak
.. index:: Reference Architectures: HA Compact, HA Compact
.. index:: Reference Architectures: Multi-node with HA
.. _HA_Compact:
.. _Multi-node_HA:
Multi-node (HA) Deployment (Compact)
====================================
Multi-node with HA Deployment
=============================
Production environments typically require high availability, which
involves several architectural requirements. Specifically, you will
@ -21,4 +21,4 @@ nodes:
:align: center
We'll take a closer look at the details of this deployment configuration in
:ref:`Close_look_Compact` section.
:ref:`Close_look_Multi-node_HA` section.

View File

@ -2,14 +2,14 @@
PageBreak
.. index:: Reference Architectures: HA Compact Details, HA Compact Details
.. index:: Reference Architectures: Multi-node with HA Details
.. _Close_look_Compact:
.. _Close_look_Multi-node_HA:
Details of HA Compact Deployment
================================
Details of Multi-node with HA Deployment
========================================
In this section, you'll learn more about the Multi-node (HA) Compact
In this section, you'll learn more about the Multi-node with HA
deployment configuration and how it achieves high availability. As you may
recall, this configuration looks something like this:

View File

@ -1,24 +0,0 @@
.. raw:: pdf
PageBreak
.. index:: Reference Architectures: HA Full, HA Full
.. _HA_Full:
Multi-node (HA) Deployment (Full)
=================================
For large production deployments, its more common to provide
dedicated hardware for storage. This architecture gives you the advantages of
high availability, but this clean separation makes your cluster more
maintainable by separating storage and controller functionality:
.. image:: /_images/deployment-ha-full_svg.jpg
:align: center
Where Fuel really shines is in the creation of more complex architectures, so
in this document you'll learn how to use Fuel to easily create a multi-node HA
OpenStack cluster. To reduce the amount of hardware you'll need to follow the
installation, however, the guide focuses on the Multi-node HA Compact
architecture.

View File

@ -13,7 +13,7 @@ Red Hat has partnered with Mirantis to offer an end-to-end supported
distribution of OpenStack powered by Fuel. Because Red Hat offers support
for a subset of all available open source packages, the reference architecture
has been slightly modified to meet Red Hat's support requirements to provide
a highly available OpenStack cluster.
a highly available OpenStack environment.
Below is the list of modifications:
@ -35,14 +35,14 @@ Below is the list of modifications:
fixed in a future release. As a result, Fuel for Red Hat OpenStack
Platform will only support Nova networking.
.. index:: Reference Architectures: RHOS Non-HA Simple, RHOS Non-HA Simple
.. index:: Reference Architectures: RHOS Multi-node
.. _RHOS_Simple:
.. _RHOS_Multi-node:
Simple (non-HA) Red Hat OpenStack Deployment
Multi-node Red Hat OpenStack Deployment
--------------------------------------------
In a production environment, you will never have a Simple non-HA
In a production environment, it is not likely you will ever have a Multi-node
deployment of OpenStack, partly because it forces you to make a number
of compromises as to the number and types of services that you can
deploy. It is, however, extremely useful if you just want to see how
@ -59,12 +59,12 @@ enable you to achieve this separation while still keeping your
hardware investment relatively modest is to house your storage on your
controller nodes.
.. index:: Reference Architectures: RHOS HA Compact, RHOS HA Compact
.. index:: Reference Architectures: RHOS Multi-node with HA
.. _RHOS_Compact:
.. _RHOS_Multi-node_HA:
Multi-node (HA) Red Hat OpenStack Deployment (Compact)
------------------------------------------------------
Multi-node with HA Red Hat OpenStack Deployment
-----------------------------------------------
Production environments typically require high availability, which
involves several architectural requirements. Specifically, you will

View File

@ -9,8 +9,8 @@ HA Logical Setup
.. contents :local:
An OpenStack HA cluster involves, at a minimum, three types of nodes:
controller nodes, compute nodes, and storage nodes.
An OpenStack Multi-node HA environment involves, at a minimum, three types of
nodes: controller nodes, compute nodes, and storage nodes.
Controller Nodes
----------------
@ -34,7 +34,7 @@ keystone-api, quantum-api, nova-scheduler, MySQL or RabbitMQ, the
request goes to the live controller node currently holding the External VIP,
and the connection gets terminated by HAProxy. When the next request
comes in, HAProxy handles it, and may send it to the original
controller or another in the cluster, depending on load conditions.
controller or another in the environment, depending on load conditions.
Each of the services housed on the controller nodes has its own
mechanism for achieving HA:
@ -53,7 +53,7 @@ Compute Nodes
-------------
OpenStack compute nodes are, in many ways, the foundation of your
cluster; they are the servers on which your users will create their
environment; they are the servers on which your users will create their
Virtual Machines (VMs) and host their applications. Compute nodes need
to talk to controller nodes and reach out to essential services such
as RabbitMQ and MySQL. They use the same approach that provides
@ -66,7 +66,7 @@ controller nodes using the VIP and going through HAProxy.
Storage Nodes
-------------
In this OpenStack cluster reference architecture, shared storage acts
In this OpenStack environment reference architecture, shared storage acts
as a backend for Glance, so that multiple Glance instances running on
controller nodes can store images and retrieve images from it. To
achieve this, you are going to deploy Swift. This enables you to use

View File

@ -38,7 +38,7 @@ well by raising the bar to 9 nodes:
Of course, you are free to choose how to deploy OpenStack based on the
amount of available hardware and on your goals (such as whether you
want a compute-oriented or storage-oriented cluster).
want a compute-oriented or storage-oriented environment).
For a typical OpenStack compute deployment, you can use this table as
high-level guidance to determine the number of controllers, compute,

View File

@ -31,7 +31,7 @@ relevant nodes and networks.
.. image:: /_images/080-networking-diagram_svg.jpg
:align: center
Lets take a closer look at each network and how its used within the cluster.
Let's take a closer look at each network and how it's used within the environment.
.. index:: Public Network
@ -50,7 +50,7 @@ To enable Internet access to VMs, the public network provides the address space
for the floating IPs assigned to individual VM instances by the project
administrator. Nova-network or Neutron (formerly Quantum) services can then
configure this address on the public network interface of the Network controller
node. Clusters based on nova-network use iptables to create a
node. Environments based on nova-network use iptables to create a
Destination NAT from this address to the fixed IP of the corresponding VM
instance through the appropriate virtual bridge interface on the Network
controller node.
@ -68,10 +68,10 @@ connect to OpenStack services APIs.
Internal (Management) Network
-----------------------------
The internal network connects all OpenStack nodes in the cluster. All components
of an OpenStack cluster communicate with each other using this network. This
network must be isolated from both the private and public networks for security
reasons.
The internal network connects all OpenStack nodes in the environment. All
components of an OpenStack environment communicate with each other using this
network. This network must be isolated from both the private and public
networks for security reasons.
The internal network can also be used for serving iSCSI protocol exchanges
between Compute and Storage nodes.
@ -89,4 +89,4 @@ network address spaces are part of the enterprise network address space. Fixed
IPs of virtual instances are directly accessible from the rest of Enterprise network.
The private network can be segmented into separate isolated VLANs, which are
managed by nova-network or Neutron (formerly Quantum) services.
managed by nova-network or Neutron (formerly Quantum) services.

View File

@ -13,11 +13,11 @@ common of them, called Provider Router with Private Networks. It provides each
tenant with one or more private networks, which can communicate with the outside
world via a Neutron router.
Neutron is not, however, required in order to run an OpenStack cluster. If you
don't need (or want) this added functionality, it's perfectly acceptable to
Neutron is not, however, required in order to run an OpenStack environment. If
you don't need (or want) this added functionality, it's perfectly acceptable to
continue using nova-network.
In order to deploy Neutron, you need to enable it in the Fuel configuration.
Fuel will then set up an additional node in the OpenStack installation to act
as an L3 router, or, depending on the configuration options you've chosen,
install Neutron on the controllers.
install Neutron on the controllers.

View File

@ -1,3 +1,3 @@
This section covers subjects that go beyond the standard OpenStack cluster,
This section covers subjects that go beyond the standard OpenStack environment,
from configuring OpenStack Networking for high-availability to adding your own
custom components to your cluster using Fuel.
custom components to your environment using Fuel.

View File

@ -7,66 +7,128 @@
Advanced Configurations
==========================================
This section covers subjects that go beyond the standard OpenStack cluster,
This section covers subjects that go beyond the standard OpenStack environment,
from configuring OpenStack Networking for high-availability to adding your own
custom components to your cluster using Fuel.
custom components to your environment using Fuel.
Adding And Configuring Custom Services
--------------------------------------
Fuel is designed to help you easily install a standard OpenStack cluster, but what do you do if your cluster is not standard? What if you need services or components that are not included with the standard Fuel distribution? This document gives you all of the information you need to add custom services and packages to a Fuel-deployed cluster.
Fuel is designed to help you easily install a standard OpenStack environment,
but what do you do if your environment is not standard? What if you need
services or components that are not included with the standard Fuel
distribution? This document gives you all of the information you need to add
custom services and packages to a Fuel-deployed environment.
Fuel usage scenarios and how they affect installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Two basic Fuel usage scenarios exist:
* In the first scenario, a deployment engineer uses the Fuel ISO image to create a master node, make necessary changes to configuration files, and deploy OpenStack. In this scenario, each node gets a clean OpenStack installation.
* In the first scenario, a deployment engineer uses the Fuel ISO image to
create a master node, make necessary changes to configuration files, and deploy
OpenStack. In this scenario, each node gets a clean OpenStack installation.
* In the second scenario, the master node and other nodes in the cluster have already been installed, and the deployment engineer has to deploy OpenStack to an existing configuration.
* In the second scenario, the Fuel Master node and other nodes in the
environment have already been installed, and the deployment engineer has to
deploy OpenStack to an existing configuration.
For this discussion, the first scenario requires that any customizations needed must be applied during the deployment and the second scenario already has customizations applied.
For this discussion, the first scenario requires that any customizations needed
must be applied during the deployment and the second scenario already has
customizations applied.
In most cases, best practices dictate that you deploy and test OpenStack first, later adding any custom services. Fuel works using puppet manifests, so the simplest way to install a new service is to edit the current site.pp file on the Puppet Master to add any additional deployment paths for the target nodes. There are, however, certain components that must be installed prior to the installation of OpenStack (i.e., hardware drivers, management software, etc...). In cases like these, Puppet can only be used to perform these installations using a separate, custom site.pp file that prepares the target system(s) for OpenStack installation. An advantage to this method, however, is that it helps isolate version mismatches and the various OpenStack dependencies.
In most cases, best practices dictate that you deploy and test OpenStack first,
later adding any custom services. Fuel works using puppet manifests, so the
simplest way to install a new service is to edit the current site.pp file on
the Puppet Master to add any additional deployment paths for the target nodes.
There are, however, certain components that must be installed prior to the
installation of OpenStack (e.g., hardware drivers, management software, and so on).
In cases like these, Puppet can only be used to perform these installations
using a separate, custom site.pp file that prepares the target system(s) for
OpenStack installation. An advantage to this method, however, is that it helps
isolate version mismatches and the various OpenStack dependencies.
If a pre-deployment site.pp approach is not an option, you can inject a custom component installation into the existing Fuel manifests. If you elect to go this route, you'll need to be aware of software source compatibility issues, as well as installation stages, component versions, incompatible dependencies, and declared resource names.
If a pre-deployment site.pp approach is not an option, you can inject a custom
component installation into the existing Fuel manifests. If you elect to go
this route, you'll need to be aware of software source compatibility issues, as
well as installation stages, component versions, incompatible dependencies, and
declared resource names.
In short, simple custom component installation may be accomplished by editing the site.pp file, but more complex components should be added as new Fuel components.
In short, simple custom component installation may be accomplished by editing
the site.pp file, but more complex components should be added as new Fuel
components.
In the next section we take a closer look at what you need to know.
Installing the new service along with Fuel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When it comes to installing your new service or component alongside Fuel, you have several options. How you go about it depends on where in the process the component needs to be available. Let's look at each step and how it can impact your installation.
When it comes to installing your new service or component alongside Fuel, you
have several options. How you go about it depends on where in the process the
component needs to be available. Let's look at each step and how it can impact
your installation.
**Boot the master node**
In most cases, you will be installing the master node from the Fuel ISO. This is a semi-automated step, and doesn't allow for any custom components. If for some reason you need to install a node at this level, you will need to use the manual Fuel installation procedure.
In most cases, you will be installing the master node from the Fuel ISO. This
is a semi-automated step, and doesn't allow for any custom components. If for
some reason you need to install a node at this level, you will need to use the
manual Fuel installation procedure.
**Cobbler configuration**
If your customizations need to take place before the install of the operating system, or even as part of the operating system install, this is where you will add them to the configuration process. This is also where you would make customizations to other services. At this stage, you are making changes to the operating system kickstart/pre-seed files, and may include any custom software source and components required to install the operating system for a node. Anything that needs to be installed before OpenStack should be configured during this step.
If your customizations need to take place before the install of the operating
system, or even as part of the operating system install, this is where you will
add them to the configuration process. This is also where you would make
customizations to other services. At this stage, you are making changes to the
operating system kickstart/pre-seed files, and may include any custom software
source and components required to install the operating system for a node.
Anything that needs to be installed before OpenStack should be configured
during this step.
**OpenStack installation**
It is during this stage that you perform any Puppet, Astute, or mCollective configuration. In most cases, this means customizing the Puppet site.pp file to add any custom components during the actual OpenStack installation.
It is during this stage that you perform any Puppet, Astute, or mCollective
configuration. In most cases, this means customizing the Puppet site.pp file
to add any custom components during the actual OpenStack installation.
This step actually includes several different stages. (In fact, Puppet STDLib defines several additional default stages that fuel does not use.) These stages include:
This step actually includes several different stages. (In fact, Puppet STDLib
defines several additional default stages that Fuel does not use.) These stages
include:
0. ``Puppetlabs-repo``. mCollective uses this stage to add the Puppetlabs repositories during operating system and Puppet deployment.
0. ``Puppetlabs-repo``. mCollective uses this stage to add the
Puppetlabs repositories during operating system and Puppet deployment.
1. ``Openstack-custom-repo``. Additional repositories required by OpenStack are configured at this stage. Additionally, to avoid compatibility issues, the Puppetlabs repositories are switched off at this stage. As a general rule, it is a good idea to turn off any unnecessary software repositories defined for operating system installation.
1. ``Openstack-custom-repo``. Additional repositories required by OpenStack
are configured at this stage. Additionally, to avoid compatibility issues,
the Puppetlabs repositories are switched off at this stage. As a general
rule, it is a good idea to turn off any unnecessary software repositories
defined for operating system installation.
2. ``FUEL``. During this stage, Fuel performs any actions defined for the current operating system.
2. ``FUEL``. During this stage, Fuel performs any actions defined for the
current operating system.
3. ``Netconfig``. During this stage, Fuel performs all network configuration actions. This means that you should include any custom components that are related to the network in this stage.
3. ``Netconfig``. During this stage, Fuel performs all network configuration
actions. This means that you should include any custom components that
are related to the network in this stage.
4. ``Main``. The actual OpenStack installation process happens during this stage. Install any remaining non-network-related components during or after this stage.
4. ``Main``. The actual OpenStack installation process happens during this
stage. Install any remaining non-network-related components during or
after this stage.
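For instance, a minimal sketch of hooking a custom class into one of these
stages might look like the following (the class name here is hypothetical, and
the exact stage name should match the one declared in the Fuel manifests)::

    # Assign a hypothetical network-related class to the Netconfig stage so
    # that it runs together with Fuel's own network configuration actions.
    class { 'my_custom_network_tuning':
      stage => 'netconfig',
    }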
**Post-OpenStack install**
At this point, OpenStack is installed. You may add any components you like at this point. We suggest that you take care at this point so as not to break OpenStack. This is a good place to make an image of the nodes to have a roll-back in case of any catestrophic errors that render OpenStack or any other components inoperable. If you are preparing to deploy a large-scale environment, you may want to perform a small-scale test to familiarize yourself with the entire process and make yourself aware of any potential gotchas that are specific to your infrastructure. You should perform this small-scale test using the same hardware that the large-scale deployment will use and not VirtualBox. VirtualBox does not offer the ability to test any custom hardware driver installations your physical hardware may require.
At this point, OpenStack is installed. You may add any components you like;
we suggest that you take care not to break OpenStack. This is a good place to
make an image of the nodes so you have a roll-back in case of any catastrophic
errors that render OpenStack or any other
components inoperable. If you are preparing to deploy a large-scale
environment, you may want to perform a small-scale test to familiarize yourself
with the entire process and make yourself aware of any potential gotchas that
are specific to your infrastructure. You should perform this small-scale test
using the same hardware that the large-scale deployment will use and not
VirtualBox. VirtualBox does not offer the ability to test any custom hardware
driver installations your physical hardware may require.
Defining a new component
^^^^^^^^^^^^^^^^^^^^^^^^
@ -75,14 +137,23 @@ In general, we recommend you follow these steps to define a new component:
#. **Custom stages. Optional.**
Declare a custom stage or stages to help Puppet understand the required installation sequence. Stages are special markers indicating the sequence of actions. Best practice is to use the input parameter Before for every stage, to help define the correct sequence. The default built-in stage is "main". Every Puppet action is automatically assigned to the main stage if no stage is explicitly specified for the action.
Declare a custom stage or stages to help Puppet understand the required
installation sequence. Stages are special markers indicating the sequence of
actions. Best practice is to use the input parameter Before for every stage,
to help define the correct sequence. The default built-in stage is "main".
Every Puppet action is automatically assigned to the main stage if no stage
is explicitly specified for the action.
Note that since Fuel installs almost all of OpenStack during the main stage, custom stages may not help, so future plans include breaking the OpenStack installation into several sub-stages.
Note that since Fuel installs almost all of OpenStack during the main stage,
custom stages may not help, so future plans include breaking the OpenStack
installation into several sub-stages.
Don't forget to take into account other existing stages; training several parallel sequences of stages increases the chances that Puppet will order them in correctly if you do not explicitly specify the order.
Don't forget to take into account other existing stages; creating several
parallel sequences of stages increases the chances that Puppet will order
them incorrectly if you do not explicitly specify the order.
*Example*::
stage {'Custom stage 1':
before => Stage['Custom stage 2'],
}
@ -90,19 +161,30 @@ In general, we recommend you follow these steps to define a new component:
before => Stage['main'],
}
Note that there are several limitations to stages, and they should be used with caution and only with the simplest of classes. You can find more information regarding stages and limitations here: http://docs.puppetlabs.com/puppet/2.7/reference/lang_run_stages.html.
Note that there are several limitations to stages, and they should be used
with caution and only with the simplest of classes. You can find more
information regarding stages and limitations here:
http://docs.puppetlabs.com/puppet/2.7/reference/lang_run_stages.html.
#. **Custom repositories. Optional.**
If the custom component requires a custom software source, you may declare a new repository and add it during one of the early stages of the installation.
If the custom component requires a custom software source, you may declare
a new repository and add it during one of the early stages of the
installation.
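A minimal sketch, assuming a hypothetical repository name and URL on a
CentOS-based node (the stage name should match the one declared in the Fuel
manifests)::

    # Declare the repository in a small class using the built-in yumrepo type...
    class custom_component_repo {
      yumrepo { 'custom-component':
        descr    => 'Custom component packages',
        baseurl  => 'http://repo.example.com/custom-component/centos/',
        enabled  => '1',
        gpgcheck => '0',
      }
    }

    # ...and apply it during an early stage, for example in site.pp:
    class { 'custom_component_repo':
      stage => 'openstack-custom-repo',
    }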
#. **Common variable definition**
It is a good idea to have all common variables defined in a single place. Unlike variables in many other languages, Puppet variables are actually constants, and may be assigned only once inside a given scope.
It is a good idea to have all common variables defined in a single place.
Unlike variables in many other languages, Puppet variables are actually
constants, and may be assigned only once inside a given scope.
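A minimal sketch with hypothetical variable names::

    # Defined once, in a single place, and reused by the component's classes.
    $custom_service_name = 'my-custom-service'
    $custom_service_port = '9999'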
#. **OS and condition-dependent variable definition**
We suggest that you assign all common operating system or condition-dependent variables to a single location, preferably near the other common variables. Also, be sure to always use a ``default`` section when defining conditional operators or you could experience configuration issues.
We suggest that you assign all common operating system or
condition-dependent variables to a single location, preferably near the
other common variables. Also, be sure to always use a ``default`` section
when defining conditional operators or you could experience configuration
issues.
*Example*::
@ -125,7 +207,12 @@ In general, we recommend you follow these steps to define a new component:
#. **Define installation procedures for independent custom components as classes**
You can think of public classes as singleton collections, or as a named block of code with its own namespace. Each class should be defined only once, but every class may be used with different input variable sets. The best practice is to define a separate class for every component, define required sub-classes for sub-components, and include class-dependent required resources within the actual class/subclass.
You can think of public classes as singleton collections, or as a named
block of code with its own namespace. Each class should be defined only
once, but every class may be used with different input variable sets. A
best practice is to define a separate class for every component, define
required sub-classes for sub-components, and include class-dependent
required resources within the actual class/subclass.
*Example*::
@ -179,7 +266,14 @@ In general, we recommend you follow these steps to define a new component:
#. **Target nodes**
Every component should be explicitly assigned to a particular target node or nodes. To do that, declare the node or nodes within site.pp. When Puppet runs the manifest for each node, it compares each node definition with the name of the current hostname and applies only to classes assigned to the current node. Node definitions may include regular expressions. For example, you can apply the class 'add custom service' to all controller nodes with hostnames fuel-controller-00 to fuel-controller-xxx, where xxx = any integer value using the following definition:
Every component should be explicitly assigned to a particular target node
or nodes. To do that, declare the node or nodes within site.pp. When Puppet
runs the manifest for each node, it compares each node definition with the
name of the current hostname and applies only to classes assigned to the
current node. Node definitions may include regular expressions. For
example, you can apply the class 'add custom service' to all controller
nodes with hostnames fuel-controller-00 to fuel-controller-xxx, where
xxx represents any integer value using the following definition:
*Example*::
@ -199,7 +293,9 @@ Fuel API Reference
**add_haproxy_service**
Location: Top level
As the name suggests, this function enables you to create a new HAProxy service. The service is defined in the ``/etc/haproxy/haproxy.cfg`` file, and generally looks something like this::
As the name suggests, this function enables you to create a new HAProxy
service. The service is defined in the ``/etc/haproxy/haproxy.cfg`` file, and
generally looks something like this::
listen keystone-2
bind 10.0.74.253:35357
@ -248,86 +344,141 @@ Let's look at how this command works.
``<'Service name'>``
The name of the new HAProxy listener section. In our example it was ``keystone-2``. If you want to include an IP address or port in the listener name, you have the option to use a name such as::
The service name is specified in the name of the new HAProxy listener. In our
example it was ``keystone-2``. If you want to include an IP address or port in
the listener name, you have the option to use a name such as::
'stats 0.0.0.0:9000 #Listen on all IP's on port 9000'
``order``
This parameter determines the order of the file fragments. It is optional, but we strongly recommend setting it manually. Fuel already has several different order values from 1 to 100 hardcoded for HAProxy configuration. If your HAProxy configuration fragments appear in the wrong places in ``/etc/haproxy/haproxy.cfg`` this is likely due to an incorrect order value. It is acceptable to set order values greater than 100 in order to place your custom configuration block at the end of ``haproxy.cfg``.
This parameter determines the order of the file fragments. It is optional, but
we strongly recommend setting it manually. Fuel already has several different
order values from 1 to 100 hardcoded for HAProxy configuration. If your
HAProxy configuration fragments appear in the wrong places in
``/etc/haproxy/haproxy.cfg`` this is likely due to an incorrect order value.
It is acceptable to set order values greater than 100 in order to place your
custom configuration block at the end of ``haproxy.cfg``.
Puppet assembles configuration files from fragments. First it creates several configuration fragments and temporarily stores all of them as separate files. Every fragment has a name such as ``${order}-${fragment_name}``, so the order determines the number of the current fragment in the fragment sequence. After all the fragments are created, Puppet reads the fragment names and sorts them in ascending order, concatenating all the fragments in that order. In other words, a fragment with a smaller order value always goes before all fragments with a greater order value.
Puppet assembles configuration files from fragments. First it creates several
configuration fragments and temporarily stores all of them as separate files.
Every fragment has a name such as ``${order}-${fragment_name}``, so the order
determines the number of the current fragment in the fragment sequence. After
all the fragments are created, Puppet reads the fragment names and sorts them
in ascending order, concatenating all the fragments in that order. In other
words, a fragment with a smaller order value always goes before all fragments
with a greater order value.
The ``keystone-2`` fragment from the example above has ``order = 30`` so it's placed after the ``keystone-1`` section (``order = 20``) and the ``nova-api-1`` section (order = 40).
The ``keystone-2`` fragment from the example above has ``order = 30``, so it
gets placed after the ``keystone-1`` section (``order = 20``) and before the
``nova-api-1`` section (``order = 40``).
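Following this naming scheme, the fragments from the example would be stored
under names that sort roughly as follows (illustrative names)::

    20-keystone-1
    30-keystone-2
    40-nova-api-1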
``balancers``
Balancers (or **Backends** in HAProxy terms) are a hash of ``{ "$::hostname" => $::ipaddress }`` values.
The default is ``{ "<current hostname>" => <current ipaddress> }``, but that value is set for compatability only, and may not work correctly in HA mode. Instead, the default for HA mode is to explicitly set the Balancers as ::
Balancers (or **Backends** in HAProxy terms) are a hash of
``{ "$::hostname" => $::ipaddress }`` values.
The default is ``{ "<current hostname>" => <current ipaddress> }``, but that
value is set for compatibility only, and may not work correctly in HA mode.
Instead, the default for HA mode is to explicitly set the Balancers as ::
Haproxy_service {
balancers => $controller_internal_addresses
}
where ``$controller_internal_addresses`` represents a hash of all the controllers with a corresponding internal IP address; this value is set in ``site.pp``.
where ``$controller_internal_addresses`` represents a hash of all the
controllers with a corresponding internal IP address; this value is set in
``site.pp``.
The ``balancers`` parameter is a list of HAProxy listener balance members (hostnames) with corresponding IP addresses. The following strings from the ``keystone-2`` listener example represent balancers::
The ``balancers`` parameter is a list of HAProxy listener balance members
(hostnames) with corresponding IP addresses. The following strings from the
``keystone-2`` listener example represent balancers::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
Every key pair in the ``balancers`` hash adds a new string to the list of balancers defined in the listener section. Different options may be set for every string.
Every key pair in the ``balancers`` hash adds a new string to the list of
balancers defined in the listener section. Different options may be set for
every string.
``virtual_ips``
This parameter represents an array of IP addresses (or **Frontends** in HAProxy terms) of the current listener. Every IP address in this array adds a new string to the bind section of the current listeners. The following strings from the ``keystone-2`` listener example represent virtual IPs::
This parameter represents an array of IP addresses (or **Frontends** in
HAProxy terms) of the current listener. Every IP address in this array adds
a new string to the bind section of the current listeners. The following
strings from the ``keystone-2`` listener example represent virtual IPs::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``port``
This parameters specifies the frontend port for the listeners. Currently you must set the same port frontends.
The following strings from the ``keystone-2`` listener example represent the frontend port, where the port is 35357::
This parameter specifies the frontend port for the listeners. Currently you
must use the same port for all frontends. The following strings from the ``keystone-2``
listener example represent the frontend port, where the port is 35357::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``haproxy_config_options``
This parameter represents a hash of key pairs of HAProxy listener options in the form ``{ 'option name' => 'option value' }``. Every key pair from this hash adds a new string to the listener options.
This parameter represents a hash of key pairs of HAProxy listener options in
the form ``{ 'option name' => 'option value' }``. Each key pair from this
hash adds a new line to the listener options.
**NOTE** Every HAProxy option may require a different input value type, such as strings or a list of multiple options per single string.
**NOTE** Every HAProxy option may require a different input value type, such
as strings or a list of multiple options per single string.
The '`keystone-2`` listener example has the ``{ 'option' => ['httplog'], 'balance' => 'roundrobin' }`` option array and this array is represented as the following in the resulting /etc/haproxy/haproxy.cfg:
The ``keystone-2`` listener example has the
``{ 'option' => ['httplog'], 'balance' => 'roundrobin' }`` option hash, and
this hash is rendered as the following lines in the resulting
``/etc/haproxy/haproxy.cfg``::
balance roundrobin
option httplog
``balancer_port``
This parameter represents the balancer (backend) port. By default, the balancer_port is the same as the frontend ``port``. The following strings from the ``keystone-2`` listener example represent ``balancer_port``, where port is ``35357``::
This parameter represents the balancer (backend) port. By default,
``balancer_port`` is the same as the frontend ``port``. The following lines
from the ``keystone-2`` listener example represent ``balancer_port``, where the
port is ``35357``::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
``balancermember_options``
This is a string of options added to each balancer (backend) member. The ``keystone-2`` listener example has the single ``check`` option::
This is a string of options added to each balancer (backend) member. The
``keystone-2`` listener example has the single ``check`` option::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
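As a sketch only (``inter`` is a standard HAProxy health-check interval
option, not something the ``keystone-2`` example actually uses), passing a
longer option string ::

    balancermember_options => 'check inter 2000'

would render each backend line with the extra option appended, for example
``server fuel-controller-01.example.com 10.0.0.101:35357 check inter 2000``.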
``mode``
This optional parameter represents the HAProxy listener mode. The default value is ``tcp``, but Fuel writes ``mode http`` to the defaults section of ``/etc/haproxy/haproxy.cfg``. You can set the same option via ``haproxy_config_options``. A separate mode parameter is required to set some modes by default on every new listener addition. The ``keystone-2`` listener example has no ``mode`` option and so it works in the default Fuel-configured HTTP mode.
This optional parameter represents the HAProxy listener mode. The default
value is ``tcp``, but Fuel writes ``mode http`` to the defaults section of
``/etc/haproxy/haproxy.cfg``. You can set the same option via
``haproxy_config_options``. A separate mode parameter is required to set some
modes by default on every new listener addition. The ``keystone-2`` listener
example has no ``mode`` option and so it works in the default Fuel-configured
HTTP mode.
``define_cookies``
This optional boolean parameter is a Fuel-only feature. The default is ``false``, but if set to ``true``, Fuel directly adds ``cookie ${hostname}`` to every balance member (backend).
This optional boolean parameter is a Fuel-only feature. The default is
``false``, but if set to ``true``, Fuel directly adds ``cookie ${hostname}``
to every balance member (backend).
The ``keystone-2`` listener example has no ``define_cookies`` option. Typically, frontend cookies are added with ``haproxy_config_options`` and backend cookies with ``balancermember_options``.
The ``keystone-2`` listener example has no ``define_cookies`` option.
Typically, frontend cookies are added with ``haproxy_config_options`` and
backend cookies with ``balancermember_options``.
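If ``define_cookies`` were set to ``true`` (a hypothetical change to the
example), the backend lines would be expected to carry the hostname cookie,
roughly like ::

    server fuel-controller-01.example.com 10.0.0.101:35357 check cookie fuel-controller-01
    server fuel-controller-02.example.com 10.0.0.102:35357 check cookie fuel-controller-02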
``collect_exported``
This optional boolean parameter has a default value of ``false``. True means 'collect exported @@balancermember resources' (when every balancermember node exports itself), while false means 'rely on the existing declared balancermember resources' (for when you know the full set of balancermembers in advance and use ``haproxy::balancermember`` with array arguments, which allows you to deploy everything in one run).
This optional boolean parameter has a default value of ``false``. True means
'collect exported @@balancermember resources' (when every balancermember node
exports itself), while false means 'rely on the existing declared
balancermember resources' (for when you know the full set of balancermembers
in advance and use ``haproxy::balancermember`` with array arguments, which
allows you to deploy everything in one run).
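To tie the parameters together, here is a sketch of what a complete listener
declaration could look like when written out explicitly; it is assembled from
the parameters documented above and is not copied verbatim from Fuel's
manifests::

    haproxy_service { 'keystone-2':
      order                  => 30,
      port                   => 35357,
      virtual_ips            => ['10.0.74.253', '10.0.0.110'],
      balancers              => $controller_internal_addresses,
      balancermember_options => 'check',
      haproxy_config_options => { 'option' => ['httplog'], 'balance' => 'roundrobin' },
    }

With these values the listener produces the ``bind``, ``balance``,
``option httplog``, and ``server`` lines shown in the fragments above.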

View File

@ -58,7 +58,7 @@ Running Post-Deployment Checks
------------------------------
Now, let's take a closer look at what should be done to execute the tests and
to understand if something is wrong with your OpenStack cluster.
to understand if something is wrong with your OpenStack environment.
.. image:: /_images/healthcheck_tab.jpg
:align: center
@ -101,7 +101,7 @@ health of the deployment. To do so, start by checking the following:
* Under the `Health Check` tab
* In the OpenStack Dashboard
* In the test execution logs (/var/log/ostf-stdout.log)
* In the test execution logs (in Environment Logs)
* In the individual OpenStack components' logs
Certainly there are many different conditions that can lead to system
@ -126,10 +126,9 @@ If any service is off (has “XXX” status), you can restart it using this comm
If all services are on, but you're still experiencing some issues, you can
gather information from OpenStack Dashboard (exceeded number of instances,
fixed IPs, etc). You may also read the logs generated by tests which are
stored in ``/var/log/ostf-stdout.log``, or go to ``/var/log/<component>`` and
check if any operation is in ERROR status. If it looks like the last item, you
may have underprovisioned your environment and should check your math and your
project requirements.
stored in Logs -> Fuel Master -> Health Check and check if any operation is
in ERROR status. If it looks like the last item, you may have underprovisioned
your environment and should check your math and your project requirements.
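For example (exact service names differ between distributions and releases,
so treat this only as an illustration), you can verify and restart an
individual OpenStack service from a controller node::

    # show Nova services and their state; XXX marks a service that is down
    nova-manage service list
    # restart the affected service, e.g. the Nova scheduler on CentOS
    service openstack-nova-scheduler restart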
Sanity Tests Description
------------------------

View File

@ -8,12 +8,12 @@ Network Issues
==============
Fuel has the built-in capability to run a network check before or after
OpenStack deployment. Currently, it can check connectivity between nodes within
configured VLANs on configured server interfaces. The image below shows sample
result of such check. By using this simple table it is easy to determine which
interfaces do not receive certain VLAN IDs. Usually, it means that a switch or
multiple switches are not configured correctly and do not allow certain
tagged traffic to pass through.
OpenStack deployment. The network check tests connectivity between nodes over
the configured VLANs on the configured host interfaces. It also checks for
unexpected DHCP servers to make sure an outside DHCP server cannot interfere
with the deployment. The image below shows a sample result of the check. If
there are errors, either the interface configuration is wrong or VLAN tagging
is disabled on the corresponding switch port.
.. image:: /_images/net_verify_failure.jpg
:align: center