Structure the documentation better.

* Add a Basic Deployment chapter for beginners
* Add an Advanced Deployment chapter for advanced users
* Don't change any commands or the existing flow

Change-Id: Ib85420423dfdbd040db339699ea03a16381d02f4
Jaromir Coufal 2015-04-27 11:41:58 +02:00
parent 95a7a4d1ca
commit 9be32d0dcd
12 changed files with 308 additions and 269 deletions

View File

@@ -0,0 +1,20 @@
Advanced Deployment
===================
This chapter covers advanced management of the various RDO-Manager areas.
.. toctree::
Ready-States (BIOS, RAID) <ready_states>
Automated Health Check <automated_health_check>
.. <MOVE THESE UNDER TOCTREE WHEN READY, KEEP LOGICAL WORKFLOW ORDER>
Images <images>
Nodes <nodes>
Flavors <flavors>
Roles <roles>
Deployment <deployment>
Connection to Overcloud <overcloud>
Updates <updates>

View File

@@ -1,5 +1,5 @@
AHC (Automated Health Check) Workflow
=====================================
Automated Health Check (AHC)
============================
Additional setup steps to take advantage of the AHC features.

View File

@@ -1,10 +1,9 @@
Ready-State (BIOS, RAID)
========================
Dell DRAC Setup
===============
Additional setup steps available for Dell hardware with a DRAC.
Ready-state configuration
-------------------------
---------------
Configure BIOS based on the deployment profile::

View File

@@ -0,0 +1,267 @@
Basic Deployment
================
With these few steps you will be able to deploy RDO to your environment
using our defaults.
Prepare Your Environment
------------------------
#. Make sure your environment is ready and the undercloud is running:
* :doc:`../environments/environments`
* :doc:`../installation/installing`
#. Log into your undercloud (instack) virtual machine and switch to the non-root ``stack`` user::
ssh root@<rdo-manager-machine>
su - stack
#. To use the CLI commands easily, source the required environment
variables::
source stackrc
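If you want to confirm that the credentials were loaded, a quick check is to
look for the OpenStack authentication variables in the environment (the exact
variable names shown here are typical and may differ)::
env | grep OS_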
Get Images
----------
Images must be built prior to deployment. The discovery ramdisk,
deployment ramdisk, and openstack-full image can all be built using
instack-undercloud.
It is recommended to build images directly on the installed undercloud, since
all the dependencies are already present.
The following steps can be used to build images. They should be run as the same
non-root user that was used to install the undercloud.
#. Choose the image operating system:
The built images will automatically have the same base OS as the
running undercloud. To choose a different OS, use one of the following
commands (make sure you have your OS-specific content visible):
.. admonition:: CentOS
:class: centos
::
export NODE_DIST=centos7
.. admonition:: RHEL
:class: rhel
::
export NODE_DIST=rhel7
#. Build the required images:
.. only:: internal
.. admonition:: RHEL
:class: rhel
Download the RHEL 7.1 cloud image or copy it over from a different location,
and define the needed environment variable for RHEL 7.1 prior to running
``instack-build-images``::
curl -O http://download.devel.redhat.com/brewroot/packages/rhel-guest-image/7.1/20150203.1/images/rhel-guest-image-7.1-20150203.1.x86_64.qcow2
export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150203.1.x86_64.qcow2
# Enable rhos-release
export RUN_RHOS_RELEASE=1
.. only:: external
.. admonition:: RHEL
:class: rhel
Download the RHEL 7.1 cloud image or copy it over from a different location,
for example:
https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.1/x86_64/product-downloads,
and define the needed environment variables for RHEL 7.1 prior to running
``instack-build-images``::
export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150224.0.x86_64.qcow2
.. admonition:: RHEL Portal Registration
:class: portal
To register the image builds to the Red Hat Portal, define the following variables::
export REG_METHOD=portal
export REG_USER="[your username]"
export REG_PASSWORD="[your password]"
# Find this with `sudo subscription-manager list --available`
export REG_POOL_ID="[pool id]"
export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"
.. admonition:: RHEL Satellite Registration
:class: satellite
To register the image builds to a Satellite, define the following
variables. Only activation keys are supported when registering to
Satellite; username/password is not supported for security reasons. The
activation key must enable the repos shown::
export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
export REG_ACTIVATION_KEY="[activation key]"
::
instack-build-images
.. note::
This script will build **overcloud-full** images (\*.qcow2, \*.initrd,
\*.vmlinuz), **deploy-ramdisk-ironic** images (\*.initramfs, \*.kernel),
**discovery-ramdisk** images (\*.initramfs, \*.kernel) and **testing**
fedora-user.qcow2 (which is always Fedora based).
#. Load the images into Glance::
instack-prepare-for-overcloud
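To confirm that the upload succeeded, you can list the images known to Glance
(this assumes the stackrc credentials are still sourced); the overcloud and
ramdisk images should appear in the output::
glance image-list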
Register Nodes
--------------
Register nodes for your deployment with Ironic::
instack-ironic-deployment --nodes-json instackenv.json --register-nodes
.. note::
It is not recommended to delete nodes and/or rerun this command after
you have proceeded to the next steps. In particular, if you start discovery
and then re-register nodes, you will not be able to retry discovery until
the previous run times out (1 hour by default). If you are having issues
with nodes after registration, please follow
:ref:`node_registration_problems`.
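Once registration completes, the nodes should be visible to Ironic (assuming
the standard Ironic client is available on the undercloud)::
ironic node-list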
Introspect Nodes
----------------
Introspect hardware attributes of nodes::
instack-ironic-deployment --discover-nodes
.. note:: **Introspection has to finish without errors.**
The process can take up to 5 minutes for virtual machines and up to 15 minutes
for baremetal nodes. If the process takes longer, see
:ref:`introspection_problems`.
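If you want to inspect what introspection recorded for a particular node, one
way (substituting a UUID taken from ``ironic node-list``; the placeholder below
is illustrative) is::
ironic node-show <node-uuid>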
Create Flavors
--------------
Create the necessary flavors::
instack-ironic-deployment --setup-flavors
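To verify that the flavors were created (assuming the standard Nova client is
available), list them and look for the deployment flavors::
nova flavor-list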
Deploy the Overcloud
--------------------
.. admonition:: Baremetal
:class: baremetal
Copy the sample overcloudrc file, edit it to reflect your environment, and then source it::
cp /usr/share/instack-undercloud/deploy-baremetal-overcloudrc ~/deploy-overcloudrc
source deploy-overcloudrc
Deploy the overcloud (the default is 1 compute and 1 control node):
.. admonition:: RHEL Satellite Registration
:class: satellite
To register the Overcloud nodes to a Satellite, define the following
variables. Only activation keys are supported when registering to
Satellite; username/password is not supported for security reasons. The
activation key must enable the repos shown::
export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
export REG_ACTIVATION_KEY="[activation key]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
.. admonition:: Ceph
:class: ceph
When deploying Ceph, specify the number of Ceph OSD nodes to be deployed
with::
export CEPHSTORAGESCALE=1
By default, when Ceph is enabled, the Cinder iSCSI back-end is disabled. This
behavior can be changed by setting the following environment variable::
export CINDER_ISCSI=1
::
instack-deploy-overcloud --tuskar
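While the deployment is running, its progress can be followed from another
shell with the stackrc credentials sourced; for example, the overcloud stack
status and its resources can be watched with::
heat stack-list
heat resource-list overcloud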
Post-Deployment
---------------
Access the Overcloud
^^^^^^^^^^^^^^^^^^^^
``instack-deploy-overcloud`` generates an overcloudrc file in the current
user's home directory, suitable for interacting with the deployed overcloud.
To use it, simply source the file::
source ~/overcloudrc
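As a quick sanity check that the overcloud credentials work (assuming the
standard Nova client is installed), listing the overcloud instances should
succeed, even if it returns an empty list::
nova list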
To return to working with the undercloud, source the stackrc file again::
source ~/stackrc
Redeploy the Overcloud
^^^^^^^^^^^^^^^^^^^^^^
The overcloud can be redeployed when desired.
#. First, delete any existing Overcloud::
heat stack-delete overcloud
#. Confirm the Overcloud has been deleted; this may take a few minutes::
# This command should show no stack once the deletion has completed
heat stack-list
#. Although not required, introspection can be rerun::
instack-ironic-deployment --discover-nodes
#. Deploy the Overcloud again::
instack-deploy-overcloud --tuskar

View File

@@ -1,109 +0,0 @@
Building Images
===============
Images must be built prior to doing a deployment. A discovery ramdisk,
deployment ramdisk, and openstack-full image can all be built using
instack-undercloud.
It's recommended to build images on the installed undercloud directly since all
the dependencies are already present.
The following steps can be used to build images. They should be run as the same
non-root user that was used to install the undercloud.
#. The built images will automatically have the same base OS as the running
undercloud. See the Note below to choose a different OS::
.. note:: To build images with a base OS different from the undercloud,
set the ``$NODE_DIST`` environment variable prior to running
``instack-build-images``:
.. admonition:: CentOS
:class: centos
::
export NODE_DIST=centos7
.. admonition:: RHEL
:class: rhel
::
export NODE_DIST=rhel7
2. Build the required images:
.. only:: internal
.. admonition:: RHEL
:class: rhel
Download the RHEL 7.1 cloud image or copy it over from a different location,
and define the needed environment variable for RHEL 7.1 prior to running
``instack-build-images``::
curl -O http://download.devel.redhat.com/brewroot/packages/rhel-guest-image/7.1/20150203.1/images/rhel-guest-image-7.1-20150203.1.x86_64.qcow2
export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150203.1.x86_64.qcow2
# Enable rhos-release
export RUN_RHOS_RELEASE=1
.. only:: external
.. admonition:: RHEL
:class: rhel
Download the RHEL 7.1 cloud image or copy it over from a different location,
for example:
https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.1/x86_64/product-downloads,
and define the needed environment variables for RHEL 7.1 prior to running
``instack-build-images``::
export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150224.0.x86_64.qcow2
.. admonition:: RHEL Portal Registration
:class: portal
To register the image builds to the Red Hat Portal define the following variables::
export REG_METHOD=portal
export REG_USER="[your username]"
export REG_PASSWORD="[your password]"
# Find this with `sudo subscription-manager list --available`
export REG_POOL_ID="[pool id]"
export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"
.. admonition:: RHEL Satellite Registration
:class: satellite
To register the image builds to a Satellite define the following
variables. Only using an activation key is supported when registering to
Satellite, username/password is not supported for security reasons. The
activation key must enable the repos shown::
export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
export REG_ACTIVATION_KEY="[activation key]"
::
instack-build-images
.. note::
This script will build **overcloud-full** images (\*.qcow2, \*.initrd,
\*.vmlinuz), **deploy-ramdisk-ironic** images (\*.initramfs, \*.kernel),
**discovery-ramdisk** images (\*.initramfs, \*.kernel) and **testing**
fedora-user.qcow2 (which is always Fedora based).
#. Load the images into Glance::
instack-prepare-for-overcloud

View File

@@ -1,133 +0,0 @@
Deploying the Overcloud
=======================
All the commands on this page require that the appropriate stackrc file has
been sourced into the environment::
source stackrc
Registering Nodes
-----------------
Register nodes for your deployment with Ironic::
instack-ironic-deployment --nodes-json instackenv.json --register-nodes
.. note::
It's not recommended to delete nodes and/or rerun this command after
you have proceeded to the next steps. Particularly, if you start discovery
and then re-register nodes, you won't be able to retry discovery until
the previous one times out (1 hour by default). If you are having issues
with nodes after registration, please follow
:ref:`node_registration_problems`.
Introspecting Nodes
-------------------
Introspect hardware attributes of nodes::
instack-ironic-deployment --discover-nodes
.. note:: **Introspection has to finish without errors.**
The process can take up to 5 minutes for VM / 15 minutes for baremetal. If
the process takes longer, see :ref:`introspection_problems`.
Ready-state configuration
-------------------------
.. admonition:: Baremetal
:class: baremetal
Some hardware has additional setup available, using its vendor-specific management
interface. See the :doc:`/vendor-specific` for details.
Deploying Nodes
---------------
Create the necessary flavors::
instack-ironic-deployment --setup-flavors
.. admonition:: Baremetal
:class: baremetal
Copy the sample overcloudrc file and edit to reflect your environment. Then source this file::
cp /usr/share/instack-undercloud/deploy-baremetal-overcloudrc ~/deploy-overcloudrc
source deploy-overcloudrc
Deploy the overcloud (default of 1 compute and 1 control):
.. admonition:: RHEL Satellite Registration
:class: satellite
To register the Overcloud nodes to a Satellite define the following
variables. Only using an activation key is supported when registering to
Satellite, username/password is not supported for security reasons. The
activation key must enable the repos shown::
export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
export REG_ACTIVATION_KEY="[activation key]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
.. admonition:: Ceph
:class: ceph
When deploying Ceph, specify the number of Ceph OSD nodes to be deployed
with::
export CEPHSTORAGESCALE=1
By default when Ceph is enabled the Cinder iSCSI back-end is disabled. This
behavior may be changed by setting the environment variable::
export CINDER_ISCSI=1
::
instack-deploy-overcloud --tuskar
Working with the Overcloud
--------------------------
``instack-deploy-overcloud`` generates an overcloudrc file appropriate for
interacting with the deployed overcloud in the current user's home directory.
To use it, simply source the file::
source ~/overcloudrc
To return to working with the undercloud, source the stackrc file again::
source ~/stackrc
Redeploying the Overcloud
-------------------------
The overcloud can be redeployed when desired.
#. First, delete any existing Overcloud::
heat stack-delete overcloud
#. Confirm the Overcloud has deleted. It may take a few minutes to delete::
# This command should show no stack once the Delete has completed
heat stack-list
#. Although not required, discovery can be rerun. Reset the state file and then rediscover nodes::
sudo cp /usr/libexec/os-apply-config/templates/etc/edeploy/state /etc/edeploy/state
instack-ironic-deployment --discover-nodes
#. Deploy the Overcloud again::
instack-deploy-overcloud --tuskar

View File

@@ -1,5 +1,5 @@
Environments
============
Environment Setup
=================
RDO-Manager can be used on baremetal as well as in virtual environments. This
section contains instructions on how to set up your environment properly.

View File

@@ -246,7 +246,7 @@ You can ssh to the instack vm as the root user::
The VM contains a ``stack`` user to be used for installing the undercloud. You
can run ``su - stack`` to switch to the stack user account.
Continue with :doc:`../install-undercloud`.
Continue with :doc:`../installation/installing`.
.. rubric:: Footnotes

View File

@@ -12,12 +12,10 @@ Contents:
:maxdepth: 2
Introduction <introduction/introduction>
Environments <environments/environments>
Installation <installation/installation>
Building Images <build-images>
Deploying the Overcloud <deploy-overcloud>
Vendor-Specific Setup <vendor-specific>
AHC (Automated Health Check) Workflow <ahc-workflow>
Environment Setup <environments/environments>
Undercloud Installation <installation/installation>
Basic Deployment <basic_deployment/basic_deployment>
Advanced Deployment <advanced_deployment/advanced_deployment>
Troubleshooting <troubleshooting/troubleshooting>
How to Contribute <contributions/contributions>

View File

@@ -265,8 +265,7 @@ for Puppet-enabled images.
* Use `devtest with Puppet
<http://docs.openstack.org/developer/tripleo-incubator/puppet.html>`_
to set up a development environment. Submit your changes via
OpenStack Gerrit (see `OpenStack Developer's Guide
<http://docs.openstack.org/infra/manual/developers.html>`_).
OpenStack Gerrit (see `OpenStack Developer's Guide`_).
**Useful links**
@@ -293,4 +292,4 @@ TBD
..
<GLOBAL_LINKS>
.. _OpenStack Developer's Guide: http://docs.openstack.org/developer/openstack-projects.html
.. _OpenStack Developer's Guide: http://docs.openstack.org/infra/manual/developers.html

View File

@@ -6,6 +6,10 @@ RDO-Manager is an OpenStack Deployment & Management tool for RDO. It is based on
philosophy is inspired by `SpinalStack <http://spinal-stack.readthedocs.org/en/
latest/>`_.
TripleO architecture is based on the **undercloud** and **overcloud** concept.
To learn more about it, visit `TripleO Documentation <http://docs.openstack.org/
developer/tripleo-incubator/README.html>`_.
Useful links:
* `RDO-Manager Home Page <http://rdoproject.org/RDO-Manager>`_

View File

@@ -1,6 +0,0 @@
Vendor-Specific Setup
=====
.. toctree::
Dell DRAC <drac-setup>