tripleo-docs/doc/source/basic_deployment/basic_deployment_cli.rst
Alex Schultz 7b15d3986e Remove Liberty install instructions
Liberty has reached EOL so we should not support installing it anymore.
The upgrade instructions for Liberty to Mitaka have been left in the
documentation.

Change-Id: Icf4d1d6930a4faa4134864e56c7bf125dec10df5
Closes-Bug: #1650928
2016-12-19 11:19:35 -07:00

Basic Deployment (CLI)

With these few steps you will be able to simply deploy TripleO to your environment using our defaults.

Prepare Your Environment

  1. Make sure you have your environment ready and the undercloud running:

    • ../environments/environments
    • ../installation/installing
  2. Log into your undercloud (instack) virtual machine as a non-root user:

    ssh root@<undercloud-machine>
    
    su - stack
  3. In order to use CLI commands easily you need to source the needed environment variables:

    source stackrc
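For illustration, stackrc exports standard OpenStack client variables; the following is a minimal sketch with placeholder values (the real file is generated by the undercloud install and includes credentials):

```shell
# Illustrative placeholders only -- the generated stackrc sets the real values.
export OS_USERNAME=admin
export OS_AUTH_URL=http://192.168.24.1:5000/v2.0   # placeholder keystone endpoint
export OS_TENANT_NAME=admin
echo "client configured for $OS_AUTH_URL"
```

Once these variables are set, the openstack CLI talks to the undercloud APIs without further configuration.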

Get Images

Note

If you already have images built, perhaps from a previous installation of TripleO, you can simply copy those image files into your regular user's home directory and skip this section.

If you do this, be aware that sometimes newer versions of TripleO do not work with older images, so if the deployment fails it may be necessary to delete the older images and restart the process from this step.

Alternatively, images are available via RDO at http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/

The image files required are:

ironic-python-agent.initramfs
ironic-python-agent.kernel
overcloud-full.initrd
overcloud-full.qcow2
overcloud-full.vmlinuz

Images must be built prior to doing a deployment. An IPA ramdisk and an overcloud-full image can both be built using tripleo-common.

It's recommended to build images on the installed undercloud directly since all the dependencies are already present, but this is not a requirement.

The following steps can be used to build images. They should be run as the same non-root user that was used to install the undercloud. If the images are not created on the undercloud, they should nonetheless be built as a non-root user.

  1. Choose image operating system:

    The common YAML is /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml. It must be specified along with one of the following.

    CentOS

    The default YAML for CentOS is /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml

    export OS_YAML="/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml"

    RHEL

    The default YAML for RHEL is /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-rhel7.yaml

    export OS_YAML="/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-rhel7.yaml"
  2. Install the current-tripleo delorean repository and deps repository:

  1. Export environment variables

    export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"

    Ceph

    export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"

    Stable Branch

    Mitaka

    STABLE_RELEASE="mitaka"

    Ceph

    export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Hammer.repo"

    Newton

    STABLE_RELEASE="newton"

    Ceph

    export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"
  2. Build the required images:

RHEL

Download the RHEL 7.1 cloud image or copy it over from a different location, for example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.1/x86_64/product-downloads, and define the needed environment variables for RHEL 7.1 prior to running tripleo-build-images:

export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150224.0.x86_64.qcow2

RHEL Portal Registration

To register the image builds to the Red Hat Portal define the following variables:

export REG_METHOD=portal
export REG_USER="[your username]"
export REG_PASSWORD="[your password]"
# Find this with `sudo subscription-manager list --available`
export REG_POOL_ID="[pool id]"
export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
    rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"

Ceph

If using Ceph, additional channels need to be added to REG_REPOS. Enable the appropriate channels for the desired release, as indicated below. Do not enable any other channels not explicitly marked for that release.

rhel-7-server-rhceph-2-mon-rpms
rhel-7-server-rhceph-2-osd-rpms
rhel-7-server-rhceph-2-tools-rpms

Mitaka

rhel-7-server-rhceph-1.3-mon-rpms
rhel-7-server-rhceph-1.3-osd-rpms
rhel-7-server-rhceph-1.3-tools-rpms

RHEL Satellite Registration

To register the image builds to a Satellite, define the following variables. Only using an activation key is supported when registering to Satellite; username/password is not supported for security reasons. The activation key must enable the repos shown:

export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
# rhel-7-server-rhceph-{2,1.3}-mon-rpms
# rhel-7-server-rhceph-{2,1.3}-osd-rpms
# rhel-7-server-rhceph-{2,1.3}-tools-rpms
export REG_ACTIVATION_KEY="[activation key]"

Source

Git checkouts of the puppet modules can be used instead of packages. Export the following environment variable:

export DIB_INSTALLTYPE_puppet_modules=source

It is also possible to use this functionality to use an in-progress review as part of the overcloud image build. See ../developer/in_progress_review for details.

openstack overcloud image build --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml --config-file $OS_YAML

See the help for openstack overcloud image build for further options.

The YAML files are cumulative. Order on the command line is important. The packages, elements, and options sections will append. All others will overwrite previously read values.
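Putting the pieces together, a CentOS build environment with Ceph for the Newton branch might look like the following sketch. Only the variable setup runs here; the final line merely prints the build invocation described above rather than executing it:

```shell
# Repository configuration, as described in the steps above.
export STABLE_RELEASE="newton"
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"
export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"

# OS-specific YAML, layered after the common one on the command line.
export OS_YAML="/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml"

# Print (rather than run) the resulting build command:
echo openstack overcloud image build \
  --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
  --config-file "$OS_YAML"
```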

Note

This command will build overcloud-full images (*.qcow2, *.initrd, *.vmlinuz) and ironic-python-agent images (*.initramfs, *.kernel)

In order to build specific images, one can use the --image-name flag to openstack overcloud image build. It can be specified multiple times.

Note

If you want to use whole disk images with TripleO, please see ../advanced_deployment/whole_disk_images.
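Before uploading, it can help to confirm that all five image files are present in the current directory. The following is a sketch of such a check; empty placeholder files are created here purely to demonstrate it, whereas on a real undercloud the files come from the build or download step:

```shell
# Create empty placeholders to demonstrate the check (illustration only).
mkdir -p /tmp/images-demo && cd /tmp/images-demo
touch ironic-python-agent.initramfs ironic-python-agent.kernel \
      overcloud-full.initrd overcloud-full.qcow2 overcloud-full.vmlinuz

# Verify every required file exists before proceeding to the upload step.
missing=0
for f in ironic-python-agent.initramfs ironic-python-agent.kernel \
         overcloud-full.initrd overcloud-full.qcow2 overcloud-full.vmlinuz; do
  [ -f "$f" ] || { echo "missing: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all image files present"
```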

Upload Images

Load the images into the undercloud Glance:

openstack overcloud image upload

To upload a single image, see ../advanced_deployment/upload_single_image.

Register Nodes

Register and configure nodes for your deployment with Ironic:

openstack baremetal import instackenv.json

The file to be imported may be either JSON, YAML or CSV format, and the type is detected via the file extension (json, yaml, csv). The file format is documented in instackenv.
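For reference, a minimal JSON inventory describing a single IPMI-managed node might look like the following sketch. Every value is a placeholder; consult the instackenv documentation for the full schema:

```shell
# Write an illustrative single-node inventory; all values are placeholders.
cat > /tmp/instackenv.json <<'EOF'
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.168.24.10",
      "pm_user": "admin",
      "pm_password": "secret",
      "mac": ["52:54:00:aa:bb:cc"],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}
EOF
# Sanity-check that the file parses as JSON before importing it:
python3 -c "import json; json.load(open('/tmp/instackenv.json')); print('instackenv parses')"
```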

Stable Branch

For TripleO release Mitaka and older the following command must be run after registration to assign the deployment kernel and ramdisk to all nodes:

openstack baremetal configure boot

Starting with the Newton release you can take advantage of the enroll provisioning state - see ../advanced_deployment/node_states for details.

If your hardware has several hard drives, it's highly recommended that you specify the exact device to be used during introspection and deployment as a root device. Please see root_device for details.

Warning

If you don't specify the root device explicitly, any device may be picked. Also the device chosen automatically is NOT guaranteed to be the same across rebuilds. Make sure to wipe the previous installation before rebuilding in this case.

Warning

It's not recommended to delete nodes and/or rerun this command after you have proceeded to the next steps. Particularly, if you start introspection and then re-register nodes, you won't be able to retry introspection until the previous one times out (1 hour by default). If you are having issues with nodes after registration, please follow node_registration_problems.

Introspect Nodes

Introspect hardware attributes of nodes:

openstack baremetal introspection bulk start

Note

Introspection has to finish without errors. The process can take up to 5 minutes for VM / 15 minutes for baremetal. If the process takes longer, see introspection_problems.

Note

If you need to introspect just a single node, see ../advanced_deployment/introspect_single_node

Flavor Details

The undercloud will have a number of default flavors created at install time. In most cases these flavors do not need to be modified, but they can be if desired. By default, all overcloud instances will be booted with the baremetal flavor, so all baremetal nodes must have at least as much memory, disk, and cpu as that flavor.

In addition, there are profile-specific flavors created which can be used with the profile-matching feature. For more details on deploying with profiles, see ../advanced_deployment/profile_matching.

Configure a nameserver for the Overcloud

Overcloud nodes can have a nameserver configured in order to resolve hostnames via DNS. The nameserver is defined in the undercloud's neutron subnet. If needed, define the nameserver to be used for the environment:

# List the available subnets
neutron subnet-list
neutron subnet-update <subnet-uuid> --dns-nameserver <nameserver-ip>

Note

A public DNS server, such as 8.8.8.8 or the undercloud DNS name server can be used if there is no internal DNS server.

Virtual

In virtual environments, the libvirt default network DHCP server address, typically 192.168.122.1, can be used as the overcloud nameserver.

Deploy the Overcloud

By default 1 compute and 1 control node will be deployed, with networking configured for the virtual environment. To customize this, see the output of:

openstack help overcloud deploy

Ceph

When deploying Ceph it is necessary to specify the number of Ceph OSD nodes to be deployed and to provide some additional parameters to enable usage of Ceph for Glance, Cinder, Nova or all of them. To do so, use the following arguments when deploying:

--ceph-storage-scale <number of nodes> -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml

RHEL Satellite Registration

To register the Overcloud nodes to a Satellite add the following flags to the deploy command:

--rhel-reg --reg-method satellite --reg-org <ORG ID#> --reg-sat-url <satellite URL> --reg-activation-key <KEY>

Note

Only using an activation key is supported when registering to Satellite; username/password is not supported for security reasons. The activation key must enable the following repos:

rhel-7-server-rpms

rhel-7-server-optional-rpms

rhel-7-server-extras-rpms

rhel-7-server-openstack-6.0-rpms

SSL

To deploy an overcloud with SSL, see ../advanced_deployment/ssl.

Run the deploy command, including any additional parameters as necessary:

openstack overcloud deploy --templates [additional parameters]

To deploy an overcloud with multiple controllers and achieve HA, follow ../advanced_deployment/high_availability.

Virtual

When deploying the Compute node in a virtual machine without nested guest support, add --libvirt-type qemu or launching instances on the deployed overcloud will fail.

Note

To deploy the overcloud with network isolation, bonds, and/or custom network interface configurations, instead follow the workflow here to deploy: ../advanced_deployment/network_isolation

Note

Previous versions of the client defaulted many parameters. Some of these parameters now pull their defaults directly from the Heat templates. To override them, use an environment file and specify the overrides via 'parameter_defaults'.

The client arguments that previously controlled these parameters will be deprecated, and eventually removed in favor of using environment files.
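As an example of the environment-file approach, an override file is plain YAML under a parameter_defaults key. The sketch below creates a hypothetical override file (NtpServer is used purely for illustration) and prints, rather than runs, the corresponding deploy command:

```shell
# Create a hypothetical override file; the parameter name is illustrative.
cat > /tmp/my-overrides.yaml <<'EOF'
parameter_defaults:
  NtpServer: pool.ntp.org
EOF
# The file would be passed to the deploy command with -e (printed, not run, here):
echo openstack overcloud deploy --templates -e /tmp/my-overrides.yaml
```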

Post-Deployment

Access the Overcloud

openstack overcloud deploy generates an overcloudrc file appropriate for interacting with the deployed overcloud in the current user's home directory. To use it, simply source the file:

source ~/overcloudrc

To return to working with the undercloud, source the stackrc file again:

source ~/stackrc

Setup the Overcloud network

Initial networks in Neutron in the Overcloud need to be created for tenant instances. The following are example commands to create the initial networks. Edit the address ranges, or use the necessary neutron commands to match the environment appropriately. This assumes a dedicated interface or native VLAN:

neutron net-create public --router:external --provider:network_type flat \
  --provider:physical_network datacentre
neutron subnet-create --name public --disable-dhcp \
  --allocation-pool start=172.16.23.140,end=172.16.23.240 \
  --gateway 172.16.23.251 public 172.16.23.128/25

The example shows naming the network "public" because that will make tempest tests pass, based on the default floating pool name set in nova.conf. You can confirm that the network was created with:

neutron net-list

Sample output of the command:

+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| d474fe1f-222d-4e32-802b-cde86e746a2a | public      | 01c5f621-1e0f-4b9d-9c30-7dc59592a52f 172.16.23.128/25 |
+--------------------------------------+-------------+-------------------------------------------------------+

To use a VLAN, the following example should work. Customize the address ranges and VLAN id based on the environment:

neutron net-create public --router:external --provider:network_type vlan \
  --provider:physical_network datacentre --provider:segmentation_id 195
neutron subnet-create --name public --disable-dhcp \
  --allocation-pool start=172.16.23.140,end=172.16.23.240 \
  --gateway 172.16.23.251 public 172.16.23.128/25
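In either variant, the allocation pool and gateway must fall inside the subnet CIDR. The following is a quick sanity check on the example addresses above, a sketch using Python's standard ipaddress module from the shell:

```shell
# Verify the example allocation pool and gateway sit inside 172.16.23.128/25.
python3 -c "
import ipaddress
net = ipaddress.ip_network(u'172.16.23.128/25')
pool_start = ipaddress.ip_address(u'172.16.23.140')
pool_end = ipaddress.ip_address(u'172.16.23.240')
gateway = ipaddress.ip_address(u'172.16.23.251')
assert pool_start in net and pool_end in net and gateway in net
print('pool and gateway fit within the subnet')
"
```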

Validate the Overcloud

Source the overcloudrc file:

source ~/overcloudrc

Create a directory for Tempest (e.g. ~/tempest):

mkdir ~/tempest
cd ~/tempest

Tempest expects the tests it discovers to be in the current working directory. Set it up accordingly:

/usr/share/openstack-tempest-*/tools/configure-tempest-directory

The ~/tempest-deployer-input.conf file was created during deployment and contains deployment specific settings. Use that file to configure Tempest:

tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf \
                        --debug --create \
                        identity.uri $OS_AUTH_URL \
                        identity.admin_password $OS_PASSWORD

Run Tempest:

tools/run-tests.sh

Note

The full Tempest test suite might take hours to run on a single CPU.

Redeploy the Overcloud

The overcloud can be redeployed when desired.

  1. First, delete any existing Overcloud:

    heat stack-delete overcloud
  2. Confirm the Overcloud has deleted. It may take a few minutes to delete:

    # This command should show no stack once the Delete has completed
    heat stack-list
  3. Although not required, introspection can be rerun:

    openstack baremetal introspection bulk start
  4. Deploy the Overcloud again:

    openstack overcloud deploy --templates