OpenStack Orchestration (Heat)

Software configuration

There are a variety of options to configure the software which runs on the servers in your stack. These can be broadly divided into the following:

  • Custom image building
  • User-data boot scripts and cloud-init
  • Software deployment resources

This section will describe each of these options and provide examples for using them together in your stacks.

Image building

The first opportunity to influence what software is configured on your servers is by booting them with a custom-built image. There are a number of reasons you might want to do this, including:

  • Boot speed - since the required software is already on the image there is no need to download and install anything at boot time.
  • Boot reliability - software downloads can fail for a number of reasons including transient network failures and inconsistent software repositories.
  • Test verification - custom built images can be verified in test environments before being promoted to production.
  • Configuration dependencies - post-boot configuration may depend on agents already being installed and enabled.

A number of tools are available for building custom images, including:

  • diskimage-builder: image building tools for OpenStack
  • imagefactory: builds images for a variety of operating system/cloud combinations

Examples in this guide that require custom images will use diskimage-builder.

User-data boot scripts and cloud-init

When booting a server it is possible to specify the contents of the user-data to be passed to that server. This user-data is made available to the server either from a configured config-drive or from the Metadata service.

How this user-data is consumed depends on the image being booted, but the most commonly used tool for default cloud images is cloud-init.

Whether the image is using cloud-init or not, it should be possible to specify a shell script in the user_data property and have it be executed by the server during boot:
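For example, a server resource with an inline boot script might look like the following sketch (the flavor and image values are illustrative placeholders):

```yaml
resources:
  the_server:
    type: OS::Nova::Server
    properties:
      # flavor and image names are placeholders for values in your cloud
      flavor: m1.small
      image: a-cloud-init-enabled-image
      user_data: |
        #!/bin/bash
        # any shell commands placed here are executed during boot
        echo "Running boot script"
```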


When debugging these scripts, it is often useful to view the boot log using nova console-log <server-id> to follow the progress of boot script execution.

Often there is a need to set variable values based on parameters or resources in the stack. This can be done with the str_replace intrinsic function:
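As a sketch, a parameter value can be substituted into the script with str_replace (the parameter name foo and the placeholder $FOO are illustrative):

```yaml
parameters:
  foo:
    type: string
    default: bar

resources:
  the_server:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            # $FOO is replaced with the value of the foo parameter
            echo "Running boot script with $FOO"
          params:
            $FOO: {get_param: foo}
```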


If a stack-update is performed and there are any changes at all to the content of user_data then the server will be replaced (deleted and recreated) so that the modified boot configuration can be run on a new server.

When these scripts grow it can become difficult to maintain them inside the template, so the get_file intrinsic function can be used to maintain the script in a separate file:
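A sketch of the same approach with the script kept in its own file (the file name the_server_boot.sh is hypothetical):

```yaml
resources:
  the_server:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data:
        str_replace:
          # the script lives alongside the template in the_server_boot.sh
          template: {get_file: the_server_boot.sh}
          params:
            $FOO: {get_param: foo}
```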


str_replace can replace any strings, not just strings starting with $. However doing this for the above example is useful because the script file can be executed for testing by passing in environment variables.

Choosing the user_data_format

The OS::Nova::Server user_data_format property determines how the user_data should be formatted for the server. For the default value HEAT_CFNTOOLS, the user_data is bundled as part of the heat-cfntools cloud-init boot configuration data. While HEAT_CFNTOOLS is the default for user_data_format, it is considered legacy and RAW or SOFTWARE_CONFIG will generally be more appropriate.

For RAW the user_data is passed to Nova unmodified. For a cloud-init enabled image, the following are both valid RAW user-data:
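As a sketch, a plain shell script and a cloud-config document are both valid RAW user-data for such an image (flavor and image properties elided):

```yaml
resources:
  server_with_boot_script:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data_format: RAW
      user_data: |
        #!/bin/bash
        echo "Running boot script"

  server_with_cloud_config:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data_format: RAW
      user_data: |
        #cloud-config
        final_message: "The system is finally up, after $UPTIME seconds"
```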

For SOFTWARE_CONFIG user_data is bundled as part of the software config data, and metadata is derived from any associated Software deployment resources.

Signals and wait conditions

Often it is necessary to pause further creation of stack resources until the boot configuration script has notified that it has reached a certain state. This is usually either to notify that a service is now active, or to pass out some generated data which is needed by another resource. The resources OS::Heat::WaitCondition and OS::Heat::SwiftSignal both perform this function using different techniques and tradeoffs.

OS::Heat::WaitCondition is implemented as a call to the Orchestration API resource signal. The token is created using credentials for a user account which is scoped only to the wait condition handle resource. This user is created when the handle is created, and is associated to a project which belongs to the stack, in an identity domain which is dedicated to the orchestration service.

Sending the signal is a simple HTTP request, as with this example using curl:
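A sketch of such a request; the token and signal URL are placeholders, since in practice both are baked into the handle's curl_cli attribute:

```shell
curl -i -X POST \
  -H 'X-Auth-Token: <token scoped to the handle resource>' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  --data-binary '{"status": "SUCCESS"}' \
  '<orchestration API signal URL for the handle>'
```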

The JSON containing the signal data is expected to be of the following format:
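A sketch of the payload shape (status is either SUCCESS or FAILURE):

```json
{
  "status": "SUCCESS",
  "reason": "reason string to record with the signal",
  "data": "arbitrary data to pass back to the stack",
  "id": "value which uniquely identifies this signal"
}
```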

All of these values are optional, and if not specified will be set to the following defaults:
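Roughly, the defaults are as follows (the exact reason string is an assumption based on typical Heat behavior):

```json
{
  "status": "SUCCESS",
  "reason": "Signal <id> received",
  "data": null,
  "id": "<a sequential number, starting at 1 for each signal>"
}
```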

If status is set to FAILURE then the resource (and the stack) will go into a FAILED state using the reason as failure reason.

The following template example uses the convenience attribute curl_cli which builds a curl command with a valid token:

resources:
  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: {get_resource: wait_handle}
      # Note, count of 5 vs 6 is due to duplicate signal ID 5 sent below
      count: 5
      timeout: 300

  wait_handle:
    type: OS::Heat::WaitConditionHandle

  the_server:
    type: OS::Nova::Server
    properties:
      # flavor, image etc
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            # Below are some examples of the various ways signals
            # can be sent to the Handle resource

            # Simple success signal
            wc_notify --data-binary '{"status": "SUCCESS"}'

            # Or you optionally can specify any of the additional fields
            wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal2"}'
            wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal3", "data": "data3"}'
            wc_notify --data-binary '{"status": "SUCCESS", "reason": "signal4", "id": "id4", "data": "data4"}'

            # If you require control of the ID, you can pass it.
            # The ID should be unique, unless you intend for duplicate
            # signals to overwrite each other.  The following two calls
            # do the exact same thing, and will be treated as one signal
            # (You can prove this by changing count above to 7)
            wc_notify --data-binary '{"status": "SUCCESS", "id": "id5"}'
            wc_notify --data-binary '{"status": "SUCCESS", "id": "id5"}'

            # Example of sending a failure signal, optionally
            # reason, id, and data can be specified as above
            # wc_notify --data-binary '{"status": "FAILURE"}'
          params:
            wc_notify: { get_attr: [wait_handle, curl_cli] }

outputs:
  wc_data:
    value: { get_attr: [wait_condition, data] }
    # this would return the following json
    # {"1": null, "2": null, "3": "data3", "id4": "data4", "id5": null}

  wc_data_4:
    value: { 'Fn::Select': ['id4', { get_attr: [wait_condition, data] }] }
    # this would return "data4"

OS::Heat::SwiftSignal is implemented by creating an Object Storage API temporary URL which is populated with signal data with an HTTP PUT. The orchestration service will poll this object until the signal data is available. Object versioning is used to store multiple signals.

Sending the signal is a simple HTTP request, as with this example using curl:
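A sketch using a placeholder temporary URL:

```shell
curl -i -X PUT '<Object Storage temporary URL for the signal object>' \
  --data-binary '{"status": "SUCCESS"}'
```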

The above template example only needs to have the type changed to the swift signal resources:
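A sketch of the equivalent swift signal resources (resource names are illustrative):

```yaml
resources:
  signal:
    type: OS::Heat::SwiftSignal
    properties:
      handle: {get_resource: signal_handle}
      timeout: 300

  signal_handle:
    type: OS::Heat::SwiftSignalHandle
```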

The decision to use OS::Heat::WaitCondition or OS::Heat::SwiftSignal will depend on a few factors:

  • OS::Heat::SwiftSignal depends on the availability of an Object Storage API
  • OS::Heat::WaitCondition depends on whether the orchestration service has been configured with a dedicated stack domain (which may depend on the availability of an Identity V3 API).
  • The preference to protect signal URLs with token authentication or a secret webhook URL.

Software config resources

Boot configuration scripts can also be managed as their own resources. This allows configuration to be defined once and run on multiple server resources. These software-config resources are stored and retrieved via dedicated calls to the Orchestration API. It is not possible to modify the contents of an existing software-config resource, so a stack-update which changes any existing software-config resource will result in API calls to create a new config and delete the old one.

The resource OS::Heat::SoftwareConfig is used for storing configs represented by text scripts, for example:
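A minimal sketch of such a config resource:

```yaml
resources:
  install_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "Performing some installation steps"
```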

The resource OS::Heat::CloudConfig allows cloud-init cloud-config to be represented as template YAML rather than a block string. This allows intrinsic functions to be included when building the cloud-config. This also ensures that the cloud-config is valid YAML, although no further checks for valid cloud-config are done.
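A sketch of a cloud-config expressed as template YAML:

```yaml
resources:
  boot_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        final_message: "The system is finally up, after $UPTIME seconds"
```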

The resource OS::Heat::MultipartMime allows multiple OS::Heat::SoftwareConfig and OS::Heat::CloudConfig resources to be combined into a single cloud-init multi-part message:
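A sketch combining two hypothetical config resources (boot_config, install_config) into one multi-part message:

```yaml
resources:
  server_init:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: boot_config}
      - config: {get_resource: install_config}
```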

Software deployment resources

There are many situations where it is not desirable to replace the server whenever there is a configuration change. The OS::Heat::SoftwareDeployment resource allows any number of software configurations to be added or removed from a server throughout its life-cycle.

OS::Heat::SoftwareConfig resources are used to store software configuration, and a OS::Heat::SoftwareDeployment resource is used to associate a config resource with one server. The group attribute on OS::Heat::SoftwareConfig specifies what tool will consume the config content.

OS::Heat::SoftwareConfig has the ability to define a schema of inputs and outputs which the configuration script supports. Inputs are mapped to whatever concept the configuration tool has for assigning variables/parameters.

Likewise, outputs are mapped to the tool's capability to export structured data after configuration execution. For tools which do not support this, outputs can always be written to a known file path for the hook to read.

The OS::Heat::SoftwareDeployment resource allows values to be assigned to the config inputs, and the resource remains in an IN_PROGRESS state until the server signals to heat what (if any) output values were generated by the config script.

Custom image script

Each of the following examples requires that the servers be booted with a custom image. The following script uses diskimage-builder to create an image required in later examples:

# Clone the required repositories. Some of these are also available
# via pypi or as distro packages.
git clone
git clone

# Install diskimage-builder from source
sudo pip install git+

# Required by diskimage-builder to discover element collections
export ELEMENTS_PATH=tripleo-image-elements/elements:heat-agents/

# The base operating system element(s) provided by the diskimage-builder
# elements collection. Other values which may work include:
# centos7, debian, opensuse, rhel, rhel7, or ubuntu
export BASE_ELEMENTS="fedora selinux-permissive"
# Install and configure the os-collect-config agent to poll the metadata
# server (heat service or zaqar message queue and so on) for configuration
# changes to execute
export AGENT_ELEMENTS="os-collect-config os-refresh-config os-apply-config"

# heat-config installs an os-refresh-config script which will invoke the
# appropriate hook to perform configuration. The element heat-config-script
# installs a hook to perform configuration with shell scripts
export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-script"

# Install a hook for any other chosen configuration tool(s).
# Elements which install hooks include:
# heat-config-cfn-init, heat-config-puppet, or heat-config-salt

# The name of the qcow2 image to create, and the name of the image
# uploaded to the OpenStack image registry.
export IMAGE_NAME=fedora-software-config

# Create the image
disk-image-create vm $BASE_ELEMENTS $AGENT_ELEMENTS \
    $DEPLOYMENT_BASE_ELEMENTS -o $IMAGE_NAME.qcow2

# Upload the image, assuming valid credentials are already sourced
openstack image create --disk-format qcow2 --container-format bare \
    --file $IMAGE_NAME.qcow2 $IMAGE_NAME


The above script uses diskimage-builder; make sure the environment already fulfills all of the requirements listed in diskimage-builder's requirements.txt.

Configuring with scripts

The Custom image script already includes the heat-config-script element so the built image will already have the ability to configure using shell scripts.

Config inputs are mapped to shell environment variables. The script can communicate outputs to heat by writing to the $heat_outputs_path.{output name} file. See the following example for a script which expects inputs foo, bar and generates an output result.
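A sketch of such a config paired with a deployment (resource names and input values are illustrative):

```yaml
resources:
  the_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
      - name: foo
      - name: bar
      outputs:
      - name: result
      config: |
        #!/bin/sh -x
        # inputs arrive as environment variables
        echo $foo > /tmp/$bar
        # outputs are written to ${heat_outputs_path}.<output name>
        echo -n "The file /tmp/$bar contains $foo" > ${heat_outputs_path}.result

  the_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: the_config}
      server: {get_resource: server}
      input_values:
        foo: fooooo
        bar: baaaaa

  server:
    type: OS::Nova::Server
    properties:
      # flavor, image etc; boot the custom image built earlier
      user_data_format: SOFTWARE_CONFIG

outputs:
  result:
    value: {get_attr: [the_deployment, result]}
  stdout:
    value: {get_attr: [the_deployment, deploy_stdout]}
```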


A config resource can be associated with multiple deployment resources, and each deployment can specify the same or different values for the server and input_values properties.

As can be seen in the outputs section of the above template, the result config output value is available as an attribute on the deployment resource. Likewise the captured stdout, stderr and status_code are also available as attributes.

Configuring with os-apply-config

The agent toolchain of os-collect-config, os-refresh-config and os-apply-config can actually be used on their own to inject heat stack configuration data into a server running a custom image.

The custom image needs to have the following to use this approach:

  • All software dependencies installed
  • os-refresh-config scripts to be executed on configuration changes
  • os-apply-config templates to transform the heat-provided config data into service configuration files

The projects tripleo-image-elements and tripleo-heat-templates demonstrate this approach.

Configuring with cfn-init

Likely the only reason to use the cfn-init hook is to migrate templates which contain AWS::CloudFormation::Init metadata without needing a complete rewrite of the config metadata. It is included here as it introduces a number of new concepts.

To use the cfn-init tool the heat-config-cfn-init element is required to be on the built image, so Custom image script needs to be modified with the following:
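One plausible modification, following the DEPLOYMENT_BASE_ELEMENTS convention used in that script (the exact element set is an assumption):

```shell
# add the cfn-init hook alongside the base heat-config element
export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-cfn-init"
```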

Configuration data which used to be included in the AWS::CloudFormation::Init section of resource metadata is instead moved to the config property of the config resource, as in the following example:
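A sketch of such a template (resource names, the file path /tmp/foo, and input values are illustrative):

```yaml
resources:
  config:
    type: OS::Heat::StructuredConfig
    properties:
      group: cfn-init
      inputs:
      - name: bar
      config:
        config:
          files:
            /tmp/foo:
              content: {get_input: bar}
              mode: '000644'

  deployment:
    type: OS::Heat::StructuredDeployment
    properties:
      name: 10_deployment
      signal_transport: NO_SIGNAL
      config: {get_resource: config}
      server: {get_resource: server}
      input_values:
        bar: baaaaa

  other_deployment:
    type: OS::Heat::StructuredDeployment
    properties:
      name: 20_other_deployment
      signal_transport: NO_SIGNAL
      config: {get_resource: config}
      server: {get_resource: server}
      input_values:
        bar: barmy
```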

There are a number of things to note about this template example:

  • OS::Heat::StructuredConfig is like OS::Heat::SoftwareConfig except that the config property contains structured YAML instead of text script. This is useful for a number of other configuration tools including ansible, salt and os-apply-config.
  • cfn-init has no concept of inputs, so {get_input: bar} acts as a placeholder which gets replaced with the OS::Heat::StructuredDeployment input_values value when the deployment resource is created.
  • cfn-init has no concept of outputs, so specifying signal_transport: NO_SIGNAL will mean that the deployment resource will immediately go into the CREATED state instead of waiting for a completed signal from the server.
  • The template has 2 deployment resources deploying the same config with different input_values. The order these are deployed in on the server is determined by sorting the values of the name property for each resource (10_deployment, 20_other_deployment)

Configuring with puppet

The puppet hook makes it possible to write configuration as puppet manifests which are deployed and run in a masterless environment.

To specify configuration as puppet manifests the heat-config-puppet element is required to be on the built image, so Custom image script needs to be modified with the following:
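One plausible modification, following the DEPLOYMENT_BASE_ELEMENTS convention used in that script (the exact element set is an assumption):

```shell
# add the puppet hook alongside the base heat-config element
export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-puppet"
```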

This demonstrates the use of the get_file function, which will attach the contents of the file example-puppet-manifest.pp, containing:

file { 'barfile':
    ensure  => file,
    mode    => '0644',
    path    => "/tmp/$::bar",
    content => "$::foo",
}

file { 'output_result':
    ensure  => file,
    path    => "$::heat_outputs_path.result",
    mode    => '0644',
    content => "The file /tmp/$::bar contains $::foo",
}