========
services
========
A TripleO nested stack Heat template that encapsulates generic configuration data to configure a specific service. This generally includes everything needed to configure the service excluding the local bind ports which are still managed in the per-node role templates directly (controller.yaml, compute.yaml, etc.). All other (global) service settings go into the puppet/service templates.
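Concretely, each service template declares its parameters and returns a role_data output that the role templates consume. Below is a minimal sketch using a hypothetical my_service; the heat_template_version alias is an assumption and should match the templates in use::

  heat_template_version: rocky

  description: Example service template for a hypothetical my_service.

  parameters:
    EndpointMap:
      default: {}
      description: Mapping of service endpoint -> protocol.
      type: json

  outputs:
    role_data:
      description: Role data for the my_service service.
      value:
        service_name: my_service
        config_settings: {}
        step_config: ''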
Input Parameters
----------------
Each service may define its own input parameters and defaults. Operators will use the parameter_defaults section of any Heat environment to set per service parameters.
Apart from service-specific inputs, there are a few default parameters common to all services. The following is the list of default parameters:
ServiceData: Mapping of service specific data. It is used to encapsulate all the service specific data. As of now, it contains net_cidr_map, which contains the CIDR map for all the networks. Additional data will be added as and when required.
ServiceNetMap: Mapping of service_name -> network name. Default mappings for service to network names are defined in ../network/service_net_map.j2.yaml, which may be overridden via ServiceNetMap values added to a user environment file via parameter_defaults.
EndpointMap: Mapping of service endpoint -> protocol. Contains a mapping of endpoint data generated for all services, based on the data included in ../network/endpoints/endpoint_data.yaml.
DefaultPasswords: Mapping of service -> default password. Used to pass some passwords from the parent templates, this is a legacy interface and should not be used by new services.
RoleName: Name of the role on which this service is deployed. A service can be deployed in multiple roles. This is an internal parameter (should not be set via environment file), which is fetched from the name attribute of the roles_data.yaml template.
RoleParameters: Parameters specific to the role on which the service is applied. Parameters for a specific role can be provided by using the "<RoleName>Parameters" key in the parameter_defaults section of a user environment file. For example, to provide a parameter specific to the "Compute" role, the format is::
  parameter_defaults:
    ComputeParameters:
      Param1: value
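All of the common parameters above are declared in each service template's parameters section. A representative, abbreviated sketch (descriptions are shortened here and vary between templates)::

  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data.
      type: json
    ServiceNetMap:
      default: {}
      description: Mapping of service_name -> network name.
      type: json
    EndpointMap:
      default: {}
      description: Mapping of service endpoint -> protocol.
      type: json
    DefaultPasswords:
      default: {}
      type: json
    RoleName:
      default: ''
      description: Role name on which the service is applied.
      type: string
    RoleParameters:
      default: {}
      description: Parameters specific to the role.
      type: json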
Config Settings
---------------
Each service may define three ways in which to output variables to configure Hiera settings on the nodes.
- config_settings: the hiera keys will be pushed on all roles of which the service is a part.
- global_config_settings: the hiera keys will be distributed to all roles
- service_config_settings: takes an extra key to wire in values that are defined for a service and need to be consumed by some other service. For example::

    service_config_settings:
      haproxy:
        foo: bar

  This will set the hiera key 'foo' on all roles where haproxy is included.
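These keys are returned under the service template's role_data output. A minimal sketch with hypothetical hiera key and network names::

  outputs:
    role_data:
      value:
        service_name: my_service
        config_settings:
          my_service::bind_host: {get_param: [ServiceNetMap, MyServiceNetwork]}
        global_config_settings:
          my_service::global_flag: true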
Deployment Steps
----------------
Each service may define an output variable which returns a puppet manifest snippet that will run at each of the following steps. Earlier manifests are re-asserted when applying later ones.
config_settings: Custom hiera settings for this service.
global_config_settings: Additional hiera settings distributed to all roles.
step_config: A puppet manifest that is used to step through the deployment sequence. Each sequence is given a "step" (via hiera('step')) that provides information for when puppet classes should activate themselves (see the sketch after the step list below).
Steps correlate to the following:
1. Load Balancer configuration
2. Core Services (Database/Rabbit/NTP/etc.)
3. Early Openstack Service setup (Ringbuilder, etc.)
4. General OpenStack Services
5. Service activation (Pacemaker)
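For example, a step_config snippet can gate a service's puppet classes on the "General OpenStack Services" step. A minimal sketch with a hypothetical profile class::

  step_config: |
    if hiera('step') >= 4 {
      include tripleo::profile::base::my_service
    }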
It is also possible to use Mistral actions or workflows together with a deployment step; these are executed before the main configuration run. To describe actions or workflows from within a service use:
- workflow_tasks: One or more workflow task properties. This expects a map where the key is the step and the value is a list of dictionaries, each describing a workflow task. For example::
  workflow_tasks:
    step2:
      - name: echo
        action: std.echo output=Hello
    step3:
      - name: external
        workflow: my-pre-existing-workflow-name
        input:
          workflow_param1: value
          workflow_param2: value
The Heat guide for the OS::Mistral::Workflow task property has more details about the expected dictionary.
- external_deploy_tasks: Ansible tasks to be run during each step on the undercloud; a "step" variable is provided to enable conditionally running tasks at a given step (see the sketch after this list).
- external_post_deploy_tasks: Ansible tasks to be run on the undercloud after all other deploy steps have completed.
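A minimal sketch of an external deploy task gated on a step; the task itself is purely illustrative::

  external_deploy_tasks:
    - name: Run an undercloud-side task at step 2
      when: step|int == 2
      debug:
        msg: external deploy example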
Batch Upgrade Steps (deprecated)
--------------------------------
Note: the upgrade_batch_tasks are no longer used and are deprecated as of Queens. The information below applies to upgrade_batch_tasks as they were used for the Ocata major upgrade. The upgrade_batch_tasks were used exclusively by the ceph services; as of Pike, ceph is configured by ceph-ansible.
Each service template may optionally define an upgrade_batch_tasks key, which is a list of ansible tasks to be performed during the upgrade process.
Similar to the step_config, we allow a series of steps for the per-service upgrade sequence, defined as ansible tasks with a tag e.g "step1" for the first step, "step2" for the second, etc (currently only two steps are supported, but more may be added when required as additional services get converted to batched upgrades).
Note that each step is performed in batches, then we move on to the next step which is also performed in batches (we don't perform all steps on one node, then move on to the next one which means you can sequence rolling upgrades of dependent services via the step value).
The tasks performed at each step are service specific, but note that all batch upgrade steps are performed before the upgrade_tasks described below. This means that all services that support rolling upgrades can be upgraded without downtime during upgrade_batch_tasks, then any remaining services are stopped and upgraded during upgrade_tasks.
The default batch size is 1, but this can be overridden for each role via the upgrade_batch_size option in roles_data.yaml.
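As a historical illustration only (the mechanism is deprecated), a batched task tagged for the first step might have looked like the following; the task content is hypothetical::

  upgrade_batch_tasks:
    - name: Example task run on each node batch during step1
      tags: step1
      debug:
        msg: batched upgrade example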
Update Steps
------------
Each service template may optionally define an update_tasks key, which is a list of ansible tasks to be performed during the minor update process. These are executed in a rolling manner node-by-node.
We allow a series of steps for the per-service update sequence via conditionals referencing a step variable, e.g. "when: step|int == 2".
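A minimal sketch of an update task, using a hypothetical service name (the step value and action are illustrative)::

  update_tasks:
    - name: Stop my_service during the minor update
      when: step|int == 2
      service:
        name: my_service
        state: stopped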
Pre-upgrade Rolling Steps
-------------------------
Each service template may optionally define a pre_upgrade_rolling_tasks key, which is a list of ansible tasks to be performed before the main upgrade phase. These tasks are executed in a node-by-node rolling manner on the overcloud, similarly to update_tasks.
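A minimal sketch; the task itself is purely illustrative::

  pre_upgrade_rolling_tasks:
    - name: Example pre-upgrade check on each node
      command: /usr/bin/true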
Upgrade Steps
-------------
Each service template may optionally define an upgrade_tasks key, which is a list of ansible tasks to be performed during the upgrade process.
Similar to the update_tasks, we allow a series of steps for the per-service upgrade sequence, defined as ansible tasks with a "when: step|int == 1" for the first step, "== 2" for the second, etc.
Steps correlate to the following:
1. Perform any pre-upgrade validations.
2. Stop the control-plane services, e.g. disable LoadBalancer, stop pacemaker cluster and stop any managed resources. The exact order is controlled by the cluster constraints.
3. Perform a package update and install new packages: a general upgrade is done, and only new packages should go into service ansible tasks.
4. Start services needed for migration tasks (e.g. DB).
5. Perform any migration tasks, e.g. DB sync commands.
Note that the services are not started in the upgrade tasks - we instead re-run puppet which does any reconfiguration required for the new version, then starts the services.
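A minimal sketch of an upgrade task that stops a hypothetical service at step 2, before packages are updated::

  upgrade_tasks:
    - name: Stop my_service before the package upgrade
      when: step|int == 2
      service:
        name: my_service
        state: stopped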
Nova Server Metadata Settings
-----------------------------
One can use the hook of type OS::TripleO::ServiceServerMetadataHook to pass entries to the nova instances' metadata. It is, however, disabled by default. In order to enable it, one needs to define it in the resource registry. An implementation of this hook needs to conform to the following:
- It needs to define an input called RoleData of json type. This gets as input the contents of the role_data for each role's ServiceChain.
- It needs to define an output called metadata, which will be given to the Nova Server resource as the instance's metadata.
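A minimal sketch of such a hook template satisfying both requirements; the output value and the heat_template_version alias are placeholders. The template would then be mapped to OS::TripleO::ServiceServerMetadataHook via a resource_registry entry in an environment file::

  heat_template_version: rocky

  parameters:
    RoleData:
      type: json
      description: Contents of role_data for this role's ServiceChain.

  outputs:
    metadata:
      description: Metadata entries given to this role's Nova servers.
      value:
        example_key: example_value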