
Docker Services

TripleO docker services are currently built on top of the puppet services. To do this, each docker service includes the output of the corresponding t-h-t puppet/services templates where appropriate.

In general, global docker-specific service settings should reside in these templates (templates in the docker/services directory). The required and optional items are specified in the docker settings section below.

If you are adding a config setting that applies to both docker and baremetal, that setting should (so long as we use puppet) go into the puppet/services templates themselves.

Also see the Puppet services README file for more information about how the service templates work in general.
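To make this layering concrete, here is a minimal sketch of a docker service template; the service name, template path, and parameter set are illustrative placeholders, not copied from a real template:

```yaml
heat_template_version: rocky

parameters:
  EndpointMap:
    type: json
    default: {}

resources:
  # Reuse the baremetal/puppet service template for its hiera settings.
  ExamplePuppetBase:
    type: ../../puppet/services/example.yaml
    properties:
      EndpointMap: {get_param: EndpointMap}

outputs:
  role_data:
    description: Role data for the containerized example service.
    value:
      service_name: example
      # Inherit the puppet config_settings; docker-specific settings
      # would be appended here (e.g. via map_merge).
      config_settings:
        get_attr: [ExamplePuppetBase, role_data, config_settings]
```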

Building Kolla Images

TripleO currently relies on Kolla docker containers. Kolla supports container customization and we are making use of this feature within TripleO to inject puppet (our configuration tool of choice) into the Kolla base images. The undercloud nova-scheduler also requires openstack-tripleo-common to provide custom filters.

To build Kolla images for TripleO, adjust your kolla config [1] to build your centos base image with puppet using the example below:

$ cat template-overrides.j2
{% extends parent_template %}
{% set base_centos_binary_packages_append = ['puppet'] %}
{% set nova_scheduler_packages_append = ['openstack-tripleo-common'] %}

kolla-build --base centos --template-override template-overrides.j2

Docker settings

Each service may define an output variable which returns a puppet manifest snippet that will run at each of the following steps. Earlier manifests are re-asserted when applying later ones.

  • config_settings: This setting is generally inherited from the puppet/services templates and only needs to be appended to on occasion, if docker-specific config settings are required.

  • kolla_config: Contains YAML that represents how to map config files into the kolla container. This config file is typically mapped into the container itself at the /var/lib/kolla/config_files/config.json location and drives how kolla's external config mechanisms work.

  • docker_config: Data that is passed to the docker-cmd hook to configure a container, or set of containers, at each step. See the available steps below and the related docker-cmd hook documentation in the heat-agents project.

  • puppet_config: This section is a nested set of key value pairs that drive the creation of config files using puppet. Required parameters include:

    • puppet_tags: Puppet resource tag names that are used to generate config files with puppet. Only the named config resources are used to generate a config file. Any service that specifies tags will have the default tags of 'file,concat,file_line,augeas,cron' appended to the setting. Example: keystone_config
    • config_volume: The name of the volume (directory) where config files will be generated for this service. Use this as the location to bind mount into the running Kolla container for configuration.
    • config_image: The name of the docker image that will be used for generating configuration files. This is often the same container that the runtime service uses. Some services share a common set of config files which are generated in a common base container.
    • step_config: This setting controls the manifest that is used to create docker config files via puppet. The puppet tags below are used along with this manifest to generate a config directory for this container.
  • docker_puppet_tasks: This section provides data to drive the docker-puppet.py tool directly. The task is executed only once within the cluster (not on each node) and is useful for several puppet snippets we require for initialization of things like keystone endpoints, database users, etc. See docker-puppet.py for formatting.
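As a hedged illustration of how these sections fit together, a service's role_data output might look like the following sketch (the service name, image parameter, and manifest include are hypothetical):

```yaml
role_data:
  value:
    service_name: example_api
    kolla_config:
      # Mapped into the container as /var/lib/kolla/config_files/config.json
      /var/lib/kolla/config_files/example_api.json:
        command: /usr/bin/example-api
        config_files:
          - source: "/var/lib/kolla/config_files/src/*"
            dest: "/"
            merge: true
            preserve_properties: true
    puppet_config:
      config_volume: example
      puppet_tags: example_config
      step_config: "include ::tripleo::profile::base::example::api"
      config_image: {get_param: DockerExampleConfigImage}
```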

Docker steps

Similar to baremetal, docker containers are brought up in a stepwise manner. The current architecture supports bringing up baremetal services alongside containers. For each step, the baremetal puppet manifests are executed first, and then any docker containers are brought up.

Steps correlate to the following:

Pre) Container config files generated per hiera settings.
1) Load Balancer configuration baremetal
   a) step 1 baremetal
   b) step 1 containers
2) Core Services (Database/Rabbit/NTP/etc.)
   a) step 2 baremetal
   b) step 2 containers
3) Early Openstack Service setup (Ringbuilder, etc.)
   a) step 3 baremetal
   b) step 3 containers
4) General OpenStack Services
   a) step 4 baremetal
   b) step 4 containers
   c) Keystone containers post initialization (tenant, service, endpoint creation)
5) Service activation (Pacemaker), online data migration
   a) step 5 baremetal
   b) step 5 containers
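The step numbering above is what the docker_config section keys off of. A sketch, with an illustrative container name, image parameter, and mounts:

```yaml
docker_config:
  # Containers are grouped under step_1 .. step_5; this general
  # OpenStack service starts at step 4.
  step_4:
    example_api:
      image: {get_param: DockerExampleApiImage}
      net: host
      restart: always
      volumes:
        - /var/lib/kolla/config_files/example_api.json:/var/lib/kolla/config_files/config.json:ro
        - /var/lib/config-data/puppet-generated/example:/var/lib/kolla/config_files/src:ro
      environment:
        - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
```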

Update steps:

All services have an associated update_tasks output: an ansible snippet that is run during the update, in a rolling fashion (one node at a time).

For Controller nodes (where pacemaker is running) we have the following steps:
  1. Step=1: Stop the cluster on the updated node.
  2. Step=2: Pull the latest image and retag it pcmklatest.
  3. Step=3: yum upgrade happens on the host.
  4. Step=4: Restart the cluster on the node.
  5. Step=5: Verification: currently we test that the pacemaker services are running.
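These states translate into step-gated ansible tasks in update_tasks; a hedged sketch of the pattern (task names are placeholders, and real templates vary per service):

```yaml
update_tasks:
  # Each task is gated on the step variable supplied during the update.
  - name: Stop the pacemaker cluster on the updated node
    when: step|int == 1
    pacemaker_cluster: state=offline
  - name: Start the pacemaker cluster on the updated node
    when: step|int == 4
    pacemaker_cluster: state=online
```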

Then the usual deploy steps are run which pull in the latest image for all containerized services and the updated configuration if any.

Note: as pacemaker is not containerized, steps 1 and 4 happen in puppet/services/pacemaker.yaml.

Fast-forward Upgrade Steps

Each service template may optionally define a fast_forward_upgrade_tasks key, which is a list of Ansible tasks to be performed during the fast-forward upgrade process. As with the update steps, each task is associated with a particular step, provided as a variable and used along with a release variable by a basic conditional that determines when the task should run.

Steps are broken down into two categories, prep tasks executed across all hosts and bootstrap tasks executed on a single host for a given role.

The individual steps then correspond to the following tasks during the upgrade:

Prep steps:

  • Step=0: Check running services
  • Step=1: Stop the service
  • Step=2: Stop the cluster
  • Step=3: Update repos

Bootstrap steps:

  • Step=4: DB backups
  • Step=5: Pre package update commands
  • Step=6: Package updates
  • Step=7: Post package update commands
  • Step=8: DB syncs
  • Step=9: Verification
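As a sketch of the conditional pattern these steps use (the service name and release value are illustrative):

```yaml
fast_forward_upgrade_tasks:
  # Gated on both the step and the release being traversed.
  - name: Stop the example service
    service: name=openstack-example-api state=stopped
    when:
      - step|int == 1
      - release == 'ocata'
```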

  1. See the override file which can be used to build Kolla packages that work with TripleO, and an `example build script <https://github.com/dprince/undercloud_containers/blob/master/build_kolla.sh>`_.