Update documentation around container params

We've changed some of the naming in the Train cycle and need to update
the docs to reflect the current state. Additionally, add notes about
the previous naming conventions.

Change-Id: Ica16b31f332f6b8559a14617126037407f4e8165
Alex Schultz 2019-07-03 10:01:54 -06:00
parent 8964476ad8
commit cd3a8cb3b0
5 changed files with 65 additions and 41 deletions

View File

@ -438,7 +438,7 @@ Here's an example of the container definition::
  step_2:
    etcd:
      image: {get_param: DockerEtcdImage}
      image: {get_param: ContainerEtcdImage}
      net: host
      privileged: false
      restart: always
@ -456,13 +456,13 @@ This is what we're telling TripleO to do:
* Start the container on step 2
* Use the container image coming from the ``DockerEtcdImage`` heat parameter.
* Use the container image coming from the ``ContainerEtcdImage`` heat parameter.
* For the container, use the host's network.
* The container is not `privileged`_.
* Docker will use the ``/openstack/healthcheck`` endpoint for healthchecking
* The container will use the ``/openstack/healthcheck`` endpoint for healthchecking
* We tell it what volumes to mount
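
Putting those directives together, a rough sketch of such an entry could look
like the following (the healthcheck and volume values here are illustrative
assumptions, not taken from the real etcd template)::

  step_2:
    etcd:
      image: {get_param: ContainerEtcdImage}
      net: host
      privileged: false
      restart: always
      healthcheck:
        test: /openstack/healthcheck
      volumes:
        - /var/lib/etcd:/var/lib/etcd
        - /etc/localtime:/etc/localtime:ro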
@ -486,8 +486,8 @@ This is what we're telling TripleO to do:
directives as part of the kolla entry point. If we don't set this, it
will only be executed the first time we run the container.
``docker_puppet_tasks`` section
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``container_puppet_tasks`` section
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These are containerized puppet executions that are meant as bootstrapping
tasks. They typically run on a "bootstrap node", meaning, they only run on one
@ -500,6 +500,8 @@ section, except for the fact that you can set several of these, and they also
run as part of the steps (you can specify several of these, divided by the
``step_<step number>`` keys).
.. note:: This section was named ``docker_puppet_tasks`` prior to the Train cycle.
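
A rough sketch of what one of these tasks can look like in a service template
(the keystone values below are illustrative assumptions, not copied from a real
template)::

  container_puppet_tasks:
    step_3:
      config_volume: keystone
      puppet_tags: keystone_endpoint
      step_config: include tripleo::profile::base::keystone
      config_image: {get_param: ContainerKeystoneImage}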
.. References

View File

@ -465,7 +465,7 @@ Overcloud deploy step tasks for [1,2,3,4,5]
tags: overcloud, deploy_steps
Overcloud common deploy step tasks [1,2,3,4,5]
Applies the common tasks done at each step to include puppet host
configuration, ``docker-puppet.py``, and ``paunch`` (container configuration).
configuration, ``container-puppet.py``, and ``paunch`` (container configuration).
tags: overcloud, deploy_steps
Server Post Deployments

View File

@ -121,7 +121,9 @@ create a custom heat environment file that contains your override. To swap out
the cinder container from our previous example we would add::
parameter_defaults:
   DockerCinderVolumeImage: centos-binary-cinder-volume-vendorx:rev1
   ContainerCinderVolumeImage: centos-binary-cinder-volume-vendorx:rev1
.. note:: Image parameters were named Docker*Image prior to the Train cycle.
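
The override file is then passed to the deployment like any other environment
file; a minimal sketch (the file name is only an illustration)::

  openstack overcloud deploy --templates \
    -e cinder-image-override.yaml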
3rd party kernel modules

View File

@ -14,25 +14,36 @@ parts that allow for deploying OpenStack in containers using TripleO.
Containers runtime deployment and configuration notes
-----------------------------------------------------
TripleO deploys the containers runtime and image components from the docker
packages. The installed components include the docker daemon system service and
`OCI`_ compliant `Moby`_ and `Containerd`_ - the building blocks for the
container system.
TripleO has transitioned to the `podman`_ container runtime. Podman does not
use a persistent daemon to manage containers. TripleO wraps the container
service execution in systemd managed services. These services are named
tripleo_<container name>. Prior to Stein TripleO deployed the containers
runtime and image components from the docker packages. The installed components
include the docker daemon system service and `OCI`_ compliant `Moby`_ and
`Containerd`_ - the building blocks for the container system.
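
For example, an individual container can be inspected through its systemd unit
(substitute the real container name)::

  sudo systemctl status tripleo_<container name>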
Containers control plane includes `Paunch`_ and `Dockerd`_ for the
Containers control plane includes `Paunch`_ and systemd for the
stateless services, and Pacemaker `Bundle`_ for the containerized stateful
services, like the messaging system or database.
.. _podman: https://podman.io/
.. _OCI: https://www.opencontainers.org/
.. _Moby: https://mobyproject.org/
.. _Containerd: https://github.com/containerd/containerd
.. _dockerd: https://docs.docker.com/engine/reference/commandline/dockerd/
.. _Bundle: https://wiki.clusterlabs.org/wiki/Bundle_Walk-Through
There are ``Docker*`` configuration parameters in TripleO Heat Templates
available for operators. Those options may be used to override defaults for the
main docker daemon system service, or help to debug containerized TripleO
deployments. Parameter override example::
Currently we provide a ``ContainerCli`` parameter which can be used to switch
between podman and docker container runtimes. The default for the undercloud
is podman, while the default for the overcloud is docker due to pacemaker
limitations when running under CentOS 7. We expect to switch to podman by
default for the overcloud once CentOS 8 becomes the default.
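
A minimal sketch of forcing podman for a deployment via an environment file::

  parameter_defaults:
    ContainerCli: podman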
We have provided various ``Container*`` configuration parameters in TripleO
Heat Templates for operators to tune some of the container based settings.
There are still some ``Docker*`` configuration parameters in TripleO Heat
Templates available to operators; these are left over from the Docker based
deployment or kept for historical reasons.
Parameter override example::
parameter_defaults:
   DockerDebug: true
@ -50,6 +61,8 @@ deployments. Parameter override example::
.. note:: Make sure the default CIDR assigned for the `docker0` bridge interface
   does not conflict with other network ranges defined for your deployment.
.. note:: These options have no effect when using podman.
* ``DockerInsecureRegistryAddress``, ``DockerRegistryMirror`` allow you to
specify a custom registry mirror which can optionally be accessed insecurely
by using the ``DockerInsecureRegistryAddress`` parameter.
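
A sketch of such an override (the addresses are placeholders, and the list form
of ``DockerInsecureRegistryAddress`` is an assumption to verify for your release)::

  parameter_defaults:
    DockerInsecureRegistryAddress:
      - 192.168.24.1:8787
    DockerRegistryMirror: http://registry-mirror.example.com:8787/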
@ -130,10 +143,10 @@ This file is a jinja template and it's rendered before the deployment is
started. This file defines the resources that are executed before and after the
container initialization.
.. _docker-puppet.py:
.. _container-puppet.py:
docker-puppet.py
................
container-puppet.py
...................
This script is responsible for generating the config files for each service. The
script is called from the `deploy-steps.j2` file and it takes a `json` file as
@ -141,10 +154,13 @@ configuration. The json files passed to this script are built out of the
`puppet_config` parameter set in every service template (explained in the
`Docker specific settings`_ section).
The `docker-puppet.py` execution results in a oneshot container being executed
The `container-puppet.py` execution results in a oneshot container being executed
(usually named `puppet-$service_name`) to generate the configuration options or
run other service specific initialization tasks. Example: Create Keystone endpoints.
.. note:: container-puppet.py was named docker-puppet.py prior to the Train
   cycle.
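
If you need to inspect these one-shot containers after a run, something like the
following should list them (a sketch; use docker instead of podman where
applicable)::

  sudo podman ps --all --filter name=puppet-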
Anatomy of a containerized service template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -202,18 +218,19 @@ The following sections are available:
used along with this manifest to generate a config directory for
this container.
* docker_puppet_tasks: This section provides data to drive the
docker-puppet.py tool directly. The task is executed only once
* container_puppet_tasks: This section provides data to drive the
container-puppet.py tool directly. The task is executed only once
within the cluster (not on each node) and is useful for several
puppet snippets we require for initialization of things like
keystone endpoints, database users, etc. See docker-puppet.py
for formatting.
keystone endpoints, database users, etc. See container-puppet.py
for formatting. NOTE: these tasks were docker_puppet_tasks prior to the
Train cycle.
Docker steps
............
Container steps
...............
Similar to baremetal, docker containers are brought up in a stepwise manner. The
Similar to baremetal, containers are brought up in a stepwise manner. The
current architecture supports bringing up baremetal services alongside of
containers. Therefore, baremetal steps may be required depending on the service
and they are always executed before the corresponding container step.
@ -274,7 +291,7 @@ Service Bootstrap
Bootstrapping services is a one-shot operation for most services and it's done
by defining a separate container that shares the same structure as the main
service container commonly defined under the `docker_step` number 3 (see `Docker
service container commonly defined under the `docker_step` number 3 (see `Container
steps`_ section above).
Unlike normal service containers, the bootstrap container should be run in the

View File

@ -308,16 +308,19 @@ the container:
172ed68eb44ab20551a70a3e33c90a02014f530e42cd7b30255da4577c8ed80c
Debugging docker-puppet.py
--------------------------
Debugging container-puppet.py
-----------------------------
The :ref:`docker-puppet.py` script manages the config file generation and
puppet tasks for each service. This also exists in the `docker` directory
The :ref:`container-puppet.py` script manages the config file generation and
puppet tasks for each service. This also exists in the `common` directory
of tripleo-heat-templates. When writing these tasks, it's useful to be
able to run them manually instead of running them as part of the entire
stack. To do so, one can run the script as shown below::
CONFIG=/path/to/task.json /path/to/docker-puppet.py
CONFIG=/path/to/task.json /path/to/container-puppet.py
.. note:: Prior to the Train cycle, container-puppet.py was called
   docker-puppet.py and was located in the `docker` directory.
The json file must follow the following form::
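
As a rough sketch of that form (the key names are assumed from the
``puppet_config`` fields discussed below, and the values are illustrative)::

  [
    {
      "config_volume": "glance_api",
      "puppet_tags": "glance_api_config",
      "step_config": "include ::tripleo::profile::base::glance::api",
      "config_image": "centos-binary-glance-api:latest"
    }
  ]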
@ -340,7 +343,7 @@ Using a more realistic example. Given a `puppet_config` section like this::
config_image: {get_param: DockerGlanceApiConfigImage}
Would generate a json file called `/var/lib/docker-puppet/docker-puppet-tasks2.json` that looks like::
Would generate a json file called `/var/lib/container-puppet/container-puppet-tasks2.json` that looks like::
[
{
@ -353,24 +356,24 @@ Would generated a json file called `/var/lib/docker-puppet/docker-puppet-tasks2.
Setting the path to the above json file as the `CONFIG` environment
variable passed to `docker-puppet.py` will create a container using
variable passed to `container-puppet.py` will create a container using
the `centos-binary-glance-api:latest` image and run puppet on a
catalog restricted to the given puppet `puppet_tags`.
As mentioned above, it's possible to create custom json files and call
`docker-puppet.py` manually, which makes developing and debugging puppet
`container-puppet.py` manually, which makes developing and debugging puppet
steps easier.
`docker-puppet.py` also supports the environment variable `SHOW_DIFF`,
`container-puppet.py` also supports the environment variable `SHOW_DIFF`,
which causes it to print out a docker diff of the container before and
after the configuration step has occurred.
By default `docker-puppet.py` runs things in parallel. This can make
By default `container-puppet.py` runs things in parallel. This can make
it hard to see the debug output of a given container so there is a
`PROCESS_COUNT` variable that lets you override this. A typical debug
run for docker-puppet might look like::
run for container-puppet might look like::
SHOW_DIFF=True PROCESS_COUNT=1 CONFIG=glance_api.json ./docker-puppet.py
SHOW_DIFF=True PROCESS_COUNT=1 CONFIG=glance_api.json ./container-puppet.py
Testing a code fix in a container
---------------------------------