fast_forward_upgrade_tasks for swift covering Ocata and Pike.
- Service status check
- Stop service when updating from Ocata to Pike
- Update swift packages
bp fast-forward-upgrades
Change-Id: I66879669cead2f7b5b2cd1398f344c063d771628
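For illustration, the fast_forward_upgrade_tasks described above might look
roughly like the sketch below (service names, step numbers and package names
are placeholders, not taken verbatim from this change):

    fast_forward_upgrade_tasks:
      # check that the service exists before acting on it
      - name: Check if swift-proxy is deployed
        command: systemctl is-enabled --quiet openstack-swift-proxy
        ignore_errors: true
        when: step|int == 0
      # stop the service only when moving off Ocata
      - name: Stop swift-proxy service
        service: name=openstack-swift-proxy state=stopped
        when:
          - step|int == 1
          - release == 'ocata'
      # pull in the newer packages before services come back on Pike
      - name: Update swift packages
        package: name=openstack-swift-proxy state=latest
        when: step|int == 6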
If we use variables defined in a later step in a conditional before
checking which step we are on, the task will fail.
Resolves: rhbz#1535457
Closes-Bug: #1743764
Change-Id: Ic21f6eb5c4101f230fa894cd0829a11e2f0ef39b
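In other words, the step comparison has to come first in the 'when' list so
that conditions referencing variables registered in a later step are never
evaluated too early; a minimal sketch (names illustrative):

    - name: Stop swift-proxy service
      service: name=openstack-swift-proxy state=stopped
      when:
        # the step check must come first; swift_proxy_enabled is only
        # registered once the earlier check task has actually run
        - step|int == 2
        - swift_proxy_enabled.rc == 0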
Enabling data-at-rest encryption and integration
with barbican to swift proxy
Related-Change-Id: I78c6003f5f599a422193dc47422ee607ce05c715
Related-Change-Id: I1ceda973733acb081967ab04a5fd57eb1609c9a7
Change-Id: I26cf063fe410689530ee507cc2f79e93b5e71732
Signed-off-by: Thiago da Silva <thiago@redhat.com>
This converts "tags: stepN" to "when: step|int == N" for the direct
execution as an ansible playbook, with a loop variable 'step'.
The tasks all include the explicit cast |int.
This also adds a set_fact task for handling of the package removal
with the UpgradeRemovePackages parameter (no change to the interface).
The yaml-validate also now checks for duplicate 'when:' statements.
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
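A before/after sketch of the conversion (task content illustrative):

    # before: tasks selected via ansible tags
    - name: Stop swift-proxy service
      tags: step2
      service: name=openstack-swift-proxy state=stopped

    # after: tasks selected via the 'step' loop variable
    - name: Stop swift-proxy service
      when: step|int == 2
      service: name=openstack-swift-proxy state=stopped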
As discussed in the bug below, the expirer service is provided
by the swift-proxy package so it needs to be stopped at the
same time as swift-proxy and not with the other swift services
Change-Id: I01518f82cef494682b4359ba7849ba7e37ac39cc
Related-Bug: 1701501
This ensures that the base class is applied, setting the required hash
values in swift.conf properly when deploying a proxy node without the
storage service at the same time.
Closes-Bug: 1732663
Depends-On: I11c044bbc8b9f56f95ace9320cc77303d9a7543e
Change-Id: Id916413c9d74071968d9988b604664fad30282b2
Step config is only required within the puppet_configs section
of docker/services/*. This patch drops the top level 'step_config'
and updates the unit tests accordingly.
Change-Id: I7dc7cfae3ef1965ec95b1d9ef23e7f162418c034
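Roughly, the change looks like the following sketch (attribute paths
illustrative): the top-level step_config disappears and only the copy nested
under puppet_config remains.

    outputs:
      role_data:
        value:
          service_name: swift_proxy
          # step_config now lives only inside puppet_config
          puppet_config:
            config_volume: swift
            step_config:
              get_attr: [SwiftProxyBase, role_data, step_config]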
The logging directories are not used by swift, which instead logs to
syslog, so the unnecessary bind-mounts were removed.
Furthermore, as part of the swift package included in the overcloud image
there is an rsyslog rule which picks up the swift logs and persists them
in /var/log/swift. Because of this, and to be consistent with the rest of
the containers, a symlink was created so the swift logs are also available
from /var/log/containers/swift. This makes the readme indicating the new
location of the logs unnecessary, so it was removed as well.
Closes-Bug: #1732107
Change-Id: I61610d3cb187235ac316e1c5de0a344be3ebb1e2
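The symlink is the kind of thing host_prep_tasks can provide; a minimal
sketch (task wording illustrative, paths as described above):

    host_prep_tasks:
      # rsyslog (shipped with the swift package) already persists the logs
      # to /var/log/swift; expose them under /var/log/containers as well
      - name: Create symlink for swift logs under /var/log/containers
        file:
          src: /var/log/swift
          dest: /var/log/containers/swift
          state: link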
This should help operators find the new log files. We do have them
documented, but not everybody reads every word in the docs :)
The readme creation has ignore_errors: true so that if the directory
isn't present at all (e.g. on deployed server environments, which
don't have openstack packages installed), we don't fail the deployment
when we're not able to create the readme.
Change-Id: I6b36db7b7ce8b3e4da566eb7828d0c3b8646a14f
Partial-Bug: #1730957
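A sketch of such a readme task (path and text illustrative), with
ignore_errors so a missing directory does not break the deployment:

    host_prep_tasks:
      - name: Write a readme pointing at the new log location
        copy:
          dest: /var/log/swift/readme.txt
          content: |
            Log files from the swift containers can be found under
            /var/log/containers/swift.
        # deployed-server environments may not have this directory at all
        ignore_errors: true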
Docker services are missing the pre-upgrade validation task
in the upgrade_tasks section which verifies that the service
is running before proceeding with the upgrade.
Change-Id: I16f38d9e1042c5d83455a28882b4a024aac27699
Partial-Bug: #1704389
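The validation task typically looks something like the following sketch
(service name illustrative):

    upgrade_tasks:
      - name: Check if swift-proxy is deployed
        command: systemctl is-enabled --quiet openstack-swift-proxy
        ignore_errors: true
        register: swift_proxy_enabled
      - name: "PreUpgrade step0: validate that swift-proxy is running"
        shell: systemctl is-active --quiet openstack-swift-proxy
        when:
          - step|int == 0
          - swift_proxy_enabled.rc == 0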
Adds an UpgradeRemoveUnusedPackages param to use
in the ansible when conditional for the removal.
Adds package removal to step2, right after a service
is stopped and disabled in step2. Package updates
happen in step3, so ideally packages are removed before that.
The package removal task has ignore_errors true, so
dependency problems or other issues removing packages will
not fail the upgrade workflow.
Also adds this to the upgrade environment files
for visibility, defaulting to false (a sketch of the param
and task follows below).
Change-Id: Ie4e4a2d41f7752c5a13507a7c15c6f68e203cfca
Related-Bug: 1701501
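A sketch of the param and the step2 removal task (heat rendering details
omitted, package name illustrative):

    parameters:
      UpgradeRemoveUnusedPackages:
        default: false
        description: Remove the package if the service is being disabled during the upgrade
        type: boolean

    upgrade_tasks:
      - name: Remove openstack-swift-proxy package if operator requests it
        yum: name=openstack-swift-proxy state=removed
        tags: step2
        ignore_errors: true
        when: {get_param: UpgradeRemoveUnusedPackages}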
We were setting them in the Dockerfiles previously. However, this
caused the healthcheck commands to always run regardless of which
process we were running in the container. This caused 'unhealthy'
containers at times when they were never intended to be checked. This
change makes it so they are explicitly set.
Change-Id: I7bc12d236b3cc7a52d3e6aa706fd04675dad3a9a
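The healthcheck is now declared per container in docker_config instead of
being baked into the image; a sketch (image parameter and script path
illustrative):

    docker_config:
      step_4:
        swift_proxy:
          image: {get_param: DockerSwiftProxyImage}
          restart: always
          healthcheck:
            # explicit healthcheck, only on the container that actually
            # runs the process it is meant to check
            test: /openstack/healthcheck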
The services that the docker services depend on have logging_source and
logging_groups, but those are not set on the docker outputs, so they are
not used when the docker services are deployed.
Added logging_source & logging_groups as optional docker parameters in
tools/yaml-validate.py.
Closes-Bug: #1718110
Change-Id: I8795eaf4bd06051e9b94aa50450dee0d8761e526
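Sketch of passing these through on the docker service's role_data
(attribute paths illustrative):

    outputs:
      role_data:
        value:
          service_name: {get_attr: [SwiftProxyBase, role_data, service_name]}
          # forward the logging configuration from the base (puppet) service
          logging_source: {get_attr: [SwiftProxyBase, role_data, logging_source]}
          logging_groups: {get_attr: [SwiftProxyBase, role_data, logging_groups]}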
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Makes it possible to resolve network subnets within a service
template; the data is transported in a new property, ServiceData,
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
Data can be consumed in service templates to set config values
which need to know the subnet on which a daemon operates (for
example the Ceph public vs cluster network).
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
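Sketch of how a service template might consume the new property (the
hiera key, network name and map key are illustrative):

    parameters:
      ServiceData:
        default: {}
        description: Dictionary packing service data
        type: json
    ...
        config_settings:
          # resolve the subnet the daemon should bind to / operate on
          example_service::cluster_network:
            get_param: [ServiceData, net_cidr_map, storage_mgmt]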
This solves a problem with bind-mounts when the containers are holding
files descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard to debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run, for
the same config_volume name.
This fixes a couple of inconsistencies along the way.
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
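For swift, the convention looks roughly like this (definition trimmed):

    parameters:
      DockerSwiftConfigImage:
        description: The container image to use for the swift config_volume
        type: string
    ...
          puppet_config:
            config_volume: swift
            # every service sharing the 'swift' config_volume now defaults
            # to the same image for running docker-puppet.py
            config_image: {get_param: DockerSwiftConfigImage}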
This change modifies these mounts to be more specific mounts based on
the files which puppet actually modifies.
The result is something a bit more self-documenting, and allows for
trying other techniques for populating /etc other than directly mounting
config-data directories.
Change-Id: Ied1eab99d43afcd34c00af25b7e36e7e55ff88e6
This is needed since it's what writes the service metadata to the nova
server in order to create the kerberos principals. It worked in a base
controller since the keystone template does have this. But if we were to
deploy these services on a separate role, it would break, so this output
is needed.
bp tls-via-certmonger-containers
Change-Id: I3ee8c65d356dcd092a3fbf79041e5c69ef23b721
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
When a service is enabled on multiple roles, the parameters for the
service will be global. This change enables an option to provide
role specific parameter to services and other templates.
Two new parameters - RoleName and RoleParameters - are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
    parameter_defaults:
      # Default value applied to all roles
      NovaReservedHostMemory: 2048
      ComputeDpdkParameters:
        # Applied only to the ComputeDpdk role
        NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed to the
templates as RoleParameters while creating the stack for the ComputeDpdk
role. A parameter which supports role-specific configuration should be
looked up first in the RoleParameters list; if it is not found there, the
default (for all roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
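The two new parameters are declared in each service template roughly as
follows:

    parameters:
      RoleName:
        default: ''
        description: Role name on which the service is applied
        type: string
      RoleParameters:
        default: {}
        description: Parameters specific to the role
        type: json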
This spawns an extra container running httpd that provides the TLS proxy
sitting in front of swift.
bp tls-via-certmonger-containers
Depends-On: Ib01137cd0d98e6f5a3e49579c080ab18d8905b0d
Change-Id: I9639af8b46b8e865cc1fa7249bf1d8b1b978adfe
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes.
* Add kolla_config permissions and host_prep_tasks to create and manage
  permissions of hostpath-mounted log dirs (see the sketch below).
* Rework data-owning init containers to kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or a DB sync, use log-data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage, so no permissions have been set for logs yet.
* In order to address hybrid cases where host services and containerized
  ones access logs with different UIDs, persist containerized
  services' logs into separate directories (an upgrade impact).
* Ensure host prep tasks create the /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd and /var/www config-data mounts for zaqar/ironic.
* Fix YAML indentation and drop string quotation.
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
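A sketch of the kolla_config permissions plus host_prep_tasks combination
described above (paths and owners illustrative):

    kolla_config:
      /var/lib/kolla/config_files/swift_proxy.json:
        command: /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
        permissions:
          - path: /var/log/swift
            owner: swift:swift
            recurse: true
    host_prep_tasks:
      - name: Create persistent log dir for the containerized service
        file:
          path: /var/log/containers/swift
          state: directory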
list_concat was introduced recently and is able to replace the yaql
calls for concatenating lists.
Change-Id: Id3a80a0e1e4c25b6d838898757c69ec99d0cd826
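Example of the substitution (volume list illustrative):

    # before: a yaql expression concatenating the two lists
    volumes:
      yaql:
        expression: $.data.common + $.data.service
        data:
          common: {get_attr: [ContainersCommon, volumes]}
          service:
            - /srv/node:/srv/node

    # after: the same with list_concat
    volumes:
      list_concat:
        - {get_attr: [ContainersCommon, volumes]}
        - - /srv/node:/srv/node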
This enables common resources that the docker templates might need.
The initial resource only is common volumes, and two volumes are
introduced (localtime and hosts).
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
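The common resource exposes a volumes output that service templates can
merge into their own mounts; a sketch (the two volumes follow the
description above, the rest is illustrative):

    outputs:
      volumes:
        description: Common volumes for every container
        value:
          - /etc/hosts:/etc/hosts:ro
          - /etc/localtime:/etc/localtime:ro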
Simplify the config of the containerized services by bind mounting in
the configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the config
files and running the container in two separate steps, as config
directories are now bind-mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file being present in the container and preventing
it from starting when puppet had configured apache not to load the ssl
module (in case TLS is disabled).
Co-Authored-By: Ian Main <imain@redhat.com>
Change-Id: I4ec5dd8b360faea71a044894a61790997f54d48a
Use mounts instead of docker volumes to preserve existing data when
moving from baremetal to containerized Swift.
Change-Id: Ib7cbca2ef674a0245a67b69ee2c77f574d74c181
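Sketch of such a mount (the exact host paths are not spelled out in the
change above, so these are illustrative):

    swift_object:
      image: {get_param: DockerSwiftObjectImage}
      volumes:
        # bind-mount the host path that baremetal swift already used,
        # instead of a named docker volume, so existing data is preserved
        - /srv/node:/srv/node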
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
This aligns the docker-based services with the new composable upgrades
architecture we landed for ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
    openstack overcloud deploy --templates \
      -e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks
then run the normal docker PostDeploySteps to bring up the containers.
For the puppet workflow there's then an operator-driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
    openstack overcloud deploy --templates \
      -e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API, so
presumably it'll restart the nova containers this affects, but otherwise
it will be a no-op (we also disable the ansible steps at this point).
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
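The first-pass upgrade_tasks for a docker service are essentially just the
host-side stop/disable; a sketch (service name illustrative):

    upgrade_tasks:
      - name: Stop and disable swift proxy service on the host
        tags: step2
        service: name=openstack-swift-proxy state=stopped enabled=no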
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
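Sketch of the resulting puppet_config dict that replaces the yaql zip
(values illustrative):

    puppet_config:
      config_volume: swift
      puppet_tags: swift_config,swift_proxy_config
      step_config:
        get_attr: [SwiftProxyBase, role_data, step_config]
      config_image:
        list_join:
          - '/'
          - [{get_param: DockerNamespace}, {get_param: DockerSwiftProxyImage}]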