This is needed because the Rabbitmq_ providers might prefetch
data from rabbitmq on the system, and we do not want to run
any task on the host that is not strictly pcmk-related.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: I0762b3ed7d881babfecc49b1a0e3798eb3efbddb
Related-Bug: #1863442
This is needed because the Mysql_ providers will prefetch
the mysql users if facter finds the '/bin/mysql' executable
on the system, and we do not want to run any mysql task
directly on the host.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: Ic6c65e6849368185177aeaa31d50f52761225f62
Related-Bug: #1863442
With the switch to Ubuntu Focal for tox jobs via https://review.opendev.org/#/c/738322/,
our hacking 1.1.0 pulls in old modules that are not compatible
with python3.8:
https://github.com/openstack/hacking/blob/1.1.0/requirements.txt#L6
Let's upgrade hacking to >= 3.0.1 and < 3.1.0 so that it supports python3.8
correctly. The newer hacking also flags new errors, which are
fixed in this review as well:
./tools/render-ansible-tasks.py:113:25: F841 local variable 'e' is assigned to but never used
./tools/yaml-validate.py:541:19: F999 '...'.format(...) has unused arguments at position(s): 2
./tools/render-ansible-tasks.py:126:1: E305 expected 2 blank lines after class or function definition, found 1
./tools/yaml-validate.py:33:1: E305 expected 2 blank lines after class or function definition, found 1
./container_config_scripts/tests/test_nova_statedir_ownership.py:35:1: E305 expected 2 blank lines after class or function definition, found 0
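For reference, E305 just requires two blank lines between a top-level definition and the module-level code that follows it; a minimal sketch (the function is a toy, not one of the scripts listed above):

```python
def is_yaml(path):
    """Toy helper: report whether a path looks like a YAML file."""
    return path.endswith(".yaml")


# E305: module-level code must be separated from the definition
# above by two blank lines.
DEFAULT_OK = is_yaml("overcloud.yaml")
```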
Also exclude .tox and __pycache__ from flake8.
We also need to change the lower-constraints requirements to make
them py3.8 compatible. See https://bugs.launchpad.net/nova/+bug/1886298
cffi==1.14.0
greenlet==0.4.15
MarkupSafe==1.1.0
paramiko==2.7.1
Suggested-By: Yatin Karel <ykarel@redhat.com>
Change-Id: Ic280ce9a51f26d165d4e93ba0dc0c47cdf8d7961
Closes-Bug: #1895093
Define the parameter NetConfigDataLookup in overcloud.yaml,
and write its content into ansible group_vars. The parameter
was previously used in the firstboot heat software config
resource firstboot/os-net-config-mappings.yaml. With nova-less
none of the firstboot software configuration resources can be
used. The depends-on change in tripleo-ansible will parse the
lookup data, and write the os-net-config mapping file.
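For context, a minimal sketch of the lookup data shape (node names and MAC addresses here are purely illustrative):

```yaml
parameter_defaults:
  NetConfigDataLookup:
    # Map a physical NIC alias to the MAC address found on each
    # node; values below are examples only.
    node1:
      nic1: "00:c8:7c:e6:f0:2e"
    node2:
      nic1: "00:18:7d:99:0c:b6"
```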
Depends-On: https://review.opendev.org/749669
Change-Id: I583bf17c0020bb2a90f885ece0cd5684fc27a980
Blueprint: nova-less-deploy
Starting with Ceph Nautilus it is possible to enable on-wire
encryption between daemons and clients.
This change adds a setting to optionally configure Ceph with OTW
encryption and a setting in scenario001-standalone to test it.
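As a sketch of how the new setting would be enabled (the parameter name CephMsgrSecureMode is an assumption here, not confirmed by this change):

```yaml
parameter_defaults:
  # Assumed parameter name: turns on 'secure' mode for the
  # Ceph messenger so daemon/client traffic is encrypted.
  CephMsgrSecureMode: true
```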
Change-Id: I5d046b814a211aec9051f5278f98a3e81580057c
This sets the nova/glance services to OS::Heat::None in the
default undercloud environment and adds an environment file to
enable nova (if needed).
Once tripleoclient has been changed to flip the nova_enable
flag, we can drop undercloud-disable-nova.yaml
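Disabling a service this way is plain Heat resource_registry mapping; a minimal sketch (the service key is an example, not necessarily one this change touches):

```yaml
resource_registry:
  # Map the service template to the no-op resource so Heat
  # creates nothing for it.
  OS::TripleO::Services::NovaCompute: OS::Heat::None
```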
Partial-Bug: #1891242
Depends-On: https://review.opendev.org/#/c/749659/
Change-Id: I88aaa58f49eb8a2dc38232132d0397a83c76104e
The ipaclient role from ansible-freeipa ensures the dependencies it
needs are installed. We shouldn't need to add duplication here.
Change-Id: I730c456a8a2ede3a8f35f5c808bf5924809ec82f
We need a special ACI in FreeIPA to allow etcd to obtain a
certificate with an IP SAN. This ACI needs to be added ahead of
time. We add a call for a validation here to make sure that the
relevant ACI has been added.
On failure, the installation will fail with instructions to add
the ACI.
The validation that is invoked here has already merged in:
https://review.opendev.org/#/c/741313/
Change-Id: I9baaa77b5b846c96cf075244a8ccb6889469b08e
This should not be needed; other jobs use the net-config-multinode.yaml
file. Let's get rid of net-config-multinode-os-net-config.yaml for
consistency, and also in order to get rid of the network_config_hook()
function, which should not be needed anymore.
Change-Id: Ic95d84e611164dc5e28bc182b92991a317bae567
This implements the creation of the redis bundle on the host.
The testing protocol used is documented in the depends-on.
The full rationale is contained in the LP bug.
The reason for adding a post_update task is that during a minor update
the deployment tasks are not run during the node update procedure but
only during the final converge. So we run the role again there to make
sure that any config change will trigger a restart during the minor
update, so the disruption is only local to the single node being
updated. If we did not do this, a final converge could potentially
trigger a global restart of HA bundles, which would be quite disruptive.
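The post_update mechanism described above can be sketched as follows (a minimal sketch; the role name and step number are illustrative, not taken from this change):

```yaml
post_update_tasks:
  # Re-run the HA deployment role while each node is updated,
  # so a config change restarts the bundle on that node only.
  - name: redis bundle post_update
    when: step|int == 1
    include_role:
      name: tripleo_redis_ha  # hypothetical role name
```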
Related-Bug: #1863442
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: I5ce8367363d535b71b01395b0bef4cf17c8935b5
We add the following depends-on because at this stage ovn will be
created on the host and the resource agent won't be present, so
we have to be able to tell pcs to ignore the non-existence of the
ovn-dbs OCF resource on the host.
Depends-On: If9048196b5c03e3cfaba72f043b7f7275568bdc4
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: I918b6c16db6ed70d9ad612aecd7af7d725520f7b
Related-Bug: #1863442
This implements the creation of the haproxy bundle on the host.
The testing protocol used is documented in the depends-on.
The reason for adding a post_update task is that during a minor update
the deployment tasks are not run during the node update procedure but
only during the final converge. So we run the role again there to make
sure that any config change will trigger a restart during the minor
update, so the disruption is only local to the single node being
updated. If we did not do this, a final converge could potentially
trigger a global restart of HA bundles, which would be quite disruptive.
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: Ia4399b632257e693fb2c516e487856331149589d
Related-Bug: #1863442
This exposes the remove_unused_original_minimum_age_seconds option from
nova.conf, which controls the time (in seconds) that nova compute should
continue caching an image once it is no longer used by any instances on
the host.
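For reference, the underlying option in nova.conf (the section shown is the one used by recent nova releases; older releases register it elsewhere, so treat the placement as an assumption):

```ini
[image_cache]
# Seconds an unused base image stays cached before the image
# cache manager is allowed to remove it (86400 = 24 hours).
remove_unused_original_minimum_age_seconds = 86400
```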
Change-Id: Idfa892bab0cb59de5a418f0dc23a6e7d60100a49
Swift ring files are synchronized by up- and downloading them to the
undercloud, making sure every node on the overcloud has the same copy to
start with.
One (optional) step in the process is to ensure rings are in sync before
uploading them eventually. swift-recon is used to query all Swift object
storage nodes, get the md5sum of the ring files and compare them with
the local ring file md5sum.
However, in containerized deployments this will fail, because Swift
containers are not immediately restarted after rebalancing. The object
server will return the md5sum of the previous ring version, which does
not match the rebalanced local file. TripleO is intended to skip
this check by setting skip_consistency_check to true.
However, the parameter was never set to true, and this patch fixes it.
Running an overcloud update immediately after an initial deployment was
not affected by this. Same for multiple overcloud updates - subsequent
updates did fix this issue automatically. In the first case the rings
were not rebalanced due to min_part_hours not passed, in the latter case
they were synchronized on the subsequent update.
Closes-Bug: #1892674
Change-Id: Ib56f59b7d2a981196eab334108d42ca4390c0566
This change adds a post_upgrade task used to remove the
puppet-ceph package, which is no longer required.
No ceph components depend on it, so it should be
removed from the system.
Change-Id: I9cd63ee34dfd964f18a33043980703660b0a55d7
This implements the creation of the manila-share bundle on the host.
The testing protocol used is documented in the depends-on.
The reason for adding a post_update task is that during a minor update
the deployment tasks are not run during the node update procedure but
only during the final converge. So we run the role again there to make
sure that any config change will trigger a restart during the minor
update, so the disruption is only local to the single node being
updated. If we did not do this, a final converge could potentially
trigger a global restart of HA bundles, which would be quite disruptive.
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: If5730dba4973a5927c84565b6a65398ea1d7072f
Related-Bug: #1863442
The collectd-libpod-stats plugin requires additional libpod volumes to
be mounted into the collectd container in order to find and track running
containers. This mounts the only additional volume necessary.
Change-Id: I0f3fb05d8295f8707ad041debb250f255d20626f
Signed-off-by: pleimer <pfbleimer@gmail.com>
This separates the overcloud network configuration into a separate
play and adds a new tag 'network_deploy_steps' so that it can be
executed alone using '--tags network_deploy_steps'.
Change-Id: I96b0d838e79bcaa8b08ffaa2fb745ee7003d1284
This implements the creation of the rabbitmq bundle on the host.
The testing protocol used is documented in the depends-on.
The reason for adding a post_update task is that during a minor update
the deployment tasks are not run during the node update procedure but
only during the final converge. So we run the role again there to make
sure that any config change will trigger a restart during the minor
update, so the disruption is only local to the single node being
updated. If we did not do this, a final converge could potentially
trigger a global restart of HA bundles, which would be quite disruptive.
NB: The init_bundle has now become the wait_bundle,
as it just waits for rabbitmq to be up and functional.
Related-Bug: #1863442
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: I853bcf354f64ef88cec9e68ad5c123e4af786de3
This implements the creation of the galera bundle on the host.
The testing protocol used is documented in the depends-on.
The reason for adding a post_update task is that during a minor update
the deployment tasks are not run during the node update procedure but
only during the final converge. So we run the role again there to make
sure that any config change will trigger a restart during the minor
update, so the disruption is only local to the single node being
updated. If we did not do this, a final converge could potentially
trigger a global restart of HA bundles, which would be quite disruptive.
NB: in this case we keep the container init_bundle (renamed to
wait_bundle) around and just use it to wait for galera to be up.
Depends-On: Iaa7e89f0d25221c2a6ef0b81eb88a6f496f01696
Change-Id: Ie14819b66cecdb5a9cc6299b68a0cc70a7aa3370
Related-Bug: #1863442