The bridge mappings should be managed via the standalone parameters.
Hard-coding this bridge mapping prevents us from changing the
datacentre mapping in CI.
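As a rough sketch, the mapping would then come from a standalone parameter
along these lines (the value shown is the usual default, not necessarily
what this change sets):

  parameter_defaults:
    NeutronBridgeMappings: datacentre:br-ex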
Change-Id: I6b5b9db75a11c2347720258a39b03aa28702dbf1
Related-Bug: #1895822
(cherry picked from commit c86fa4fb4e)
Some bare metal nodes may take a long time to boot. Nova may then bail
out in the Ironic driver too early. Setting a higher timeout gives the
machines more time to boot.
Closes-bug: #1816728
Change-Id: I38070efa7e9511ca74e2bfe4e53618e1176ca65b
(cherry picked from commit d41f0d7c35)
Podman is the default in the generated standalone environments
(e.g. environments/standalone/standalone-tripleo.yaml). However, since
we won't make it the default in overcloud-resource-registry-puppet.j2.yaml
until we get CentOS 8, Docker was still being deployed because
roles/Standalone.yaml contained the Docker service.
This patch makes sure we disable Docker.
Note: for scenarios 004 and 012, we need to keep Docker enabled because
Pacemaker is enabled and those jobs run on CentOS 7.
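For illustration, disabling the service boils down to a resource_registry
mapping of this shape (a sketch, not the literal diff):

  resource_registry:
    OS::TripleO::Services::Docker: OS::Heat::None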
Closes-Bug: #1835411
Change-Id: Ib34ba24c84f34a1533a90189d5154825c6dfa868
For the CI scenarios which still run Pacemaker, let's force ContainerCli
until we get them working with Podman.
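A minimal sketch of what forcing the container CLI looks like in an
environment file (assuming docker is the value being pinned, since
Pacemaker does not work with Podman yet):

  parameter_defaults:
    ContainerCli: docker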
Change-Id: I8405a00c7e4686b1569f6e68e3c1507f1e11b3a3
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal version of the MySQL database service
has been removed.
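As a sketch of the flattening (keys only, values elided), the single file
exposes both the puppet and the container pieces from one role_data output:

  outputs:
    role_data:
      value:
        service_name: mysql
        config_settings: {...}  # puppet hieradata, formerly in puppet/services/
        puppet_config: {...}    # how config files are generated for the container
        docker_config: {...}    # container definitions, formerly in docker/services/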
Change-Id: I407bd8d8fe9bde53609e4316b12eb0b7151552ca
Related-Blueprint: services-yaml-flattening
This change adds scenario012-standalone to the list of scenarios, copied
from the scenario012 multinode job and translated for use with standalone.
Scenario012 multinode was overriding the standard standalone-post
template to run specific configuration commands for testing Ironic with
Tempest, so a new post template was created that includes both sets of
post operations.
Change-Id: I308f569f41fc1a1c18ad543cf23db4672a3b5eb9
This patch switches the default mechanism driver for neutron from
openvswitch to OVN.
It also flips the scenario007 job to run with ML2/OVS.
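For illustration only (the exact environment wiring comes from this patch
and its dependencies), the ML2/OVN default roughly corresponds to settings
such as:

  parameter_defaults:
    NeutronMechanismDrivers: ovn
    NeutronNetworkType: geneve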
Depends-On: I74ffb6b7f912e1fce6ce428cd23a7283c91b8b96
Depends-On: I99ba2fd6a85b4895b577719a7541b7cbf1fdb85c
Depends-On: Ib60de9b0df451273d1d81ba049b46b5214e09080
Depends-On: Iaed7304adf40a87a0f14b7a95339f8416140e947
Change-Id: Iab52cdf5d0f7a392c4f17c884493b5c5beb1d89f
Co-Authored-By: Kamil Sambor <ksambor@redhat.com>
Now that we have gotten rid of the dedicated puppet definitions,
we can move the docker/* rabbitmq-related files to their final
location, and correct the paths and some nits.
Change-Id: I47ca1e303bd38642200ccb7f6823bcd06cd00255
This change combines the previous puppet and docker files
into a single file that performs the docker service installation
and configuration. With this patch the baremetal version of
nova has been removed.
Change-Id: If8f4daa9127aa528a2088a978494f2d6d83106e2
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal version of the haproxy service has been removed.
Change-Id: Id55ae44a7b1b5f08b40170f7406e14973fa93639
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal versions of the cinder services have been removed.
Change-Id: I88f047a8ee9c3eed80e4c48ed9cabdb3035d518b
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single file
that performs the docker service installation and configuration.
With this patch the baremetal versions of the Ironic services have been removed.
Change-Id: Icb33158a129356d939940433c82dae25a6334baf
Related-Blueprint: services-yaml-flattening
Used by a new featureset[1] that uses the neutron networking-ansible
driver to configure the switch providing networking for overcloud ironic
nodes, such that tenant networks are segregated via VLANs.
This scenario also sets up ssh keys so that ansible inside the neutron
container can ssh to the controller to configure OVS.
The new scenario also includes a temporary workaround to upgrade
ansible in the neutron container, this can be removed once ansible
in the container is v2.5.8+
Also remove scenario011: the patches that used it never merged, and now
that tenant networks are working in the overcloud, we'll test with this
scenario instead.
[1] - https://review.openstack.org/#/c/579601/
Change-Id: Ife83825216ccb96a5f24918f42a757d0c48b0e9d
Remove scripts and templates which dealt with Pacemaker and its
resource restarts before we moved to containerized deployments. These
should all now be unused.
Many environments had this mapping:
OS::TripleO::Tasks::ControllerPreConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostConfig: OS::Heat::None
OS::TripleO::Tasks::ControllerPostPuppetRestart: ../../extraconfig/tasks/post_puppet_pacemaker_restart.yaml
The ControllerPostPuppetRestart is only ever referenced from
ControllerPostConfig, so if ControllerPostConfig is OS::Heat::None, it
doesn't matter what ControllerPostPuppetRestart is mapped to.
Change-Id: Ibca72affb3d55cf62e5dfb52fe56b3b1c8b12ee0
Closes-Bug: #1794720
This change includes the service
OS::TripleO::Services::ContainerImagePrepare by default in the overcloud
which will trigger a container image prepare in the same way as is
currently done for the containerized undercloud.
Along with the mistral action which populates the container image
parameters, this change makes blueprint container-prepare-workflow
functionally complete.
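A minimal sketch of the parameter the new service consumes (namespace and
tag below are illustrative values, not defaults introduced by this change):

  parameter_defaults:
    ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: docker.io/tripleomaster
          tag: current-tripleo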
Change-Id: I8b0c5e630e63ef6a2e6f70f1eb00fd02f4cfd1c0
Blueprint: container-prepare-workflow
This commit introduces oslo.messaging services in place of a single
rabbitmq server. This will enable the separation of rpc and
notifications for the continued use of a single backend (e.g.
rabbitmq server) or a dual backend for the messaging communications.
This patch:
* add oslo_messaging_rpc and oslo_messaging_notify services
* add puppet services for rpc and notification
(rabbitmq and qdrouterd servers)
* add docker services to deploy rpc (rabbitmq or qdrouterd)
and notify (rabbitmq or shared)
* retain rabbit parameters for core services
* update resource registries, service_net_map, roles, etc.
* update ci environment container scenarios
* add environment generator for messaging
* add release note
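To illustrate the split described above, the resource registry can map rpc
and notifications to separate backends; the template paths below are
illustrative, the real ones come from this commit:

  resource_registry:
    OS::TripleO::Services::OsloMessagingRpc: ../docker/services/messaging/rpc-qdrouterd.yaml
    OS::TripleO::Services::OsloMessagingNotify: ../docker/services/messaging/notify-rabbitmq.yaml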
Depends-On: Ic2c1a58526febefc1703da5fec12ff68dcc0efa0
Depends-On: I154e2fe6f66b296b9b643627d57696e5178e1815
Depends-On: I03e99d35ed043cf11bea9b7462058bd80f4d99da
Needed-By: Ie181a92731e254b7f613ad25fee6cc37e985c315
Change-Id: I934561612d26befd88a9053262836b47bdf4efb0
This service is needed to install CA certificates on the overcloud. We
need it because the plan is to enable public TLS by default, and without
this service that won't work.
Change-Id: I168e6a543f7143900fdb855ec29d8532fb9736ae
Some work is being done in I46fce28926cb5a881f7384948480266712ae75e3
to secure SNMP on a specific network, but until then we need to stop
opening the service so cloud providers won't report security issues
for TripleO jobs.
Change-Id: Icd8a6ddda6152186d6be4a227f6449232fecba5e
Related-Bug: #1749324
docker-puppet.py uses the DockerPuppetDebug boolean to trigger debug
logging. It is disabled by default, which makes it hard to understand
what is happening in CI. Let's enable it for CI.
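The CI override is a one-liner along these lines (a sketch):

  parameter_defaults:
    DockerPuppetDebug: true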
Change-Id: I071955df802d09bb4f6496617942868c7da421fd
The job times out too often; some services are already covered by
scenarios, so there is no need to duplicate testing.
Change-Id: I30092400142af5c3308534a8da9daa22cbb82bad
Depends-On: I2a4aa707fa10664f1fc9026e3eb417f35834436f
Without this service, no package upgrade can happen on the host. This
breaks upgrade CI: the puppet modules, for instance, are not upgraded
at all, and neither is os-net-config, and so on…
Change-Id: Ic050bd58065904f1c55d6a71c7bd4b09ba72d19d
Closes-Bug: #1734353
Some scenarios don't need Heat deployed, because we don't run pingtest
anymore and the tempest tests run in these scenarios don't require Heat.
This should speed up the deployments a little bit.
Change-Id: I7c8f2cae770e87a356e3cb67346a41ef669aaeba
This patch fixes a timing issue with cinder's db sync when
cinder-manage is containerized but cinder-volume is not.
Cinder services were containerized in [1], and this patch updates
the CI jobs so cinder-volume and cinder-backup with pacemaker are
also containerized.
[1] https://review.openstack.org/479001
Change-Id: Ic20af8a9bb24c4d21d1fd71bc65b001aa9c09c7c
Closes-Bug: #1729253
Closes-Bug: #1729339
This commit brings the multinode containers scenario files closer to
their baremetal variants, adding missing services and turning pacemaker on.
These require refactorings in OOOQ in order to support non-containerized
to containerized upgrade jobs across releases. Ceph-ansible is also
going to be switched separately.
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Depends-On: Ie0e8de54794a9259c0aeb8c67ae0f6a908844093
Change-Id: Icb659509b38575534be27a1881dbe671c40a5436
Related-Bug: #1714905
Related-Bug: #1712070
This was needed to make the Ocata->Pike upgrade job pass, and we
now need to remove it to improve the argument order in OOOQ for
deployments with scenarios.
This shouldn't be backported to Ocata (at least not before we make the
split between deploy scenario and upgrade scenario).
Change-Id: Ie08bbe08530bd48a0ca58667f0704f360e0a4dd7
Co-Authored-By: Martin André <m.andre@redhat.com>
Related-Bug: #1714905
Related-Bug: #1712070
As per the Ceph docs [1] we should default pg_num and pgp_num to 128 when
using fewer than 5 OSDs.
This same change was applied to the ceph-ansible profiles with [2].
Also updates the CI environment files to continue using 32 where we
deploy a single OSD.
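As a sketch, the single-OSD CI override would look something like this
(assuming CephPoolDefaultPgNum is the knob used):

  parameter_defaults:
    CephPoolDefaultPgNum: 32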
1. http://docs.ceph.com/docs/master/rados/operations/placement-groups/
2. Ibd9fb23e04576e95e24af58f856663397886a947
Change-Id: I1920bc8f5251f362af38ad3bd6f46dda42c6ee93
Closes-Bug: #1718756
This service is necessary when we containerize TripleO with
Pacemaker.
The service is also added to the non-containerized scenario lists, because
the aim is to get rid of the -containers.yaml variants eventually.
This shouldn't affect any jobs that don't include docker-ha.yaml. The
resource registry entry is mapped to OS::Heat::None by default, and
docker-ha.yaml maps it to actual containerized clustercheck.
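Roughly, the wiring looks like this (the containerized template path is
illustrative):

  # overcloud-resource-registry-puppet.j2.yaml (default)
  OS::TripleO::Services::Clustercheck: OS::Heat::None
  # environments/docker-ha.yaml
  OS::TripleO::Services::Clustercheck: ../docker/services/pacemaker/clustercheck.yaml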
Change-Id: I342e29de52cb6ce069a05a2dbfb0501a2da200e6
Partial-Bug: #1712070
Add a docker service template to provide log rotation for containerized
services with a crond job.
Add OS::TripleO::Services::LogrotateCrond to CI multinode-containers
and to all environments, along with generic services like Ntp or Kernel.
Set it to OS::Heat::None for non-containerized environments and only
enable it in environments/docker.yaml.
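A sketch of the mappings described above (the template path is illustrative):

  # default and non-containerized environments
  OS::TripleO::Services::LogrotateCrond: OS::Heat::None
  # environments/docker.yaml
  OS::TripleO::Services::LogrotateCrond: ../docker/services/logrotate-crond.yaml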
Closes-bug: #1700912
Change-Id: Ic94373f0a0758e9959e1f896481780674437147d
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Swift is already deployed on scenario002, and we want to keep
basic multinode as basic as possible with only the minimum so it runs
faster and we can use it for early tests in our CI.
Change-Id: I6d2f434305d7ca0d704a9454b758670c39a0af4a
Updates hieradata for changes in https://review.openstack.org/471950.
Creates a new service - NovaMigrationTarget. On baremetal this just configures
live/cold-migration. On docker it includes a container running a second sshd
service on an alternative port.
Configures /var/lib/nova/.ssh/config and mounts it in the nova-compute and
libvirtd containers.
Change-Id: Ic4b810ff71085b73ccd08c66a3739f94e6c0c427
Implements: blueprint tripleo-cold-migration
Depends-On: I6c04cebd1cf066c79c5b4335011733d32ac208dc
Depends-On: I063a84a8e6da64ae3b09125cfa42e48df69adc12
This currently assumes nova-compute and iscsid run in the same context,
which isn't true for a containerized deployment.
Change-Id: I11232fc412adcc18087928c281ba82546388376e
Depends-On: I91f1ce7625c351745dbadd84b565d55598ea5b59
Depends-On: I0cbb1081ad00b2202c9d913e0e1759c2b95612a5
So we don't waste RabbitMQ resources since nothing will actually consume
the messages sent on the queue.
Note: we don't change scenario001, since it's a Telemetry scenario and
the services require notifications enabled.
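One way to express this in an environment file would be a noop notification
driver; the parameter name here is an assumption for illustration, not
necessarily the mechanism this change uses:

  parameter_defaults:
    NotificationDriver: noop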
Change-Id: I7d1d80da4eda7c0385461fe62b1d3038022973c6
Sometimes the infracloud gateway refuses to ping even though
everything else is working fine. Since we have coverage of this
functionality in the OVB jobs it should be safe to turn it off
here so it stops spuriously failing our jobs.
We can't just set the resource to OS::Heat::None because there
are other resources with dependencies on it. Instead, this adds
a noop version of the validation software config that always
returns true.
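A minimal sketch of such a noop software config (resource name and script
are illustrative):

  AllNodesValidationConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Always succeed instead of pinging the gateway
        exit 0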
Change-Id: I8361bc8be442b45c3ef6bdccdc53598fcb1d9540
Partial-Bug: 1680167
When using the Deployed Server feature, we rely on Puppet to install
packages. But nova-compute/libvirt puppet is running in a container, so
it cannot install anything on the host. We rely on virtlogd on the host,
so we need to install it there somehow. This patch uses host_prep_tasks
for that, conditionally based on the EnablePackageInstall stack
parameter value.
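A rough sketch of the shape such a task takes (the package name is
illustrative, and the EnablePackageInstall conditional is wired up in the
Heat template rather than shown here):

  host_prep_tasks:
    - name: Install the package providing virtlogd on the host
      package:
        name: libvirt-daemon
        state: present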
Also multinode-container-upgrade.yaml env is copied as
multinode-containers.yaml, to remove the naming confusion, as the
environment file can be used for more than just upgrades. The old env
file will be removed once we make the upgrade job use the new one (catch
22 type of issue).
Change-Id: Ia9b3071daa15bc30792110e5f34cd859cc205fb8
Add container-specific variants of scenarios. These variants are
supposed to be temporary, as their only purpose is to allow us to CI the
scenarios with containers while we don't yet have pacemaker containerized.
Once we can deploy and upgrade a containerized deployment with
pacemaker, these should be deleted and normal scenarios should be used.
An alternative approach would be to edit the scenarios on the fly within
the CI job to remove the pacemaker parts, which would be more DRY but
perhaps more surprising when trying to debug issues.
Change-Id: I36ef3f4edf83ed06a75bc82940152e60f9a0941f
We need Docker service mapping defined and set to OS::Heat::None so that
we can reuse multinode-container-upgrade.yaml service list both for
initial deployment and for the upgrade. The upgrade will not be broken
by this as its env files are being passed later on the command line, and
they'll take priority and effectively enable the Docker service on
upgrade.
Another change we need for mixed upgrade is to add the TripleoPackages
service, which will take care of updating RPMs on the bare metal and
prevent docker installation from failing with outdated
puppet-tripleo ("Could not find class ::tripleo::profile::base::docker").
Related-Bug: #1685795
Closes-Bug: #1689772
Change-Id: Idb6917f22d0e9f74f8853972c6a08bffb01be410
This change implements a MOTD message and provides a hash of
sshd config options which is passed to the puppet-ssh module.
The SSHD puppet service is enabled by default, as it is
required for Idb56acd1e1ecb5a5fd4d942969be428cc9cbe293.
Also added the service to the CI roles.
Change-Id: Ie2e01d93082509b8ede37297067eab03bb1ab06e
Depends-On: I1d09530d69e42c0c36311789166554a889e46556
Closes-Bug: #1668543
Co-Authored-By: Oliver Walsh <owalsh@redhat.com>
The containers upgrade CI is not working because all multinode jobs
deploy pacemaker environments.
Currently we cannot upgrade pacemakerized deployments anyway
(containerization of pacemakerized services is WIP); upgrades have only
been tested with non-Pacemaker deployments so far.
We need a new environment which will not try deploying in a
pacemakerized way. When pacemaker-managed services are containerized, we
can change the job to upgrade an HA deployment (or single-node "HA" at
least), and perhaps even get rid of the environment file introduced
here, and reuse multinode.yaml.
Change-Id: Ie635b1b3a0b91ed5305f38d3c76f6a961efc1d30
Closes-Bug: #1682051
We need the service to be present to run jobs involving containers. Note
that this is effectively a no-op for the current CI jobs, as by default
the Docker service is mapped to OS::Heat::None. Docker will actually be
deployed only if environments/docker.yaml is included in the deploy
command.
Change-Id: I97a35e30e428ff64feeb411bf63dbb7aa54f9829
Pacemaker is now deployed by default, and it would be great to have it
tested in all scenarios so they deploy environments that match what is
used in production.
Change-Id: Iff879cd641f6207644b1b6309a6ec4129f1a255a
When fixing LP#1643487 we added ?bind_address to all DB URIs.
Since this clashes with Cellsv2 due to the URIs becoming host
dependent, we need a new approach to pass bind_address to pymysql
that leaves the DB URIs host-independent.
In change Iff8bd2d9ee85f7bb1445aa2e1b3cfbff1f397b18 we first create a
/etc/my.cnf.d/tripleo.cnf file with a [tripleo] section with the correct
bind-address option.
In this change we make sure that the DB URIs will point to the added
file and to the specific section containing the necessary bind-address
option. We introduce a new MySQLClient profile which will hold all of
this more client-specific configuration, so that this change fits
better into the composable roles work. Also, in the future it might
contain the necessary configuration for SSL, for example.
Note that in case the /etc/my.cnf.d/tripleo.cnf file does not exist
(because it is created via the mysqlclient profile), things keep on
working as usual and the bind-address option simply won't be set, which
has no impact on hosts where there are no VIPs.
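Concretely, the client file and the URIs end up looking roughly like this
(addresses and credentials are examples):

  # /etc/my.cnf.d/tripleo.cnf
  [tripleo]
  bind-address = 172.16.2.5

  # DB URI pointing pymysql at that file and section
  mysql+pymysql://nova:secret@172.16.2.10/nova?read_default_file=/etc/my.cnf.d/tripleo.cnf&read_default_group=tripleo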
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: Ieac33efe38f32e949fd89545eb1cd8e0fe114a12
Related-Bug: #1643487
Closes-Bug: #1663181
Closes-Bug: #1664524
Depends-On: Iff8bd2d9ee85f7bb1445aa2e1b3cfbff1f397b18
Adds the CephMon and CephOSD services to the Controller role so
that we can test Ceph when the services are enabled via the resource registry.
Change-Id: I73ee5380b88bf7643ba425a0c833922e330ecade