- removes duplicate keys from yaml files, assuming that the last
  occurrence is the desired one (matches current loader behavior)
- prevents regressions by activating the yaml lint rule that detects
  them (the 'yaml' skip was silencing all yaml checks, so the long
  list of skips now seen actually covers less than the single 'yaml'
  entry did)
- includes sorting of some of the keys, which was needed in order to
  spot the duplicates.
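As a sketch, assuming yamllint is the linter in question, the rule
that catches this going forward is key-duplicates:

    # .yamllint (illustrative)
    rules:
      key-duplicates: enable

For reference, the loader behavior being matched: given the same key
twice, the last value wins:

    key: 1
    key: 2   # this is the value the loader keeps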
Change-Id: Idf5c0041a0c6d3ed7d5d49fb68be856719916663
There isn't a 1:1 correlation between designate worker and bind
instances, nor is it always desirable to run them on the same host.
Depends-On: If97e16a125537c1b5d9f5cfac1de0ffae0edb99a
Change-Id: I624299476a2911f12b1f5ce01964e5d926c6b38e
This patch adds TripleO support for the Unbound DNS resolver service.
This service will initially be used by the Designate service.
Change-Id: I8135ce4f344aeb7c0cf7521e0ba42335c4c7bbc8
This uses the linux-system-roles.certificate ansible role, which
replaces puppet-certmonger for submitting certificate requests to
certmonger. Each service is configured through its heat template.
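For reference, a minimal sketch of how the role is typically driven
(host and certificate names are illustrative; the variable layout
follows the role's documented interface):

    - hosts: undercloud
      vars:
        certificate_requests:
          - name: haproxy-external   # illustrative cert name
            dn: cn=undercloud.example.com
            ca: ipa                  # request via certmonger/FreeIPA
      roles:
        - linux-system-roles.certificate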
Partial-Implements: blueprint ansible-certmonger
Depends-On: https://review.rdoproject.org/r/31713
Change-Id: Ib868465c20d97c62cbcb214bfc62d949bd6efc62
Change it to POLL_SERVER_HEAT (attempt 2; the earlier attempt had
issues because changing this while simultaneously deleting a bunch
of SoftwareDeployment resources). This is required to remove
swift from the undercloud.
Change-Id: I639f5626013cd0ef61c1f9066fab7a7b8806287f
Rename the file in the environments directory so that it
reflects its expanded scope. This file is used when
deploying storage with DCN sites regardless of whether those
sites use HCI. We now support non-HCI DCN sites with storage,
so the old name is confusing.
Old name: dcn-hci.yaml
New name: dcn-storage.yaml
dcn-hci.yaml is deprecated but will remain in the environments
directory for backwards compatibility. It will be removed
during the X cycle.
Change-Id: Ice5e1cfbc158eb6705988706c8625bedb80d7de2
CinderVolumeEdge is an optional service (defaults to OS::Heat::None)
that can be enabled on DCN/Edge nodes for edge sites that support
persistent block storage (i.e. cinder). The dcn-hci.yaml environment
file enables the service.
The new service supports the following edge deployment models:
1. Edge site with no block storage
- Deploy DistributedCompute nodes
- Use dcn.yaml environment file (the CinderVolumeEdge service
remains disabled)
2. Edge site with traditional HCI storage
- Deploy DistributedComputeHCI nodes
- Use dcn-hci.yaml env file to enable the CinderVolumeEdge service
- Use ceph-ansible.yaml env file to deploy ceph for the RBD backend
3. Edge site with quasi-hyperconverged storage
- Deploy DistributedCompute nodes
- Use dcn-hci.yaml env file to enable the CinderVolumeEdge service
- Use ceph-ansible-external.yaml env file so the RBD backend can
access an external ceph cluster
This patch adds support for number 3, which is a new capability.
Whereas traditional HCI means both ceph and cinder services run on
compute nodes, the new model is still quasi-hyperconverged because
cinder (as well as glance) runs on the compute nodes while ceph is
external.
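As a sketch, enabling the service in an environment file amounts to
overriding its default OS::Heat::None mapping (the service template
path below is illustrative):

    resource_registry:
      OS::TripleO::Services::CinderVolumeEdge: ../deployment/cinder/cinder-volume-container-puppet.yaml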
Change-Id: I56b5792c1d53bb8659e440f598006e471894ff2e
This exposes the nova workaround that disables downloading images
from glance to rbd (as opposed to a cheap COW clone) when
nova-compute and glance are not backed by the same ceph cluster.
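A sketch of how a deployment could opt in (the heat parameter name
is an assumption; the underlying nova option is
[workarounds]/never_download_image_if_on_rbd):

    parameter_defaults:
      # assumed parameter name exposing the nova workaround
      NovaDisableImageDownloadToRbd: true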
Related nova change: I069b6b1d28eaf1eee5c7fb8d0fdef9c0c229a1bf
Depends-On: I8329810d6c047c0d94e7b123e7cdc1263a7856cd
Change-Id: Ib5478e53eb1f216bf6924ff30ea8502cb8529d00
Sahara support was deprecated during the previous Ussuri cycle[1],
so we can remove it completely now.
[1] f1d9b15c85
Change-Id: Id047221cb912c09984cc3bf864196a26fd36736f
This changes the parameter to be non-role-specific and true by
default. The dependent python-tripleoclient patch adds a check
to ensure that we only allow usage of old heat nic configs with
'NetworkConfigWithAnsible: false'.
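For illustration, a deployment that keeps the old heat nic configs
would have to opt out explicitly:

    parameter_defaults:
      # required when still using legacy heat-based nic configs
      NetworkConfigWithAnsible: false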
Change-Id: Ie37bdfe64eb1b33afe326161fc6f99601addb7b5
This replaces net-config-noop.yaml mappings with OS::Heat::None.
Also removes all unnecessary settings of it in environments, as
we map them in overcloud-resource-registry-puppet.j2.yaml.
Normally that should be enough, but we override them in so many
places that some redundancy will remain.
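A sketch of the kind of mapping change this makes (role name
illustrative):

    resource_registry:
      # previously mapped to net-config-noop.yaml
      OS::TripleO::Controller::Net::SoftwareConfig: OS::Heat::None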
Depends-On: https://review.opendev.org/755275
Change-Id: Ib4d07c835568cb3072770f81a082b5a5e1c790ea
This maps the undercloud and standalone NetworkConfig resources to
net-config-noop.yaml.
Also changes standalone to actually use ansible for config
generation, which was missed in https://review.opendev.org/752368
with env generation.
Change-Id: Ia8e3bec4a64c8317e0b6996c1b7e587789311ad2
For each role, create a network config resource
{{role.name}}NetworkConfig. Remove the per-node
NetworkConfig resource from puppet/role.role.j2.yaml.
NOTE: CI nic config templates were updated using
tools/merge-new-params-nic-config-script.py
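A rough sketch of the shape this takes in the j2 template (details
elided; only the per-role resource is shown):

    {%- for role in roles %}
      # one NetworkConfig resource per role instead of one per node
      {{role.name}}NetworkConfig:
        type: OS::TripleO::{{role.name}}::Net::SoftwareConfig
    {%- endfor %}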
Depends-On: https://review.opendev.org/753930
Change-Id: Iff4bf742947a5a8170938372a8075519850b6f63
There appears to be an inconsistency in the ironic configuration
between the undercloud and the minion.
The minion has:
enabled_inspect_interfaces=no-inspect
The undercloud has:
enabled_inspect_interfaces=idrac,ilo,inspector,no-inspect,redfish
Fix this by adding the same default params for Ironic on UC minions
as the main undercloud environment defines.
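A sketch of the aligned minion default (the parameter name is an
assumption, mirroring the undercloud environment):

    parameter_defaults:
      # assumed parameter name; matches the undercloud default
      IronicEnabledInspectInterfaces: ['idrac', 'ilo', 'inspector', 'no-inspect', 'redfish']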
Change-Id: I0aaf6a9e5ac0a2f7ed95c8f046a4df6147ff0edb
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
This patch changes the undercloud and standalone roles to
generate network config with ansible only, rather than
depending on network config downloaded from the heat stack.
Depends-On: https://review.opendev.org/#/c/753958/
Change-Id: Ibcb0f0a65cfd04d677a4b861d9f647af13611b24
This uses the new ansible module for network configuration
on the nodes. Also converts net-config-multinode.yaml to
use os-net-config.
The next patch in this series will change the NetworkConfig
resource type to OS::Heat::Value and drop run-os-net-config.sh.
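For context, a minimal os-net-config layout of the kind the
multinode template is converted to (interface name and address are
illustrative):

    network_config:
      - type: interface
        name: nic1
        use_dhcp: false
        addresses:
          - ip_netmask: 192.0.2.10/24   # illustrative address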
Depends-On: https://review.opendev.org/748754
Change-Id: Ie48da5cfffe21eee6060a6d22045d09524283138
A network used to allocate MAC addresses for OVN chassis.
Ports without an IP allocation will be created on this
network; the MAC addresses of those ports will be used to
configure the ovn-chassis-mac-mappings.
NOTE: we may want to change the 'base_mac' option of the
undercloud so that we don't have collisions with the
overcloud 'base_mac'.
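For illustration, the mapping ovn-controller ultimately consumes
has the form (MAC value illustrative):

    ovn-chassis-mac-mappings="datacentre:fa:16:3e:00:53:01"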
Related-Bug: #1881593
Change-Id: If495b5d5c1e6beff02b48507051cccfb70fd995c
This is so that share content is usable by cinder-volume and other
containers mounting the same share.
Closes-Bug: 1890291
Change-Id: Iacf7c9c368b26106e9921b35996c134aacb9acd7
UIConfig endpoints were introduced for TripleO UI[1], but TripleO
UI has since been removed, so these endpoints are no longer used.
[1] https://review.opendev.org/#/c/528679/
Change-Id: I74f5ede7bff9064889a4b7aaa978127ab456d88f
We open port ``3000 #SSL for websocket`` in the
zaqar service definition:
deployment/zaqar/zaqar-container-puppet.yaml:L130
But the SSL environment files use port 9000 for the public
endpoint.
Using 9000 for SSL as well can cause issues in haproxy.
We may want to revert or relax the check in
https://review.opendev.org/664224 as duplicate IPs aren't
the actual problem.
Related-Bug: #1832168
Related-RHBZ: #1868910
Change-Id: I05f31885ade46d47ff5d384dabbd5561f4df9278
This change updates the baremetal host sshd management to use
ansible instead of puppet. Note, however, that the nova-migration
container still uses puppet to manage sshd.
Change-Id: Iedd149c123d807dee229160f8e9f1b17bf379368
Depends-On: https://review.opendev.org/#/c/742970/
We've been using the InternalTLSCAFile parameter when enabling
public TLS for the undercloud, which is quite confusing. We
recently changed clouds.yaml to use it, and that would break when
both public and internal TLS are enabled for the overcloud and
each uses a different CA cert. This adds a new parameter, which
we will use in clouds.yaml; it defaults to an empty string,
assuming that the certificates are trusted.
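A sketch of setting the new parameter when the public CA cert is
not already trusted (the parameter name is an assumption based on
this change; the path is illustrative):

    parameter_defaults:
      # assumed new parameter; defaults to '' (cert already trusted)
      PublicTLSCAFile: /etc/pki/ca-trust/source/anchors/overcloud-cacert.pem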
Closes-Bug: #1883818
Change-Id: Id6f612a91255b3158be821c363ca852c6b5d7496
Depends-On: https://review.opendev.org/737998
This commit attempts to build out a composable service that enrolls
the undercloud as a FreeIPA host using an OTP. This is similar to
what we've done in the past for tls-everywhere, except we're not
using novajoin.
Change-Id: I770227b2f4f1ea447cf0138f57a6ed66c034d225
- Docker isn't supported anymore.
- Clients are now installed by Ansible, not Puppet.
- Neutron SRIOV host isn't supported; operators should deploy with
  the sriov_pf network object in nic configs.
- Firewall is now managed by Ansible, not Puppet.
Change-Id: I2b6068a719563a53bc255dcce72a92465e7df468
Not all deployments have the file in the current default location;
some instead use trusted certs for public TLS. The old default
also creates issues in downstream jobs that don't inject the
overcloud CA with environment/inject-trust-anchor.yaml.
This default will ensure that it works in those scenarios.
Change-Id: Ib71c3e2be2b8dc57f3c9107c6ddab47cd6594202
Related-Bug: #1880936
Default it like the undercloud for public TLS. Though it is a
little confusing, we're using the same parameter for both the
undercloud and the overcloud.
For classic public TLS and certmonger-based internal TLS, where
we use both enable-tls.yaml and enable-internal-tls.yaml, we
reset it back to the default IPA CA cert.
Change-Id: Icfef2768ebb90c1818f157c762b6981d24393ac3
Closes-Bug: #1880936
If you are using environments/dcn-hci.yaml, then you very likely
have more than one Glance server and will want to use the copy-image
feature. Thus, enable it by default for deployments which use this
environment file.
Also, because GlanceCacheEnabled defaults to False and
GlanceImageCacheMaxSize defaults to 10737418240, we don't need
to explicitly set them in environments/dcn{,-hci}.yaml.
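As a sketch, the environment files would then carry only the
import method setting (the glance parameter name is an
assumption):

    parameter_defaults:
      # assumed parameter; enables the copy-image import method
      GlanceEnabledImportMethods: 'glance-direct,copy-image'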
Change-Id: If745aa0824098950367525170eaf6cb4e3804482
It seems that networking-fujitsu is no longer maintained[1], and
it's not compatible with Python 3.6, which all OpenStack services
currently require.
[1] https://opendev.org/x/networking-fujitsu
Change-Id: Iae639864cce8e3add635944f157ecde074312e74
We don't deploy Keepalived in multi-node as our HA story is
handled by Pacemaker. Therefore, we don't use the VRRP protocol
that Keepalived provides to keep the VIPs alive, so we don't
really need this service.
Instead, we can configure the VIPs on the br-ctlplane interface,
which already handles the local_ip. Now it also handles the
configuration of the public IP and admin IP.
Keepalived is now deprecated and will be removed in the next cycle.
blueprint replace-keepalived-undercloud
Change-Id: I3192be07cb6c19d5e26cb4cddbe68213e7e48937
Adds a parameter to set [cinder]/cross_az_attach in nova to control whether
instances can attach cinder volumes from a different availability zone.
Defaults to true.
Set to false in DCN sample environment files, as block I/O between
sites would be extremely slow (if it functions at all).
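A sketch of the DCN usage (the heat parameter name is an
assumption; it maps to nova's [cinder]/cross_az_attach):

    parameter_defaults:
      # assumed parameter name; maps to [cinder]/cross_az_attach
      NovaCrossAZAttach: false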
Change-Id: Ib15e305e34a3fddfc6f50986d2e27b6da815bd19