Overlay tunnel endpoints are supported only on
IPv4 addresses. Now that OVS and Neutron support
v6 endpoints, edit the network environment
files in TripleO to allow this.
Change-Id: Ie2523cf4e359289298e4ea5d0992093976a19e04
Closes-Bug: #1793239
This adds support for configuring horizon for WebSSO when keystone
federation with OpenID Connect is enabled. This patch just exposes
some new parameters to use puppet-horizon for configuration. The
sample environment file for OpenID Connect federation is also updated
to use the new parameters. Some of the sample defaults were updated
to more closely match the URLs that horizon expects.
Change-Id: I7c3ee6b54cc0c9653742c3ce1de60b2851d1fe68
This change combines the previous puppet and docker files
into a single file that performs the docker service installation
and configuration. With this patch the baremetal version of
keystone has been removed.
Related-Blueprint: services-yaml-flattening
Change-Id: I6140b02ad1ab6d88990e173dcf556977f065b3c5
MongoDB support was stopped in Pike and it is not used anywhere now.
Therefore we are removing it in Stein to clean things up.
Change-Id: I4ec8f35b1dd71c25cfb41cc54105ac743ef67745
When using neutron routed networks we need to specify
either the subnet or an IP address in the fixed-ips-request
when creating neutron ports.
a) For the VIPs:
Adds VipSubnetMap and VipSubnetMapDefaults parameters in
service_net_map.yaml. The two maps are merged, so that the
operator can override the subnet where a VIP port should be
hosted. For example:
parameter_defaults:
  VipSubnetMap:
    ctlplane: ctlplane-leaf1
    InternalApi: internal_api_leaf1
    Storage: storage_leaf1
    redis: internal_api_leaf1
b) For overcloud node ports:
Enrich 'networks' in the roles definition to include both
network and subnet data, changing 'networks' from a list of
strings to a map. New schema:
- name: <role_name>
  networks:
    <network_name>:
      subnet: <subnet_name>
For backward compatibility a conditional is used to check
if the data is a map or not. In either case the internal
list of role networks is created as '_role_networks' in
the jinja2 templates.
When the data is a map, and the map contains the 'subnet'
key, the subnet specified in roles_data.yaml is used as
the subnet in the fixed-ips-request when ports are created.
If subnet is not set (or role.networks is not a map), the
default will be {{network.name_lower}}_subnet.
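For example, a leaf role in roles_data.yaml could pin its
networks to leaf-local subnets like this (role, network and
subnet names are illustrative):
- name: ComputeLeaf1
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Tenant:
      subnet: tenant_leaf1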
Also, since the fixed_ips request passed to VIP ports is no
longer [] by default, the conditional has been updated to
test for 'ip_address' entries in the request.
Partial: blueprint tripleo-routed-networks-templates
Depends-On: I773a38fd903fe287132151a4d178326a46890969
Change-Id: I77edc82723d00bfece6752b5dd2c79137db93443
Change I11e38f82eb9040f77412fe8ad200fcc48031e2f8 introduced the mtu
property for composable networks. This change sets the MTU of the
Tenant network as the global_physnet_mtu for neutron, unless the
NeutronGlobalPhysnetMtu parameter is overridden. The default MTU used
if no MTU is defined for the Tenant network is 1500. (The same
default was previously used for the NeutronGlobalPhysnetMtu
parameter.)
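As a sketch, defining an MTU for the Tenant network in
network_data.yaml (value illustrative) would then be picked up
as neutron's global_physnet_mtu:
- name: Tenant
  mtu: 9000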
Change-Id: I5e60d52ad571e1cdb3b82cd1d9947e33fa682bf8
Adds support for the Thales and ATOS client software.
Change-Id: I79f8608431fecc58c8bdeba2de4a692a7ee388e9
Co-Authored-By: Douglas Mendizabal <dmendiza@redhat.com>
This change realigns the sshd baremetal puppet service yaml config
files into a common hierarchy, as with the rest of this blueprint.
This change also removes container functionality, since this was a
temporary measure to proxy live-migration connections from
non-containerized to containerized compute nodes during upgrade.
Change-Id: I87e112a0f1973fa3b0e959777e00071c2bbf7c9c
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
Change-Id: I6a9123627d754a153ab6cb68a33778a57846aeb7
Related-Blueprint: services-yaml-flattening
This changes moves podman service from puppet to deployment directory.
Change-Id: I31b8299b43158347f4f1f61f1e1fdf38b0a2102f
Related-Blueprint: services-yaml-flattening
Numerous files have incorrect modes set. Correct these so that executables
have 755 and yaml files are 644 to address rpmlint errors.
Change-Id: I8db36209b41a492f6b85e3469994de884bf556e8
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal version of the memcached service has been removed.
Depends-On: https://review.rdoproject.org/r/#/c/16994/
Change-Id: Ibb74d9e1673d079a6090efe4215c7ee041fce7d6
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single file
that performs the docker service installation and configuration.
With this patch the baremetal versions of the glance services have been removed.
Change-Id: Ie2ac2072f0742ec5e521fc6e3734e89f8a007077
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal version of the zaqar service has been removed.
Change-Id: I8947d2fc5e5672e701d2802cd14a3fa176877a7d
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single file
that performs the docker service installation and configuration.
With this patch the baremetal versions of the Ironic services have been removed.
Change-Id: Icb33158a129356d939940433c82dae25a6334baf
Related-Blueprint: services-yaml-flattening
This change combines the previous puppet and docker files into a single file
that performs the docker service installation and configuration.
With this patch the baremetal version of the keepalived service has been removed.
Change-Id: Ic0ddf1174e1d0a62f83f26f0ca6bc29ec7b078b7
Related-Blueprint: services-yaml-flattening
We want to include the way the service is deployed (container/baremetal)
and the configuration management tool used (ansible/puppet) in the
service file name so folks can easily identify how a service is being
deployed and with what tooling.
Change-Id: Id884009131ea1587042a8ac01eec7afd83d7eb6a
Related-Blueprint: services-yaml-flattening
This exposes parameters to configure OpenIDC federation in Keystone.
Change-Id: I3e06ca5fde65f3e2c3c084f96209d1b38d5f8b86
Depends-on: Id2ef3558a359883bf3182f50d6a082b1789a900a
This change adds a 2-bonds-with-vlans example template which
demonstrates the use of two Linux bonds. This template will
place the 'Tenant*' networks on a bond with an OVS bridge.
Other networks will be placed as VLANs on the Linux bond
without a bridge. There is special handling for the Tenant
network on DPDK-enabled Compute nodes.
Change-Id: I9277c0e6a1267392943214eb5fe55509f7956fbc
The standalone job was not running yum update on the containers. To do
so we need to specify the update parameters in the
container-prepare-parameters [1], activate the local docker
registry, call the container prepare service and activate the registry
with podman.
[1] https://review.openstack.org/#/c/621517/
Change-Id: I74e817bc9b9dd522db3da7753c91a3884d99f8c8
Related-Bug: #1805968
Add a TunedCustomProfile parameter which may contain a string in
INI format describing a custom tuned profile. Also provide a new
environment file for users of hyperconverged Ceph deployments
using the Ceph filestore storage backend. The tuned profile is
based on heavy I/O load testing. The provided environment file
creates /etc/tuned/ceph-filestore-osd-hci/tuned.conf, whose
content is the following, and sets this tuned profile to be active.
[main]
summary=ceph-osd Filestore tuned profile
include=throughput-performance
[sysctl]
vm.dirty_ratio = 10
vm.dirty_background_ratio = 3
[sysfs]
/sys/kernel/mm/ksm/run=0
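As a sketch, the same profile could also be supplied directly in an
environment file via the new parameter (assuming the existing
TunedProfileName parameter selects the active profile):
parameter_defaults:
  TunedProfileName: ceph-filestore-osd-hci
  TunedCustomProfile: |
    [main]
    summary=ceph-osd Filestore tuned profile
    include=throughput-performance
    [sysctl]
    vm.dirty_ratio = 10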
Depends-On: Iba17d86bbdd710623ba1ba44b1ea5d4c1b99c541
Change-Id: Iaa1c82cefac5c8f2959fd7aeb57bd6860fd9096a
Closes-Bug: #1800232
For deploying with hardware offloading, the
"environments/ovs-hw-offload.yaml" file should be used alongside the
neutron, opendaylight or ovn environment files.
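For example (deploy command sketch; only the ovs-hw-offload.yaml
path comes from this change):
openstack overcloud deploy --templates \
  -e <neutron, opendaylight or ovn environment file> \
  -e environments/ovs-hw-offload.yaml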
Change-Id: I6702b4cce3776676b2da5a4d2af89ff9b171ce74
... and use host_prep_tasks from config-download.
We are trying to move away from the HostPrepConfig resource, which uses
OS::Heat::SoftwareConfig and the old fashion of running Ansible, toward
more native config-download.
undercloud_pre is the only service that still needs HostPrepConfig, so
let's switch it to config-download.
It restarts the keepalived container at each undercloud install & upgrade.
It also adds support for podman, as it uses the container_cli variable.
Note: the workaround can still be removed once we have Keepalived 2.0.6,
but that probably won't happen before CentOS 8.
Change-Id: I7454013c2e37058b5010a2a6cacfae0d0f873744
Related-Bug: #1791238
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal versions of the aodh services have been
removed.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: https://review.rdoproject.org/r/#/c/16994/
Change-Id: I39645aff0365218d4b841ed0d9c964b3622f143a
Related-Blueprint: services-yaml-flattening
The NtpServer default set now includes multiple pool.ntp.org hosts to
ensure that the time can be properly synced during the deployment.
Having only a single timesource can lead to deployment failures if the
time source is unavailable during the deployment. It is recommended
that you either set multiple NtpServers or use the NtpPool
configuration to ensure that enough time sources are available for the
hosts. Note that the NtpPool configuration is only available when using
chrony.
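For example (server names illustrative):
parameter_defaults:
  NtpServer: ['0.pool.ntp.org', '1.pool.ntp.org', '2.pool.ntp.org']
  # or, when using chrony:
  # NtpPool: ['pool.ntp.org']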
Change-Id: I5b82d77cbf0f2e8c2a59645a72aa533d7d2c86b8
Closes-Bug: #1806521
Currently when enabling ironic-inspector via its environment file, its
support is not enabled in ironic, so it is only usable via its own CLI.
This patch fixes that and sets ironic-inspector as the default inspect
interface.
Change-Id: Ia4c7839d15284f89c66c639c96c7bdc68443e5c6
Closes-Bug: #1805788
Avoid panko related kolla configurations
in the ceilometer-agent-notification if panko
is disabled.
Change-Id: I9920e426e50e7fa6307ba8f453beb08fbd161534
Swift workers have been decreased to 1 recently, but after doing some
more benchmarks it seems that 2 is actually the sweet spot (details in
https://review.openstack.org/#/c/618105/).
Change-Id: If8135bb641f5e0e7e2ed983bc23808268558d054
NIC partitioning requires IOMMU to be enabled on roles using it.
By adding the BootParams service to all the roles, we could
enable IOMMU selectively by supplying the role specific parameter
"KernelArgs". If a role doesn't use NIC Partitioning then
"KernelArgs" shall be not be set and backward compatibility would
be retained.
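A sketch of enabling IOMMU for a single role via its role-specific
parameters (role name and kernel arguments are illustrative):
parameter_defaults:
  ComputeSriovParameters:
    KernelArgs: "intel_iommu=on iommu=pt"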
Change-Id: I2eb078d9860d9a46d6bffd0fe2f799298538bf73
The number of requests to Swift on the undercloud is pretty low, while
the default number of workers is set by the number of available CPU
cores. This is likely much too high and also increases memory
requirements et al, thus limiting this to 1 per service.
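Busier underclouds can still raise this in an environment file, e.g.
(assuming the existing SwiftWorkers parameter):
parameter_defaults:
  SwiftWorkers: 4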
Change-Id: Ic6048b2a75120d44108ed2a7f3a04c4f38e63871
This modifies environments/podman.yaml, which overrides the default
ContainerCli so that we deploy podman instead of Docker on the overcloud.
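In essence the override amounts to (sketch):
parameter_defaults:
  ContainerCli: podman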
Change-Id: Ia6d5354d120fc2e76d6d9c4e41b3f637ad152ecd
Don't always masquerade these defaults; masquerading
should only happen for the ctlplane subnets defined
in undercloud.conf if masquerading is true.
Closes-Bug: #1794729
Depends-On: I11b325458517334f97fc5f4754b4b39efff3a3f3
Change-Id: I4b956e8be92f1b7a71579d04c7e41c20da7ffdfa
All of the current OVN environment files enable HA for the OVN
database, which does not work with tripleo standalone. This
introduces an environment that can be used in that mode.
There used to be non-HA modes, but they were removed in commit
819805d708. This adds back just one,
and gives it a "standalone" name to help clarify that this is not
one of the scenarios intended for full production deployments.
Change-Id: Ie21c468d1cf7e4db9c406c62f5d09f6af97d593a
During upgrade, as we don't use instack_undercloud anymore, we are
missing the _member_ role for the admin user.
This creates the necessary hooks in tht to have the member role
created during upgrade (and install, for that matter).
This passes the keystone_enable_member flag on to puppet-tripleo, but it
needs a patch there as well for this mechanism to fully work.
Change-Id: I2319ed876eba7f21c0e80444bf78ca080fef252a
Depends-On: https://review.openstack.org/611919
Partial-Bug: #1799177
Added a new CinderDellScMultipathXfer parameter to the Dell EMC SC
Cinder iSCSI volume driver template to support
cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer.
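Example usage (sketch):
parameter_defaults:
  CinderDellScMultipathXfer: true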
Depends-On: https://review.openstack.org/#/c/611126/
Change-Id: I04f42ce0cd117f7dcc7a817274ea7664d9995864
With containerized undercloud, the Octavia playbook shipping with
tripleo-common can no longer install the octavia-amphora-image RPM
available in RHOSP-based environments as the yum repository list is
empty. Thus, the amphora QCOW2 file needs to be made available by the
undercloud base OS via a volume mount. This will also help in
standardizing the default placement of amphora images across different
OpenStack distributions.
Change Icae47e76f71b739cf0e1f5633b15432fd531e645 will close the loop.
Partial-Bug: #1800916
Change-Id: I84943a5e6e2b08baaf8e61a1cd9f2fe92286ad9a
The default resource registry points to containerized services too, so we
shouldn't use docker.yaml anymore.
Change-Id: I6106e223d9c1e399d396d745ad28274107074b06
I'm testing podman without docker/docker registry
installed and it failed. This resolves issues with
the Mistral puppet execution so that it ignores
the docker group creation.
Change-Id: I1deb31dce021796f3ea98f1c1030c362108397bb
We did not have an easy way to ensure all the openstack clients are
installed on a given system. In the old instack-undercloud installation,
we were installing some additional clients outside of the ones required
via python-tripleoclient. To allow a user to quickly install all the
clients on a given system, this change adds an OpenStack clients
"service" which can be added to a role to ensure the clients are
available. In the future if we provide a client container, this service
can be converted into a container deployment mechanism.
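A sketch of adding the clients service to a role in roles_data.yaml (the
registry name OS::TripleO::Services::OpenStackClients is an assumption):
- name: Undercloud
  ServicesDefault:
    - OS::TripleO::Services::OpenStackClients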
Change-Id: If878c2ab7679eea2fff42b410bec9c8c9b92ed6f
Closes-Bug: #1800001
Add CinderStorageAvailabilityZone parameter that configures
cinder's DEFAULT/storage_availability_zone. The default value
of 'nova' matches cinder's own default value.
Add several CinderXXXAvailabilityZone parameters, where XXX is
any of the cinder volume service's storage backends. The
parameters are optional, and when set they override the
"backend_availability_zone" for the corresponding backend.
Implements: blueprint split-controlplane-cinder-volume-az
Depends-On: Ic407b747474b567858ad36beabc8a7d8c5022343
Change-Id: Idb035bf112cbab41547bd89935df4c175bf665f4