We need to ensure the OpenStack clients are installed on controllers for
the deployed-server case. This should be handled by the overcloud images
themselves; however, if the images are not used, we should make sure the
clients get installed with our OpenStackClients service.
Change-Id: If7fad9f24c7294c2d749fc3838b1fb71182930fc
Related-Bug: #1829769
As of Rocky [1], the nova-consoleauth service has been deprecated and
cell databases are used for storing token authorizations. All new consoles
will be supported by the database backend and existing consoles will be
reset. Console proxies must be run per cell because the new console token
authorizations are stored in cell databases.
nova-consoleauth was deprecated in tripleo with:
I68485a6c4da4476d07ec0ab5e7b5a4c528820a4f
This change now removes the NovaConsoleauth service.
[1] https://docs.openstack.org/releasenotes/nova/rocky.html
Closes-Bug: #1828414
Change-Id: Icdfbf26b5e83cc07a560eb227a0cf822e4c5a1e3
The ComputeOvsDpdkSriov, ComputeOvsDpdkSriovRT and CellController roles do
not include OS::TripleO::Services::Podman, which may cause overcloud
deployments to fail.
Add the Podman service to these roles in order to align them with the rest
of the roles.
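A minimal sketch of the corresponding roles_data.yaml entry, with the
service list truncated for brevity:
  - name: ComputeOvsDpdkSriov
    ServicesDefault:
      - OS::TripleO::Services::Podman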
Change-Id: If9b9ffa4651133b966ea0c28069dd1a81f3b2df5
The Ntp service should no longer be defined on the roles; we should be
using the meta Timesync service to ensure the correct service is defined
for time synchronization.
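For illustration, a role then references only the meta service (a sketch,
with the service list truncated); the resource registry maps Timesync to
the desired backend, chrony per the blueprint below:
  - name: Controller
    ServicesDefault:
      - OS::TripleO::Services::Timesync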
Change-Id: Ic2fb3291de78891d05ef12e3778263fe74fbff8c
Related-Blueprint: tripleo-chrony
Closes-Bug: #1827676
All known consumers of boot data (os-collect-config, etc) have a
preference for using config-drive as the data source.
The last known consumer was novajoin, but that switched to preferring
config-drive early in the Stein development cycle[1] so it should now
be safe to switch off the nova metadata API service.
[1] https://review.opendev.org/#/c/607492/
Blueprint: nova-less-deploy
Change-Id: If35aec24f446769fca7897c2126fb6657454f073
This change introduces an optional extracted version of the Placement
service into TripleO. This extracted version will only be required once
the Placement service is fully removed from Nova during the T cycle
(previously S but delayed) at which point the corresponding
NovaPlacement service will also be removed from TripleO.
The majority of this change is code motion between the original
NovaPlacement service and the new PlacementAPI service.
Upgrades from the original NovaPlacement service to the extracted
PlacementAPI service are not currently supported by this change and will
be worked on independently during the Train cycle.
Co-authored-by: mschuppert@redhat.com
Depends-On: https://review.openstack.org/#/c/624335/
Change-Id: I9e3287bcbe9d317f32bf6b468c6ee17f04b6fff9
We've switched the SELinux mode management to Ansible as part of the
deploy steps, and it is always included now, so the service is no longer
necessary.
Change-Id: I562053ba6767bd9ab7af3cf06b93906568bec5cd
The Etcd service is needed for A/A management of the CinderVolume
service on these roles so it should be added to the roles by default.
Change-Id: I9d3d17fec857014f399b8339ce7c68f844d230a9
implements: blueprint split-controlplane-templates
With cells v2 multicell, each cell needs a novnc proxy because the
console token is stored in the cell conductor database. This change adds
the NovaVncProxy service to the CellController role and configures the
endpoint to the local public address of the cell.
Closes-Bug: #1822607
Depends-On: https://review.openstack.org/649265
Change-Id: Ia3a36d369fdc18685f4c965a9e371ca3143967bf
Installing and configuring tmpwatch allows us to get rid of some
ugly things in the logrotate configuration. As the container has no
network access anymore, we have to install the tool on the host
directly - this isn't that bad.
In order to avoid issues with logrotate-managed logs, we explicitly
exclude the patterns managed in the specific logrotate configuration.
Also, in order to avoid issues and to ensure logrotate does its own
cleanup, we clean files one day later.
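A rough sketch of the host-side task as Ansible; the path, age and exclude
pattern are illustrative assumptions, not the exact values from this change:
  - name: Clean aged container logs, skipping logrotate-managed patterns
    cron:
      name: tmpwatch container logs
      special_time: daily
      job: tmpwatch -X '/var/log/containers/*.log' 30d /var/log/containers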
Change-Id: Ic666388d9ba7556e7b68ab2fc1082957a9e26552
Congress doesn't seem to be used anywhere; we never had a bug report or
any sign of somebody out there actually using it.
Let's remove its support in TripleO, to reduce the codebase.
Change-Id: Idca6b12f1c0ca3bc15bedf6469d4063a4dac31fa
The composable service OS::TripleO::Services::Sshd is
enabled by default in the overcloud but it is not included
in the default Networker.yaml role definition.
Change-Id: I20d35affba9da511ed4a9566013868146d3fbf4c
- uses split-control-plane
- adds a new CellController role (sketched below)
- nova-conductor, message rpc (not notifications) and db
- move nova dbsync from nova-api to nova-conductor
- nova db is more tightly coupled to conductor/computes
- we don't have a nova-api service on a CellController
- super-conductor on Controller will sync cell0 db
- new 'magic' MysqlCellInternal endpoint
- always refers to the local MysqlInternal endpoint
- identical to MysqlInternal for regular deployment
- but doesn't get overridden when inheriting EndpointMap from parent
control-plane stack
- duplicate service node name hiera for transport_urls on cell stack
- nova -> cell oslo messaging rpc nodes
- neutron agent -> global messaging rpc nodes
- run cell host discovery only on the default cell; for additional cells
the cell needs to be created first
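A condensed sketch of the new role's service list (heavily truncated; the
change carries the full definition):
  - name: CellController
    ServicesDefault:
      - OS::TripleO::Services::MySQL
      - OS::TripleO::Services::OsloMessagingRpc
      - OS::TripleO::Services::NovaConductor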
bp tripleo-multicell-basic
Co-Authored-By: Martin Schuppert <mschuppert@redhat.com>
Change-Id: Ife9bf12d3a6011906fa8d9f97f7524b51aef906a
Depends-On: I79c1080605611c5c7748a28d2afcc9c7275a2e5d
We stopped managing this service with the switch to containers. This
change starts the removal and deprecates the TripleO management of the
service.
Change-Id: Idc35bdfad126f21280444ebffaa5017e73ba8368
This addresses a possible bug when using FreeIPA to do TLS
everywhere.
It is possible that the IPA server is not on the ctlplane.
In this case, when the nodes start up, the registration of the node
with IPA will fail, resulting in failed certificate issuance requests
later on.
We introduce a composable service to run in host_prep_tasks.
This will always run once the networks have been set up. If the
instance has already been enrolled (by cloud-init or in an update),
then the script executed by the service will just exit.
In this iteration, we simply execute the code that cloud-init
would have done. In later releases, we will execute all the code
performed by novajoin-server here in Ansible - and deprecate the
novajoin server.
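A minimal sketch of the service shape, assuming a hypothetical helper
script standing in for the cloud-init-equivalent enrollment logic:
  host_prep_tasks:
    - name: Enroll node with IPA unless already enrolled
      command: /usr/local/bin/ipa-enroll.sh  # hypothetical helper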
Change-Id: I31f64c3cbd1d151e3c2a436cc3e2ec5316535087
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Resolves: rhbz#1661635
Closes-Bug: #1815924
When using the ControllerStorageNfs role, images are not pushed to the
local registry, since the ContainerImagePrepare service is missing from
the ControllerStorageNfs role.
Closes-Bug: #1814057
Change-Id: Iafe7bf37d7d04eed32a32b8881fab48fdc9f9dd6
Removed all glance-registry related changes from THT, since the
Glance Registry has become redundant and been deprecated from
Glance due to the support of the Glance v2 API. The registry code
base is also going to be removed from the Glance project once all
the dependencies are removed from other projects.
Change-Id: I548816e3f2d8b9deed8a6f0ba3e203f84ad3d9ca
Closes-Bug: #1808911
Change https://review.openstack.org/614457 added these
networks because of the defaults in ServiceNetMap. With
changes related to LP Bug #1809313 these are no longer
required, as the ServiceNetMap falls back to ctlplane
when networks are not defined or disabled in networks
data.
Related-Bug: #1809313
Depends-On: I102912851a3b9952daaf7c4d5a34a919f527f805
Change-Id: Ic4f22692f93db4ce0db0f4fbc83eca6b492b28e7
We still have Nova for SSH key management when deploying a standalone
cloud. Allow Octavia deployments for such a case as well.
Jinja2 rendering of the octavia service template provides that
functionality by relying on a new role tag 'standalone'.
Change-Id: I69f3623646ec5b65109e0a4f0c16139018da9282
Closes-bug: #1806113
Co-Authored-By: Harald Jensas <hjensas@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Add timezone service to the undercloud role so that it is properly
configured when we install the undercloud.
Change-Id: I4814cfb52f57d8260cda61adb6ac20609f435846
Depends-On: https://review.openstack.org/#/c/628015/
Closes-Bug: #1784068
Adds new roles for DistributedCompute and DistributedComputeHCI. These
roles closely match the existing Compute roles but also include the
CinderVolume service.
implements split-controlplane
Change-Id: Ia7f5ba93a9fc31b4653e6cbd9b3e5d8f00d26a27
MongoDB support was stopped in Pike, and it is not used anywhere now.
Therefore, in Stein we are removing it to clean things up.
Change-Id: I4ec8f35b1dd71c25cfb41cc54105ac743ef67745
When using neutron routed networks we need to specify
either the subnet or an IP address in the fixed-ips-request
when creating neutron ports.
a) For the VIPs:
Adds VipSubnetMap and VipSubnetMapDefaults parameters in
service_net_map.yaml. The two maps are merged, so that the
operator can override the subnet where the VIP port should be
hosted. For example:
  parameter_defaults:
    VipSubnetMap:
      ctlplane: ctlplane-leaf1
      InternalApi: internal_api_leaf1
      Storage: storage_leaf1
      redis: internal_api_leaf1
b) For overcloud node ports:
Enrich 'networks' in the roles definition to include both
network and subnet data. Changes the list of strings to a
map. New schema:
  - name: <role_name>
    networks:
      <network_name>:
        subnet: <subnet_name>
For backward compatibility a conditional is used to check
if the data is a map or not. In either case the internal
list of role networks is created as '_role_networks' in
the jinja2 templates.
When the data is a map, and the map contains the 'subnet'
key, the subnet specified in roles_data.yaml is used as
the subnet in the fixed-ips-request when ports are created.
If subnet is not set (or role.networks is not a map) the
default will be {{network.name_lower}}_subnet.
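For example, a role entry pinning a network to a leaf subnet under the
new schema (names are illustrative):
  - name: ComputeLeaf1
    networks:
      InternalApi:
        subnet: internal_api_leaf1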
Also, since the fixed_ips request passed to VIP ports is no
longer [] by default, the conditional has been updated to
test for 'ip_address' entries in the request.
Partial: blueprint tripleo-routed-networks-templates
Depends-On: I773a38fd903fe287132151a4d178326a46890969
Change-Id: I77edc82723d00bfece6752b5dd2c79137db93443
Nova now allows use of templated urls in the database and mq
connections which will allow static configuration elements to be
applied to the urls read from the database per-node. This should
be a simpler and less obscure method of configuring things like
the per-node bind_address necessary for director's HA arrangement.
This patch addresses the templated DB urls as part 1.
Nova support added here:
https://review.openstack.org/#/c/578163/
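As a hedged illustration of the mechanism (names and addresses invented):
the URL stored in the cell mapping is a template such as
  mysql+pymysql://{username}:{password}@{hostname}/nova?{query}
and each node completes it from its local [database]/connection value, e.g.
  mysql+pymysql://nova:secret@nova-db.internalapi/nova?bind_address=172.16.2.10
so per-node elements like bind_address no longer need to live in the
shared database record.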
Related-Bug: 1808134
Co-Authored-By: Martin Schuppert <mschuppert@redhat.com>
Change-Id: If30b4647bca210663a22fd653e752d4d57345bdd
The standalone job was not running yum update on the containers. To do
so we need to specify the updater parameters in the
container-prepare-parameters [1], and we also have to activate the Docker
local registry, call the container prepare service and activate the
registry with Podman.
[1] https://review.openstack.org/#/c/621517/
Change-Id: I74e817bc9b9dd522db3da7753c91a3884d99f8c8
Related-Bug: #1805968
When installing OpenShift by means of TripleO, after
the initial Docker configuration, openshift-ansible
also adds several parameters there.
Then, if we want to remove a single node, a stack
update is performed, which returns the configuration
to its original state. In other words, it removes all
parameters added by openshift-ansible, which breaks OpenShift.
This commit adds the ability to disable reconfiguration of
docker at the time of stack update for all roles associated
with OpenShift.
Closes-Bug: #1804790
Depends-On: I0bcaeea9cd24ab35a81d8c3d6fc3a384c1e4c3c2
Change-Id: If202be5d27d81672e39cbe21867459d277220e23
When Ironic uses the 'direct' deploy interface it requires
access to swift. To access swift it needs the storage
network.
Change-Id: Ie49b961bb276dff0e5afbf82b450caa57d17f6ff
NIC partitioning requires IOMMU to be enabled on roles using it.
By adding the BootParams service to all the roles, we could
enable IOMMU selectively by supplying the role specific parameter
"KernelArgs". If a role doesn't use NIC Partitioning then
"KernelArgs" shall be not be set and backward compatibility would
be retained.
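For example, IOMMU could then be enabled for a single role via its
role-specific parameters (role name and values illustrative):
  parameter_defaults:
    ComputeOvsDpdkSriovParameters:
      KernelArgs: "intel_iommu=on iommu=pt"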
Change-Id: I2eb078d9860d9a46d6bffd0fe2f799298538bf73
For network isolation, we specify the available networks for each role.
Therefore, there is no point in creating noop network resources for
networks that are not available/connected; doing so results in redundant
host entries for unavailable networks on overcloud nodes.
If a network is not available for a role we don't need to create
those extra noop resources.
For the Undercloud/Standalone role we keep all networks in roles data,
as the default ServiceNetMap specifies non-ctlplane networks even though
they map to ctlplane.
Change-Id: I07822ec0cba7eed352c0010eb893b5e5a522e95c
Closes-Bug: #1800811
The StorageMgmt and Tenant networks are not used in an OpenShift
context and should hence be removed from the OpenShift roles.
Change-Id: I06951742cd4e1e203e95d49ffe1b7404f75fca70
We did not have an easy way to ensure all the OpenStack clients are
installed on a given system. In the old instack-undercloud installation,
we were installing some additional clients outside of the ones required
via python-tripleoclient. To allow a user to quickly install all the
clients on a given system, this change adds an OpenStack clients
"service" which can be added to a role to ensure the clients are
available. In the future if we provide a client container, this service
can be converted into a container deployment mechanism.
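A sketch of adding the service to a role in roles_data.yaml (service
list truncated):
  - name: Undercloud
    ServicesDefault:
      - OS::TripleO::Services::OpenStackClients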
Change-Id: If878c2ab7679eea2fff42b410bec9c8c9b92ed6f
Closes-Bug: #1800001
The ContainerImagePrepare service was in Controller but not in
ControllerOpenStack, which causes overcloud deployment to fail due to
missing images.
Change-Id: I7e1ffd3a50714c825f4091d7c0aee2d895ad72a1
Closes-Bug: #1798670
In some cases we may need to disable SELinux (like in CI). The role
needs the SELinux service so that the management can be done during the
deployment.
Change-Id: Ife3c4600f5bd70490a68059eb27c5100743a5298
Closes-Bug: #1797910
Openshift-ansible already sets the right firewall rules on the
provisioned nodes, there is no need to set up (some of) the rules by
ourselves.
Add the 'OS::TripleO::Services::TripleoFirewall' to all the OpenShift
roles so that the operator can still set additional rules if desired.
Change-Id: I1e8ca10069c3f1017207abfebb803cb7aa3835a8
At the moment the 'OS::TripleO::Services::Timesync' service is
synonymous with 'OS::TripleO::Services::Ntp'. Let's use the more generic
Timesync service to pick up the new default in the event the value for
'OS::TripleO::Services::Timesync' changes.
This better aligns with the rest of the roles.
Change-Id: I44f706ce7dd1909ffd3805337fc6d9a5ce6de80f
The OpenShift roles should include the OS::TripleO::Services::Rhsm
service for Red Hat Subscription Management so that the provisioned
nodes can register with a Satellite or CDN.
Add the Podman service to OpenShiftAllInOne to be more consistent with
other roles.
Change-Id: I08862635c68eddbb0940863c43867ece1b289ee5
Previously we were only deploying a master node. This commit adds the
worker and infra services to the deployed node and configures it as an
all-in-one node. In order to do so, we need to disable HAProxy when
deploying all-in-one, as the HAProxy instance OpenShift deploys on
the infra node conflicts with the one we normally set up. They both
bind ports 80 and 443.
Also removes the useless ComputeServices parameter that only makes
sense in a multinode environment.
Change-Id: I6c7d1b3f2fa5c7b1d9cf695c9e021a4192e5d23a
Depends-On: Ibc98e699d34dc6ab9ff6dce0d41f275b6403d983
Depends-On: I0aa878db62e28340d019cd92769f477189886571
Since we moved to containerized UC, TLS Everywhere deployments are broken.
Namely we miss two things:
A. The NAT iptables rule for the nova metadata service to be reachable
B. The setting 'service_metadata_proxy=false' needs to be set for nova
metadata, otherwise the curl calls to set up IPA will fail with the
following:
[root@overcloud-controller-0 log]# curl http://169.254.169.254/openstack/2016-10-06
<html>
<head>
<title>400 Bad Request</title>
</head>
<body>
<h1>400 Bad Request</h1>
X-Instance-ID header is missing from request.<br /><br />
</body>
</html>
A. Is fixed by adding a conditional iptables rule that is only triggered
when deploying an undercloud (where we set MetadataNATRule to true)
B. Is fixed by setting NeutronMetadataProxySharedSecret to '' on the
undercloud and then setting the corresponding hiera keys only when
the parameter != ''. We tried alternative simpler approaches like
setting NeutronMetadataProxySharedSecret to null but that will break
heat as the parameter is required and setting it to null breaks heat
validation (we also tried to make the parameter optional with a
default: '', but that broke as well)
While we're at it we also remove the neutron metadata service from the
undercloud as it is not needed.
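The undercloud-side settings then look roughly like this (sketch):
  parameter_defaults:
    MetadataNATRule: true
    NeutronMetadataProxySharedSecret: ''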
Tested by deploying an undercloud with this change and observing:
A.
Chain PREROUTING (policy ACCEPT 106 packets, 6698 bytes)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- br-ctlplane * 0.0.0.0/0 169.254.169.254 multiport dports 80 state NEW /* 999 undercloud nat ipv4 */ redir ports 8775
B.
grep -ir ^service_metadata_proxy /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
service_metadata_proxy=False
Also a deployment of a TLS overcloud was successful.
Change-Id: Id48df6db012fb433f9a0e618d0269196f4cfc2c6
Co-Authored-By: Martin Schuppert <mschuppe@redhat.com>
Closes-Bug: #1795722
The Podman service will be in charge of installing, configuring, upgrading
and updating Podman in TripleO.
For now, the service is disabled by default but included in all roles.
Later in the cycle, we'll make it the default.
Note: when Podman is able to run in TripleO without Docker,
we'll do like https://review.openstack.org/#/c/586679/ and make it a
generic service that can be switched to either podman or docker.
But for now, we need Podman and Docker working side by side.
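To experiment with it before then, the service could be enabled via the
resource registry (the template path here is an assumption):
  resource_registry:
    OS::TripleO::Services::Podman: ../puppet/services/podman.yaml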
Depends-On: Ie9f5d3b6380caa6824ca940ca48ed0fcf6308608
Change-Id: If9e311df2fc7b808982ee54224cc0ea27e21c830
The standalone role can be used either with the tripleo deploy command
to deploy locally, or it can be used with an undercloud to deploy an
all-in-one node. This change provides a sample set of environment files
for both deployment mechanisms.
Change-Id: Ibc735ac4326a9217469e368c074de8b0df7689bd
Related-Blueprint: all-in-one
OpenShift routers are located on the infra node and need to be highly
available on ports 80 and 443.
Depends-On: I5de14152904d06c49e9d5b2df6e3f09a35f23d92
Change-Id: Iee088e1279bff2cdb7a3601288804f626bff29a3
Introduce an openshift_node template that serves as base for all
openshift services. This reworks the inventory files so that hosts are
defined once and made part of the appropriate groups.
The master node can now be split from the infra node, or bundled
together with the Worker in the all-in-one role.
Provide environment files to enable the Master, Worker, Infra or
all-in-one role individually.
Change-Id: I9ad86185b01c88b609d320e2384c5644bd99bdae