Introduce THT for the fossw ML2 plugin in networking-fujitsu.
networking-fujitsu is a neutron ML2 plugin which enables several
FUJITSU switch products in an OpenStack environment. These templates
deploy the overcloud with FOS switches.
Implements: blueprint integration-fossw-networking-fujitsu
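As a sketch, an enablement environment file for the plugin would look
roughly like this (the service name and path are illustrative, not
necessarily the exact ones added by this change):

  resource_registry:
    # Map the fossw ML2 service onto its puppet service template
    OS::TripleO::Services::NeutronML2FujitsuFossw:
      ../puppet/services/neutron-plugin-ml2-fujitsu-fossw.yaml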
Glance registry is not required for v2 of the API and there are
plans to deprecate it in the glance community.
Let's remove v1 support since it has been deprecated for a while.
Co-Authored-By: Flavio Percoco <firstname.lastname@example.org>
This change adds a CephMds service, disabled by default, on the
Controller role and an environment file to enable it.
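A minimal sketch of that environment file (the exact path may differ):

  resource_registry:
    # Enable CephMds for any role that lists the service
    OS::TripleO::Services::CephMds: ../puppet/services/ceph-mds.yaml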
Introduce THT for networking-fujitsu. networking-fujitsu is a neutron ML2 plugin
which enables the FUJITSU C-Fabric switch in an OpenStack environment. These
templates deploy the overcloud with a C-Fabric switch.
Implements: blueprint integration-networking-fujitsu
With change I80c8559bb2d915385bcc20ae71fe144ddd6591c1 we removed
all the unused puppet-tripleo pacemaker profiles. With this change
we remove the corresponding puppet profiles from tripleo-heat-templates.
We can also remove any trace of the fake ::Core service as it was
introduced via Iacd94294b8a66bc082bb2b3e8d3364ec1bf053b8
for the fake openstack-core pacemaker resource during the Mitaka cycle
and became unused in Newton.
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in the role_data output for the profiles.
One can set an implementation via the OS::TripleO::ServerMetadataHook
resource, which is currently set to OS::Heat::None. So, because of
the default implementation, if left untouched it actually does
nothing.
Currently, besides the metadata_settings list, this hook also
takes the name of the node that it's setting the metadata for.
This is useful for nova vendordata plugins that can parse said metadata.
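For example, a deployer could point the hook at a custom template via
the resource registry (the template path here is hypothetical):

  resource_registry:
    # Replace the default no-op hook with a custom implementation
    OS::TripleO::ServerMetadataHook: /path/to/my-metadata-hook.yaml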
Currently when the docker environments are invoked, every node has the
boot script run which replaces os-collect-config with the heat-agents
container. This should only be happening on Compute nodes currently,
and each role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
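The role-specific mapping looks roughly like this in the docker
environment (illustrative):

  resource_registry:
    # Run the heat-agents firstboot script on Compute nodes only
    OS::TripleO::Compute::NodeUserData:
      ../docker/firstboot/install_docker_agents.yaml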
This allows us to take advantage of the composable roles hiera
settings to connect the plugin to the northd/ovndb API without
needing to hard-code the IP of the node running the service.
There are some requirements for early configuration that involve
e.g. setting kernel parameters then rebooting. Currently this can
be done via cloud-init, e.g. firstboot templates, but there's been
discussion around enabling a SoftwareDeployment approach instead.
The main advantage of doing it this way is there's an error path
if something goes wrong with the config (except triggering the
reboot as we have to use NO_SIGNAL for that).
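A minimal sketch of the SoftwareDeployment approach, using NO_SIGNAL
because the reboot interrupts signalling (all names and values here
are illustrative):

  KernelArgsConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Set kernel parameters, then reboot to apply them
        echo "hugepagesz=1G hugepages=4" >> /etc/default/grub-args
        reboot

  KernelArgsDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: KernelArgsConfig}
      server: {get_param: server}
      # No signal: the node reboots before it could report success
      signal_transport: NO_SIGNAL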
This patch adds a new port type. It defaults to a normal
OS::Neutron::Port object but can be mocked out for some
implementations, such as when installing the undercloud,
where neutron doesn't exist.
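For instance, mocking out a port is just a registry override to the
noop implementation (the port name here is illustrative):

  resource_registry:
    # Use a noop port where neutron is not available
    OS::TripleO::Controller::Ports::InternalApiPort:
      ../network/ports/noop.yaml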
This shows how we could wire in the upgrade steps using Ansible
as was previously proposed, e.g. in https://review.openstack.org/#/c/321416/,
but it's more closely integrated with the new composable services.
It's also very similar to the approach taken by SpinalStack where
ansible snippets per-service were combined then run in a series of
steps using Ansible tags.
This patch just enables upgrade of keystone - we'll add support for
other services in subsequent patches.
Partially-Implements: blueprint overcloud-upgrades-per-service
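A sketch of what a per-service snippet in role_data might look like
(task names, tags and packages are illustrative):

  upgrade_tasks:
    - name: Stop the keystone service
      tags: step1
      service: name=httpd state=stopped
    - name: Update keystone packages
      tags: step2
      yum: name=openstack-keystone state=latest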
This patch drops use of the vip-hosts.yaml service which can
cause issues during deployment because puppet 'hosts' resources
overwrite the data in /etc/hosts. The only reason things seem to work
at all at the moment is because our hosts element in t-i-e runs
on each os-refresh-config iteration and re-adds the dropped hosts
entries.
To work around the issue we add a conditional which selectively
adds the extra hosts entries only if AddVipsToEtcHosts is set to
true.
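A deployment that manages these names via DNS can then simply set:

  parameter_defaults:
    # Skip adding the VIP entries to /etc/hosts
    AddVipsToEtcHosts: false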
This adds the necessary hieradata for enabling TLS for MySQL (which
happens to run on the internal network). It also adds a template so
this can be done via certmonger. As with other services, this will
fill the necessary specs for the certificate to be requested in a
hash that will be consumed in puppet-tripleo.
Note that this only enables the ability to use TLS; we still
need to configure the services (or limit the users the services use)
to only connect via SSL. But that will be done in another patch, as
there are some things that need to land before we can do this (changes
in puppetlabs-mysql and puppet-openstacklib).
This change modifies the template interface to support containers and
converts the compute services to composable roles.
Co-Authored-By: Dan Prince <email@example.com>
Co-Authored-By: Flavio Percoco <firstname.lastname@example.org>
Co-Authored-By: Martin André <email@example.com>
Co-Authored-By: Steve Baker <firstname.lastname@example.org>
This integrates the panko service API into tripleo heat templates.
By default this service is disabled; an environment file
is included to enable it if needed.
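A sketch of the enablement environment file (path illustrative):

  resource_registry:
    # Switch the panko API from OS::Heat::None to the real service
    OS::TripleO::Services::PankoApi: ../puppet/services/panko-api.yaml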
Adds new puppet and puppet pacemaker specific services for Zaqar.
The Pacemaker templates extend the default Zaqar services and swap in
the Pacemaker specific puppet-tripleo profile instead.
Implements: blueprint composable-services-within-roles
The capitalization of OS::Tripleo is wrong compared to all other services,
so correct this to avoid confusion when folks write custom roles_data
files or pass custom service lists via *Services parameters.
After deploying a freshly installed overcloud or updating the stack,
the haproxy configuration is updated correctly but no change in the
HAProxy stats happens.
This submission adds the missing resources to run the pre and post
puppet tasks.
For parameter merge strategies to work we need to merge multiple environment
files; the merging doesn't consider the defaults defined in the heat template.
Moving where we define these defaults will enable the merge strategies
applied when appending services to roles in environment files to work.
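Heat's merge strategies can then append to a role's service list,
roughly like this (service name illustrative):

  parameter_defaults:
    ControllerServices:
      - OS::TripleO::Services::CephMds
  parameter_merge_strategies:
    ControllerServices: merge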
This adds an environment file that can be used to enable TLS in
the internal endpoints via certmonger if used. This will include
a nested stack that will create the hash that will be used to
create the certmonger certificates.
When setting up a service over apache via puppet, we used to
explicitly disable SSL (which sets mod_ssl-related fields for that
vhost).
We now make this depend on the EnableInternalTLS flag. This has only
been done for keystone, but more services will be added as the
puppet code lands
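Once that puppet code lands, turning TLS on for the internal network
becomes a single parameter:

  parameter_defaults:
    # Enable TLS for services on the internal network
    EnableInternalTLS: true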
This patch moves the hosts configuration into its own deployment.
It will continue to use os-apply-config as something that is
required early on in the bootstrapping (it needs to be
configured before puppet runs for example).
The motivation here is so we can refactor all-nodes-config.yaml to use a
new hiera hook that avoids os-apply-config entirely.
Unfortunately we use "SwiftStorage" in the ObjectStorage role
template, so we have to special-case this for backwards compatibility
or deployments enabling the ObjectStorage role will fail.
Ideally we'd align the port names in the objectstorage-role.yaml, but we
can't because all the ports would be replaced in existing deployments.
This patch modifies the service name to be more appropriately called
"OpenDaylightApi" alongside the "OpenDaylightOvs" service used to
configure OpenVSwitch. It also splits out the OVS configuration for
controller nodes into the composable OpenDaylightOvs service.
Signed-off-by: Tim Rozet <email@example.com>
When generating these templates, we should
create them with "-role" appended as they will
be generated from a role.role.j2.yaml file,
i.e. role.role.j2.yaml will generate <role>-role.yaml
and config.role.j2.yaml will generate <role>-config.yaml.
This adds some basic pieces to get certmonger to manage the
certificates for HAProxy. The aim is to be flexible enough that we
will be able to manage both public and internal certificates.
This also adds a relevant environment to get the endpoints to have
TLS.
We do not want cinder-volume to be managed by Pacemaker on
BlockStorage nodes, where Pacemaker is not running at all.
This change adds a new BlockStorageCinderVolume service name
which can (and is, by default) mapped to the non Pacemaker
implementation of the service.
The error was:
Could not find dependency Exec[wait-for-settle] for
Also moves cinder::host setting into the Pacemaker specific service
definition because we only want to set a shared host= string when
the service is managed by Pacemaker.
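The default mapping is then roughly (path illustrative):

  resource_registry:
    # BlockStorage nodes run cinder-volume without Pacemaker
    OS::TripleO::Services::BlockStorageCinderVolume:
      ../puppet/services/cinder-volume.yaml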
Moving the rest of the static resource registry
entries to j2; this allows extending the content of the
template to the roles list.
Also moved the templates to correspond with the role name.
The default resource-registry file contains a bunch of per-role
things which mean you need to cut/paste into a custom environment
file for custom roles, even if you only want the defaults like the
built-in roles. Using j2 we can template these just like in the
overcloud.j2.yaml and other files.
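A sketch of the j2 loop used to template the per-role entries (entry
and file names illustrative):

  {% for role in roles %}
    # One registry entry rendered per role in roles_data.yaml
    OS::TripleO::{{role.name}}::Net::SoftwareConfig: net-config-noop.yaml
  {% endfor %}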
Enables configuring CephFS Native backend for Manila.
This change is based on the usage of environments like in
review https://review.openstack.org/#/c/354019 for Netapp
Co-Authored-By: Marios Andreou <firstname.lastname@example.org>
This is needed because currently we're not generating
nova_metadata_vip or nova_metadata_nodes_ip, and a service profile is
required for that. Unfortunately, currently puppet-nova only deploys
osapi and metadata through the same manifest, so this profile doesn't
really inject any puppet code. We can make this more elegant later.
This implements support for installing fluentd agents as a composable
service on the overcloud.
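Enabling it would look roughly like this (path illustrative):

  resource_registry:
    OS::TripleO::Services::FluentdClient:
      ../puppet/services/logging/fluentd-client.yaml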
Enables configuring a NetApp backend for the Manila service
This was created based on the review at
This makes the netapp and generic backends disabled by default
in the services/manila-backend-*.yaml. A backend is then
enabled via backend-specific environment files, which will set
any config parameters and enable that backend.
It is expected that multiple manila backend specific environment
files might be specified simultaneously.
Finally generic and manila config is split into separate
service files rather than using manila-base for all the things.
Co-Authored-By: Ryan Hefner <email@example.com>
Co-Authored-By: Ben Swartzlander <firstname.lastname@example.org>
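A backend-specific environment then flips just its own service, e.g.
for NetApp (parameter name illustrative):

  resource_registry:
    OS::TripleO::Services::ManilaBackendNetapp:
      ../puppet/services/manila-backend-netapp.yaml
  parameter_defaults:
    ManilaNetappBackendName: tripleo_netapp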
To enable steps to be aligned between roles, we need to define
dependencies between the steps, which is only possible if we
move the steps out of distinct nested stacks so we can use
depends_on to serialize the steps for all roles.
Note that we may be able to further refactor later to remove the
per-role -config.yaml nested stacks as well.
Partially-Implements: blueprint custom-roles
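A sketch of how depends_on serializes the steps across roles
(resource names illustrative):

  ControllerDeployment_Step2:
    type: OS::Heat::StructuredDeploymentGroup
    # Step 2 anywhere only starts after step 1 everywhere
    depends_on:
      - ControllerDeployment_Step1
      - ComputeDeployment_Step1
    properties:
      servers: {get_param: servers}
      config: {get_resource: ControllerConfig_Step2}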
Make use of the new composable per-service node_ips lists by
adding a ServiceNetMap entry for SwiftStorage, then
pass the data to construct the raw device list into puppet-tripleo
instead of mangling it in t-h-t inside the role templates.
This will allow running swift storage services on nodes other than
the Controller and ObjectStorage roles, and is required to enable
custom roles.
Partially-Implements: blueprint custom-roles
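The new entry can be overridden like any other ServiceNetMap value
(network name illustrative):

  parameter_defaults:
    ServiceNetMap:
      SwiftStorageNetwork: storage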
This will aid us in using FQDNs instead of IPs if DNS is not set. If
the deployer already has DNS set up, they can easily disable this
profile by adding the use-dns-for-vips.yaml environment file.
- adds the possibility to install sensu-client on all nodes
- each composable service has its own subscription
Co-Authored-By: Emilien Macchi <email@example.com>
Co-Authored-By: Michele Baldessari <firstname.lastname@example.org>
Implements: blueprint tripleo-opstools-availability-monitoring
This patch moves the settings for Nova, Neutron, and Horizon
out of controller.yaml.
Also fixes the NovaPassword settings in nova-base.yaml
so they don't use get_input.
Also, creates a new apache.yaml base service to contain shared
apache settings for several services which use Apache for WSGI.
Co-Authored-By: Giulio Fidente <email@example.com>
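Services running under Apache can then pull the shared settings in
roughly like this (sketch; property list abbreviated):

  resources:
    ApacheServiceBase:
      type: ./apache.yaml
      properties:
        ServiceNetMap: {get_param: ServiceNetMap}
        EndpointMap: {get_param: EndpointMap}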
Added an environment file to configure DPDK with OVS
by overriding ComputeNeutronOvsAgent. Also added nic
configs for configuring DPDK bridge and bond with
numbered nic format.
Implements: blueprint tripleo-ovs-dpdk
Co-Authored-By: Vijay Chundury <firstname.lastname@example.org>
Introduces environment files for deploying OpenDaylight in two ways:
- ODL only managing L2 as an ML2 plugin
- ODL managing L2 and L3 DVR, by replacing NeutronL3Agent
Two services are added. One to install ODL and configure OVS on the
Controllers, and another service to only configure OVS on compute nodes.
Partially-Implements: blueprint opendaylight-integration
Signed-off-by: Tim Rozet <email@example.com>
ComputeNeutronOvsAgent should be overridden with the neutron-ovs-dpdk-agent
service instead of neutron-ovs-agent (the default) in order to enable
DPDK in OVS. This new service provides all the required parameters
for enabling DPDK with OVS (vswitch::dpdk).
Implements: blueprint tripleo-ovs-dpdk
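The override amounts to a registry swap along these lines (path
illustrative):

  resource_registry:
    # Use the DPDK-enabled OVS agent on compute nodes
    OS::TripleO::Services::ComputeNeutronOvsAgent:
      ../puppet/services/neutron-ovs-dpdk-agent.yaml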
This enables us to pass a map of CAs to deploy the CA certificates
using puppet and hiera instead of the bash script we were using. It
also lets us deploy several CA
certificates on the nodes instead of just one, as was the case before.
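A sketch of the map's shape (parameter name as it would appear in an
environment file; content abbreviated):

  parameter_defaults:
    CAMap:
      first-ca-name:
        content: |
          -----BEGIN CERTIFICATE-----
          ...certificate contents...
          -----END CERTIFICATE-----
      second-ca-name:
        content: |
          -----BEGIN CERTIFICATE-----
          ...certificate contents...
          -----END CERTIFICATE-----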