Support has been dropped for skyzone, but these two
files remained. They appear to have been missed in
https://review.opendev.org/#/c/712783/
Change-Id: Idcd6485f24e70c965ebd60569a2d6cc06a1037d9
The VxFlex OS driver has been rebranded to PowerFlex.
This patch adds support for PowerFlex. The VxFlexOS
template will be deprecated in a follow-up patch.
Depends-On: https://review.opendev.org/#/c/743852/
Change-Id: I94310bf84a0af7a735bd6e1c0038686b0d0abfc8
A new BarbicanClient tripleo service provides a means of configuring
the barbican Key Manager settings for cinder, glance and nova services
running at an edge site. This is necessary because the BarbicanApi
tripleo service is only capable of configuring the Key Manager settings
for services running in the control plane.
For cinder, the BarbicanClient ensures the Key Manager settings are
available to the cinder-volume and cinder-backup services. This is
necessary because the Key Manager settings are traditionally associated
with the cinder-api service, but cinder-api is not deployed at the edge.
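The settings involved are the standard Castellan/Barbican key manager
options. As a rough sketch (the endpoint values are illustrative, not
taken from this change), the service would render something like this
into an edge node's cinder.conf:

```ini
# Illustrative cinder.conf fragment; auth_endpoint is an example value.
[key_manager]
backend = barbican

[barbican]
barbican_endpoint_type = internal
auth_endpoint = http://192.168.24.2:5000
```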
Closes-Bug: #1886070
Change-Id: I17d6c3a3af5b192b77d264ff3e94e64ef6064c77
This commit attempts to build out a composable service that enrolls the
undercloud as a FreeIPA host using an OTP. This is similar to what we've
done in the past for tls-everywhere, except we're not using novajoin.
Change-Id: I770227b2f4f1ea447cf0138f57a6ed66c034d225
- Docker isn't supported anymore.
- Clients are now installed by Ansible, not Puppet.
- Neutron SRIOV host isn't supported; operators should deploy with the
  sriov_pf network object in their NIC configs.
- The firewall is now managed by Ansible, not Puppet.
Change-Id: I2b6068a719563a53bc255dcce72a92465e7df468
It seems that networking-fujitsu is no longer maintained[1], and it's
not compatible with Python 3.6, which all OpenStack services currently
require.
[1] https://opendev.org/x/networking-fujitsu
Change-Id: Iae639864cce8e3add635944f157ecde074312e74
We don't deploy Keepalived in multi-node setups, as our HA story is
handled by Pacemaker. We therefore don't use the VRRP protocol that
Keepalived provides to keep the VIPs alive, so we don't really need
this service.
Instead, we can configure the VIPs on the br-ctlplane interface, which
already handles the local_ip. It now also handles the configuration of
the public IP and admin IP.
Keepalived is now deprecated and will be removed in the next cycle.
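With Keepalived gone, the public and admin VIPs are plumbed on
br-ctlplane alongside local_ip. A minimal sketch of the relevant
undercloud.conf settings (the addresses shown are examples only):

```ini
# Illustrative undercloud.conf fragment; addresses are examples.
[DEFAULT]
local_ip = 192.168.24.1/24
undercloud_public_host = 192.168.24.2
undercloud_admin_host = 192.168.24.3
```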
blueprint replace-keepalived-undercloud
Change-Id: I3192be07cb6c19d5e26cb4cddbe68213e7e48937
Updating the SC cinder backend to support both iSCSI
and FC drivers. It is also enhanced to support
multiple backends.
CinderScBackendName supports a list of backend names,
and a new CinderScMultiConfig parameter provides
a way to specify parameter values for each backend.
For an example, see the file environments/cinder-dellemc-sc-config.yaml
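As a sketch of the multi-backend mechanism (the backend names and the
option name used inside each entry are hypothetical, chosen only for
illustration), an environment file might look like:

```yaml
# Illustrative parameter_defaults; backend names and option values
# are hypothetical examples, not taken from this change.
parameter_defaults:
  CinderScBackendName:
    - dellsc_iscsi
    - dellsc_fc
  CinderScMultiConfig:
    dellsc_iscsi:
      CinderScStorageProtocol: iSCSI
    dellsc_fc:
      CinderScStorageProtocol: FC
```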
Depends-On: https://review.opendev.org/#/c/722538/
Change-Id: I6e5f3753fe167c7fbc75c3d382c88c09c247c7b3
Updating the Xtremio cinder backend to support both iSCSI
and FC drivers. It is also enhanced to support
multiple backends.
Depends-On: https://review.opendev.org/#/c/723020/
Change-Id: I2ba45aaa584c6fdcfb59cf6aed1b72dc8815f91f
PowerMax config options have changed since Newton.
This updates them to the latest and adds support for both
iSCSI and FC drivers.
CinderPowermaxBackend is also enhanced to support
multiple backends. CinderPowermaxBackendName supports a
list of backend names, and a new CinderPowermaxMultiConfig
parameter provides a way to specify parameter values for
each backend. For an example, see the file
environments/cinder-dellemc-powermax-config.yaml
Depends-On: https://review.opendev.org/#/c/712184
Change-Id: I4429ed2d45661ea82ae38a7050abb2b229953c9c
- Remove the Docker service from all the roles; it's not needed anymore.
- Switch ContainerCli to podman for the docker-ha environment. Note:
  this environment might be renamed at some point to container-ha.yaml,
  but for backward compatibility we still use it for now.
  Also switch EnablePaunch to false, since we were waiting for the
  podman switch to do it.
- In the overcloud registry, disable Docker by default and enable Podman
  by default.
This patch will only work for centos8/rhel8 based deployments.
Change-Id: I561c52ce09c66a7f79763c59cd25f15949c054af
We're dropping this as it has no testing and is not currently available
for CentOS 8.
Change-Id: I408490346840d5a2e3ae29f53cbc100edcf72ee7
Depends-On: https://review.opendev.org/#/c/712517/
In order to make SRIOV work with the OVN driver, the concept of
"external" ports has been introduced (see the depends-on). These ports
live on a different host (gateway nodes) and are able to reply to ARP
requests on behalf of a VM port. In the SRIOV case, the SRIOV port
bypasses the hypervisor, so the OVN driver creates an external port
which will reply to the DHCP packets.
This patch is creating two new roles to work with the mechanism
described above:
* ControllerSriov: Same as the normal Controller role but with the OVN
Metadata agent deployed.
* NetworkerSriov: Same as the normal Networker role but with the OVN
Metadata agent deployed.
The patch also removes the Neutron DHCP agent from the
neutron-ovn-sriov.yaml environment file, since it is no longer needed.
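As a sketch of what the new roles amount to (a hypothetical excerpt of
a roles_data.yaml; only the service relevant to this change is shown,
the rest of the role is elided):

```yaml
# Hypothetical roles_data.yaml excerpt for the new role; the full
# role carries all the usual Networker services as well.
- name: NetworkerSriov
  description: Networker role with the OVN metadata agent deployed
  ServicesDefault:
    - OS::TripleO::Services::OVNMetadataAgent
    # ... remaining services inherited from the Networker role
```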
Depends-On: https://review.opendev.org/703376
Change-Id: I5ef3d6543785b677ea333803aaa23bd34abdd671
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
The ScaleOut roles should accompany the DistributedCompute roles,
which are sufficient to provide the BlockStorageCinderVolume
service.
The DistributedCompute role supports use cases without persistent
storage via Cinder, while DistributedComputeHCI supports
use cases with persistent storage via Cinder. For those use cases
we want the BlockStorageCinderVolume service to be used by
DistributedComputeHCI with Ceph, but not without Ceph, as Ceph
is presently the only Cinder backend supporting active/active.
Change-Id: I8588919cecc2be06447eba2b53b79d8d7cfc6a9e
Fixes-Bug: #1863799
In Id6c416b8c7b3b6314d935e3eeb8a3f114492cecd the roles for
DistributedCompute and DistributedComputeHCI received the
GlanceApiEdge service so that Glance could run at DCN sites.
Those who wish to run more than 3 DCN nodes with Glance may then add
scale-out roles by replacing the GlanceApiEdge service with
the new HAproxyEdge service, which configures a local haproxy
to forward glance-api requests to edge nodes running Glance.
This patch provides the DistributedComputeScaleOut and
DistributedComputeHCIScaleOut roles so that deployers may
specify 3 DCN nodes and N DCN scale out nodes without having
to compose the roles themselves.
Change-Id: I8900ba3bb470804b5bb5016aacc66dc171e1bb62
Change I52c52b62f1c21214b98c98773c8647609cb81d52 removed use of
'NovaVcpuPinSet' from this role but did not remove references to it
from the role's description. Fix this now.
Change-Id: Ib957da14fd47953d7419438236888efc41034e1a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Related-bug: #1860009
This has been deprecated and replaced by 'NovaComputeCpuDedicatedSet'
and 'NovaComputeCpuSharedSet', as seen in change
Ibba4273526392985ede6da2ef3fec66a61407777. Update the ComputeRealTime
role to reflect this.
Change-Id: I52c52b62f1c21214b98c98773c8647609cb81d52
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Closes-Bug: #1860009
After looking at https://review.opendev.org/#/c/702191/, I noticed we
had a couple of lingering references to removed services.
Change-Id: Iaa19d42a63261853d7e270b6b219de44a2fbb3ba
Already implemented, tested and validated.
Addressed the review comments from Emilien Macchi (Jan 3, 10:37 AM, Patch Set 1, Code-Review-1):
- "it would be nice to have a validate function in tools/yaml-validate.py like we already have for ComputeHCI role": Done, using validate_hci_computehci_role as ComputeHCIOvsDpdk.yaml does.
- 2 inline comments (adding a return line and fixing wrong indentation): Corrected.
Change-Id: I20cd54b677e8da8c3c5691d913c4b6b67bb82e27
Signed-off-by: josecastillolema <josecastillolema@gmail.com>
This patch adds two new tripleo services that together support
deploying the glance-api service at edge sites. The service uses the
same glance database in the control plane, but allows other edge
services (e.g. cinder and nova) to access a glance endpoint that is
local to the edge site.
A new GlanceApiEdge service is a minor variant of the GlanceApi
service. The most significant change is that it doesn't use the control
plane VIP, but instead configures the cinder and nova services to access
the glance-api endpoint running on that edge node (not the VIP).
A companion HAproxyEdge service supports scaling out DCN sites with
a larger (>3) number of nodes. Instead of deploying GlanceApiEdge on
every node, the HAproxyEdge service configures a local haproxy to
forward glance-api requests to the edge nodes running GlanceApiEdge.
The HAproxyEdge is extensible. While this patch is only concerned
with proxying glance-api, it can be extended to support additional
proxy requirements as needs arise.
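As a sketch of how the two services divide up between roles (a
hypothetical roles_data excerpt; only the services that differ between
the two roles are shown):

```yaml
# Hypothetical roles_data excerpt; everything except the differing
# service is elided.
- name: DistributedCompute
  ServicesDefault:
    - OS::TripleO::Services::GlanceApiEdge
    # ... remaining DistributedCompute services
- name: DistributedComputeScaleOut
  ServicesDefault:
    - OS::TripleO::Services::HAproxyEdge
    # ... remaining DistributedCompute services
```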
blueprint: split-controlplane-glance-cache
Change-Id: Id6c416b8c7b3b6314d935e3eeb8a3f114492cecd
Depends-On: Ic8d652a5209219c96f795a8c18ceb457c6d9382a
There is no longer a need for stable/train to have a working nova-metadata-api
This reverts commit 00cd4b0aea.
Change-Id: I520b1104e0dff683834f5bed13a33858ce21abaf
This change just adds the missing resource to include the missed
CephGrafana bits, fixing the ceph-dashboard deployment scenario.
This review also adds a validation to make sure that both
ControllerStorage{Dashboard,Nfs} properly inherit from Controller.
Change-Id: I0075bcb5318462555c7f9f96204ce037016f3e69
Closes-Bug: #1856060
This service is not normally required, but it is required when updating
an existing overcloud from non-TLS to TLS (existing nodes need to
fetch the new vendor-data, which isn't available in the initial boot
config-drive).
Change-Id: I3685bd481fd23fbd83d8e6a1fadb72f2e57578bc
Partial-Bug: #1855929
netcontrold rebalances the overloaded queues within
available PMD threads to avoid packet loss. Add
support to enable this service in the DPDK deployments.
Change-Id: Ia0ec2a3db0626e9a93ef591d0bc4f3a53d98820f
With the transition to a containerized deployment of the undercloud, we
lost the automated configuration of SNMP for the undercloud. The
telemetry stack is now failing to HW-monitor this node.
Change-Id: I219e2a8a08bc9b47bd7110fadcb188ef703acfce
Create a new Rsyslog service that is deployed on the host (not in a
container) and with Ansible.
Make it so it's deployed by default on Undercloud & Standalone setups.
Also move the tasks that configure rsyslogd for HAproxy & Swift to be
executed after the host prep tasks (using deploy step tasks).
Change-Id: I027c64aefcc4715da17836a5cf0141152cf146aa
Closes-Bug: #1850562