Allow clustercheck to be started on-demand by socat in addition
to xinetd. Make socat the new default, as xinetd will eventually
be deprecated.
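A minimal sketch of the socat side of this (the port and script path
are illustrative assumptions, not the exact values the templates use):

```shell
# Listen on the clustercheck port and run the script once per
# connection, mirroring what xinetd used to do.
socat -d tcp-listen:9200,fork,reuseaddr exec:/usr/bin/clustercheck
```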
Depends-On: Ie7ede82a755e729d66e077f97e87b3d6c816ed3c
Change-Id: I7d87b5861a576cf4849a25cd1d3f5e77568de1e4
Closes-Bug: #1928693
This simplifies the ServiceNetMap/VipSubnetMap interfaces
to use the parameter merge strategy and removes the *Defaults
interfaces.
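With the merge strategy, user-provided entries are deep-merged into
the defaults instead of replacing the whole map, so a separate
*Defaults parameter is no longer needed. A hedged sketch of what an
override now looks like (the map key is illustrative):

```yaml
parameter_merge_strategies:
  ServiceNetMap: merge
parameter_defaults:
  ServiceNetMap:
    # Only the overridden entry needs to be supplied; the remaining
    # defaults are merged in by the strategy above.
    NovaApiNetwork: internal_api
```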
Change-Id: Ic73628a596e9051b5c02435b712643f9ef7425e3
Instead of creating ceph_admin_extra_vars with
distribute_private_key always set to true, set
that variable to true only when appropriate based
on logic in the depends-on patch.
Also, it is not necessary to override the values of
tripleo_admin_generate_key or ssh_servers to create
the ceph-admin user for cephadm.
Related-Bug: #1928717
Depends-On: I8343c419c140670f01bdc94b4c8130004bac64e1
Change-Id: I2bacf82f85e5c78f5ae603460919cf3ff7130e9c
With recent versions of libvirt, nova-compute does not come up
correctly when tls-everywhere (use_tls_for_live_migration)
is set. The enable_live_migration_tunnelled condition
did not take TLS live migration into account, so tunnelled
mode remained enabled.
Nova-compute fails to start with:
2021-05-12 12:49:09.278 7 ERROR oslo_service.service nova.exception.Invalid: Setting both 'live_migration_tunnelled' and 'live_migration_with_native_tls' at the same time is invalid. If you have the relevant libvirt and QEMU versions, and TLS configured in your environment, pick 'live_migration_with_native_tls'._
This change enhances the enable_live_migration_tunnelled
condition so that tunnelled mode is not configured when
use_tls_for_live_migration is true.
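Conceptually, the fixed condition can be sketched in Heat template
terms as follows (the parameter names and backend checks are
illustrative assumptions, not the exact template contents):

```yaml
conditions:
  enable_live_migration_tunnelled:
    and:
      # Tunnelled mode only applies to shared-storage backends...
      - or:
          - {equals: [{get_param: NovaEnableRbdBackend}, true]}
          - {equals: [{get_param: NovaNfsEnabled}, true]}
      # ...and must never be combined with native TLS live migration.
      - not: {equals: [{get_param: UseTLSTransportForLiveMigration}, true]}
```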
Closes-Bug: #1928554
Related-Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1959808
Change-Id: I1a6f5d3a98d185415b772fa6a94d6f4329dc59a0
It is not being used in this file, so let's remove it from here.
The only place it is used is the cinder-common-container-puppet.yaml
file:
$ grep -ir cvol_active_active
deployment/cinder/cinder-common-container-puppet.yaml: cvol_active_active_tls_enabled:
deployment/cinder/cinder-common-container-puppet.yaml: - cvol_active_active_tls_enabled
Change-Id: Id344f7f06eca903351b46bc5961bd9a749672bd7
CephHciOsdCount is the number of expected Ceph OSDs per HCI node.
CephHciOsdType is the type of data_device (not db_device) used for
each OSD and must be one of hdd, ssd, or nvme. These are used by
the Ansible module tripleo_derive_hci_parameters. Since CephOsdSpec,
as used by cephadm, might only specify a description of devices to
be used as OSDs (e.g. all devices), and not a list of devices like
CephAnsibleDisksConfig, setting the count directly is necessary in
order to know how much CPU/RAM to reserve. Similarly, because a
device path is not hard coded, we cannot look up that device in
Ironic to determine its type.
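Operators would set the two parameters in an environment file roughly
like this (the values are illustrative):

```yaml
parameter_defaults:
  # Expected number of Ceph OSDs per HCI node; cannot be derived when
  # CephOsdSpec only describes devices (e.g. "all devices").
  CephHciOsdCount: 4
  # Type of the data_device backing each OSD: hdd, ssd, or nvme.
  CephHciOsdType: ssd
```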
Closes-Bug: #1920954
Depends-On: Ia6bbdf023e2a0961cd91d3e9f40a8a5a26253ba3
Change-Id: Iccf97ca676ee6096e47474c571bd4f53381ce1c9
This patch adds a no_log clause to external_deploy tasks that might
result in an SSH key getting logged.
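The pattern is the standard Ansible one; a hedged sketch (the task
name and variable are illustrative, not the actual tasks touched):

```yaml
- name: Run external deploy step that handles a private key
  # no_log prevents the rendered command (and any key material in it)
  # from appearing in ansible output or log files.
  command: "{{ some_command_using_private_key }}"
  no_log: true
```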
Change-Id: I2a38a48aabdc167134aee757cd5270af4c498c8d
Related-Bug: #1918138
The podman container module expects security_opts to be a list, but
Ansible has been handling the conversion implicitly. Rather than rely
on that behavior, let's explicitly specify it as a list.
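For example, the option is passed as an explicit one-element list
rather than a bare string (the module invocation, image, and value
are illustrative; the option name follows the commit message):

```yaml
- name: Start a container with explicit security options
  podman_container:
    name: example
    image: registry.example.com/example:latest
    # Explicit list; do not rely on Ansible coercing a string here.
    security_opts:
      - label=disable
```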
Change-Id: Ib88ed7d17547209f383cdf2f0449c02d06e41e2d
Though we have role-specific parameters, we don't seem
to honor them.
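Role-specific parameters are supplied through the per-role
<Role>Parameters map; a hedged example of the kind of override that
must be honored (the role name, parameter, and value are illustrative):

```yaml
parameter_defaults:
  ComputeHCIParameters:
    # Per-role override that the templates should map into this
    # role's service configuration rather than ignore.
    NovaReservedHostMemory: 8192
```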
Related: https://bugzilla.redhat.com/1958418
Change-Id: I0946b3f4f48688dd3dc747ae31f48c9676687cbc
When deploying with TLS-E and cephadm, I disabled the ceph dashboard:
(undercloud) [stack@undercloud-0 ~]$ openstack stack environment show
overcloud -f yaml |grep -i cephenabledashboard
CephEnableDashboard: false
Yet it still tries to request a cert for it (and fails due to
https://bugs.launchpad.net/tripleo/+bug/1926746):
2021-05-03 14:02:54.876228 | 5254004b-fe7a-614d-c9eb-00000000e323 |
FATAL | Ensure certificate requests | ctrl-3-0 | item={'ca': 'ipa',
'dns': 'ctrl-3-0.mainnetwork.bgp.ftw', 'key_size': '2048', 'name':
'ceph_dashboard', 'principal':
'ceph_dashboard/ctrl-3-0.mainnetwork.bgp.ftw@BGP.FTW', 'run_after': '#
Get mgr systemd unit\nmgr_unit=$(systemctl list-units | awk \'/ceph-mgr/
{print $1}\')\n# Restart the mgr systemd unit\nif [ -n "$mgr_unit" ];
then\n systemctl restart "$mgr_unit"\nfi\n'} |
error={"ansible_loop_var": "item", "changed": false, "cmd":
"/bin/getcert request -N CN=ctrl-3-0.mainnetwork.bgp.ftw -c IPA -w -k
/etc/pki/tls/private/ceph_dashboard.key -f
/etc/pki/tls/certs/ceph_dashboard.crt -D ctrl-3-0.mainnetwork.bgp.ftw -D
'' -A '' -E '' -r -g 2048 -K '' -K '' -u digitalSignature -u
keyEncipherment -U 1.3.6.1.5.5.7.3.1 -U 1.3.6.1.5.5.7.3.2 -U '' -B '' -C
/etc/certmonger/post-scripts/ceph_dashboard-838da8a.sh", "item": {"ca":
"ipa", "dns": "ctrl-3-0.mainnetwork.bgp.ftw", "key_size": "2048",
"name": "ceph_dashboard", "principal":
"ceph_dashboard/ctrl-3-0.mainnetwork.bgp.ftw@BGP.FTW", "run_after": "#
Get mgr systemd unit\nmgr_unit=$(systemctl list-units | awk '/ceph-mgr/
{print $1}')\n# Restart the mgr systemd unit\nif [ -n \"$mgr_unit\" ];
then\n systemctl restart \"$mgr_unit\"\nfi\n"}, "msg": "", "rc": 2,
"stderr": "", "stderr_lines": [], "stdout": "New signing request
\"20210503140253\" added.\n", "stdout_lines": ["New signing request
\"20210503140253\" added."]}
With this patch applied, I correctly get past this point and am able
to reach later steps:
2021-05-04 12:40:44.300445 | 5254004b-fe7a-5ccf-c0b9-0000000000df | TASK | External deployment step 2
The problem is that the 'enable_internal_tls' condition is global and
only checks whether internal TLS is enabled, so it is still triggered
when CephEnableDashboard is set to false. Let's switch to the
service-internal condition internal_tls_enabled, which takes the
dashboard into account.
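The intent of the per-service condition can be sketched as follows
(the names follow the commit message; the exact template syntax and
parameter names are assumptions):

```yaml
conditions:
  # Only request the dashboard certificate when internal TLS is
  # enabled AND the dashboard itself is enabled.
  internal_tls_enabled:
    and:
      - {equals: [{get_param: EnableInternalTLS}, true]}
      - {equals: [{get_param: CephEnableDashboard}, true]}
```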
Change-Id: I73a58b00f31bfeffb724e12515d8c5cb0625ca7f
Closes-Bug: #1927093
Moving the network and port management for OVN
bridge MAC addresses to ansible.
Removes the heat resources, and adds an external
deploy task at step 0 in the ovn controller service
templates which uses the 'tripleo_ovn_mac_addresses'
ansible module to create/remove OVN mac address ports.
Adds a role-specific parameter, OVNStaticBridgeMacMappings,
that can be used to set static bridge mac mappings. When this
is set, no neutron resources will be created by the
tripleo_ovn_mac_addresses ansible module.
OVNStaticBridgeMacMappings must be used for standalone
deployments.
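A hedged example of setting static mappings for a standalone
deployment (this sketch assumes a simple physical-network-to-MAC map;
the role name, physnet, and MAC are illustrative, and the exact
mapping structure should be checked against the parameter's schema):

```yaml
parameter_defaults:
  StandaloneParameters:
    OVNStaticBridgeMacMappings:
      # physical network -> MAC address for the OVN chassis port
      datacentre: "fa:16:3e:00:00:5e"
```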
Implements: blueprint network-data-v2-port
Depends-On: https://review.opendev.org/782891
Depends-On: https://review.opendev.org/783137
Change-Id: I6ce29d2908e76044c55eb96d0d3779fe67ba9169
Use cinder::backends::backend_host to override the value when the
cinder-volume service runs active/passive under pcmk. This puppet
parameter was added several cycles ago, and the original
cinder::backend_host variable is being deprecated.
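In hiera terms the override is roughly the following (a hypothetical
hieradata sketch; the value is illustrative):

```yaml
# Pin backend_host so the active/passive cinder-volume service keeps
# one consistent host identity wherever pacemaker runs it.
cinder::backends::backend_host: hostgroup
```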
Change-Id: Ic0b0f1bd703e46b9ed0d86381b4fbed4ed6f9699
This change updates the NovaHWMachineType parameter to now default to the
unversioned q35 machine type for x86_64 instances within a deployment.
A simple environment file is also included to pin NovaHWMachineType to
the previous versioned defaults during an upgrade to this release. Once
upgraded, operators can then use the following flow to record the machine
type of existing instances, allowing the default to eventually be
changed:
https://docs.openstack.org/nova/latest/admin/hw-machine-type.html
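The pinning environment file is roughly of this shape (the pinned
value is an illustrative assumption; the previous versioned default
varies by release and distribution):

```yaml
parameter_defaults:
  # Pin existing deployments to the prior versioned type during the
  # upgrade; new deployments get the unversioned q35 default.
  NovaHWMachineType: x86_64=pc-q35-rhel8.2.0
```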
This change depends on Ieb21fd8f3e895ea7611882f1e92f398efe2e77fa to
ensure that the standalone role picks up this new default in CI.
It also depends on Ia3f839a3c5e4e4b59898c11561fe7ef7126bba5f to ensure
that all jobs use cirros 0.5.2, which includes the ahci module now
required when using q35 based instances.
Depends-On: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/785575
Finally, it also depends on I0e068043d8267ab91535413d950a3e154c2234f7
from Nova that is attempting to workaround a known QEMU issue that
appears more prevalent when using this newer machine type.
Depends-On: https://review.opendev.org/c/openstack/nova/+/785682
Change-Id: I9f60a73577ae7cd712e2a8285abc0c0788906112
When the nova_virtlogd container gets restarted, the instance console
log files will not be reopened by virtlogd. As a result, instances need
to be either restarted or live migrated to a different compute node to
get new console log messages logged again.
Usually, on receipt of SIGUSR1, virtlogd will re-exec() its binary while
maintaining all current logs and clients. This allows live upgrades of
the virtlogd service on non-containerized environments, where updates
happen just by doing an RPM update.
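The re-exec behavior referenced above can be triggered manually
(standard virtlogd behavior; shown here only for illustration):

```shell
# Ask virtlogd to re-exec itself while keeping its log file
# descriptors and client connections open.
kill -USR1 "$(pidof virtlogd)"
```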
To reduce the likelihood of losing console logs in a containerized
environment, virtlogd should only be restarted on manual request or
on compute node reboot. It should not be restarted on a minor update
without first migrating instances off the node.
This introduces a nova_virtlogd_wrapper container and virtlogd wrapper
script, to only restart virtlogd on either manual or compute node restart.
The virtlogd wrapper can be disabled with
NovaEnableVirtlogdContainerWrapper.
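Operators who prefer the plain virtlogd container can opt out via an
environment file (the parameter name comes from the commit message):

```yaml
parameter_defaults:
  # Disable the wrapper and run virtlogd as a regular container again.
  NovaEnableVirtlogdContainerWrapper: false
```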
Co-Authored-By: Rajesh Tailor <ratailor@redhat.com>
Closes-Bug: #1838272
Depends-On: https://review.opendev.org/c/openstack/puppet-tripleo/+/787771
Change-Id: Ib1fd2fb89899b40b3ce2574af067006f566ef2ea