The run-os-net-config.sh script checks to see if an IP address is
IPv4 or IPv6, and uses ping or ping6 accordingly. This change also
resolves hostnames and submits the resolved IP to the same test.
If the hostname only resolves to an IPv6 address, then ping6 will
be used.
Change-Id: I9f37992157935b37cc9beb8a2f3b9d749a62bd1b
Closes-bug: 1830274
CephAnsibleEnvironmentVariables are also useful when running
the nodes-uuid playbook. Users may know that the ceph-ansible
playbook is run, but they may not know that the nodes-uuid
playbook is run too. If additional Ansible environment variables
are useful for running ceph-ansible, it is likely they will also
be needed for the nodes-uuid playbook. The alternative would be to
create another parameter like NodesUuidAnsibleEnvironmentVariables.
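For reference, a minimal environment-file sketch of how such variables
could be passed so that both playbooks pick them up (the variable names
below are examples only, not a recommendation):

  parameter_defaults:
    CephAnsibleEnvironmentVariables:
      ANSIBLE_SSH_RETRIES: '6'
      DEFAULT_FORKS: '25'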
Change-Id: I10ddb4f79f5c8b69b09622b96e96325ba19d62e0
There are use cases where the operator wants to talk to the metadata
API from a config-drive script (e.g. using curl to get data from the
metadata service). That means it makes sense to have the OVN Metadata
Agent deployed while still forcing config-drive to be used.
This patch sets force_config_drive to true only when OVNMetadataEnabled
is set to false. If it is set to true, the force_config_drive option is
left untouched, leaving it up to the environment to define it.
(The default for force_config_drive is false.)
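A rough Heat sketch of the intent (not the literal template change; the
enablement parameter is assumed to be OVNMetadataEnabled and the hiera
key follows the puppet-nova compute class):

  conditions:
    force_config_drive:
      equals: [{get_param: OVNMetadataEnabled}, false]

  # ...and, within the compute service's config_settings, something like:
  map_merge:
    - if:
        - force_config_drive
        - nova::compute::force_config_drive: true
        - {}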
Closes-Bug: #1830179
Change-Id: Ib956ff2f521b9853c58eaa5500836c692dd9321d
This breaks the rules for the haproxy stats access because it
shadows them. Let's remove these rules and move the iptables
rules for haproxy into puppet-tripleo, where they should have
been in the first place, as is done for all other services.
Depends-On: I1325171ef60d7a7e3b57373082fcdb5487be939b
Change-Id: I2f177c930567b3a45f0d95cec4140f478f14a074
Closes-Bug: #1829338
ovn::controller::hostname defaults to ::fqdn, but the hostname can
differ depending on how nova configures it, as detected when the
dhcp_domain name is removed in [1].
So it is better to rely on the fqdn_canonical hiera key, which nova
also relies on to set "host" in nova.conf.
Also use neutron_timeout instead of neutron_url_timeout, which has
long been deprecated and is removed in [1].
[1] https://review.opendev.org/#/c/658400/
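For illustration, the hiera interpolation this refers to looks roughly
like the following (a sketch, not necessarily the exact template line):

  ovn::controller::hostname: "%{hiera('fqdn_canonical')}"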
Related-Bug: #1829993
Change-Id: If52302b5a04b5e146ac53ccd3fc65a064b2df2fb
Use EndpointMap to ensure we get the hostname/FQDN of the Keystone
public endpoint when possible, falling back to the IP otherwise.
This is useful when the operator uses a certificate based on the
hostname/FQDN rather than an IP address.
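For illustration, the lookup pattern this relies on is roughly the
following (the option name on the left is hypothetical):

  some_service::keystone_public_url: {get_param: [EndpointMap, KeystonePublic, uri]}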
Closes-Bug: #1763776
Change-Id: Ifa9d55cca90caf5be0c83507cb47447e25311fce
We're running into issues where, if someone creates a firstboot script
that touches a file that will eventually be mounted into a container, it
can fail because the file's timestamp ends up being in the future after
a later time sync. Let's try a basic time sync bootstrap as part of
cloud-init to address configuration changes occurring prior to the
host_prep_tasks, where we traditionally configure chrony/ntp.
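A minimal cloud-init sketch of the idea (the exact mechanism used by the
patch may differ, and the NTP server below is a placeholder):

  #cloud-config
  bootcmd:
    # one-shot time sync before any firstboot scripts touch files
    - chronyd -q 'server pool.ntp.org iburst' || true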
Depends-On: https://review.opendev.org/#/c/659398
Change-Id: I294eba826b98c5793336815282f766e3d2e60a51
Related-Bug: #1776869
The current list_join implementation does not handle
IPv6 addresses properly. Switch to use the make_url
function.
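For example (parameter name illustrative), make_url brackets IPv6 hosts
where the list_join form did not:

  # before: breaks for IPv6 hosts (no brackets around the address)
  url:
    list_join:
      - ''
      - - 'http://'
        - {get_param: SomeServiceHost}
        - ':8080/'

  # after: make_url adds the brackets automatically when needed
  url:
    make_url:
      scheme: http
      host: {get_param: SomeServiceHost}
      port: 8080
      path: /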
Closes-Bug: #1829582
Change-Id: I9bd87fe94909107e7bfece0e7643cb48b6cf2355
Add user/project CONF options with the admin role to the cinder group,
so that when the context is determined to be is_admin but carries no
token, authentication is done with that user/project info in order to
call the cinder API.
When reclaim_instance_interval > 0 and an instance booted from a volume
with `delete_on_termination` set to true is deleted, then after
reclaim_instance_interval has passed, the volumes the instance booted
from remain attached and in-use, even though the attached instance was
deleted.
This happens because the admin context from
`nova.compute.manager._reclaim_queued_deletes` does not have any token
info, so the call to the cinder API fails.
The corresponding nova changes merged in
https://review.opendev.org/#/c/522112/
Also rephrase the CinderPassword parameter description in the cinder
service templates to make it generic.
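A sketch of the kind of settings this introduces for the cinder group
(the hiera keys below are illustrative and follow keystoneauth-style
option names, not necessarily the exact template diff):

  nova::cinder::username: 'cinder'
  nova::cinder::project_name: 'service'
  nova::cinder::password: {get_param: CinderPassword}
  nova::cinder::auth_type: 'v3password'
  nova::cinder::auth_url: {get_param: [EndpointMap, KeystoneInternal, uri_no_suffix]}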
Depends-On: https://review.opendev.org/#/c/657918/
Related-Bug: #1734025
Change-Id: If0f9e442e5ed3b2d94bc51e65c145519c51cbc86
* Always use the FQDN supplied in the metadata.
* Read the metadata from network if hostname could not be determined.
These changes fix issues with deploying internal TLS after initially
deploying without it (also known as a "brownfield deployment").
Change-Id: I9d1b4174dd349c29dc92079202176a11d3f85fe3
Co-Authored-By: Ade Lee <alee@redhat.com>
Revalidator and handler threads are not coherent with the lcores.
Configure these threads according to the configured lcores.
Change-Id: Idc3328658a4c5c21fd011c6c4f791e7993559f1a
Closes-Bug: #1822571
Depends-On: https://review.openstack.org/650626
The next change in this series turns off the nova_metadata service,
which means nova_compute needs to have the same vendordata
configuration so that it can populate the config-drive data with the
same vendordata served by nova_metadata.
Change-Id: I2dc1d120d0bd7cc91bde767097945598148d3e9b
Blueprint: nova-less-deploy
This deployment was for getting the hostname of pre-provisioned nodes.
It is no longer required, since a HostnameMap must be used with
config-download.
Change-Id: I35d7d03c5373a251dfe96c2f71c4915ee52f113a
implements: reduce-deployment-resources
This deployment is no longer needed as it was only setting metadata
that was used by os-collect-config. Now that config-download is used
and os-collect-config is no longer in use, we can get rid of this
deployment.
Change-Id: Icd45f7299c4053373b3161d90ad32135c9f40e5a
implements: reduce-deployment-resources
I2702a022565a130ab339d165cb2252ad67d1162e changed the Nova NFS params
to be role-specific; however, the global param still takes precedence
in the enable_live_migration_tunnelled condition.
With this change the global param is only considered when the
role-specific param is not set.
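For illustration, with this change a layout like the following should
behave as intended, with the role-specific value winning for that role
(role name and share value are examples):

  parameter_defaults:
    NovaNfsEnabled: false            # global default
    ComputeHCIParameters:
      NovaNfsEnabled: true           # role-specific override for ComputeHCI
      NovaNfsShare: '192.168.122.1:/export/nova'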
Change-Id: I3d1a0f632e8a7e4924ebabdc795c0ef5d53cdd6d
Related-Bug: 1823712
The fluentd service makes rsyslog send its logs to fluentd locally.
This configuration was created within puppet-tripleo by mounting the
/etc/rsyslog.d/ directory into the fluentd container. This causes an
issue when deployed on RHEL (BZ #1701726).
This patch aims to fix it:
- The /etc/rsyslog.d directory is no longer mounted
  into the fluentd container.
- The rsyslog configuration was moved to the host_prep_tasks.
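A rough sketch of the host_prep_tasks approach (file name, destination
port and forwarding rule are illustrative):

  host_prep_tasks:
    - name: Configure rsyslog on the host to forward logs to the local fluentd listener
      copy:
        dest: /etc/rsyslog.d/fluentd.conf
        content: |
          *.* @127.0.0.1:5140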
Depends-On: I388180dc991926ff30f8bbc556f61447152f8dc9
Change-Id: Iae610832c12d63bde1eb507ba4bb89f2e3cfa24b
https://review.opendev.org/#/c/611188/ incorrectly removed the
undercloud-aodh.yaml environment file as we still reference it in
python-tripleoclient.
Change-Id: I458dd389ef8a953d5ec8f2bcb0fa454fe0ffffcb
Closes-Bug: #1828893
I5851dc7820fdcc4f5790980d94b81622ce3b0c8d corrected the dry-run case
only for non-HA setup.
The HA case was overlooked since it doesn't inherit from the non-HA one.
Change-Id: Id678bbc2127bc3742d3c254ff4f62fc1b0e27daa
Related-Bug: #1823841
The problem we want to solve is that the change
https://review.opendev.org/#/c/631486/ (moving iptables creation to the
host) never really worked.
The reason it never worked, and we never noticed, is two-fold:
A) It ran: -e include ::tripleo::profile::base::haproxy
   The problem is that without quoting, puppet basically does a noop.
B) Once the quoting is fixed, it breaks because 'export FACTER_step'
   exports a custom fact but does not export a hiera key per se (so
   calls to hiera('step') would fail).
So we add proper quoting only on the variables that are arguments to a
parameter, so that there is no risk of ansible doing the wrong thing and
puppet gets the correct arguments.
We also explicitly set the step for hiera in the deploy_steps_tasks.
The reason we need it is that in non-HA setups the iptables rules would
be created at step 1, but since the deploy_steps_tasks run before the
actual tasks that set the step hieradata, we would get the following
error:
Error: Function lookup() did not find a value for the name 'step'
We can just write out the step hiera key during the deploy_steps_tasks;
it will be enforced again shortly afterwards once
common/deploy-steps-tasks.yaml gets invoked.
We also switch back to puppet_execute: ::tripleo::profile::base::haproxy
even for the pacemaker profile. This was broken by the flattening of the
haproxy service (Id55ae44a7b1b5f08b40170f7406e14973fa93639)
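A rough sketch of the two fixes in deploy_steps_tasks form (task names,
paths and return-code handling are illustrative, not the exact change):

  - name: Write the step hieradata before the early puppet run
    copy:
      dest: /etc/puppet/hieradata/config_step.json
      content: '{"step": {{ step }}}'
  - name: Apply the haproxy profile with the class name properly quoted
    shell: puppet apply --detailed-exitcodes -e 'include ::tripleo::profile::base::haproxy'
    register: puppet_result
    failed_when: puppet_result.rc not in [0, 2]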
Co-Authored-By: Luca Miccini <lmiccini@redhat.com>
Change-Id: Iab310207ca17a6c596470dda30a39e029c4fe09c
Closes-Bug: #1828250
This change adds an additional deployment step that will attempt to
extract all Placement data from the nova_api database ahead of db syncs
being performed. For the time being this is a noop, as there should be
no data to move across. Eventually this will be used during upgrades to
actually migrate data between the nova_api and placement databases.
Co-Authored-By: Martin Schuppert <mschuppert@redhat.com>
Change-Id: Ifaa1101d05b835529730002ef985990c6469a449