This also removes the ``tripleo_reset_params`` module, which
was used to reset parameters in the plan, and changes the
``tripleo_derived_parameters`` role so that it no longer tries
to update the plan with derived parameters.
Change-Id: I9b087452ef56b9ff53d08406158d8e1e5a3328f0
While we need to start the legacy network config on boot, we don't need to
try to start it on task invocation. Starting the legacy network on an
already running host will fail because ports cannot be rebound. This
change removes the start action and simply ensures that the legacy
network service is enabled.
Change-Id: I88cd4fd907262d6a1bedbe7b76bc025eeb45a837
Signed-off-by: Kevin Carter <kecarter@redhat.com>
Now that we've removed the plan, we should generate
an environment file with rotated passwords, which
should be used in the next overcloud deploy.
When we move to an ephemeral heat stack, this will
update wherever the initial passwords are stored.
Change-Id: I02541adfec2fa604e728aece343e7f0722b84ec6
Currently users have to pass in a dictionary of HOSTNAME:SOURCE_IPV4 of
all nodes in the deployment. An example is as follows from THT:
FrrBgpIpv4LoopbackMap:
  ctrl-1-0: 99.99.1.1
  ctrl-2-0: 99.99.2.1
  ctrl-3-0: 99.99.3.1
  cmp-1-0: 99.99.1.2
  cmp-2-0: 99.99.2.2
  cmp-3-0: 99.99.3.2
  cmp-1-1: 99.99.1.3
  cmp-2-1: 99.99.2.3
  cmp-3-1: 99.99.3.3
This is rather time consuming, prone to typos, and requires updating at
node scale up/down. It would be much easier if users could simply pass in
the network and have tripleo_frr get the IP from the given network.
Snip from tripleo-ansible-inventory.yaml:
ControllerRack1:
  hosts:
    ctrl-1-0: {ansible_host: 192.168.1.101, canonical_hostname: ctrl-1-0.bgp.ftw,
      main_network_hostname: ctrl-1-0.mainnetwork.bgp.ftw, main_network_ip: 99.99.1.1,
      [...]
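The per-network lookup could be sketched in Python as follows (a minimal sketch; the function name is hypothetical, and the `<network>_ip` host var naming follows the inventory snippet above):

```python
def build_loopback_map(hostvars, network):
    """Build a HOSTNAME -> IPv4 map from inventory host vars.

    ``network`` is the network name, e.g. ``main_network``; the
    inventory exposes the address as ``<network>_ip``.
    """
    ip_key = "{}_ip".format(network)
    return {host: data[ip_key]
            for host, data in hostvars.items() if ip_key in data}

# Trimmed host vars mirroring the inventory snippet above.
hostvars = {
    "ctrl-1-0": {"ansible_host": "192.168.1.101", "main_network_ip": "99.99.1.1"},
    "cmp-1-0": {"ansible_host": "192.168.1.102", "main_network_ip": "99.99.1.2"},
}
print(build_loopback_map(hostvars, "main_network"))
```

This way the map tracks the inventory automatically at node scale up/down.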
Change-Id: I43852cb3570b8cb12a35f4bc641a42ddfd8ad7f1
When deleting the overcloud, ignore the "endpoint not
found" error, which looks like ``endpoint for object-store
service in <regionOne> region not found``, so as not
to fail when swift is disabled in the undercloud.
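The tolerated failure could be sketched as follows (a sketch only; the exact exception type depends on the client library, so this matches on the message text instead):

```python
import re

# Pattern of the error raised when a service (e.g. swift) is
# absent from the keystone catalog.
ENDPOINT_NOT_FOUND = re.compile(
    r"endpoint for .* service in .* region not found")

def is_missing_endpoint(error_message):
    """Return True when the error only means the service is disabled."""
    return bool(ENDPOINT_NOT_FOUND.search(error_message))

msg = "endpoint for object-store service in regionOne region not found"
print(is_missing_endpoint(msg))  # such errors are ignored during delete
```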
Change-Id: I0f553f8e7f622b983b6707674c958e211bb83829
When a user creates an HA load balancer in Octavia, Octavia creates
server groups as part of the load balancer resources. However, because
the default quotas related to server groups are very low and all
load balancer resources live in the common service project, users can
create only a very limited number of HA load balancers by default.
This patch disables the quota limits of the server-group-members and
server-groups of the service project, so that HA load balancer creation
is not blocked by these quotas.
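In nova, setting a quota to -1 disables (unlimits) it. The intent could be sketched as follows (a sketch; `set_compute_quota` is a hypothetical callable standing in for the nova quota-update call, and the project name "service" is an assumption):

```python
# -1 disables (unlimits) a nova quota.
UNLIMITED = -1

def quota_overrides():
    """Quota overrides applied to the Octavia service project."""
    return {
        "server_groups": UNLIMITED,
        "server_group_members": UNLIMITED,
    }

def apply_quotas(set_compute_quota, project="service"):
    # set_compute_quota is a hypothetical callable wrapping the
    # nova quota-update API; the project name is an assumption.
    for name, value in quota_overrides().items():
        set_compute_quota(project, name, value)
```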
Closes-Bug: #1914018
Change-Id: I0048fec8c1e19bd20b1edcd23f2490456fe1cd12
The Satellite server does not have an actual namespace in the URL, so the
container location is just host/container:tag. We need to be able to
properly set the headers on blobs without namespaces.
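Whether a reference carries a namespace could be sketched as follows (a minimal sketch; real references may also carry digests, which this ignores):

```python
def has_namespace(image_ref):
    """Return True for host/namespace/container:tag references,
    False for the Satellite-style host/container:tag form."""
    # Drop the registry host, then look for remaining path segments.
    path = image_ref.split("/", 1)[1] if "/" in image_ref else image_ref
    return "/" in path

print(has_namespace("satellite.example.com/my-container:1.0"))       # False
print(has_namespace("registry.example.com/namespace/container:1.0"))  # True
```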
Change-Id: Ia6728d68305ba3c662e99ea067b4d4feef9eeea0
Related-Bug: #1913782
In the module extracting network data from an existing
heat stack, use the 'tripleo_net_idx' tag to write out the
network data file in v2 format, using the same network
ordering as the network data used to deploy the stack.
In the module used to create composable networks, pass
in the loop index_var in the playbook and set the
tripleo_net_idx tag when creating and updating neutron
network resources.
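Recovering the original ordering from the tag could be sketched as follows (a sketch; the exact tag string format on the neutron network is an assumption here):

```python
def net_index(network):
    """Extract the ordering index from a network's tags.

    Assumes tags of the form ``tripleo_net_idx=<n>`` (the exact
    format is an assumption for this sketch).
    """
    for tag in network.get("tags", []):
        if tag.startswith("tripleo_net_idx="):
            return int(tag.split("=", 1)[1])
    return 0

networks = [
    {"name": "storage", "tags": ["tripleo_net_idx=2"]},
    {"name": "internal_api", "tags": ["tripleo_net_idx=0"]},
    {"name": "tenant", "tags": ["tripleo_net_idx=1"]},
]
ordered = sorted(networks, key=net_index)
print([n["name"] for n in ordered])  # ['internal_api', 'tenant', 'storage']
```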
Partial-Implements: blueprint network-data-v2-ports
Change-Id: Ib96e17987ae1dca629e275b0f219ef435af63bb0
This change adds an internal alias for port to dport. This is done to
allow legacy config to operate as expected should a user have overrides
with the puppet legacy option. Should the action plugin encounter a
rule with the deprecated option, a notice will be printed on screen
containing the rule and information on how to convert it, so that
functionality isn't lost in a future release.
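The aliasing the action plugin performs could be sketched as follows (a minimal sketch; the rule dict shape is hypothetical):

```python
def normalize_rule(rule, notices):
    """Alias the deprecated ``port`` key to ``dport``.

    Records a notice so the user can fix the override before the
    deprecated option is removed.
    """
    rule = dict(rule)
    if "port" in rule:
        rule.setdefault("dport", rule.pop("port"))
        notices.append(
            "rule %r uses deprecated 'port'; rename it to 'dport'" % rule)
    return rule

notices = []
rule = normalize_rule({"proto": "tcp", "port": 8080}, notices)
print(rule)  # {'proto': 'tcp', 'dport': 8080}
```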
Change-Id: I0643345a144a4b4c94c11465e9f8a82f13da146d
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This allows the master playbook used for update to set
tripleo_redhat_enforce to false on a per-role basis in Red Hat
environments.
The default in defaults/main.yml is now "true" so that it keeps its
behavior of being run by default if nothing is changed in the role
definition.
We then avoid running it on platforms other than Red Hat by adding an
explicit test in the tasks/main.yml file.
Overall the behavior is as follows:
| Red Hat Env | tripleo_enforced     | Test run |
|-------------+----------------------+----------|
| True        | Unset                | Yes      |
| True        | Set to true in role  | Yes      |
| True        | Set to false in role | No       |
| False       | Doesn't matter       | No       |
Change-Id: I6268a01d16f8288bf862003d19184fc93b88282a
Partial-Bug: #1912512
Writes to an output env file that can then be used
for a stack update.
Also removes the unused var persist_params_in_plan.
Change-Id: Idc15cf94bd100efc8ab81dcd69787113746d9aee
[DEPRECATION WARNING]: evaluating 'environment_directories' as a bare
variable, this behaviour will go away and you might need to add |bool to
the expression in the future. Also see CONDITIONAL_BARE_VARS
configuration toggle. This feature will be removed in version 2.12.
Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Change-Id: I16ed9e104d9daaa56ecf691f90dee492c6d06348
If firewalld on the NFS server does not open the required ports, ReaR cannot
mount the NFS share while performing the backup and/or restore; the action then
fails and the openstack-ansible playbook stops running.
This change checks whether the server chosen as the NFS server has firewalld
running and, if it is running, requires the operator to declare the firewalld
zone where the ports must be opened.
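The check could be sketched as follows (a minimal sketch; the zone variable name is hypothetical):

```python
def validate_firewalld_zone(firewalld_active, zone):
    """Fail early when firewalld runs on the NFS server but no zone
    was declared for opening the NFS ports.

    ``zone`` (e.g. a hypothetical ``nfs_firewalld_zone`` variable)
    must be set by the operator whenever firewalld is active.
    """
    if firewalld_active and not zone:
        raise ValueError(
            "firewalld is running on the NFS server: declare the "
            "firewalld zone where the NFS ports must be opened")
    return True

print(validate_firewalld_zone(firewalld_active=False, zone=None))  # True
```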
Closes-Bug: #1912366
Change-Id: Ic6816fa647653baf8297dc62cdd99ee522b86535
Do not assume we will always have hostvars[<node>]['storage_ip'].
Instead use the service_net_map, found in global_vars.yaml of
config-download. Within this directory, if ceph_mon is in the
tripleo_enabled_services list, then there will be a service_net_map
containing the ceph_mon_network. As per tripleo_common/inventory.py,
this network name, whatever it is composed to, will have '_ip'
appended to it, which will map to the correct IP address. Without
network isolation ceph_mon_network will default to ctlplane. With
network isolation it will default to storage, but it could also
be composed to anything, so we can use this method to pick up
whatever it is.
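The lookup described above could be sketched as follows (a minimal sketch following the '_ip' naming convention from tripleo_common/inventory.py):

```python
def ceph_mon_ip(hostvars, node, service_net_map):
    """Resolve a node's ceph_mon address via the service net map.

    The network name in ``ceph_mon_network`` gets '_ip' appended,
    which names the per-host variable holding the address.
    """
    ip_var = service_net_map["ceph_mon_network"] + "_ip"
    return hostvars[node][ip_var]

# Without network isolation the map points at ctlplane...
no_isolation = {"ceph_mon_network": "ctlplane"}
hostvars = {"ceph-0": {"ctlplane_ip": "192.168.24.10",
                       "storage_ip": "172.16.1.10"}}
print(ceph_mon_ip(hostvars, "ceph-0", no_isolation))  # 192.168.24.10

# ...with isolation it is typically storage, but could be anything.
isolated = {"ceph_mon_network": "storage"}
print(ceph_mon_ip(hostvars, "ceph-0", isolated))  # 172.16.1.10
```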
Closes-Bug: #1912218
Change-Id: I7c1052b1c27ea91c5f97f59ec80c906d60d5f13e
Given how config-download runs in the main branch, it's no longer
necessary to use become when creating the work directories from which
ceph-ansible is executed, or when running the tripleo_ceph_client
role. Using become introduces the bug this change resolves. Also,
since we are not using become, we won't set the owner of the directory;
instead we will use the default owner, i.e. whoever created the directory.
Change-Id: I65cd66ed5c94b548b775b9b4829717c202837d7e
Closes-Bug: #1912103
When privileged mode is set, don't add any capabilities, as they
are already included.
Use podman 1.6.4 because rootless 2.0.5 doesn't work with
systemd [1].
Disable SELinux on the host.
[1] https://github.com/containers/podman/issues/8965
Closes-Bug: #1910970
Change-Id: I73ac1c405e8a3539937a5578bb003cba0b935d94
If linting fails, the content provider still builds.
This is suboptimal, since the standalone/multinode jobs will be skipped
and nothing will use those builds.
Make the content provider depend on the linting jobs as well.
Change-Id: I18101f47245f92412cab6ff2289618605e1baa26
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
We currently forcefully install pacemaker + pcs in a number of upgrade
tasks. This is suboptimal because on certain nodes the pcs/pacemaker
repos are not enabled. Let's let puppet pull in those packages normally.
Tested this successfully during a queens -> train FFU on a composable
roles environment.
Closes-Bug: #1911684
Change-Id: I70d8bebf0d6cbaeff3f108c5da10b6bfbdff8ccf
In order to launch the container and connect via networking, we need
SELinux disabled for a rootless container to still work. Let's move the
SELinux disabling earlier rather than later.
Change-Id: I345e8b8547b81e5791656d0fca6e90b1de48fdac