Ironic has deprecated these for a while, so we should be notifying
our users of that fact. This change also makes us stop using those
old ramdisks by default so we don't have to build an extra image
during CI runs.
Co-Authored-By: Dan Prince <email@example.com>
This change switches the overcloud deploy stack polling with event
list polling. This uses the same technique as the recently added
heat stack-create --poll mode: Ib7d35b66521f0ccca8544fd18fb70e04eaf98e5a
There is a (CREATE|UPDATE)_(COMPLETE|FAILED) event generated when the stack
operation is complete, so this is used to decide when to stop polling.
Also, every time there is an event it is printed to the screen in log
style. This would give the user an indication that something is
happening, which is better than staring at nothing for minutes (hours).
On stack update, the oldest event from the previous operation is
discovered and used as the marker, so that only new events are logged.
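The stop condition and marker handling described above can be sketched as follows. This is a simplified stand-in for the heatclient calls, and the helper names are hypothetical:

```python
import re

# Terminal statuses that end the polling loop, per the pattern above.
TERMINAL_EVENT = re.compile(r'^(CREATE|UPDATE)_(COMPLETE|FAILED)$')

def is_terminal(status):
    """Return True for a (CREATE|UPDATE)_(COMPLETE|FAILED) status."""
    return bool(TERMINAL_EVENT.match(status))

def poll_events(fetch_events, stack_name, marker=None):
    """Print each new event in log style; return the terminal status.

    fetch_events(marker) stands in for heatclient's events.list(); it
    returns events newer than `marker`, oldest first. Only a terminal
    event on the stack itself (not on one of its resources) stops the
    loop. A real implementation would sleep between iterations.
    """
    while True:
        for event in fetch_events(marker):
            print('%s [%s]: %s' % (event['time'], event['resource'],
                                   event['status']))
            marker = event['id']
            if event['resource'] == stack_name and is_terminal(event['status']):
                return event['status']
```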
This patch updates the 'overcloud deploy' command so that
updates (re-running deploy on an existing stack) generate
a unique DeployIdentifier parameter. By passing a unique
DeployIdentifier parameter into the overcloud stack we
force all puppet config tasks (which are idempotent
and safe to execute multiple times) to run again.
We've had several issues with puppet not re-running on
a variety of config-related changes, and have additionally received
user feedback that not re-running all puppet deployments on
an update is confusing.
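A minimal sketch of how a unique DeployIdentifier might be generated and merged into the stack parameters. The function name and timestamp source are illustrative, not the client's actual code:

```python
import time

def make_deploy_parameters(existing_parameters):
    """Add a unique DeployIdentifier so idempotent puppet steps re-run.

    A timestamp is one simple source of uniqueness: each re-run of
    deploy produces a different value, which changes the stack input
    and forces the puppet config tasks to execute again.
    """
    parameters = dict(existing_parameters)
    parameters['DeployIdentifier'] = int(time.time())
    return parameters
```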
Some time ago ironic-discoverd used to require maintenance mode.
This was changed back in Kilo, and was deprecated in Liberty.
Also cleans up introspection code a bit.
It's generally recommended that base Exceptions not be raised,
because this makes it impossible to handle exceptions in a more
granular way. except Exception: will catch all exceptions, not
just the one you might care about.
This replaces most instances of this pattern in tripleoclient, with
the exception of the one in overcloud_image.py because I have a
separate change open that already fixes it.
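For illustration, here is the kind of granular pattern the change moves toward. The exception class and function are hypothetical examples, not code from tripleoclient:

```python
class ImageNotFoundError(RuntimeError):
    """Hypothetical specific exception, instead of a bare Exception."""

def find_image(images, name):
    """Look up an image by name, raising a specific exception type."""
    if name not in images:
        raise ImageNotFoundError('No image named %r' % name)
    return images[name]

# Callers can now handle this one failure mode and let others propagate,
# instead of a blanket `except Exception:` that swallows everything.
try:
    find_image({}, 'overcloud-full')
except ImageNotFoundError as exc:
    recovered = str(exc)
```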
Remove parameters that have sane defaults in the Heat templates.
In order to change these, one should use an environment file rather
than passing args to the deploy command. These arguments should now
be considered deprecated in favor of the environment file.
Use a property instead of a function. This is more in line with how the
clients are referenced with openstackclient, thereby making it easier
to switch to the Ironic OSC plugin in the future.
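As a sketch, a property-based client accessor might look like this. The class and attribute names are hypothetical:

```python
class ClientManager(object):
    """Exposes a client as a property, in line with how
    openstackclient references its clients."""

    def __init__(self):
        self._baremetal = None

    @property
    def baremetal(self):
        # Lazily construct and cache the client on first access;
        # callers write manager.baremetal, not manager.baremetal().
        if self._baremetal is None:
            self._baremetal = object()  # stand-in for the real client
        return self._baremetal
```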
On an openstack overcloud deploy which results in a stack-update,
implicitly include overcloud-resource-registry-puppet.yaml if the user
specifies any other custom environments. Do not include
overcloud-resource-registry-puppet.yaml if the user specifies no extra
environments.
This solution is an improvement over the current situation, but it
is not a complete fix.
It does the right thing when:
- the user does an openstack overcloud with no environments specified
- the user does an openstack overcloud with all custom environments
It will sometimes do the wrong thing when:
- the user specifies a partial list of custom environments (like -e
Running openstack commands with python-tripleoclient as the
root user is not supported and should not be allowed. Added a
check to the openstack undercloud install command that exits
if the user is root (EUID=0).
Each command can be disabled for root by adding
utils.ensure_run_as_normal_user() to its body.
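A sketch of such a check. The real helper exits the process; this version raises RuntimeError and takes an euid parameter purely so the sketch is testable:

```python
import os

def ensure_run_as_normal_user(euid=None):
    """Refuse to run as root (EUID 0).

    The euid parameter exists only for testability; the real check
    simply inspects os.geteuid() and exits with an error message.
    """
    if euid is None:
        euid = os.geteuid()
    if euid == 0:
        raise RuntimeError('This command cannot be run as root. '
                           'Please run it as a normal user.')
```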
Passing the caught exception object to logging.exception()
is redundant and misuses the logging.exception function. The
argument to exception() is supposed to be a string describing the
error that occurred, not the exception itself. It already pulls
the exception details from the exception handler context and
includes them in the output.
This changes the instances of the above pattern to pass a
descriptive message instead,
which results in the correct output of something like:
Traceback (most recent call last):
File "./test.py", line 23, in <module>
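For illustration, correct usage looks roughly like this (the function and message are made up). Note that no exception object is passed to exception(); the handler context supplies the traceback automatically:

```python
import logging

logging.basicConfig()
log = logging.getLogger('demo')

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Pass a descriptive message, not the exception object:
        # logging.exception() appends the traceback on its own.
        log.exception('Division failed')
        return None
```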
There is a current effort in tripleo-heat-templates to manage keystone
resources directly via puppet. The effort is split into two patchsets,
the first of which replaces the current keystone.initialize().
This review is necessary in order to make CI pass the upstream jobs.
All of the keystone function calls will be removed at that point.
The current implementation of the password generation function is
all or nothing. If the file exists, then it uses that file as the
list of passwords, otherwise it generates an entirely new file.
This means that if we add a new service that requires a password to
be generated and a user upgrades with a password file generated by
an older version, we will not generate a new password for the new
service and pandemonium will result.
To address the problem, read in any existing passwords from the file,
but loop through the names to make sure all of the expected passwords
are set. If any are not, generate them. Behavior in the case of
no password file remains unchanged.
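The merge behavior described above can be sketched like this. The NAME=value file format and the uuid-based generator are assumptions, not the client's actual implementation:

```python
import os
import uuid

def generate_passwords(path, names):
    """Merge existing passwords with newly generated ones.

    Reads any existing "NAME=value" lines from `path`, then fills in a
    password for every expected name that is missing, so an upgrade
    with an older password file still gets passwords for new services.
    With no existing file, every name is generated, as before.
    """
    passwords = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                if '=' in line:
                    name, _, value = line.strip().partition('=')
                    passwords[name] = value
    for name in names:
        if name not in passwords:
            passwords[name] = uuid.uuid4().hex
    with open(path, 'w') as f:
        for name in sorted(passwords):
            f.write('%s=%s\n' % (name, passwords[name]))
    return passwords
```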
Python provides the platform module that can do this for us, and
it will simplify adding support for other distros in the future if
we aren't dependent on a redhat-specific file.
Also changes the exception raised from Exception to RuntimeError
because raising bare Exceptions is frowned upon.
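A sketch of a platform-module-based check that raises RuntimeError. The exact supported-platform logic is assumed, and the system parameter exists only to make the sketch testable:

```python
import platform

def check_supported_platform(system=None):
    """Raise RuntimeError (not a bare Exception) when unsupported.

    Uses the stdlib platform module instead of reading a
    redhat-specific file such as /etc/redhat-release.
    """
    if system is None:
        system = platform.system()
    if system != 'Linux':
        raise RuntimeError('Unsupported platform: %s' % system)
    return system
```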
Moves create_overcloudrc and create_tempest_deployer_input to utils.py
because they didn't require the instance information from
DeployOvercloud, and now can be better tested independently.
3 separate places had hardcoded values for the t-h-t directory. Many
values are/will be shared, this patch consolidates them to avoid having
them diverge in the future.
device-mapper-multipath needs to be installed for managing multipath
devices in scenarios where failover or load balancing of block devices
is required.
This change installs the package by default in the overcloud-full image
for convenience. The multipathd daemon is not started or enabled, nor is
any multipath configuration exposed. Those exercises are left to be
done via the ExtraConfig resources in tripleo-heat-templates.
fedora-user is an optional image that is only used for testing the
deployed overcloud. We shouldn't force everyone to download it as
part of the --all build when in many cases it won't be needed. The
option to download it is left in the client though so it can still
be easily retrieved when necessary.
I consistently have problems building images on an 8 GB VM with
the undercloud already installed (which is the documented flow).
At 4 or 12 GB it works fine, but 8 seems to fall in a sweet spot
where the tmpfs usage of dib causes OOM problems. 4 GB works
because dib doesn't use tmpfs by default with that little memory,
and 12 GB seems to be sufficient to avoid running out of memory
even with tmpfs.
This change passes the --min-tmpfs 5 parameter to the image build
for the overcloud-full image (the largest image by far) so it will
not attempt to build in tmpfs if there is less than 12 GB of RAM.
5 instead of 6 because due to rounding 6 would result in tmpfs not
being used on 12 GB VMs either (similar to how it is not used on a
4 GB VM even though the default is 2).
Note that tmpfs by default uses half of total memory, which is why
all of the min-tmpfs numbers are half of the expected memory on the
VM sizes discussed above.
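The arithmetic above can be modeled with a tiny helper. This is a simplification of dib's behavior; real VMs report slightly less than their nominal memory, which is where the rounding comes in:

```python
def uses_tmpfs(reported_ram_gb, min_tmpfs_gb):
    """Model dib's decision: tmpfs gets half of RAM by default, and
    is skipped when that half would fall below --min-tmpfs.

    Pass the RAM the VM actually reports (slightly under its nominal
    size), which is why --min-tmpfs 6 would also disable tmpfs on a
    nominal 12 GB VM.
    """
    return reported_ram_gb / 2.0 >= min_tmpfs_gb
```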
Because the existing heat environment is used (instead of passing the
whole environment with each stack-update command), it's possible
that some hooks are left over in the existing environment.
As a short-term solution, we make sure to unset hooks (which are
currently set only by the package update command).
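A sketch of unsetting leftover hooks from an environment dict. This assumes hooks appear as 'hooks' keys nested under resource_registry, which may not match the real environment layout:

```python
def unset_hooks(environment):
    """Strip any leftover 'hooks' entries from a heat environment.

    Walks the resource_registry section and removes every 'hooks'
    key, so a later stack-update does not inherit breakpoints set
    by a previous package update.
    """
    def strip(node):
        if isinstance(node, dict):
            node.pop('hooks', None)
            for value in node.values():
                strip(value)
    strip(environment.get('resource_registry', {}))
    return environment
```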