--node-timeout - Maximum timeout for node introspection
--max-retries - Maximum introspection retries
--retry-timeout - Maximum timeout between introspection retries
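For illustration, a possible invocation combining the new options (assuming
they are passed to the existing node introspection command; the values are
arbitrary):
$ openstack overcloud node introspect --all-manageable \
    --node-timeout 1200 --max-retries 2 --retry-timeout 120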
Change-Id: I9c245dbc258c9714bb5a581d6d4d23b42cf53198
The __future__ module [1] was used in this context to ensure compatibility
between Python 2 and Python 3.
We previously dropped support for Python 2.7 and now only support
Python 3, so we no longer need this module or the imports
listed below.
Change-Id: I19fbdebe406575d2567f98a322ff68e6e992fac7
This change will set the verbosity consistently whenever a playbook is
executed via the client.
All tests have been updated to ensure that the verbosity setting is always
defined when a playbook is executed.
Change-Id: I35b10d48344c8b7f71186bc529a300f75d7b8d63
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the provide workflows
by calling the new provide playbook.
Story: 2007212
Task: 38443
Task: 38446
Depends-On: Ic1cf39a29826b3828967e23a056884d867431062
Change-Id: If0184d3db5e49f6c3d302674ce7587ba27050a91
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the clean_nodes and clean_manageable_nodes
workflows by calling the new cleaning playbook.
Story: 2007212
Task: 38451
Task: 38452
Depends-On: I0e3f50ff2bdc16b6a148977b39a12023d5b30b3e
Change-Id: I0b1981a82b1ae5d2b52a74250b623218a0cb6c0d
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the introspection workflows
by calling the new introspection playbook.
Story: 2007212
Task: 38444
Task: 38445
Depends-On: I7ba8a31b71ed5bc9bdd944769ea472b537c83a98
Change-Id: I48bedfe9a705f5ff8e0eed1f852b77938d53c763
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the configure functions
by calling the required functions directly.
Story: 2007212
Task: 38447
Task: 38448
Change-Id: I65eac9f813330d066f8ca6cbaae6c972d43d0246
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the reset_bios_configuration and
apply_bios_settings workflows by calling the new bios playbooks.
Story: 2007212
Task: 38455
Task: 38456
Task: 38453
Task: 38454
Depends-On: I444ead2df7621fd11e336306e817dc1d28a04e72
Change-Id: I454c5330722f79075a3d9cf5fdf8bff80f7cc5ad
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the create_raid_configuration
workflows by calling the new cleaning playbook.
Story: 2007212
Task: 38449
Depends-On: I51b1977b53995c93eb661f9ffa6d7cbad033ab75
Change-Id: If2f894316c3369432b16e07d68beee3a7d174b6a
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the undeploy_roles function
by calling the required functions directly.
Story: 2007212
Task: 38458
Change-Id: Ief63d6b8af7ff85ae935b9a9b7cf9939958c7f27
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the register_or_update workflow
by calling the required functions directly.
Story: 2007212
Task: 38442
Closes-Bug: #1866637
Change-Id: Ie85adc64cd4fcec469d6979a424d8f01b00f34f2
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the validate_nodes workflow
by calling the required functions directly.
Story: 2007212
Task: 38440
Change-Id: Ic340fad0d7305188cf3bf33a73d36cd20fca4f6b
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes all of mistral from the expand_roles function
by calling the required functions directly.
Story: 2007212
Task: 38782
Change-Id: I07c1249571e1fe989d57427412c1aaa6ea1ff165
Signed-off-by: Kevin Carter <kecarter@redhat.com>
This change removes mistral from the discover_and_enroll workflow by managing the
context and simply calling the required methods. This speeds up the
general execution process and ensures better reliability, as we'll no
longer require mistral or zaqar to execute this workflow.
Story: 2007212
Task: 38450
Change-Id: Ic13a3f70383afdc23abd29b8db230f01e0bebf55
Signed-off-by: Kevin Carter <kecarter@redhat.com>
- print a table of baremetal nodes which are about to be unprovisioned
- prompt user to confirm unprovision, overridable with --yes argument
- add an --all argument which will unprovision all nodes in the yaml,
  instead of just the provisioned:false nodes (see the example below).
- add the missing plan argument to the provision deploy_roles to
support multiple overcloud stacks
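For illustration, the options above might be combined like this (the
unprovision command name and the yaml filename are assumed here):
$ openstack overcloud node unprovision --all --yes overcloud-baremetal-deploy.yaml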
Blueprint: nova-less-undercloud
Change-Id: I25c5311ec3599e0de9bd87038366ed8c7d7896a2
This is a companion command to "openstack overcloud node provision"
which is required for scale-down and should be run after "openstack
overcloud deploy".
It will undeploy any instance which has provisioned=False in the
supplied roles yaml; see this change [1] for the provisioned format.
[1] https://review.opendev.org/#/c/672605/
Change-Id: Ic7705c290220df7c55656fa080ff74315fb2030f
Blueprint: nova-less-deploy
Depends-On: https://review.opendev.org/#/c/672848/
The default concurrency of 20 may be too high for small underclouds
(especially CI environments), so this change adds a --concurrency
argument so callers can control the maximum number of nodes
to introspect concurrently.
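For example, a small environment could lower the concurrency like this
(assuming the option is added to the introspect command; the value is
arbitrary):
$ openstack overcloud node introspect --all-manageable --concurrency 5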
Depends-On: https://review.opendev.org/#/c/672389/
Change-Id: I9faee9ab133e34466a79aa1176a16106bda1f15d
Closes-Bug: #1836976
Currently, when something fails during introspection, we get this:
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud node introspect --provide 1b646cb5-39e3-4ef9-a440-9a7c4ee0c6e1
Waiting for introspection to finish...
Waiting for messages on queue 'tripleo' with no timeout.
Retrying 1 nodes that failed introspection. Attempt 1 of 3
Retrying 1 nodes that failed introspection. Attempt 2 of 3
Retrying 1 nodes that failed introspection. Attempt 3 of 3
Retry limit reached with 1 nodes still failing introspection
Exception occured while running the command
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 30, in run
super(Command, self).run(parsed_args)
File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 41, in run
return super(Command, self).run(parsed_args)
File "/usr/lib/python3.6/site-packages/cliff/command.py", line 184, in run
return_code = self.take_action(parsed_args) or 0
File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_node.py", line 208, in take_action
run_validations=parsed_args.run_validations
File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py", line 159, in introspect
.join(msg for msg in payload['message'] if msg))
tripleoclient.exceptions.IntrospectionError: Introspection completed with errors:
R
e
t
r
y
l
i
m
i
t
.....
With this fix we correctly get a sane output:
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud node introspect --provide 1b646cb5-39e3-4ef9-a440-9a7c4ee0c6e1
Waiting for introspection to finish...
Waiting for messages on queue 'tripleo' with no timeout.
Retrying 1 nodes that failed introspection. Attempt 1 of 3
Retrying 1 nodes that failed introspection. Attempt 2 of 3
Retrying 1 nodes that failed introspection. Attempt 3 of 3
Retry limit reached with 1 nodes still failing introspection
Exception occured while running the command
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 30, in run
super(Command, self).run(parsed_args)
File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 41, in run
return super(Command, self).run(parsed_args)
File "/usr/lib/python3.6/site-packages/cliff/command.py", line 184, in run
return_code = self.take_action(parsed_args) or 0
File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_node.py", line 208, in take_action
run_validations=parsed_args.run_validations
File "/usr/lib/python3.6/site-packages/tripleoclient/workflows/baremetal.py", line 159, in introspect
.format(payload['message']))
tripleoclient.exceptions.IntrospectionError: Introspection completed with errors: Retry limit reached with 1 nodes still failing introspection
Introspection completed with errors: Retry limit reached with 1 nodes still failing introspection
Change-Id: I0f5bd1a2eb511041e6b93eacc71c964648bdf7ef
This commit introduces two new commands to configure BIOS:
1) openstack overcloud node bios configure --configuration <..> [--all-manageable|uuid1,uuid2,..]
2) openstack overcloud node bios reset [--all-manageable|uuid1,uuid2,..]
The first command applies the BIOS configuration to the given nodes or to all manageable nodes;
the second command resets the BIOS configuration to factory defaults on the given nodes or on all manageable nodes.
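For illustration, a possible configure invocation (the inline JSON assumes
an ironic-style BIOS settings document with a 'settings' list of name/value
pairs; the setting itself is arbitrary):
$ openstack overcloud node bios configure --all-manageable \
    --configuration '{"settings": [{"name": "LogicalProc", "value": "Enabled"}]}'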
Depends-On: Icdaa734a44c55c830c0109d25488b68838e8736f
Change-Id: I809883e0e54a66bca58a0e45c035c69bcc39dcd1
This is a manual replacement for the disabled automated cleaning.
Depends-On: I909aad89bb18bb416e9749395970617d45b247ee
Change-Id: I8e7f9b1122ff697adb4538233242c7b48420b17b
Previously there was 'openstack baremetal instackenv validate' to
quickly validate an environment file.
Much of that validation is now done in the validate_nodes mistral
workflow. Add a mode to cease operations after validating the
environment file.
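A hypothetical invocation of such a mode (the flag name below is
illustrative only and not taken from this change):
$ openstack overcloud node import --validate-only instackenv.json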
Change-Id: Ie2bfc79b45f4652285718be34a5e07ce5777c60d
This will allow the cli and tripleo UI to share a common queue,
which will help with scripting the frontend behaviour or tracking
the progress of the cli from the frontend.
Related-Bug: #1603345
Change-Id: I5849983eee82b5a56cae1bf38a3c2694b6b291db
The workflow now sends the following success message, so when
tripleoclient prints a similar message it is duplicated:
"Successfully introspected all nodes."
Closes-Bug: #1716855
Change-Id: Ia4c88459a1a95f7f4e5a57a57a9ca8265c6e01f7
As of I892f03b0a703ce4d99c1e0f8caaae5f66dcb0d9b the workflow sends the
introspection status as each node completes. This means that we no
longer need to print out the information at the end as it just
duplicates it.
Closes-Bug: #1711660
Depends-On: I892f03b0a703ce4d99c1e0f8caaae5f66dcb0d9b
Change-Id: Ifc1cf0b959cf7da494d6582070ab0241fcb5e9ac
This is the frontend for the discover_and_enroll_nodes workflow.
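A possible invocation of the resulting command (the command form and both
options are assumed here for illustration):
$ openstack overcloud node discover --range 192.168.24.0/24 \
    --credentials admin:password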
Change-Id: I822ec89add3742020262c091e79ff94f92ef92e7
Depends-On: I158f0b8f5251d9d94e7e57b3fe24362316d26599
Implements: blueprint node-discovery-by-range
There is a potential race condition if the websocket is not set
up before the call is made to Mistral. The websocket setup can take
enough time that, if an error is thrown immediately from a workflow,
the returned message may be consumed before the websocket client is
even set up.
This also has the benefit of keeping the call to Mistral from being
made at all if there is something wrong with the websocket setup.
Change-Id: Ib331037a7f5f4e59862d2b9646a83acdb18313eb
The 'bulk introspection start' and 'overcloud node introspect' commands
can now enable the additional validations that are called via
workflows.
Commands now accept a '--run-validations' switch.
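For example, combined with the existing introspection options (the
--all-manageable flag is assumed here):
$ openstack overcloud node introspect --all-manageable --run-validations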
Change-Id: Ifcb30b9ceb15d82fffcda24cb98f2d03f4ded354
Partial-Bug: #1638697
Depends-On: I439361ae20c4e302b83870cdc06a5baa90ea683c
This change introduces a new command to build RAID:
$ openstack overcloud raid create --node UUID1 [--node UUID2] configuration
More variants of this command will come later.
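For illustration, a concrete configuration argument (assuming the inline
JSON follows ironic's target RAID configuration schema; the values are
arbitrary, and a file path may also be accepted):
$ openstack overcloud raid create --node UUID1 \
    '{"logical_disks": [{"size_gb": 100, "raid_level": "1", "is_root_volume": true}]}'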
Part of blueprint raid-workflow
Change-Id: Ic4a547939213e5305cb7ca4f4458ef9eb577437b
It turns out that "Started Mistral Workflow. Execution ID:
58b6090b-7afb-4259-979c-f7a42b6de144" is not a particularly useful
message.
Include the Workflow name in the message to give the user more context.
Then it would be something like "Starting Mistral Workflow
tripleo.baremetal.register_or_update. Execution ID: ..." which is far
more self-explanatory.
This change also adds the message to two baremetal workflow calls that
previously didn't include it.
Change-Id: I63e97ae88681dbf5aa29cb43f6d944061df6b7e0
Because 'provide' calls out to other workflows, the error message that
surfaces back can be quite difficult to parse. This change attempts to make
the message more readable.
Change-Id: Iae3f29e5da25177fdee45752410f92b064c874c3
Depends-On: I8da43e4ff76488fc5cdb7bd2efa0cf9c39e7bb5e
Closes-Bug: #1620949
The legacy 'baremetal configure boot' command calls the 'configure'
workflow individually for each node, causing a success/failure message
to be printed each time. The configured node UUID(s) are included in
the execution output, so printing a specific message for each node is
possible. However, considering that with many nodes this would make it
easy for failure messages to go unnoticed, and that the command prior
to the Mistral migration had no output, this patch instead removes the
message when nothing went wrong.
Change-Id: I3f1df8046ad92328fe0ee47fa073c58ca9ad5f20
Closes-Bug: #1619717
Calls to the Mistral workflows to configure boot options and the root
device.
Change-Id: Ifd868fcdd6ed2d54b40c2e1861558d0233731be5
Depends-On: I5ba0a3710012c44822dd3b8e69662bbef04d3787
Closes-Bug: #1595205
The message was always displayed even after successfully introspecting
nodes.
Change-Id: Ib97e7f2d2b233c2e7b91576141f920c32fc2e1cf
Closes-Bug: #1611284
It seems that if there are no nodes to introspect, the value we get
is an empty list instead of an empty dict, which is not what we
expect. This change fixes that.
Change-Id: I72edd4633d32119eb5f39fb45f8e40ee715eb888
Closes-Bug: #1604371