Merge "Documentation reflects current state of VF"

Zuul 2021-05-06 07:15:42 +00:00 committed by Gerrit Code Review
commit eb89777a00
8 changed files with 34 additions and 526 deletions

View File

@@ -380,7 +380,7 @@ Introspect Nodes
Once the undercloud is installed, you can run the
``pre-introspection`` validations::
-    openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-introspection"]}'
+    openstack tripleo validator run --group pre-introspection
Then verify the results as described in :ref:`running_validation_group`.
@@ -470,7 +470,7 @@ Deploy the Overcloud
Before you start the deployment, you may want to run the
``pre-deployment`` validations::
-    openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-deployment"]}'
+    openstack tripleo validator run --group pre-deployment
Then verify the results as described in :ref:`running_validation_group`.
@@ -615,7 +615,7 @@ Post-Deployment
After the deployment finishes, you can run the ``post-deployment``
validations::
-    openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["post-deployment"]}'
+    openstack tripleo validator run --group post-deployment
Then verify the results as described in :ref:`running_validation_group`.
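With the validator CLI, past validation runs can also be reviewed directly;
a sketch, assuming the ``show history`` subcommand available in recent
releases:
.. code-block:: bash
openstack tripleo validator show history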

View File

@@ -27,7 +27,6 @@ Documentation on additional features for |project|.
ipsec
keystone_security_compliance
lvmfilter
-   mistral-api
multiple_overclouds
network_isolation
network_isolation_virt

View File

@@ -1,363 +0,0 @@
Mistral API
===========
.. warning::
Mistral on the Undercloud was deprecated in the Ussuri cycle and will be
removed soon. Consider using Ansible playbooks instead of Mistral workflows.
This page applies to TripleO Train and earlier releases.
The public API for TripleO uses the OpenStack Workflow service, `Mistral`_ to
provide its interface. This allows external systems to consume and use the same
Workflows used by python-tripleoclient and tripleo-ui.
Working with Mistral
--------------------
TripleO functionality can be accessed via Mistral Workflows and Actions.
Workflows define a series of steps and typically use a number of actions.
There is a set of actions which are intended to be used directly. When actions
are called directly, Mistral executes them synchronously, which is quicker for
simple operations.
Mistral can be used via the CLI or the Python bindings, both provided by
`python-mistralclient`_, or via the `REST API`_ directly. This guide uses the
Mistral CLI for brevity.
When using the CLI, all of the TripleO workflows can be viewed with the
command ``openstack workflow list``. All of the workflows provided by TripleO
will have a name starting with ``tripleo.``
.. code-block:: console
$ openstack workflow list
+--------------------------------------+-----------------------------------------+------------------------------+
| ID | Name | Input |
+--------------------------------------+-----------------------------------------+------------------------------+
| 1ae040b6-d330-4181-acb9-8638dc486b79 | tripleo.baremetal.v1.set_node_state | node_uuid, state_action, ... |
| 2ef20a58-b380-4b6b-a6cd-270352d0f3d2 | tripleo.deployment.v1.deploy_on_servers | server_name, config_name,... |
+--------------------------------------+-----------------------------------------+------------------------------+
To view an individual workflow in more detail and see the inputs it accepts,
use the ``openstack workflow show`` command. This command also shows the
default values for input parameters. If no default is given, the parameter is
required.
.. code-block:: console
$ openstack workflow show tripleo.plan_management.v1.create_default_deployment_plan
+------------+-----------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------+
| ID | fa8256ec-b585-476f-a83e-e800beb26684 |
| Name | tripleo.plan_management.v1.create_default_deployment_plan |
| Project ID | 65c5259b7a96436f898fd518815c42c1 |
| Tags | <none> |
| Input | container, queue_name=tripleo |
| Created at | 2016-08-19 10:07:10 |
| Updated at | None |
+------------+-----------------------------------------------------------+
This workflow can then be executed with the ``openstack workflow execution
create`` command.
.. code-block:: console
$ openstack workflow execution create tripleo.plan_management.v1.create_default_deployment_plan \
'{"container": "my_cloud"}'
+-------------------+-----------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------+
| ID | 824a8cf7-3306-4ef2-8efd-a2715dd0dbce |
| Workflow ID | fa8256ec-b585-476f-a83e-e800beb26684 |
| Workflow name | tripleo.plan_management.v1.create_default_deployment_plan |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2016-08-22 12:33:35.493135 |
| Updated at | 2016-08-22 12:33:35.495764 |
+-------------------+-----------------------------------------------------------+
After a workflow execution is created, it is scheduled by Mistral and
executed asynchronously. Mistral can either be polled until it finishes, or
you can subscribe to the `Zaqar`_ queue for messages from the running
workflow. By default the TripleO workflows send messages to a Zaqar queue
named ``tripleo``; all of the workflows accept a ``queue_name`` parameter
that allows a user-defined queue name to be used. Using different queue names
can be useful if you plan to execute multiple workflows and want the messages
to be handled individually.
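For example, a workflow execution can be given its own queue and then polled
until it completes. A minimal sketch, where ``my_queue`` is an arbitrary
example name and ``$EXECUTION_ID`` is the ID returned by the first command:
.. code-block:: console
$ openstack workflow execution create tripleo.plan_management.v1.create_default_deployment_plan \
'{"container": "my_cloud", "queue_name": "my_queue"}'
$ openstack workflow execution show $EXECUTION_ID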
Actions can be used in a similar way to workflows, but the CLI commands are
``openstack action definition list``, ``openstack action definition show``
and ``openstack action execution run``.
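The same discovery pattern used for workflows applies to actions; for
example:
.. code-block:: console
$ openstack action definition list | grep tripleo.
$ openstack action definition show tripleo.plan.create_container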
`API reference documentation`_ is available for all TripleO Workflows.
Creating a Deployment Plan
--------------------------
Deployment plans consist of a Swift container and a Mistral Environment. The
TripleO Heat Templates are stored in Swift and then user defined parameters are
stored in the Mistral environment.
Using the default plan
^^^^^^^^^^^^^^^^^^^^^^
When the undercloud is installed, it will create a default plan with the name
``overcloud``. To create a new plan from the packaged version of
tripleo-heat-templates on the undercloud use the workflow
``tripleo.plan_management.v1.create_default_deployment_plan``. This workflow
accepts a name which will be used for the Swift container and Mistral
environment.
The following command creates a plan called ``my_cloud``.
.. code-block:: console
$ openstack workflow execution create tripleo.plan_management.v1.create_default_deployment_plan \
'{"container": "my_cloud"}'
+-------------------+-----------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------+
| ID | dc4800ef-8d0a-436e-9564-a7ee81ba93d5 |
| Workflow ID | fa8256ec-b585-476f-a83e-e800beb26684 |
| Workflow name | tripleo.plan_management.v1.create_default_deployment_plan |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2016-08-23 10:06:45.372767 |
| Updated at | 2016-08-23 10:06:45.376122 |
+-------------------+-----------------------------------------------------------+
.. note::
When the packages on the undercloud are updated with yum, the TripleO Heat
Templates are updated in `/usr/share/..`, but any previously created plans
are not updated automatically. At the moment this is a manual process.
Using custom templates
^^^^^^^^^^^^^^^^^^^^^^
Manually creating a plan with custom templates is a three stage process. Each
step must use the same container name; we use ``my_cloud`` here, but any name
can be used as long as it is consistent across the steps. This name will be
the plan name.
1. Create the Swift container.
.. code-block:: bash
openstack action execution run tripleo.plan.create_container \
'{"container":"my_cloud"}'
.. note::
Creating a Swift container directly isn't sufficient, as this Mistral
action also sets metadata on the container and may include further
steps in the future.
2. Upload the files to Swift.
.. code-block:: bash
swift upload my_cloud path/to/tripleo/templates
3. Trigger the plan creation workflow, which will create the Mistral
environment for the uploaded templates, do some initial template processing
and generate the passwords.
.. code-block:: bash
openstack workflow execution create tripleo.plan_management.v1.create_deployment_plan \
'{"container":"my_cloud"}'
Working with Bare Metal Nodes
-----------------------------
Some functionality for dealing with bare metal nodes is provided by the
``tripleo.baremetal`` workflows.
Register Nodes
^^^^^^^^^^^^^^
Bare metal nodes can be registered with Ironic via Mistral. The input for
this workflow is a bit larger, so this time we will store it in a file and
pass it in rather than writing it inline.
.. code-block:: bash
$ cat nodes.json
{
"remove": false,
"ramdisk_name": "bm-deploy-ramdisk",
"kernel_name": "bm-deploy-kernel",
"nodes_json": [
{
"pm_password": "$RSA_PRIVATE_KEY",
"pm_type": "pxe_ssh",
"pm_addr": "192.168.122.1",
"mac": [
"00:8f:61:0d:6a:e1"
],
"memory": "8192",
"disk": "40",
"arch": "x86_64",
"cpu": "4",
"pm_user": "root"
}
]
}
* If ``remove`` is set to true, any nodes that are not passed to the workflow
will be removed.
* ``ramdisk_name`` and ``kernel_name`` are the Glance names for the kernel and
ramdisk to use for the nodes.
* If ``instance_boot_option`` is set, it defines whether instances boot from
the local hard drive (``local``) or over the network (``netboot``).
* The format of the nodes_json is documented in :ref:`instackenv`.
.. code-block:: bash
$ openstack workflow execution create tripleo.baremetal.v1.register_or_update \
nodes.json
The result of this workflow can be seen with the following command.
.. code-block:: bash
$ mistral execution-get-output $EXECUTION_ID
{
"status": "SUCCESS",
"new_nodes": [],
"message": "Nodes set to managed.",
"__task_execution": {
"id": "001892c5-4197-4c04-af74-aff95f6d584f",
"name": "send_message"
},
"registered_nodes": [
{
"uuid": "93feecfb-8a4d-418c-9f2c-5ef8db7aff2e",
...
},
]
}
This information is available either directly, as shown above, or via the
Zaqar queue. The ``registered_nodes`` property will contain each of the
registered nodes with all of their properties from Ironic, including the
UUID, which is useful for introspection.
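For example, the node UUIDs can be extracted from the workflow output for
later use (a sketch, assuming ``jq`` is installed):
.. code-block:: bash
$ mistral execution-get-output $EXECUTION_ID | jq -r '.registered_nodes[].uuid'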
Introspect Nodes
^^^^^^^^^^^^^^^^
To introspect the nodes, we need to either use the Ironic UUIDs returned by
the register_or_update workflow or retrieve them directly from Ironic. Those
UUIDs can then be passed to the introspection workflow. The workflow expects
nodes to be in the "manageable" state.
.. code-block:: bash
$ openstack workflow execution create tripleo.baremetal.v1.introspect \
'{"nodes_uuids": ["UUID1", "UUID2"]}'
.. _cleaning_workflow:
Cleaning Nodes
^^^^^^^^^^^^^^
It is recommended to clean previous information from all disks on the bare
metal nodes before new deployments. As TripleO disables automated cleaning, it
has to be done manually via the ``manual_clean`` workflow. A node has to be in
the ``manageable`` state for it to work.
.. note::
See `Ironic cleaning documentation
<https://docs.openstack.org/ironic/deploy/cleaning.html>`_ for
more details.
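If a node is not currently in the ``manageable`` state, it can be moved
there first, for example with the Ironic CLI:
.. code-block:: bash
$ openstack baremetal node manage UUID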
To remove partitions from all disks on a given node, use the following
command:
.. code-block:: bash
$ openstack workflow execution create tripleo.baremetal.v1.manual_cleaning \
'{"node_uuid": "UUID", "clean_steps": [{"step": "erase_devices_metadata", "interface": "deploy"}]}'
To remove all data from all disks (either by ATA secure erase or by shredding
them), use the following command:
.. code-block:: bash
$ openstack workflow execution create tripleo.baremetal.v1.manual_cleaning \
'{"node_uuid": "UUID", "clean_steps": [{"step": "erase_devices", "interface": "deploy"}]}'
The node state is set back to ``manageable`` after successful cleaning and to
``clean failed`` after a failure. Inspect the node's ``last_error`` field for
the cause of the failure.
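Both fields can be checked with the Ironic CLI; for example:
.. code-block:: bash
$ openstack baremetal node show UUID -f value -c provision_state -c last_error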
.. warning::
Shredding disks can take a very long time, up to several hours.
Provide Nodes
^^^^^^^^^^^^^
After the nodes have been introspected, they will still be in the
``manageable`` state. To make them available for a deployment, use the
provide workflow, which has the same interface as introspection.
.. code-block:: bash
$ openstack workflow execution create tripleo.baremetal.v1.provide \
'{"nodes_uuids": ["UUID1", "UUID2"]}'
Parameters
----------
A number of parameters will need to be provided for a deployment to be
successful. These required parameters will depend on the Heat templates that
are being used. Parameters can be set with the Mistral Action
``tripleo.parameters.update``.
.. note::
This action merges the passed parameters with those already set on the
plan. To start from a clean set, first use ``tripleo.parameters.reset`` to
remove any old parameters.
In the following example we set the ``ComputeCount`` parameter to ``2`` on the
``my_cloud`` plan. This only sets one parameter, but any number can be provided.
.. code-block:: bash
$ openstack action execution run tripleo.parameters.update \
'{"container":"my_cloud", "parameters":{"ComputeCount":2}}'
Deployment
----------
After the plan has been configured it should be ready to be deployed.
.. code-block:: bash
$ openstack workflow execution create tripleo.deployment.v1.deploy_plan \
'{"container": "my_cloud"}'
Once the deployment is triggered, the templates will be processed and sent to
Heat. This workflow will complete when the Heat action has started, or if there
are any errors.
Deployment progress can be tracked via the Heat API. It is possible to either
follow the Heat events or simply wait for the Heat stack status to change.
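A sketch of both approaches, assuming the Heat stack name matches the plan
name (``my_cloud`` in this guide):
.. code-block:: bash
$ openstack stack event list my_cloud --follow
$ openstack stack show my_cloud -f value -c stack_status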
.. _Mistral: https://docs.openstack.org/mistral/
.. _python-mistralclient: https://docs.openstack.org/mistral/guides/mistralclient_guide.html
.. _REST API: https://docs.openstack.org/mistral/developer/webapi/index.html
.. _Zaqar: https://docs.openstack.org/zaqar/
.. _API Reference Documentation: https://docs.openstack.org/tripleo-common/reference/index.html

View File

@@ -57,7 +57,7 @@ Updating Undercloud Components
.. code-block:: bash
-    mistral execution-get-output $(openstack workflow execution create -f value -c ID tripleo.validations.v1.run_groups '{"group_names": ["pre-upgrade"]}')
+    openstack tripleo validator run --group pre-upgrade
.. admonition:: Newton to Ocata
:class: ntoo

View File

@@ -32,6 +32,11 @@ For example you can run this as:
$ openstack tripleo validator run --validation check-ftype,512e
+.. _running_validation_group:
+Running validation groups
+-------------------------
+``--group``: This option allows you to run a specific group of validations.
+If more than one group is required, separate the group names with commas. The
+default value for this option is ``['pre-deployment']``.
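For example, to run two groups in a single invocation:
.. code-block:: bash
$ openstack tripleo validator run --group pre-deployment,pre-introspection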

View File

@@ -5,12 +5,19 @@ Since the Newton release, TripleO ships with extensible checks for
verifying the Undercloud configuration, hardware setup, and the
Overcloud deployment to find common issues early.
-The TripleO UI runs the validations automatically. Since
-Stein, it is possible to run the validations using the TripleO
-CLI.
+Since Stein, it is possible to run the validations using the TripleO CLI.
-.. note:: The TripleO UI is marked for deprecation beginning with
-   OpenStack Stein.
+Validations are used to efficiently and reliably verify various facts about
+the cloud on the level of individual nodes and hosts.
+Validations are non-intrusive by design, and recommended when performing
+large-scale changes to the cloud, for example upgrades, or to aid in the
+diagnosis of various issues. Detailed docs for both the CLI and the API are
+provided by the Validations Framework project.
+* tripleo-validations: https://docs.openstack.org/tripleo-validations/latest/
+* validations-common: https://docs.openstack.org/validations-common/latest/
+* validations-libs: https://docs.openstack.org/validations-libs/latest/
The validations are assigned into various groups that indicate when in
the deployment workflow they are expected to run:
@@ -41,14 +48,18 @@ the deployment workflow they are expected to run:
* **post-upgrade** try to validate your OpenStack deployment after you upgrade it.
-Note that for most of these validations, a failure does not mean that
-you'll be unable to deploy or run OpenStack. But it can indicate
-potential issues with long-term or production setups. If you're
-running an environment for developing or testing TripleO, it's okay
-that some validations fail. In a production setup, they should not.
+.. note::
+   For most validations, a failure does not mean that you'll be unable to
+   deploy or run OpenStack. But it can indicate potential issues with
+   long-term or production setups. If you're running an environment for
+   developing or testing TripleO, it's okay that some validations fail. In a
+   production setup, they should not.
The list of all existing validations and the specific documentation
-for the project is described on the `tripleo-validations documentation page`_.
+for the project can be found on the `tripleo-validations documentation page`_,
+with implementation specifics described in the docs for `validations-libs`_
+and `validations-common`_.
The following sections describe the different ways of running and listing the
currently installed validations.
@@ -58,8 +69,9 @@ currently installed validations.
:includehidden:
cli
-   mistral
ansible
in-flight
-.. _tripleo-validations documentation page: https://docs.openstack.org/tripleo-validations/latest/readme.html
+.. _tripleo-validations documentation page: https://docs.openstack.org/tripleo-validations/latest/
+.. _validations-libs: https://docs.openstack.org/validations-libs/latest/
+.. _validations-common: https://docs.openstack.org/validations-common/latest/

View File

@@ -1,141 +0,0 @@
Running validations using mistral
---------------------------------
Running single validations
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to run one validation in particular (because you're trying
things out or want to see whether you fixed the failure), you can run
it like so::
$ source stackrc
$ openstack workflow execution create tripleo.validations.v1.run_validation '{"validation_name": "undercloud-ram"}'
This will run the "undercloud-ram.yaml" validation.
The undercloud comes with a set of default validations, which are stored in a
Swift container called ``tripleo-validations``. To get the list of available
validations, run ``openstack object list tripleo-validations``. To get the
name of the validation (``undercloud-ram`` here), find the right file there
and use its filename without the extension.
The command will return an execution ID, which you can query for the
results. To find out whether the validation finished, run::
$ openstack workflow execution show ID
Note the ``State`` value: it can be either ``RUNNING`` or ``SUCCESS``.
Success here only means that the validation finished, not that it passed.
To find that out, we need to read its output::
$ mistral execution-get-output ID | jq .stdout -r
.. note:: There is an ``openstack workflow execution show output``
command which should do the same thing, but it currently
doesn't work in all environments supported by |project|.
This will return the hosts the validation ran against, any warnings and
error messages it encountered along the way, as well as an overall summary.
Custom validations
^^^^^^^^^^^^^^^^^^
Support for `custom validations`_ was added in the Rocky development cycle.
It allows operators to add their own bespoke validations when it is not
appropriate to include them in the set of default TripleO validations.
Custom validations are associated with deployment plans and stored in the
plan's Swift container, together with the rest of the plan files.
To add custom validations for a deployment plan, create a ``custom-validations``
subdirectory within the deployment plan Swift container and place the
validation YAML files there.
To run a custom validation, follow the same procedure as for running one of the
default validations - determine the name of the validation by listing the contents
of the ``custom-validations`` subdirectory, and supply that name (without the
.yaml extension) to the ``run_validation`` workflow.
If a validation with the same name is found both in the set of default
validations and in custom validations, the custom validation is always picked.
This means that, if you wish to override a default validation with your custom
implementation of it, all you need to do is create a validation with the same
name and place it in the ``custom-validations`` subdirectory of the Swift
container holding the deployment plan.
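For example, uploading and then running a custom validation could look like
this. A sketch: the ``my-validation.yaml`` file name is hypothetical, and the
``run_validation`` workflow is invoked exactly as shown above:
.. code-block:: bash
$ swift upload my_cloud custom-validations/my-validation.yaml
$ openstack workflow execution create tripleo.validations.v1.run_validation \
'{"validation_name": "my-validation"}'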
.. _running_validation_group:
Running a group of validations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The deployment documentation highlights places where you can run a
specific group of validations. Here's how to run the
``pre-deployment`` validations::
$ source stackrc
$ openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-deployment"]}'
Unfortunately, getting the results of all these validations is more
complicated than in the case of a single one. You need to list the tasks the
workflow above generated and check their results one by one: ::
$ openstack task execution list
Look for tasks with the ``run_validation`` name. Then take the ID of
each of those and run::
$ mistral task-get-result ID | jq .stdout -r
.. note:: There is a ``task execution show result`` command that
should do the same thing, but it's not working on all
platforms supported by |project|.
Since this can be tedious and hard to script, you can instead get the list of
validations belonging to a group and run them one by one using the method
above::
$ openstack action execution run tripleo.validations.list_validations '{"groups": ["pre-deployment"]}' | jq ".result[] | .id"
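Putting the two together, a small shell loop can run every validation in a
group; a sketch, assuming ``jq`` is installed:
.. code-block:: bash
$ for v in $(openstack action execution run tripleo.validations.list_validations \
'{"groups": ["pre-deployment"]}' | jq -r '.result[] | .id'); do
openstack workflow execution create tripleo.validations.v1.run_validation \
"{\"validation_name\": \"$v\"}"
done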
Another example is the "pre-upgrade" group of validations, which was added
during the P development cycle. These can be executed as in the example above,
but using the "pre-upgrade" group instead::
openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-upgrade"]}'
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| ID | 3f94a17b-835b-4a82-93af-a6cddd676ed8 |
| Workflow ID | e211099f-2c9b-46cd-a536-e38595ae8e7f |
| Workflow name | tripleo.validations.v1.run_groups |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2017-06-29 12:01:35 |
| Updated at | 2017-06-29 12:01:35 |
+-------------------+--------------------------------------+
You can monitor the progress of the execution by checking its status and
output::
mistral execution-get $ID
mistral execution-get-output $ID
When any of the validations fail, the execution will have an ERROR status.
You can query the individual validations in the group to determine the exact
reason each validation failed. For example::
for i in $(mistral execution-list | grep tripleo.validations.*ERROR | awk '{print $2}'); do mistral execution-get-output $i; done
{
"result": "Failure caused by error in tasks: get_servers\n\n get_servers [task_ex_id=a6ef7d32-4678-4a58-85fe-bf2da8a963ae] -> Failed to run action [action_ex_id=3a9a81e2-d6b0-4380-8985-41d6f4e18f3a, action_cls='<class 'mistral.actions.action_factory.NovaAction'>', attributes='{u'client_method_name': u'servers.list'}', params='{}']\n NovaAction.servers.list failed: <class 'keystoneauth1.exceptions.connection.ConnectFailure'>: Unable to establish connection to http://192.168.24.1:8774/v2.1/servers/detail: ('Connection aborted.', BadStatusLine(\"''\",))\n [action_ex_id=3a9a81e2-d6b0-4380-8985-41d6f4e18f3a, idx=0]: Failed to run action [action_ex_id=3a9a81e2-d6b0-4380-8985-41d6f4e18f3a, action_cls='<class 'mistral.actions.action_factory.NovaAction'>', attributes='{u'client_method_name': u'servers.list'}', params='{}']\n NovaAction.servers.list failed: <class 'keystoneauth1.exceptions.connection.ConnectFailure'>: Unable to establish connection to http://192.168.24.1:8774/v2.1/servers/detail: ('Connection aborted.', BadStatusLine(\"''\",))\n"
}
{
"status": "FAILED",
"result": null,
"stderr": "",
"stdout": "Task 'Fail if services were not running' failed:\nHost: localhost\nMessage: One of the undercloud services was not active. Please check openstack-heat-api first and then confirm the status of undercloud services in general before attempting to update or upgrade the environment.\n\nTask 'Fail if services were not running' failed:\nHost: localhost\nMessage: One of the undercloud services was not active. Please check openstack-ironic-api first and then confirm the status of undercloud services in general before attempting to update or upgrade the environment.\n\nTask 'Fail if services were not running' failed:\nHost: localhost\nMessage: One of the undercloud services was not active. Please check openstack-zaqar first and then confirm the status of undercloud services in general before attempting to update or upgrade the environment.\n\nTask 'Fail if services were not running' failed:\nHost: localhost\nMessage: One of the undercloud services was not active. Please check openstack-glance-api first and then confirm the status of undercloud services in general before attempting to update or upgrade the environment.\n\nTask 'Fail if services were not running' failed:\nHost: localhost\nMessage: One of the undercloud services was not active. Please check openstack-glance-api first and then confirm the status of undercloud services in general before attempting to update or upgrade the environment.\n\nFailure! The validation failed for all hosts:\n* localhost\n"
}

View File

@@ -56,7 +56,3 @@ to wipe the node's metadata starting with the Rocky release:
or provide all manageable nodes::
openstack overcloud node provide --all-manageable
-See :ref:`cleaning_workflow` for an explanation how to use Mistral workflows
-directly to initiate cleaning. This is particularly useful if you want to run
-some non-standard clean steps.