Replace openstack baremetal commands with standalone baremetal

The standalone baremetal CLI was introduced in Ussuri as a direct
replacement for the "openstack baremetal" commands, which have since been
removed from openstackclient.

This change updates all "openstack baremetal" calls to "baremetal"
calls without regard for the overall correctness of the documentation,
but at least it now calls a command which actually exists.

Change-Id: I85fa3a5dddc5e0815a9650019504336e7feccf81
Steve Baker 2022-04-14 15:28:46 +12:00
parent f1a77d1ede
commit 440117ffcd
21 changed files with 78 additions and 79 deletions


@@ -298,11 +298,11 @@ Check that Ironic works by connecting to the overcloud and trying to list the
nodes (you should see an empty response, but not an error)::
source overcloudrc
-openstack baremetal node list
+baremetal node list
You can also check the enabled driver list::
-$ openstack baremetal driver list
+$ baremetal driver list
+---------------------+-------------------------+
| Supported driver(s) | Active host(s) |
+---------------------+-------------------------+
@@ -318,7 +318,7 @@ You can also check the enabled driver list::
For HA configuration you should see all three controllers::
-$ openstack baremetal driver list
+$ baremetal driver list
+---------------------+------------------------------------------------------------------------------------------------------------+
| Supported driver(s) | Active host(s) |
+---------------------+------------------------------------------------------------------------------------------------------------+
@@ -507,7 +507,7 @@ and/or ``rescuing_network`` to the ``driver_info`` dictionary when
After enrolling nodes, you can update each of them with the following
command (adjusting it for your release)::
-openstack baremetal node set <node> \
+baremetal node set <node> \
--driver-info cleaning_network=<network uuid> \
--driver-info provisioning_network=<network uuid> \
--driver-info rescuing_network=<network uuid>
@@ -746,12 +746,12 @@ The ``overcloud-nodes.yaml`` file prepared in the previous steps can now be
imported in Ironic::
source overcloudrc
-openstack baremetal create overcloud-nodes.yaml
+baremetal create overcloud-nodes.yaml
.. warning::
This command is provided by Ironic, not TripleO. It also does not feature
support for updates, so if you need to change something, you have to use
-``openstack baremetal node set`` and similar commands.
+``baremetal node set`` and similar commands.
The nodes appear in the ``enroll`` provision state, you need to check their BMC
credentials and make them available::
@@ -759,15 +759,15 @@ credentials and make them available::
DEPLOY_KERNEL=$(openstack image show deploy-kernel -f value -c id)
DEPLOY_RAMDISK=$(openstack image show deploy-ramdisk -f value -c id)
-for uuid in $(openstack baremetal node list --provision-state enroll -f value -c UUID);
+for uuid in $(baremetal node list --provision-state enroll -f value -c UUID);
do
-openstack baremetal node set $uuid \
+baremetal node set $uuid \
--driver-info deploy_kernel=$DEPLOY_KERNEL \
--driver-info deploy_ramdisk=$DEPLOY_RAMDISK \
--driver-info rescue_kernel=$DEPLOY_KERNEL \
--driver-info rescue_ramdisk=$DEPLOY_RAMDISK
-openstack baremetal node manage $uuid --wait &&
-openstack baremetal node provide $uuid
+baremetal node manage $uuid --wait &&
+baremetal node provide $uuid
done
The deploy kernel and ramdisk were created as part of `Adding deployment
@@ -777,7 +777,7 @@ The ``baremetal node provide`` command makes a node go through cleaning
procedure, so it might take some time depending on the configuration. Check
your nodes' status with::
-openstack baremetal node list --fields uuid name provision_state last_error
+baremetal node list --fields uuid name provision_state last_error
Wait for all nodes to reach the ``available`` state. Any failures during
cleaning have to be corrected before proceeding with deployment.
@@ -824,7 +824,7 @@ Check that nodes are really enrolled and the power state is reflected correctly
(it may take some time)::
$ source overcloudrc
-$ openstack baremetal node list
+$ baremetal node list
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------+---------------+-------------+--------------------+-------------+
@@ -961,8 +961,8 @@ do this each baremetal node must first be configured to boot from a volume.
The connector ID for each node should be unique; below we achieve this by
incrementing the value of <NUM>::
-$ openstack baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>
-$ openstack baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>
+$ baremetal node set --property capabilities=iscsi_boot:true --storage-interface cinder <NODEID>
+$ baremetal volume connector create --node <NODEID> --type iqn --connector-id iqn.2010-10.org.openstack.node<NUM>
The image used should be configured to boot from an iSCSI root disk; on CentOS
7 this is achieved by ensuring that the `iscsi` module is added to the ramdisk
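One way to do that, sketched here on the assumption that dracut's ``iscsi``
module provides the needed support (the exact command is not part of this
change)::

    # rebuild the ramdisk inside the image, pulling in the iscsi dracut module
    dracut --force --add "iscsi" /boot/initramfs-$(uname -r).img $(uname -r)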
@@ -1134,7 +1134,7 @@ If not already provided in ``overcloud-nodes.yaml`` above, the
local-link-connection values for `switch_info`, `port_id` and `switch_id`
can be provided here::
-openstack baremetal port set --local-link-connection switch_info=switch1 \
+baremetal port set --local-link-connection switch_info=switch1 \
--local-link-connection port_id=xe-0/0/7 \
--local-link-connection switch_id=00:00:00:00:00:00 <PORTID>


@@ -225,7 +225,7 @@ Tag node into the new flavor using the following command
.. code-block:: bash
-openstack baremetal node set --property \
+baremetal node set --property \
capabilities='profile:cellcontroller,boot_option:local' <node id>
Verify the tagged cellcontroller:
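One way to verify the tag is to read the node's properties back (a sketch,
assuming the capability lands in the ``properties`` field as shown)::

    baremetal node show <node id> -f value -c properties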


@@ -127,7 +127,7 @@ be used later during deployment for servers in the ComputeHCI role. We
will use this server's Ironic UUID so that the playbook gets its
introspection data::
-[stack@undercloud ~]$ openstack baremetal node list | grep ceph-2
+[stack@undercloud ~]$ baremetal node list | grep ceph-2
| ef4cbd49-3773-4db2-80da-4210a7c24047 | ceph-2 | None | power off | available | False |
[stack@undercloud ~]$


@@ -275,7 +275,7 @@ setting below::
Existing nodes can be updated to use the ``direct`` deploy interface. For
example::
-openstack baremetal node set --deploy-interface direct 4b64a750-afe3-4236-88d1-7bb88c962666
+baremetal node set --deploy-interface direct 4b64a750-afe3-4236-88d1-7bb88c962666
.. _deploy_control_plane:


@@ -26,7 +26,7 @@ Ironic database.
Then extract the machine unique UUID for the target node with a command like::
-openstack baremetal introspection data save NODE-ID | jq .extra.system.product.uuid | tr '[:upper:]' '[:lower:]'
+baremetal introspection data save NODE-ID | jq .extra.system.product.uuid | tr '[:upper:]' '[:lower:]'
where `NODE-ID` is the target node Ironic UUID. The value returned by the above
command will be a unique and immutable machine UUID which isn't related to the
@@ -70,8 +70,8 @@ introspection data.
Export the introspection data from Ironic for the Ceph nodes to be
deployed::
-openstack baremetal introspection data save oc0-ceph-0 > ceph0.json
-openstack baremetal introspection data save oc0-ceph-1 > ceph1.json
+baremetal introspection data save oc0-ceph-0 > ceph0.json
+baremetal introspection data save oc0-ceph-1 > ceph1.json
...
Copy the utility to the stack user's home directory on the undercloud
@@ -80,11 +80,10 @@ be passed during openstack overcloud deployment::
./make_ceph_disk_list.py -i ceph*.json -o node_data_lookup.json -k by_path
-Pass the introspection data file from `openstack baremetal
-introspection data save` for all nodes hosting Ceph OSDs to the
-utility as you may only define `NodeDataLookup` once during a
-deployment. The `-i` option can take an expression like `*.json` or a
-list of files as input.
+Pass the introspection data file from `baremetal introspection data save` for
+all nodes hosting Ceph OSDs to the utility as you may only define
+`NodeDataLookup` once during a deployment. The `-i` option can take an
+expression like `*.json` or a list of files as input.
The `-k` option defines the key of ironic disk data structure to use
to identify the disk to be used as an OSD. Using `name` is not


@@ -270,10 +270,10 @@ the ones used in the ``subnets`` option in the undercloud configuration.
To set all nodes to ``manageable`` state run the following command::
-for node in $(openstack baremetal node list -f value -c Name); do \
-openstack baremetal node manage $node --wait; done
+for node in $(baremetal node list -f value -c Name); do \
+baremetal node manage $node --wait; done
-#. Use ``openstack baremetal port list --node <node-uuid>`` command to find out
+#. Use the ``baremetal port list --node <node-uuid>`` command to find out
which baremetal ports are associated with which baremetal node. Then set the
``physical-network`` for the ports.
@@ -283,11 +283,11 @@ the ones used in the ``subnets`` option in the undercloud configuration.
the baremetal port connected to ``leaf0`` use ``ctlplane``. The remaining
ports use the ``leafX`` names::
-$ openstack baremetal port set --physical-network ctlplane <port-uuid>
+$ baremetal port set --physical-network ctlplane <port-uuid>
-$ openstack baremetal port set --physical-network leaf1 <port-uuid>
-$ openstack baremetal port set --physical-network leaf2 <port-uuid>
-$ openstack baremetal port set --physical-network leaf2 <port-uuid>
+$ baremetal port set --physical-network leaf1 <port-uuid>
+$ baremetal port set --physical-network leaf2 <port-uuid>
+$ baremetal port set --physical-network leaf2 <port-uuid>
#. Make sure the nodes are in ``available`` state before deploying the
overcloud::
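One possible loop, mirroring the ``manage`` loop above (the ``provide`` loop
itself is an assumption, since the original command is not shown here)::

    for node in $(baremetal node list -f value -c Name); do \
      baremetal node provide $node --wait; done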


@@ -138,11 +138,11 @@ Installation Steps
undercloud::
source ~/stackrc
-openstack baremetal conductor list
+baremetal conductor list
Example output::
-(undercloud) [stack@undercloud ~]$ openstack baremetal conductor list
+(undercloud) [stack@undercloud ~]$ baremetal conductor list
+------------------------+-----------------+-------+
| Hostname | Conductor Group | Alive |
+------------------------+-----------------+-------+


@@ -109,8 +109,8 @@ Configuring nodes
Nodes have to be explicitly configured to use the Ansible deploy. For example,
to configure all nodes, use::
-for node in $(openstack baremetal node list -f value -c UUID); do
-openstack baremetal node set $node --deploy-interface ansible
+for node in $(baremetal node list -f value -c UUID); do
+baremetal node set $node --deploy-interface ansible
done
Editing playbooks
@@ -149,8 +149,8 @@ nodes.
#. Set the newly introduced ``kernel_params`` extra variable to the desired
kernel parameters. For example, to update only compute nodes use::
-for node in $(openstack baremetal node list -c Name -f value | grep compute); do
-openstack baremetal node set $node \
+for node in $(baremetal node list -c Name -f value | grep compute); do
+baremetal node set $node \
--extra kernel_params='param1=value1 param2=value2'
done


@@ -586,9 +586,9 @@ The overcloud can then be deployed using the output from the provision command::
Viewing Provisioned Node Details
--------------------------------
-The commands ``openstack baremetal node list`` and ``openstack baremetal node
-show`` continue to show the details of all nodes, however there are some new
-commands which show a further view of the provisioned nodes.
+The commands ``baremetal node list`` and ``baremetal node show`` continue to
+show the details of all nodes; however, there are some new commands which show
+a further view of the provisioned nodes.
The `metalsmith`_ tool provides a unified view of provisioned nodes, along with
allocations and neutron ports. This is similar to what Nova provides when it
@@ -600,11 +600,11 @@ managed by metalsmith, run::
The baremetal allocation API keeps an association of nodes to hostnames,
which can be seen by running::
-openstack baremetal allocation list
+baremetal allocation list
The allocation record UUID will be the same as the Instance UUID for the node
which is allocated. The hostname can be seen in the allocation record, but it
-can also be seen in the ``openstack baremetal node show`` property
+can also be seen in the ``baremetal node show`` property
``instance_info``, ``display_name``.
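For example, reading it directly from the node (a sketch following the
``node show`` usage elsewhere in these docs)::

    baremetal node show <UUID> -f value -c instance_info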


@@ -39,7 +39,7 @@ to wipe the node's metadata starting with the Rocky release:
#. If the node is not in the ``manageable`` state, move it there::
-openstack baremetal node manage <UUID or name>
+baremetal node manage <UUID or name>
#. Run manual cleaning on a specific node::
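The cleaning invocation itself falls outside this hunk; a minimal sketch,
assuming the metadata wipe uses the ``erase_devices_metadata`` clean step::

    baremetal node clean <UUID or name> \
        --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'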


@@ -3,21 +3,21 @@ Introspecting a Single Node
In addition to bulk introspection, you can also introspect nodes one by one.
When doing so, you must take care to set the correct node states manually.
-Use ``openstack baremetal node show UUID`` command to figure out whether nodes
+Use the ``baremetal node show UUID`` command to figure out whether nodes
are in ``manageable`` or ``available`` state. For all nodes in ``available``
state, start with putting a node to ``manageable`` state (see
:doc:`node_states` for details)::
-openstack baremetal node manage <UUID>
+baremetal node manage <UUID>
Then you can run introspection::
-openstack baremetal introspection start UUID
+baremetal introspection start UUID
This command won't poll for the introspection result; use the following command
to check the current introspection state::
-openstack baremetal introspection status UUID
+baremetal introspection status UUID
Repeat it for every node until you see ``True`` in the ``finished`` field.
The ``error`` field will contain an error message if introspection failed,
@@ -25,4 +25,4 @@ or ``None`` if introspection succeeded for this node.
Do not forget to make nodes available for deployment afterwards::
-openstack baremetal node provide <UUID>
+baremetal node provide <UUID>


@@ -9,7 +9,7 @@ the hardware and puts them as JSON in Swift. Starting with
``python-ironic-inspector-client`` version 1.4.0 there is a command to retrieve
this data::
-openstack baremetal introspection data save <UUID>
+baremetal introspection data save <UUID>
You can provide a ``--file`` argument to save the data in a file instead of
displaying it.
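For example (a sketch; ``node0-data.json`` is only an illustrative filename)::

    baremetal introspection data save --file node0-data.json <UUID>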
@@ -48,7 +48,7 @@ and use that to collect a list of node mac addresses::
export IRONIC_INSPECTOR_PASSWORD=xxxxxx
# Download the extra introspection data from swift:
-for node in $(openstack baremetal node list -f value -c UUID);
+for node in $(baremetal node list -f value -c UUID);
do swift -U service:ironic -K $IRONIC_INSPECTOR_PASSWORD download ironic-inspector extra_hardware-$node;
done
@@ -71,7 +71,7 @@ Extra data examples
Here is an example of CPU extra data, including benchmark results::
-$ openstack baremetal introspection data save <UUID> | jq '.extra.cpu'
+$ baremetal introspection data save <UUID> | jq '.extra.cpu'
{
"physical": {
"number": 1
@@ -108,7 +108,7 @@ Here is an example of CPU extra data, including benchmark results::
Here is an example of disk extra data, including benchmark results::
-$ openstack baremetal introspection data save <UUID> | jq '.extra.disk'
+$ baremetal introspection data save <UUID> | jq '.extra.disk'
{
"logical": {
"count": 1


@@ -111,7 +111,7 @@ the discovery process:
.. code-block:: console
-openstack baremetal introspection rule import /path/to/rules.json
+baremetal introspection rule import /path/to/rules.json
See :doc:`profile_matching` for more examples on introspection rules.
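As a minimal illustration of what such a file can contain (the rule below is
hypothetical; the fields follow the inspector rules format)::

    $ cat /path/to/rules.json
    [{"description": "Example: set the ipmi driver on discovered nodes",
      "conditions": [{"op": "eq", "field": "data://auto_discovered", "value": true}],
      "actions": [{"action": "set-attribute", "path": "driver", "value": "ipmi"}]}]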


@@ -21,7 +21,7 @@ by the Nova scheduler on deployment.
This can either be done via the nodes JSON file when registering the nodes, or
alternatively via manual adjustment of the node capabilities, e.g.::
-openstack baremetal node set <id> --property capabilities='node:controller-0'
+baremetal node set <id> --property capabilities='node:controller-0'
This has assigned the capability ``node:controller-0`` to the node, and this
must be repeated (using a unique continuous index, starting from 0) for all


@@ -37,7 +37,7 @@ The ``manage`` action
can be used to bring nodes from enroll_ to ``manageable`` or nodes already
moved to available_ state back to ``manageable`` for configuration::
-openstack baremetal node manage <NAME OR UUID>
+baremetal node manage <NAME OR UUID>
available
---------
@@ -51,4 +51,4 @@ in this state.
Nodes which failed introspection stay in ``manageable`` state and must be
reintrospected or made ``available`` manually::
-openstack baremetal node provide <NAME OR UUID>
+baremetal node provide <NAME OR UUID>


@@ -28,4 +28,4 @@ Make sure the nodes have profiles assigned as described in
:doc:`profile_matching`. Create a JSON file with the target ready-state
configuration for each profile. Then trigger the configuration::
-openstack baremetal configure ready state ready-state.json
+baremetal configure ready state ready-state.json


@@ -11,17 +11,17 @@ for more details.
For example::
-openstack baremetal node set <UUID> --property root_device='{"wwn": "0x4000cca77fc4dba1"}'
+baremetal node set <UUID> --property root_device='{"wwn": "0x4000cca77fc4dba1"}'
To remove a hint and fall back to the default behavior::
-openstack baremetal node unset <UUID> --property root_device
+baremetal node unset <UUID> --property root_device
Note that the root device hints should be assigned *before* both introspection
and deployment. After changing the root device hints you should either re-run
introspection or manually fix the ``local_gb`` property for a node::
-openstack baremetal node set <UUID> --property local_gb=<NEW VALUE>
+baremetal node set <UUID> --property local_gb=<NEW VALUE>
Where the new value is calculated as a real disk size in GiB minus 1 GiB to
account for partitioning (the introspection process does this calculation
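As a worked example of that subtraction (the disk size is hypothetical): for a
node with a 512 GiB root disk, the value to set would be 511::

    baremetal node set <UUID> --property local_gb=511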
@@ -61,7 +61,7 @@ introspection to figure it out. First start with :ref:`introspection` as usual
without setting any root device hints. Then use the stored introspection data
to list all disk devices::
-openstack baremetal introspection data save fdf975ae-6bd7-493f-a0b9-a0a4667b8ef3 | jq '.inventory.disks'
+baremetal introspection data save fdf975ae-6bd7-493f-a0b9-a0a4667b8ef3 | jq '.inventory.disks'
For **python-ironic-inspector-client** versions older than 1.4.0 you can use
the ``curl`` command instead, see :ref:`introspection_data` for details.


@@ -44,19 +44,19 @@ configure introspected nodes to deploy in UEFI mode as well.
Here is how the ``properties`` field looks for nodes configured in BIOS mode::
-$ openstack baremetal node show <NODE> -f value -c properties
+$ baremetal node show <NODE> -f value -c properties
{u'capabilities': u'profile:compute,boot_mode:bios', u'memory_mb': u'6144', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'1'}
Note that the ``boot_mode:bios`` capability is set. For a node in UEFI mode, it
will look like this::
-$ openstack baremetal node show <NODE> -f value -c properties
+$ baremetal node show <NODE> -f value -c properties
{u'capabilities': u'profile:compute,boot_mode:uefi', u'memory_mb': u'6144', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'1'}
You can change the boot mode with the following command (required for UEFI
before the Pike release)::
-$ openstack baremetal node set <NODE> --property capabilities=profile:compute,boot_mode:uefi
+$ baremetal node set <NODE> --property capabilities=profile:compute,boot_mode:uefi
.. warning::
Do not forget to copy all other capabilities, e.g. ``profile`` and


@@ -60,16 +60,16 @@ For example, a wrong MAC can be fixed in two steps:
* Find out the assigned port UUID by running
::
-$ openstack baremetal port list --node <NODE UUID>
+$ baremetal port list --node <NODE UUID>
* Update the MAC address by running
::
-$ openstack baremetal port set --address <NEW MAC> <PORT UUID>
+$ baremetal port set --address <NEW MAC> <PORT UUID>
A wrong IPMI address can be fixed with the following command::
-$ openstack baremetal node set <NODE UUID> --driver-info ipmi_address=<NEW IPMI ADDRESS>
+$ baremetal node set <NODE UUID> --driver-info ipmi_address=<NEW IPMI ADDRESS>
Node power state is not enforced by Ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -103,7 +103,7 @@ power management, and it gets stuck in an abnormal state.
Ironic requires that nodes that cannot be operated normally are put into
maintenance mode. This is done with the following command::
-$ openstack baremetal node maintenance set <NODE UUID> --reason "<EXPLANATION>"
+$ baremetal node maintenance set <NODE UUID> --reason "<EXPLANATION>"
Ironic will stop checking power and health state for such nodes, and Nova will
not pick them for deployment. Power commands will still work on them, though.
@@ -112,11 +112,11 @@ After a node is in the maintenance mode, you can attempt repairing it, e.g. by
`Fixing invalid node information`_. If you manage to make the node operational
again, move it out of maintenance mode::
-$ openstack baremetal node maintenance unset <NODE UUID>
+$ baremetal node maintenance unset <NODE UUID>
If repairing is not possible, you can force deletion of such a node::
-$ openstack baremetal node delete <NODE UUID>
+$ baremetal node delete <NODE UUID>
Forcing node removal will leave it powered on, accessing the network with
the old IP address(es) and with all services running. Before proceeding, make
@@ -189,4 +189,4 @@ How can introspection be stopped?
Introspection for a node can be stopped with the following command::
-$ openstack baremetal introspection abort <NODE UUID>
+$ baremetal introspection abort <NODE UUID>


@@ -31,7 +31,7 @@ Next, there are a few layers on which the deployment can fail:
* Post-deploy configuration (Puppet)
As the Ironic service is in the middle layer, you can use its shell to guess the
-failed layer. Issue ``openstack baremetal node list`` command to see all
+failed layer. Issue the ``baremetal node list`` command to see all
registered nodes and their current status, you will see something like::
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
@@ -50,7 +50,7 @@ in the resulting table.
You can check the actual cause using the following command::
-$ openstack baremetal node show <UUID> -f value -c maintenance_reason
+$ baremetal node show <UUID> -f value -c maintenance_reason
For example, **Maintenance** goes to ``True`` automatically if wrong power
credentials are provided.
@@ -58,7 +58,7 @@ in the resulting table.
Fix the cause of the failure, then move the node out of the maintenance
mode::
-$ openstack baremetal node maintenance unset <NODE UUID>
+$ baremetal node maintenance unset <NODE UUID>
* If **Provision State** is ``available`` then the problem occurred before
bare metal deployment has even started. Proceed with `Debugging Using Heat`_.
@@ -75,7 +75,7 @@ in the resulting table.
* If **Provision State** is ``error`` or ``deploy failed``, then bare metal
deployment has failed for this node. Look at the **last_error** field::
-$ openstack baremetal node show <UUID> -f value -c last_error
+$ baremetal node show <UUID> -f value -c last_error
If the error message is vague, you can use logs to clarify it, see
:ref:`ironic_logs` for details.
@@ -266,7 +266,7 @@ you have enough nodes corresponding to each flavor/profile. Watch
::
-$ openstack baremetal node show <UUID> --fields properties
+$ baremetal node show <UUID> --fields properties
It should contain e.g. ``profile:compute`` for compute nodes.


@@ -305,7 +305,7 @@ of your OpenShift nodes.
.. code-block:: bash
-openstack baremetal node list
+baremetal node list
2. Locate the OpenShift node:
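For instance, by filtering the listing on a hypothetical node name::

    baremetal node list | grep openshift-worker-0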