Merge "doc/source/admin fixes part-1"

This commit is contained in:
Zuul 2024-09-24 07:44:03 +00:00 committed by Gerrit Code Review
commit 75927a8673
7 changed files with 33 additions and 33 deletions

View File

@ -18,7 +18,7 @@ states, which will prevent the node from being seen by the Compute
service as ready for use.
This feature is leveraged as part of the state machine workflow,
-where a node in ``manageable`` can be moved to ``active`` state
+where a node in ``manageable`` can be moved to an ``active`` state
via the provision_state verb ``adopt``. To view the state
transition capabilities, please see :ref:`states`.
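A minimal sketch of that verb, assuming the ``baremetal`` CLI used elsewhere in this documentation and a placeholder node name::

    $ baremetal node adopt <node>
    $ baremetal node show <node> -f value -c provision_state
    active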
@ -48,7 +48,7 @@ required boot image, or boot ISO image and then places any PXE or virtual
media configuration necessary for the node should it be required.
The adoption process makes no changes to the physical node, with the
-exception of operator supplied configurations where virtual media is
+exception of operator-supplied configurations where virtual media is
used to boot the node under normal circumstances. An operator should
ensure that any supplied configuration defining the node is sufficient
for the continued operation of the node moving forward.
@ -56,7 +56,7 @@ for the continued operation of the node moving forward.
Possible Risk
=============
-The main risk with this feature is that supplied configuration may ultimately
+The main risk with this feature is that the supplied configuration may ultimately
be incorrect or invalid which could result in potential operational issues:
* ``rebuild`` verb - Rebuild is intended to allow a user to re-deploy the node
@ -143,7 +143,7 @@ from the ``manageable`` state to ``active`` state::
.. NOTE::
In the above example, the image_source setting must reference a valid
-image or file, however that image or file can ultimately be empty.
+image or file, however, that image or file can ultimately be empty.
.. NOTE::
The above example utilizes a capability that defines the boot operation
@ -154,7 +154,7 @@ from the ``manageable`` state to ``active`` state::
The above example will fail a re-deployment as a fake image is
defined and no instance_info/image_checksum value is defined.
As such any actual attempt to write the image out will fail as the
-image_checksum value is only validated at time of an actual
+image_checksum value is only validated at the time of an actual
deployment operation.
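If a later ``rebuild`` is expected to succeed, the operator can instead point the node at a real image before adoption. A hedged sketch, where the image URL and checksum value are placeholders::

    $ baremetal node set <node> \
          --instance-info image_source=http://image.server.example/image.qcow2 \
          --instance-info image_checksum=<supported-checksum-value>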
.. NOTE::
@ -176,7 +176,7 @@ Troubleshooting
Should an adoption operation fail for a node, the error that caused the
failure will be logged in the node's ``last_error`` field when viewing the
node. This error, in the case of node adoption, will largely be due to
-failure of a validation step. Validation steps are dependent
+the failure of a validation step. Validation steps are dependent
upon what driver is selected for the node.
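A hedged sketch of inspecting the recorded error and re-running driver validation before retrying adoption, using a placeholder node name::

    $ baremetal node show <node> -f value -c last_error
    $ baremetal node validate <node>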
Any node that is in the ``adopt failed`` state can have the ``adopt`` verb
@ -205,18 +205,18 @@ Adoption with Nova
Since there is no mechanism to create bare metal instances in Nova when nodes
are adopted into Ironic, the node adoption feature described above cannot be
-used to add in production nodes to deployments which use Ironic together with
+used to add in production nodes to deployments that use Ironic together with
Nova.
-One option to add in production nodes to an Ironic/Nova deployment is to use
+One option to add production nodes to an Ironic/Nova deployment is to use
the fake drivers. The overall idea is that for Nova the nodes are instantiated
normally to ensure the instances are properly created in the compute project
while Ironic does not touch them.
-Here are some high level steps to be used as a guideline:
+Here are some high-level steps to be used as a guideline:
* create a bare metal flavor and a hosting project for the instances
-* enroll the nodes into Ironic, create the ports, move them to manageable
+* enroll the nodes into Ironic, create the ports, and move them to manageable
* change the hardware type and the interfaces to fake drivers
* provide the nodes to make them available
* one by one, add the nodes to the placement aggregate and create instances
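A sketch of the driver-related steps above, assuming the ``fake-hardware`` type and the ``fake`` interface implementations are enabled in the conductor's configuration and using a placeholder node name::

    $ baremetal node set <node> --driver fake-hardware \
          --power-interface fake --management-interface fake \
          --boot-interface fake --deploy-interface fake
    $ baremetal node provide <node>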

View File

@ -16,7 +16,7 @@ How it works
The expected workflow is as follows:
-#. The node is discovered by manually powering it on and gets the
+#. The node is discovered by manually powering it on and getting the
`manual-management` hardware type and `agent` power interface.
If discovery is not used, a node can be enrolled through the API and then
@ -32,7 +32,7 @@ The expected workflow is as follows:
#. A user deploys the node. Deployment happens normally via the already
running agent.
-#. In the end of the deployment, the node is rebooted via the reboot command
+#. At the end of the deployment, the node is rebooted via the reboot command
instead of power off+on.
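When discovery is not used, manual enrollment along the lines of the workflow above might look like the following sketch; the hardware type and power interface names come from the text above, while the exact client flags are an assumption and the MAC address is a placeholder::

    $ baremetal node create --name <node-name> --driver manual-management \
          --power-interface agent
    $ baremetal port create <mac-address> --node <node-uuid>
    $ baremetal node manage <node-name>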
Enabling

View File

@ -33,7 +33,7 @@ In both cases, the tokens are randomly generated using the Python
``secrets`` library. As of mid-2020, the default length is 43 characters.
Once the token has been provided, the token cannot be retrieved or accessed.
-It remains available to the conductors, and is stored in the memory of the
+It remains available to the conductors and is stored in the memory of the
``ironic-python-agent``.
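For a sense of where the 43-character figure comes from: 32 random bytes rendered as URL-safe base64 produce 43 characters. Whether Ironic uses exactly this call is an assumption; the snippet is only an illustration::

    $ python3 -c 'import secrets; t = secrets.token_urlsafe(32); print(t, len(t))'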
.. note::
@ -76,7 +76,7 @@ Agent Configuration
An additional setting that may be leveraged with the ``ironic-python-agent``
is a ``agent_token_required`` setting. Under normal circumstances, this
setting can be asserted via the configuration supplied from the Bare Metal
-service deployment upon the ``lookup`` action, but can be asserted via the
+service deployment upon the ``lookup`` action but can be asserted via the
embedded configuration for the agent in the ramdisk. This setting is also
-available via kernel command line as ``ipa-agent-token-required``.
+available via the kernel command line as ``ipa-agent-token-required``.

View File

@ -160,7 +160,7 @@ ironic node:
the Ironic workflow, specifically with this driver, is that the generated
``agent token`` is conveyed to the booting ramdisk, facilitating it to call
back to Ironic and indicate the state. This token is randomly generated
-for every deploy, and is required. Specifically, this is leveraged in the
+for every deploy and is required. Specifically, this is leveraged in the
template's ``pre``, ``onerror``, and ``post`` steps.
For more information on Agent Token, please see :doc:`/admin/agent-token`.
@ -223,7 +223,7 @@ At this point, you should be able to request the baremetal node to deploy.
Standalone using a repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Anaconda supports a concept of passing a repository as opposed to a dedicated
+Anaconda supports the concept of passing a repository as opposed to a dedicated
URL path which has a ``.treeinfo`` file, which tells the initial boot scripts
where to get various dependencies, such as what would be used as the anaconda
``stage2`` ramdisk. Unfortunately, this functionality is not well documented.
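As a hedged sketch only, pointing ``image_source`` at the top of a tree that contains a ``.treeinfo`` file; the exact ``instance_info`` keys the anaconda deploy interface expects for a repository are an assumption here and the URL is a placeholder::

    $ baremetal node set <node> --instance-info image_source=http://repo.example.com/os/tree/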
@ -256,7 +256,7 @@ At a high level, the mechanics of the anaconda driver work in the following
flow, where we also note the stages and purpose of each part for informational
purposes.
-#. Network Boot Program (Such as iPXE) downloads the kernel, and initial
+#. Network Boot Program (Such as iPXE) downloads the kernel and initial
ramdisk.
#. Kernel launches, uncompresses initial ramdisk, and executes init inside
of the ramdisk.

View File

@ -9,7 +9,7 @@ notifier capability. Based on the `notification_driver` configuration, audit eve
can be routed to messaging infrastructure (notification_driver = messagingv2)
or can be routed to a log file (`[oslo_messaging_notifications]/driver = log`).
-Audit middleware creates two events per REST API interaction. First event has
+Audit middleware creates two events per REST API interaction. The first event has
information extracted from request data and the second one has request outcome
(response).
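A minimal sketch of routing the events to a log file with the option named above, assuming the common ``/etc/ironic/ironic.conf`` path; if the section already exists, edit it in place instead of appending, and restart the API service afterwards::

    $ printf '\n[oslo_messaging_notifications]\ndriver = log\n' >> /etc/ironic/ironic.conf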

View File

@ -55,9 +55,9 @@ To retrieve the cached BIOS configuration from a specified node::
BIOS settings are cached on each node cleaning operation or when settings
have been applied successfully via BIOS cleaning steps. The return of above
-command is a table of last cached BIOS settings from specified node.
-If ``-f json`` is added as suffix to above command, it returns BIOS settings
-as following::
+command is a table of the last cached BIOS settings from the specified node.
+If ``-f json`` is added as a suffix to the above command, it returns BIOS
+settings as following::
[
{
@ -81,8 +81,8 @@ To get a specified BIOS setting for a node::
$ baremetal node bios setting show <node> <setting-name>
-If ``-f json`` is added as suffix to above command, it returns BIOS settings
-as following::
+If ``-f json`` is added as a suffix to the above command, it returns BIOS
+settings as following::
{
"setting name":

View File

@ -7,7 +7,7 @@ Boot From Volume
Overview
========
The Bare Metal service supports booting from a Cinder iSCSI volume as of the
-Pike release. This guide will primarily deal with this use case, but will be
+Pike release. This guide will primarily deal with this use case but will be
updated as more paths for booting from a volume, such as FCoE, are introduced.
The boot from volume is supported on both legacy BIOS and
@ -25,12 +25,12 @@ the node OR the iPXE boot templates such that the node CAN be booted.
:width: 100%
In this example, the boot interface does the heavy lifting. For drivers the
-``irmc`` and ``ilo`` hardware types with hardware type specific boot
-interfaces, they are able to signal via an out of band mechanism to the
+``irmc`` and ``ilo`` hardware types with hardware type-specific boot
+interfaces, they are able to signal via an out-of-band mechanism to the
baremetal node's BMC that the integrated iSCSI initiators are to connect
to the supplied volume target information.
-In most hardware this would be the network cards of the machine.
+In most hardware, this would be the network cards of the machine.
In the case of the ``ipxe`` boot interface, templates are created on disk
which point to the iscsi target information that was either submitted
@ -39,7 +39,7 @@ requested as the baremetal's boot from volume disk upon requesting the
instance.
In terms of network access, both interface methods require connectivity
-to the iscsi target. In the vendor driver specific path, additional network
+to the iscsi target. In the vendor driver-specific path, additional network
configuration options may be available to allow separation of standard
network traffic and instance network traffic. In the iPXE case, this is
not possible as the OS userspace re-configures the iSCSI connection
@ -47,7 +47,7 @@ after detection inside the OS ramdisk boot.
An iPXE user *may* be able to leverage multiple VIFs, one specifically
set to be set with ``pxe_enabled`` to handle the initial instance boot
-and back-end storage traffic where as external facing network traffic
+and back-end storage traffic whereas external-facing network traffic
occurs on a different interface. This is a common pattern in iSCSI
based deployments in the physical realm.
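A hedged sketch of that pattern with placeholder MAC addresses; the ``--pxe-enabled`` flag spelling is an assumption about the client::

    $ baremetal port create <provisioning-nic-mac> --node <node-uuid> --pxe-enabled true
    $ baremetal port create <external-nic-mac> --node <node-uuid> --pxe-enabled false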
@ -146,7 +146,7 @@ be utilized to attach the remote volume.
In addition to the connectors, we have a concept of a `target` that can be
defined via the API. While a user of this feature through the Compute
service would automatically have a new target record created for them,
-it is not explicitly required, and can be performed manually.
+it is not explicitly required and can be performed manually.
A target record can be created using a command similar to the example below::
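A hedged sketch of such a command (not necessarily the exact example from the full document); the flags mirror the volume target fields and may differ slightly between client versions::

    $ baremetal volume target create --node <node-uuid> --type iscsi \
          --boot-index 0 --volume-id <cinder-volume-uuid>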
@ -176,7 +176,7 @@ the node should or could boot from a remote volume.
It must be noted that minimal configuration or value validation occurs
with the ``external`` storage interface. The ``cinder`` storage interface
-contains more extensive validation, that is likely un-necessary in a
+contains more extensive validation, that is likely unnecessary in a
``external`` scenario.
Setting the external storage interface::
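A hedged sketch (not necessarily the document's exact example), using the standard interface flag on ``node set``::

    $ baremetal node set <node> --storage-interface external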
@ -228,7 +228,7 @@ contain support for multi-attach volumes.
When support for storage interfaces was added to the Bare Metal service,
specifically for the ``cinder`` storage interface, the concept of volume
multi-attach was accounted for, however has not been fully tested,
-and is unlikely to be fully tested until there is Compute service integration
+and is unlikely to be fully tested until there is a Compute service integration
as well as volume driver support.
The data model for storage of volume targets in the Bare Metal service