doc: define boot from volume in the glossary

Define it and also link to the term in a few different docs.

Change-Id: I6333deb2f6e85eba3c92128dab4e4b4d35355603
Matt Riedemann 2019-12-13 14:54:18 -05:00
parent 8302cccff6
commit 33c7996624
6 changed files with 27 additions and 15 deletions


@@ -62,6 +62,6 @@ Volume Support
Volume support is provided for the PowerVM virt driver via Cinder. Currently,
the only supported volume protocol is `vSCSI`_ Fibre Channel. Attach, detach,
and extend are the operations supported by the PowerVM vSCSI FC volume adapter.
-Boot from volume is not yet supported.
+:term:`Boot From Volume` is not yet supported.
.. _vSCSI: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8hat/p8hat_virtualscsi.htm


@@ -23,8 +23,8 @@ to the :cinder-doc:`block storage admin guide
<admin/blockstorage-volume-multiattach.html>` for more details about creating
multiattach-capable volumes.
-Boot from volume and attaching a volume to a server that is not
-SHELVED_OFFLOADED is supported. Ultimately the ability to perform
+:term:`Boot from volume <Boot From Volume>` and attaching a volume to a server
+that is not SHELVED_OFFLOADED is supported. Ultimately the ability to perform
these actions depends on the compute host and hypervisor driver that
is being used.


@@ -13,6 +13,16 @@ Glossary
For more information, refer to :doc:`/admin/aggregates`.
+Boot From Volume
+A server that is created with a
+:doc:`Block Device Mapping </user/block-device-mapping>` with
+``boot_index=0`` and ``destination_type=volume``. The root volume can
+already exist when the server is created or be created by the compute
+service as part of the server creation. Note that a server can have
+volumes attached and not be boot-from-volume. A boot from volume server
+has an empty ("") ``image`` parameter in ``GET /servers/{server_id}``
+responses.
Cross-Cell Resize
A resize (or cold migrate) operation where the source and destination
compute hosts are mapped to different cells. By default, resize and

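As an illustrative aside on the new glossary entry, the sketch below shows roughly what a boot-from-volume server create request body looks like: a ``block_device_mapping_v2`` entry with ``boot_index=0`` and ``destination_type=volume``, the combination the definition names. The flavor and volume UUIDs are placeholders rather than values from this change, and the snippet only builds and prints the body instead of calling a live compute API.

.. code-block:: python

    import json

    # Sketch of a boot-from-volume server create request body. The UUIDs
    # are placeholders. "destination_type": "volume" plus "boot_index": 0
    # is what makes the resulting server boot-from-volume; such a server
    # reports an empty ("") image in GET /servers/{server_id} responses.
    server_create_body = {
        "server": {
            "name": "bfv-example",
            "flavorRef": "FLAVOR_UUID",  # placeholder
            "block_device_mapping_v2": [
                {
                    "boot_index": 0,
                    "uuid": "EXISTING_VOLUME_UUID",  # placeholder
                    "source_type": "volume",
                    "destination_type": "volume",
                    "delete_on_termination": False,
                }
            ],
        }
    }

    print(json.dumps(server_create_body, indent=2))

If the root volume should instead be created during the server build, the mapping would use ``source_type=image`` with an image UUID and a ``volume_size``, which is the "created by the compute service as part of the server creation" case the glossary entry mentions.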

@@ -229,12 +229,12 @@ FAQs
root disks with volumes?
No, there is nothing automatic within nova that converts a
-non-boot-from-volume request to convert the image to a root volume.
-Several ideas have been discussed over time which are captured in the
-spec for `volume-backed flavors`_. However, if you wish to force users
-to always create volume-backed servers, you can configure the API service
-by setting :oslo.config:option:`max_local_block_devices` to 0. This will
-result in any non-boot-from-volume server create request to fail with a
-400 response.
+non-:term:`boot-from-volume <Boot From Volume>` request to convert the
+image to a root volume. Several ideas have been discussed over time which
+are captured in the spec for `volume-backed flavors`_. However, if you wish
+to force users to always create volume-backed servers, you can configure
+the API service by setting :oslo.config:option:`max_local_block_devices`
+to 0. This will result in any non-boot-from-volume server create request to
+fail with a 400 response.
.. _volume-backed flavors: https://review.opendev.org/511965/
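
As a hedged illustration of the FAQ answer above, the ``nova.conf`` fragment below shows the setting it describes; the option lives in the ``[DEFAULT]`` group, and ``0`` is the value the answer recommends for forcing volume-backed servers.

.. code-block:: ini

    [DEFAULT]
    # Disallow local block devices entirely so that any server create
    # request that is not boot-from-volume fails with a 400 response.
    max_local_block_devices = 0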


@@ -391,10 +391,11 @@ Since the aggregates are in the API database and the cell conductor cannot
access that information, this will fail. In the future this check could be
moved to the *nova-api* service such that the availability zone between the
instance and the volume is checked before we reach the cell, except in the
-case of boot from volume where the *nova-compute* service itself creates the
-volume and must tell Cinder in which availability zone to create the volume.
-Long-term, volume creation during boot from volume should be moved to the
-top-level superconductor which would eliminate this AZ up-call check problem.
+case of :term:`boot from volume <Boot From Volume>` where the *nova-compute*
+service itself creates the volume and must tell Cinder in which availability
+zone to create the volume. Long-term, volume creation during boot from volume
+should be moved to the top-level superconductor which would eliminate this AZ
+up-call check problem.
The sixth is detailed in `bug 1781286`_ and similar to the first issue.
The issue is that servers created without a specific availability zone
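
To make the up-call discussion above a bit more concrete, here is a rough sketch of the volume create request that the *nova-compute* service effectively sends to the block storage API during boot from volume. The field names follow the Cinder volume create API, but the size, image UUID, and availability zone values are placeholders, and the snippet only prints the body rather than talking to Cinder.

.. code-block:: python

    import json

    # Sketch of the body for a Cinder POST /v3/{project_id}/volumes call
    # made while building a boot-from-volume server. Values are placeholders;
    # the point is that the availability zone must already be known here,
    # which is why the AZ check is hard to move out of the cell.
    volume_create_body = {
        "volume": {
            "size": 10,                               # GiB, placeholder
            "imageRef": "IMAGE_UUID",                 # placeholder
            "availability_zone": "AZ_OF_THE_SERVER",  # placeholder
        }
    }

    print(json.dumps(volume_create_body, indent=2))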


@@ -66,7 +66,8 @@ Limitations
the ``RBD`` backend avoiding calls to download and verify on the compute.
* As of the 18.0.0 Rocky release, trusted image certification validation is
-not supported with volume-backed (boot from volume) instances. The block
+not supported with volume-backed
+(:term:`boot from volume <Boot From Volume>`) instances. The block
storage service support may be available in a future release:
https://blueprints.launchpad.net/cinder/+spec/certificate-validate