
docs: Document UEFI secure boot feature

Introduce two new guides on UEFI and Secure Boot. In addition, update
the flavors guide to document the secure boot feature (though this doc
should really be removed in the near term in favour of the
auto-generated docs, as noted inline).

Note that this change includes our first use of the ':nova:extra-spec:'
cross-reference role, which highlighted a small bug in that
implementation. That bug is resolved here.

Blueprint: allow-secure-boot-for-qemu-kvm-guests
Change-Id: I4eb370b87ba8d0403c8c0ef038a909313a48d1d6
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
changes/84/776684/9
Stephen Finucane, 11 months ago
parent commit f4c249c692
1. doc/ext/extra_specs.py (2)
2. doc/source/admin/emulated-tpm.rst (5)
3. doc/source/admin/index.rst (2)
4. doc/source/admin/secure-boot.rst (136)
5. doc/source/admin/uefi.rst (69)
6. doc/source/user/flavors.rst (188)

doc/ext/extra_specs.py

@@ -213,7 +213,7 @@ class NovaDomain(domains.Domain):
         self, env, fromdocname, builder, typ, target, node, contnode,
     ):
         """Resolve cross-references"""
-        if typ == 'option':
+        if typ == 'extra-spec':
             return sphinx_nodes.make_refnode(
                 builder,
                 fromdocname,

doc/source/admin/emulated-tpm.rst

@@ -34,9 +34,10 @@ feature:
 With the above requirements satisfied, verify vTPM support by inspecting the
 traits on the compute node's resource provider:
 
-.. code:: console
+.. code:: bash
 
-   $ openstack resource provider trait list $compute_uuid | grep SECURITY_TPM
+   $ COMPUTE_UUID=$(openstack resource provider list --name $HOST -f value -c uuid)
+   $ openstack resource provider trait list $COMPUTE_UUID | grep SECURITY_TPM
    | COMPUTE_SECURITY_TPM_1_2 |
    | COMPUTE_SECURITY_TPM_2_0 |

doc/source/admin/index.rst

@@ -113,6 +113,8 @@ instance for these kind of workloads.
     ports-with-resource-requests
     virtual-persistent-memory
     emulated-tpm
+    uefi
+    secure-boot
     managing-resource-providers

doc/source/admin/secure-boot.rst

@@ -0,0 +1,136 @@
===========
Secure Boot
===========

.. versionadded:: 14.0.0 (Newton)

.. versionchanged:: 23.0.0 (Wallaby)

   Added support for Secure Boot to the libvirt driver.

Nova supports configuring `UEFI Secure Boot`__ for guests. Secure Boot aims to
ensure no unsigned kernel code runs on a machine.

.. __: https://en.wikipedia.org/wiki/Secure_boot
Enabling Secure Boot
--------------------

Currently the configuration of UEFI guest bootloaders is only supported when
using the libvirt compute driver with a :oslo.config:option:`libvirt.virt_type`
of ``kvm`` or ``qemu`` or when using the Hyper-V compute driver with certain
machine types. In both cases, the guests must also be configured with a
:doc:`UEFI bootloader <uefi>`.

With these requirements satisfied, you can verify UEFI Secure Boot support by
inspecting the traits on the compute node's resource provider:

.. code:: bash

   $ COMPUTE_UUID=$(openstack resource provider list --name $HOST -f value -c uuid)
   $ openstack resource provider trait list $COMPUTE_UUID | grep COMPUTE_SECURITY_UEFI_SECURE_BOOT
   | COMPUTE_SECURITY_UEFI_SECURE_BOOT |
Configuring a flavor or image
-----------------------------

Configuring UEFI Secure Boot for guests varies depending on the compute driver
in use. In all cases, a :doc:`UEFI guest bootloader <uefi>` must be configured
for the guest, but there are additional driver-specific requirements on top of
this.

.. rubric:: Libvirt

As the name would suggest, UEFI Secure Boot requires that a UEFI bootloader be
configured for guests. When this is done, UEFI Secure Boot support can be
configured using the :nova:extra-spec:`os:secure_boot` extra spec or the
equivalent image metadata property. For example, to configure an image that
meets both of these requirements:

.. code-block:: bash

   $ openstack image set \
       --property hw_firmware_type=uefi \
       --property os_secure_boot=required \
       $IMAGE

.. note::

   On x86_64 hosts, enabling secure boot also requires configuring use of the
   Q35 machine type. This can be configured on a per-guest basis using the
   ``hw_machine_type`` image metadata property or automatically for all guests
   created on a host using the :oslo.config:option:`libvirt.hw_machine_type`
   config option.
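Combining the note above with the earlier image example, an x86_64 image that
requests secure boot, a UEFI bootloader, and the Q35 machine type in one step
might be configured as follows (a sketch; ``$IMAGE`` is a placeholder as
above):

.. code-block:: bash

   # Request a UEFI bootloader, the Q35 machine type and secure boot on a
   # single image, satisfying all x86_64 requirements at once.
   $ openstack image set \
       --property hw_firmware_type=uefi \
       --property hw_machine_type=q35 \
       --property os_secure_boot=required \
       $IMAGE

Setting the machine type on the image rather than host-wide keeps other guests
on the host free to use the default machine type.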
It is also possible to explicitly request that secure boot be disabled. This is
the default behavior, so this request is typically useful when an admin wishes
to explicitly prevent a user requesting secure boot by uploading their own
image with relevant image properties. For example, to disable secure boot via
the flavor:

.. code-block:: bash

   $ openstack flavor set --property os:secure_boot=disabled $FLAVOR

Finally, it is possible to request that secure boot be enabled if the host
supports it. This is only possible via the image metadata property. When this
is requested, secure boot will only be enabled if the host supports this
feature and the other constraints, namely that a UEFI guest bootloader is
configured, are met. For example:

.. code-block:: bash

   $ openstack image set --property os_secure_boot=optional $IMAGE

.. note::

   If both the image metadata property and flavor extra spec are provided,
   they must match. If they do not, an error will be raised.
.. rubric:: Hyper-V

Like libvirt, configuring a guest for UEFI Secure Boot support also requires
that it be configured with a UEFI bootloader. As noted in :doc:`uefi`, it is
not possible to do this explicitly in Hyper-V. Rather, you should configure the
guest to use the *Generation 2* machine type. In addition to this, the Hyper-V
compute driver also requires that the OS type be configured.

When both of these constraints are met, you can configure UEFI Secure Boot
support using the :nova:extra-spec:`os:secure_boot` extra spec or equivalent
image metadata property. For example, to configure an image that meets all the
above requirements:

.. code-block:: bash

   $ openstack image set \
       --property hw_machine_type=hyperv-gen2 \
       --property os_type=windows \
       --property os_secure_boot=required \
       $IMAGE

As with the libvirt driver, it is also possible to request that secure boot be
disabled. This is the default behavior, so this is typically useful when an
admin wishes to explicitly prevent a user requesting secure boot. For example,
to disable secure boot via the flavor:

.. code-block:: bash

   $ openstack flavor set --property os:secure_boot=disabled $FLAVOR

However, unlike the libvirt driver, the Hyper-V driver does not respect the
``optional`` value for the image metadata property. If this is configured, it
will be silently ignored.
References
----------

* `Allow Secure Boot (SB) for QEMU- and KVM-based guests (spec)`__
* `Securing Secure Boot with System Management Mode`__
* `Generation 2 virtual machine security settings for Hyper-V`__

.. __: https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/allow-secure-boot-for-qemu-kvm-guests.html
.. __: http://events17.linuxfoundation.org/sites/events/files/slides/kvmforum15-smm.pdf
.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/learn-more/generation-2-virtual-machine-security-settings-for-hyper-v

doc/source/admin/uefi.rst

@@ -0,0 +1,69 @@
====
UEFI
====

.. versionadded:: 17.0.0 (Queens)

Nova supports configuring a `UEFI bootloader`__ for guests. This brings about
important advantages over legacy BIOS bootloaders and allows for features such
as :doc:`secure-boot`.

.. __: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface

Enabling UEFI
-------------

Currently the configuration of UEFI guest bootloaders is only supported when
using the libvirt compute driver with a :oslo.config:option:`libvirt.virt_type`
of ``kvm`` or ``qemu`` or when using the Hyper-V compute driver with certain
machine types. When using the libvirt compute driver with AArch64-based guests,
UEFI is automatically enabled as AArch64 does not support BIOS.

.. todo::

   Update this once compute drivers start reporting a trait indicating UEFI
   bootloader support.
Configuring a flavor or image
-----------------------------

Configuring a UEFI bootloader varies depending on the compute driver in use.

.. rubric:: Libvirt

UEFI support is enabled by default on AArch64-based guests. For other guest
architectures, you can request UEFI support with libvirt by setting the
``hw_firmware_type`` image property to ``uefi``. For example:

.. code-block:: bash

   $ openstack image set --property hw_firmware_type=uefi $IMAGE

.. rubric:: Hyper-V

It is not possible to explicitly request UEFI support with Hyper-V. Rather, it
is enabled implicitly when using `Generation 2`__ guests. You can request a
Generation 2 guest by setting the ``hw_machine_type`` image metadata property
to ``hyperv-gen2``. For example:

.. code-block:: bash

   $ openstack image set --property hw_machine_type=hyperv-gen2 $IMAGE

.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v

References
----------

* `Hyper-V UEFI Secure Boot (spec)`__
* `Open Virtual Machine Firmware (OVMF) Status Report`__
* `Anatomy of a boot, a QEMU perspective`__
* `Should I create a generation 1 or 2 virtual machine in Hyper-V?`__

.. __: https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/hyper-v-uefi-secureboot.html
.. __: http://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt
.. __: https://www.qemu.org/2020/07/03/anatomy-of-a-boot/
.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v

doc/source/user/flavors.rst

@@ -94,15 +94,17 @@ Description
Extra Specs
~~~~~~~~~~~
.. TODO: Consider adding a table of contents here for the various extra specs
   or make them sub-sections.

.. todo::

   A lot of these need investigation - for example, I can find no reference to
   the ``cpu_shares_level`` option outside of documentation and (possibly)
   useless tests. We should assess which drivers each option actually applies
   to.

.. todo::

   This is now documented in :doc:`/configuration/extra-specs`, so this should
   be removed and the documentation moved to those specs.
.. _extra-specs-CPU-limits:

CPU limits
@@ -779,8 +781,8 @@ Hiding hypervisor signature
 .. _extra-specs-secure-boot:
 
 Secure Boot
-  When your Compute services use the Hyper-V hypervisor, you can enable secure
-  boot for Windows and Linux instances.
+  :doc:`Secure Boot </admin/secure-boot>` can help ensure the bootloader used
+  for your instances is trusted, preventing a possible attack vector.
 
   .. code:: console
@@ -793,136 +795,146 @@ Secure Boot
- ``disabled`` or ``optional``: (default) Disable Secure Boot for instances
  running with this flavor.

.. note::

   Supported by the Hyper-V and libvirt drivers.

.. versionchanged:: 23.0.0 (Wallaby)

   Added support for secure boot to the libvirt driver.

.. _extra-specs-required-resources:

Custom resource classes and standard resource classes to override
  Specify custom resource classes to require or override quantity values of
  standard resource classes.

  The syntax of the extra spec is ``resources:<resource_class_name>=VALUE``
  (``VALUE`` is an integer).
  The name of custom resource classes must start with ``CUSTOM_``.
  Standard resource classes to override are ``VCPU``, ``MEMORY_MB`` or
  ``DISK_GB``. In this case, you can disable scheduling based on standard
  resource classes by setting the value to ``0``.

  For example:

  - ``resources:CUSTOM_BAREMETAL_SMALL=1``
  - ``resources:VCPU=0``

  See :ironic-doc:`Create flavors for use with the Bare Metal service
  <install/configure-nova-flavors>` for more examples.

  .. versionadded:: 16.0.0 (Pike)
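For instance, the two example specs above could be applied to a flavor as
follows (a sketch; the flavor name ``baremetal-small`` is illustrative):

.. code-block:: bash

   # Schedule solely on a custom bare metal resource class, disabling
   # scheduling on the standard VCPU resource class.
   $ openstack flavor set \
       --property resources:CUSTOM_BAREMETAL_SMALL=1 \
       --property resources:VCPU=0 \
       baremetal-small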
.. _extra-specs-required-traits:

Required traits
  Required traits allow specifying a server to build on a compute node with
  the set of traits specified in the flavor. The traits are associated with
  the resource provider that represents the compute node in the Placement
  API. See the resource provider traits API reference for more details:
  https://docs.openstack.org/api-ref/placement/#resource-provider-traits

  The syntax of the extra spec is ``trait:<trait_name>=required``, for
  example:

  - ``trait:HW_CPU_X86_AVX2=required``
  - ``trait:STORAGE_DISK_SSD=required``

  The scheduler will pass required traits to the
  ``GET /allocation_candidates`` endpoint in the Placement API to include
  only resource providers that can satisfy the required traits. In 17.0.0
  the only valid value is ``required``. In 18.0.0 ``forbidden`` is added
  (see below). Any other value will be considered invalid.

  The FilterScheduler is currently the only scheduler driver that supports
  this feature.

  Traits can be managed using the `osc-placement plugin`_.

  .. versionadded:: 17.0.0 (Queens)
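The required-traits syntax above can be applied to a flavor like so (a sketch;
the flavor name ``avx2-flavor`` is illustrative):

.. code-block:: bash

   # Only schedule instances of this flavor to compute nodes whose resource
   # provider advertises AVX2 support.
   $ openstack flavor set --property trait:HW_CPU_X86_AVX2=required avx2-flavor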
.. _extra-specs-forbidden-traits:

Forbidden traits
  Forbidden traits are similar to required traits, described above, but
  instead of specifying the set of traits that must be satisfied by a compute
  node, forbidden traits must **not** be present.

  The syntax of the extra spec is ``trait:<trait_name>=forbidden``, for
  example:

  - ``trait:HW_CPU_X86_AVX2=forbidden``
  - ``trait:STORAGE_DISK_SSD=forbidden``

  The FilterScheduler is currently the only scheduler driver that supports
  this feature.

  Traits can be managed using the `osc-placement plugin`_.

  .. _osc-placement plugin: https://docs.openstack.org/osc-placement/latest/index.html

  .. versionadded:: 18.0.0 (Rocky)
.. _extra-specs-numbered-resource-groupings:

Numbered groupings of resource classes and traits
  Specify numbered groupings of resource classes and traits.

  The syntax is as follows (``N`` and ``VALUE`` are integers):

  .. parsed-literal::

     resources\ *N*:*<resource_class_name>*\ =\ *VALUE*
     trait\ *N*:*<trait_name>*\ =required

  A given numbered ``resources`` or ``trait`` key may be repeated to
  specify multiple resources/traits in the same grouping, just as with the
  un-numbered syntax.

  Specify inter-group affinity policy via the ``group_policy`` key, which
  may have the following values:

  * ``isolate``: Different numbered request groups will be satisfied by
    *different* providers.
  * ``none``: Different numbered request groups may be satisfied by
    different providers *or* common providers.

  .. note::

     If more than one group is specified then the ``group_policy`` is
     mandatory in the request. However, such groups might come from sources
     other than the flavor extra spec (e.g. from Neutron ports with a QoS
     minimum bandwidth policy). If the flavor does not specify any groups
     and ``group_policy`` but more than one group is coming from other
     sources then nova will default the ``group_policy`` to ``none`` to
     avoid scheduler failure.

  For example, to create a server with the following VFs:

  * One SR-IOV virtual function (VF) on NET1 with bandwidth 10000 bytes/sec
  * One SR-IOV virtual function (VF) on NET2 with bandwidth 20000 bytes/sec
    on a *different* NIC with SSL acceleration

  It is specified in the extra specs as follows::

    resources1:SRIOV_NET_VF=1
    resources1:NET_EGRESS_BYTES_SEC=10000
    trait1:CUSTOM_PHYSNET_NET1=required
    resources2:SRIOV_NET_VF=1
    resources2:NET_EGRESS_BYTES_SEC=20000
    trait2:CUSTOM_PHYSNET_NET2=required
    trait2:HW_NIC_ACCEL_SSL=required
    group_policy=isolate

  See `Granular Resource Request Syntax`_ for more details.

  .. _Granular Resource Request Syntax: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html

  .. versionadded:: 18.0.0 (Rocky)
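Applied via the CLI, the two-VF grouping above might look like the following
(a sketch; the flavor name ``sriov-two-nets`` is illustrative):

.. code-block:: bash

   # Group 1: a VF on NET1; group 2: a VF on NET2 with SSL acceleration.
   # group_policy=isolate forces the groups onto different providers (NICs).
   $ openstack flavor set \
       --property resources1:SRIOV_NET_VF=1 \
       --property resources1:NET_EGRESS_BYTES_SEC=10000 \
       --property trait1:CUSTOM_PHYSNET_NET1=required \
       --property resources2:SRIOV_NET_VF=1 \
       --property resources2:NET_EGRESS_BYTES_SEC=20000 \
       --property trait2:CUSTOM_PHYSNET_NET2=required \
       --property trait2:HW_NIC_ACCEL_SSL=required \
       --property group_policy=isolate \
       sriov-two-nets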
.. _vtpm-flavor:
