Improve SEV documentation and other minor tweaks

This is a follow-up to the previous SEV commit which enables booting
SEV guests (I659cb77f12a3), making some minor improvements based on
nits highlighted during review:

- Clarify in the hypervisor-kvm.rst documentation that the
  num_memory_encrypted_guests option is optional, by rewording and
  moving it to the list of optional steps.

- Make things a bit more concise and avoid duplication of information
  between the above page and the documentation for the option
  num_memory_encrypted_guests, instead relying on appropriate
  hyperlinking between them.

- Clarify that virtio-blk can be used for boot disks in newer kernels.

- Hyperlink to a page explaining vhost-user.

- Remove an unneeded mock of a LOG object.

- A few other grammar / spelling tweaks.

blueprint: amd-sev-libvirt-support
Change-Id: I75b7ec3a45cac25f6ebf77c6ed013de86c6ac947
Adam Spiers 2019-09-09 22:34:57 +01:00
parent 8e5d6767bb
commit 922d8bf811
4 changed files with 80 additions and 85 deletions

@@ -489,7 +489,12 @@ Requirements for SEV
First the operator will need to ensure the following prerequisites are met:
- SEV-capable AMD hardware as Nova compute hosts
- At least one of the Nova compute hosts must be AMD hardware capable
of supporting SEV. It is entirely possible for the compute plane to
be a mix of hardware which can and cannot support SEV, although as
per the section on `Permanent limitations`_ below, the maximum
number of simultaneously running guests with SEV will be limited by
the quantity and quality of SEV-capable hardware available.
- An appropriately configured software stack on those compute hosts,
so that the various layers are all SEV ready:
@@ -507,31 +512,6 @@ Deploying SEV-capable infrastructure
In order for users to be able to use SEV, the operator will need to
perform the following steps:
- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
option in :file:`nova.conf` to represent the number of guests an SEV
compute node can host concurrently with memory encrypted at the
hardware level. For example:
.. code-block:: ini
[libvirt]
num_memory_encrypted_guests = 15
Initially this is required because on AMD SEV-capable hardware, the
memory controller has a fixed number of slots for holding encryption
keys, one per guest. For example, at the time of writing, earlier
generations of hardware only have 15 slots, thereby limiting the
number of SEV guests which can be run concurrently to 15. Nova
needs to track how many slots are available and used in order to
avoid attempting to exceed that limit in the hardware.
At the time of writing (September 2019), work is in progress to allow
QEMU and libvirt to expose the number of slots available on SEV
hardware; however until this is finished and released, it will not be
possible for Nova to programatically detect the correct value. So this
configuration option serves as a stop-gap, allowing the cloud operator
to provide this value manually.
- Ensure that sufficient memory is reserved on the SEV compute hosts
for host-level services to function correctly at all times. This is
particularly important when hosting SEV-enabled guests, since they
@@ -548,7 +528,7 @@ perform the following steps:
An alternative approach is to configure the
:oslo.config:option:`reserved_host_memory_mb` option in the
``[compute]`` section of :file:`nova.conf`, based on the expected
``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
maximum number of SEV guests simultaneously running on the host, and
the details provided in `an earlier version of the AMD SEV spec`__
regarding memory region sizes, which cover how to calculate it
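For illustration only, a sketch of this alternative approach might look like the following; the figure shown is purely a placeholder and should be replaced with a value calculated from the expected maximum number of SEV guests and the memory region sizes described in the spec:

.. code-block:: ini

   [DEFAULT]
   # Placeholder value: reserve enough memory for host-level services
   # plus the regions pinned by the expected number of SEV guests.
   reserved_host_memory_mb = 16384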
@@ -568,7 +548,57 @@ perform the following steps:
to define SEV-enabled images.
Additionally the cloud operator should consider the following optional
step:
steps:
.. _num_memory_encrypted_guests:
- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
option in :file:`nova.conf` to represent the number of guests an SEV
compute node can host concurrently with memory encrypted at the
hardware level. For example:
.. code-block:: ini
[libvirt]
num_memory_encrypted_guests = 15
This option exists because on AMD SEV-capable hardware, the memory
controller has a fixed number of slots for holding encryption keys,
one per guest. For example, at the time of writing, earlier
generations of hardware only have 15 slots, thereby limiting the
number of SEV guests which can be run concurrently to 15. Nova
needs to track how many slots are available and used in order to
avoid attempting to exceed that limit in the hardware.
At the time of writing (September 2019), work is in progress to
allow QEMU and libvirt to expose the number of slots available on
SEV hardware; however until this is finished and released, it will
not be possible for Nova to programmatically detect the correct
value.
So this configuration option serves as a stop-gap, allowing the
cloud operator the option of providing this value manually. It may
later be demoted to a fallback value for cases where the limit
cannot be detected programmatically, or even removed altogether when
Nova's minimum QEMU version guarantees that it can always be
detected.
.. note::
When deciding whether to use the default of ``None`` or manually
impose a limit, operators should carefully weigh the benefits
vs. the risk. The benefits of using the default are a) immediate
convenience since nothing needs to be done now, and b) convenience
later when upgrading compute hosts to future versions of Nova,
since again nothing will need to be done for the correct limit to
be automatically imposed. However the risk is that until
auto-detection is implemented, users may be able to attempt to
launch guests with encrypted memory on hosts which have already
reached the maximum number of guests simultaneously running with
encrypted memory. This risk may be mitigated by other limitations
which operators can impose, for example if the smallest RAM
footprint of any flavor imposes a maximum number of simultaneously
running guests which is less than or equal to the SEV limit.
- Configure :oslo.config:option:`libvirt.hw_machine_type` on all
SEV-capable compute hosts to include ``x86_64=q35``, so that all
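A minimal sketch of that host-level setting, assuming the ``x86_64=q35`` mapping mentioned above is the only one required:

.. code-block:: ini

   [libvirt]
   # Default x86_64 guests on this host to the q35 machine type.
   hw_machine_type = x86_64=q35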
@@ -637,24 +667,25 @@ features:
- SEV-encrypted VMs cannot contain directly accessible host devices
(PCI passthrough). So for example mdev vGPU support will not
currently work. However technologies based on vhost-user should
currently work. However technologies based on `vhost-user`__ should
work fine.
- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``. Using
__ https://wiki.qemu.org/Features/VirtioVhostUser
- The boot disk of SEV-encrypted VMs can only be ``virtio-blk`` on
newer kernels which contain the necessary fixes. Using
``virtio-scsi`` or SATA for the boot disk works as expected, as does
``virtio-blk`` for non-boot disks.
- Operators will initially be required to manually specify the upper
limit of SEV guests for each compute host, via the new
:oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration
option described above. This is a short-term workaround to the current
lack of mechanism for programmatically discovering the SEV guest limit
via libvirt.
This config option may later be demoted to a fallback value for
cases where the limit cannot be detected programmatically, or even
removed altogether when Nova's minimum QEMU version guarantees that
it can always be detected.
- QEMU and libvirt cannot yet expose the number of slots available for
encrypted guests in the memory controller on SEV hardware. Until
this is implemented, it is not possible for Nova to programmatically
detect the correct value. As a short-term workaround, operators can
optionally manually specify the upper limit of SEV guests for each
compute host, via the new
:oslo.config:option:`libvirt.num_memory_encrypted_guests`
configuration option :ref:`described above
<num_memory_encrypted_guests>`.
Permanent limitations
---------------------
@@ -671,31 +702,6 @@ The following limitations are expected long-term:
- The operating system running in an encrypted virtual machine must
contain SEV support.
- The ``q35`` machine type does not provide an IDE controller,
therefore IDE devices are not supported. In particular this means
that Nova's libvirt driver's current default behaviour on the x86_64
architecture of attaching the config drive as an ``iso9660`` IDE
CD-ROM device will not work. There are two potential workarounds:
#. Change :oslo.config:option:`config_drive_format` in
:file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
This will result in ``virtio`` being used instead. However this
per-host setting could potentially break images with legacy OSes
which expect the config drive to be an IDE CD-ROM. It would also
not deal with other CD-ROM devices.
#. Set the (largely `undocumented
<https://bugs.launchpad.net/glance/+bug/1808868>`_)
``hw_cdrom_bus`` image property to ``virtio``, which is
recommended as a replacement for ``ide``, and ``hw_scsi_model``
to ``virtio-scsi``.
Some potentially cleaner long-term solutions which require code
changes have been suggested in the `Work Items section of the SEV
spec`__.
__ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items
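As a purely illustrative sketch, the first workaround above would amount to a host-level change along these lines in :file:`nova.conf`:

.. code-block:: ini

   [DEFAULT]
   # Avoid the IDE CD-ROM config drive by switching to vfat, which is
   # attached via virtio instead; note the caveats described above.
   config_drive_format = vfat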
Non-limitations
---------------

@@ -850,7 +850,7 @@ Maximum number of guests with encrypted memory which can run
concurrently on this compute host.
For now this is only relevant for AMD machines which support SEV
(Secure Encrypted Virtualisation). Such machines have a limited
(Secure Encrypted Virtualization). Such machines have a limited
number of slots in their memory controller for storing encryption
keys. Each running guest with encrypted memory will consume one of
these slots.
@@ -870,20 +870,10 @@ limit.
.. note::
When deciding whether to use the default of ``None`` or manually
impose a limit, operators should carefully weigh the benefits
vs. the risk. The benefits are a) immediate convenience since
nothing needs to be done now, and b) convenience later when
upgrading compute hosts to future versions of Nova, since again
nothing will need to be done for the correct limit to be
automatically imposed. However the risk is that until
auto-detection is implemented, users may be able to attempt to
launch guests with encrypted memory on hosts which have already
reached the maximum number of guests simultaneously running with
encrypted memory. This risk may be mitigated by other limitations
which operators can impose, for example if the smallest RAM
footprint of any flavor imposes a maximum number of simultaneously
running guests which is less than or equal to the SEV limit.
It is recommended to read :ref:`the deployment documentation's
section on this option <num_memory_encrypted_guests>` before
deciding whether to configure this setting or leave it at the
default.
Related options:

@@ -23741,8 +23741,7 @@ class TestLibvirtSEVSupported(TestLibvirtSEV):
@test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
@test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
@mock.patch.object(libvirt_driver.LOG, 'warning')
def test_get_mem_encrypted_slots_config_zero_supported(self, mock_log):
def test_get_mem_encrypted_slots_config_zero_supported(self):
self.flags(num_memory_encrypted_guests=0, group='libvirt')
self.driver._host._set_amd_sev_support()
self.assertEqual(0, self.driver._get_memory_encrypted_slots())

@@ -7,7 +7,7 @@ features:
Virtualization) is supported, and it has certain minimum version
requirements regarding the kernel, QEMU, and libvirt.
Memory encryption can be required either via flavor which has the
Memory encryption can be required either via a flavor which has the
``hw:mem_encryption`` extra spec set to ``True``, or via an image
which has the ``hw_mem_encryption`` property set to ``True``.
These do not inherently cause a preference for SEV-capable
@@ -22,7 +22,7 @@ features:
In all cases, SEV instances can only be booted from images which
have the ``hw_firmware_type`` property set to ``uefi``, and only
when the machine type is set to ``q35``. This can be set per
when the machine type is set to ``q35``. The latter can be set per
image by setting the image property ``hw_machine_type=q35``, or
per compute node by the operator via the ``hw_machine_type``
configuration option in the ``[libvirt]`` section of