Merge "Improve SEV documentation and other minor tweaks"

commit 690b3ffd38
Zuul, 2019-09-11 20:43:02 +00:00, committed by Gerrit Code Review
4 changed files with 80 additions and 85 deletions

View File

@@ -489,7 +489,12 @@ Requirements for SEV

 First the operator will need to ensure the following prerequisites are met:

-- SEV-capable AMD hardware as Nova compute hosts
+- At least one of the Nova compute hosts must be AMD hardware capable
+  of supporting SEV. It is entirely possible for the compute plane to
+  be a mix of hardware which can and cannot support SEV, although as
+  per the section on `Permanent limitations`_ below, the maximum
+  number of simultaneously running guests with SEV will be limited by
+  the quantity and quality of SEV-capable hardware available.

 - An appropriately configured software stack on those compute hosts,
   so that the various layers are all SEV ready:
@@ -507,31 +512,6 @@ Deploying SEV-capable infrastructure

 In order for users to be able to use SEV, the operator will need to
 perform the following steps:

-- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
-  option in :file:`nova.conf` to represent the number of guests an SEV
-  compute node can host concurrently with memory encrypted at the
-  hardware level. For example:
-
-  .. code-block:: ini
-
-     [libvirt]
-     num_memory_encrypted_guests = 15
-
-  Initially this is required because on AMD SEV-capable hardware, the
-  memory controller has a fixed number of slots for holding encryption
-  keys, one per guest. For example, at the time of writing, earlier
-  generations of hardware only have 15 slots, thereby limiting the
-  number of SEV guests which can be run concurrently to 15. Nova
-  needs to track how many slots are available and used in order to
-  avoid attempting to exceed that limit in the hardware.
-
-  At the time of writing (September 2019), work is in progress to allow
-  QEMU and libvirt to expose the number of slots available on SEV
-  hardware; however until this is finished and released, it will not be
-  possible for Nova to programatically detect the correct value. So this
-  configuration option serves as a stop-gap, allowing the cloud operator
-  to provide this value manually.
-
 - Ensure that sufficient memory is reserved on the SEV compute hosts
   for host-level services to function correctly at all times. This is
   particularly important when hosting SEV-enabled guests, since they
@@ -548,7 +528,7 @@ perform the following steps:

   An alternative approach is to configure the
   :oslo.config:option:`reserved_host_memory_mb` option in the
-  ``[compute]`` section of :file:`nova.conf`, based on the expected
+  ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
   maximum number of SEV guests simultaneously running on the host, and
   the details provided in `an earlier version of the AMD SEV spec`__
   regarding memory region sizes, which cover how to calculate it
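
For illustration, a minimal sketch of what such a reservation could look like in :file:`nova.conf`; the value shown is a made-up placeholder, not a recommendation, and should be replaced with a figure calculated as described above:

.. code-block:: ini

   [DEFAULT]
   # Placeholder value only: size this to cover host-level services
   # plus the per-SEV-guest memory region overheads referenced above.
   reserved_host_memory_mb = 16384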
@@ -568,7 +548,57 @@ perform the following steps:

   to define SEV-enabled images.

 Additionally the cloud operator should consider the following optional
-step:
+steps:
+
+.. _num_memory_encrypted_guests:
+
+- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  option in :file:`nova.conf` to represent the number of guests an SEV
+  compute node can host concurrently with memory encrypted at the
+  hardware level. For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     num_memory_encrypted_guests = 15
+
+  This option exists because on AMD SEV-capable hardware, the memory
+  controller has a fixed number of slots for holding encryption keys,
+  one per guest. For example, at the time of writing, earlier
+  generations of hardware only have 15 slots, thereby limiting the
+  number of SEV guests which can be run concurrently to 15. Nova
+  needs to track how many slots are available and used in order to
+  avoid attempting to exceed that limit in the hardware.
+
+  At the time of writing (September 2019), work is in progress to
+  allow QEMU and libvirt to expose the number of slots available on
+  SEV hardware; however until this is finished and released, it will
+  not be possible for Nova to programmatically detect the correct
+  value.
+
+  So this configuration option serves as a stop-gap, allowing the
+  cloud operator the option of providing this value manually. It may
+  later be demoted to a fallback value for cases where the limit
+  cannot be detected programmatically, or even removed altogether when
+  Nova's minimum QEMU version guarantees that it can always be
+  detected.
+
+  .. note::
+
+     When deciding whether to use the default of ``None`` or manually
+     impose a limit, operators should carefully weigh the benefits
+     vs. the risk. The benefits of using the default are a) immediate
+     convenience since nothing needs to be done now, and b) convenience
+     later when upgrading compute hosts to future versions of Nova,
+     since again nothing will need to be done for the correct limit to
+     be automatically imposed. However the risk is that until
+     auto-detection is implemented, users may be able to attempt to
+     launch guests with encrypted memory on hosts which have already
+     reached the maximum number of guests simultaneously running with
+     encrypted memory. This risk may be mitigated by other limitations
+     which operators can impose, for example if the smallest RAM
+     footprint of any flavor imposes a maximum number of simultaneously
+     running guests which is less than or equal to the SEV limit.

 - Configure :oslo.config:option:`libvirt.hw_machine_type` on all
   SEV-capable compute hosts to include ``x86_64=q35``, so that all
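
For example, a minimal snippet implementing this step in :file:`nova.conf`, using only the option and value named in the bullet above, might look like:

.. code-block:: ini

   [libvirt]
   # Default all x86_64 guests on this host to the q35 machine type,
   # which SEV-enabled guests require.
   hw_machine_type = x86_64=q35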
@@ -637,24 +667,25 @@ features:

 - SEV-encrypted VMs cannot contain directly accessible host devices
   (PCI passthrough). So for example mdev vGPU support will not
-  currently work. However technologies based on vhost-user should
+  currently work. However technologies based on `vhost-user`__ should
   work fine.

-- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``. Using
+  __ https://wiki.qemu.org/Features/VirtioVhostUser
+
+- The boot disk of SEV-encrypted VMs can only be ``virtio-blk`` on
+  newer kernels which contain the necessary fixes. Using
   ``virtio-scsi`` or SATA for the boot disk works as expected, as does
   ``virtio-blk`` for non-boot disks.

-- Operators will initially be required to manually specify the upper
-  limit of SEV guests for each compute host, via the new
-  :oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration
-  option described above. This is a short-term workaround to the current
-  lack of mechanism for programmatically discovering the SEV guest limit
-  via libvirt.
-
-  This config option may later be demoted to a fallback value for
-  cases where the limit cannot be detected programmatically, or even
-  removed altogether when Nova's minimum QEMU version guarantees that
-  it can always be detected.
+- QEMU and libvirt cannot yet expose the number of slots available for
+  encrypted guests in the memory controller on SEV hardware. Until
+  this is implemented, it is not possible for Nova to programmatically
+  detect the correct value. As a short-term workaround, operators can
+  optionally manually specify the upper limit of SEV guests for each
+  compute host, via the new
+  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  configuration option :ref:`described above
+  <num_memory_encrypted_guests>`.

 Permanent limitations
 ---------------------
@@ -671,31 +702,6 @@ The following limitations are expected long-term:

 - The operating system running in an encrypted virtual machine must
   contain SEV support.

-- The ``q35`` machine type does not provide an IDE controller,
-  therefore IDE devices are not supported. In particular this means
-  that Nova's libvirt driver's current default behaviour on the x86_64
-  architecture of attaching the config drive as an ``iso9660`` IDE
-  CD-ROM device will not work. There are two potential workarounds:
-
-  #. Change :oslo.config:option:`config_drive_format` in
-     :file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
-     This will result in ``virtio`` being used instead. However this
-     per-host setting could potentially break images with legacy OSes
-     which expect the config drive to be an IDE CD-ROM. It would also
-     not deal with other CD-ROM devices.
-
-  #. Set the (largely `undocumented
-     <https://bugs.launchpad.net/glance/+bug/1808868>`_)
-     ``hw_cdrom_bus`` image property to ``virtio``, which is
-     recommended as a replacement for ``ide``, and ``hw_scsi_model``
-     to ``virtio-scsi``.
-
-  Some potentially cleaner long-term solutions which require code
-  changes have been suggested in the `Work Items section of the SEV
-  spec`__.
-
-  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items

 Non-limitations
 ---------------
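
For reference, the first workaround in the text removed above amounts to a one-line :file:`nova.conf` change; this is a sketch using only the option and value named in that text:

.. code-block:: ini

   [DEFAULT]
   # Switch the config drive format from the default iso9660 (attached
   # as an IDE CD-ROM, which q35 cannot provide) to vfat, which is
   # attached via virtio instead.
   config_drive_format = vfat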

View File

@@ -850,7 +850,7 @@ Maximum number of guests with encrypted memory which can run

 concurrently on this compute host.

 For now this is only relevant for AMD machines which support SEV
-(Secure Encrypted Virtualisation). Such machines have a limited
+(Secure Encrypted Virtualization). Such machines have a limited
 number of slots in their memory controller for storing encryption
 keys. Each running guest with encrypted memory will consume one of
 these slots.
@@ -870,20 +870,10 @@ limit.

 .. note::

-   When deciding whether to use the default of ``None`` or manually
-   impose a limit, operators should carefully weigh the benefits
-   vs. the risk. The benefits are a) immediate convenience since
-   nothing needs to be done now, and b) convenience later when
-   upgrading compute hosts to future versions of Nova, since again
-   nothing will need to be done for the correct limit to be
-   automatically imposed. However the risk is that until
-   auto-detection is implemented, users may be able to attempt to
-   launch guests with encrypted memory on hosts which have already
-   reached the maximum number of guests simultaneously running with
-   encrypted memory. This risk may be mitigated by other limitations
-   which operators can impose, for example if the smallest RAM
-   footprint of any flavor imposes a maximum number of simultaneously
-   running guests which is less than or equal to the SEV limit.
+   It is recommended to read :ref:`the deployment documentation's
+   section on this option <num_memory_encrypted_guests>` before
+   deciding whether to configure this setting or leave it at the
+   default.

 Related options:

View File

@@ -23741,8 +23741,7 @@ class TestLibvirtSEVSupported(TestLibvirtSEV):

     @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
     @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
-    @mock.patch.object(libvirt_driver.LOG, 'warning')
-    def test_get_mem_encrypted_slots_config_zero_supported(self, mock_log):
+    def test_get_mem_encrypted_slots_config_zero_supported(self):
         self.flags(num_memory_encrypted_guests=0, group='libvirt')
         self.driver._host._set_amd_sev_support()
         self.assertEqual(0, self.driver._get_memory_encrypted_slots())

View File

@@ -7,7 +7,7 @@ features:

     Virtualization) is supported, and it has certain minimum version
     requirements regarding the kernel, QEMU, and libvirt.

-    Memory encryption can be required either via flavor which has the
+    Memory encryption can be required either via a flavor which has the
     ``hw:mem_encryption`` extra spec set to ``True``, or via an image
     which has the ``hw_mem_encryption`` property set to ``True``.
     These do not inherently cause a preference for SEV-capable
@@ -22,7 +22,7 @@ features:

     In all cases, SEV instances can only be booted from images which
     have the ``hw_firmware_type`` property set to ``uefi``, and only
-    when the machine type is set to ``q35``. This can be set per
+    when the machine type is set to ``q35``. The latter can be set per
     image by setting the image property ``hw_machine_type=q35``, or
     per compute node by the operator via the ``hw_machine_type``
     configuration option in the ``[libvirt]`` section of