@@ -489,7 +489,12 @@ Requirements for SEV
 
 First the operator will need to ensure the following prerequisites are met:
 
-- SEV-capable AMD hardware as Nova compute hosts
+- At least one of the Nova compute hosts must be AMD hardware capable
+  of supporting SEV. It is entirely possible for the compute plane to
+  be a mix of hardware which can and cannot support SEV, although as
+  per the section on `Permanent limitations`_ below, the maximum
+  number of simultaneously running guests with SEV will be limited by
+  the quantity and quality of SEV-capable hardware available.
 
 - An appropriately configured software stack on those compute hosts,
   so that the various layers are all SEV ready:
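As a quick sanity check on the hardware prerequisite above, an operator can inspect the ``sev`` parameter of the ``kvm_amd`` kernel module. The following is a minimal sketch, assuming the standard sysfs path; the file contains ``1`` on older kernels and ``Y`` on newer ones when SEV is enabled:

```shell
# Sketch: report whether a kvm_amd "sev" parameter file indicates that
# SEV is enabled. The file contains "1" on older kernels and "Y" on
# newer ones when SEV support is active.
sev_enabled() {
    grep -qxE '1|Y' "$1" 2>/dev/null
}

# On a real compute host this would be invoked as:
#   sev_enabled /sys/module/kvm_amd/parameters/sev && echo "SEV ready"
# Libvirt-level support can additionally be confirmed by looking for
# <sev supported='yes'> in the output of `virsh domcapabilities`.
```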
@@ -507,31 +512,6 @@ Deploying SEV-capable infrastructure
 In order for users to be able to use SEV, the operator will need to
 perform the following steps:
 
-- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
-  option in :file:`nova.conf` to represent the number of guests an SEV
-  compute node can host concurrently with memory encrypted at the
-  hardware level. For example:
-
-  .. code-block:: ini
-
-     [libvirt]
-     num_memory_encrypted_guests = 15
-
-  Initially this is required because on AMD SEV-capable hardware, the
-  memory controller has a fixed number of slots for holding encryption
-  keys, one per guest. For example, at the time of writing, earlier
-  generations of hardware only have 15 slots, thereby limiting the
-  number of SEV guests which can be run concurrently to 15. Nova
-  needs to track how many slots are available and used in order to
-  avoid attempting to exceed that limit in the hardware.
-
-  At the time of writing (September 2019), work is in progress to allow
-  QEMU and libvirt to expose the number of slots available on SEV
-  hardware; however until this is finished and released, it will not be
-  possible for Nova to programatically detect the correct value. So this
-  configuration option serves as a stop-gap, allowing the cloud operator
-  to provide this value manually.
-
 - Ensure that sufficient memory is reserved on the SEV compute hosts
   for host-level services to function correctly at all times. This is
   particularly important when hosting SEV-enabled guests, since they
@@ -548,7 +528,7 @@ perform the following steps:
 
   An alternative approach is to configure the
   :oslo.config:option:`reserved_host_memory_mb` option in the
-  ``[compute]`` section of :file:`nova.conf`, based on the expected
+  ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
   maximum number of SEV guests simultaneously running on the host, and
   the details provided in `an earlier version of the AMD SEV spec`__
   regarding memory region sizes, which cover how to calculate it
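To make the alternative concrete (the figure below is purely illustrative and not derived from the spec's sizing guidance), such a reservation is a single :file:`nova.conf` setting:

.. code-block:: ini

   [DEFAULT]
   # Illustrative value only; size this from the expected maximum number
   # of SEV guests and the memory region calculations in the spec.
   reserved_host_memory_mb = 8192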
@@ -568,7 +548,57 @@ perform the following steps:
   to define SEV-enabled images.
 
 Additionally the cloud operator should consider the following optional
-step:
+steps:
+
+.. _num_memory_encrypted_guests:
+
+- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  option in :file:`nova.conf` to represent the number of guests an SEV
+  compute node can host concurrently with memory encrypted at the
+  hardware level. For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     num_memory_encrypted_guests = 15
+
+  This option exists because on AMD SEV-capable hardware, the memory
+  controller has a fixed number of slots for holding encryption keys,
+  one per guest. For example, at the time of writing, earlier
+  generations of hardware only have 15 slots, thereby limiting the
+  number of SEV guests which can be run concurrently to 15. Nova
+  needs to track how many slots are available and used in order to
+  avoid attempting to exceed that limit in the hardware.
+
+  At the time of writing (September 2019), work is in progress to
+  allow QEMU and libvirt to expose the number of slots available on
+  SEV hardware; however until this is finished and released, it will
+  not be possible for Nova to programmatically detect the correct
+  value.
+
+  So this configuration option serves as a stop-gap, allowing the
+  cloud operator the option of providing this value manually. It may
+  later be demoted to a fallback value for cases where the limit
+  cannot be detected programmatically, or even removed altogether when
+  Nova's minimum QEMU version guarantees that it can always be
+  detected.
+
+  .. note::
+
+     When deciding whether to use the default of ``None`` or manually
+     impose a limit, operators should carefully weigh the benefits
+     vs. the risk. The benefits of using the default are a) immediate
+     convenience since nothing needs to be done now, and b) convenience
+     later when upgrading compute hosts to future versions of Nova,
+     since again nothing will need to be done for the correct limit to
+     be automatically imposed. However the risk is that until
+     auto-detection is implemented, users may be able to attempt to
+     launch guests with encrypted memory on hosts which have already
+     reached the maximum number of guests simultaneously running with
+     encrypted memory. This risk may be mitigated by other limitations
+     which operators can impose, for example if the smallest RAM
+     footprint of any flavor imposes a maximum number of simultaneously
+     running guests which is less than or equal to the SEV limit.
 
 - Configure :oslo.config:option:`libvirt.hw_machine_type` on all
   SEV-capable compute hosts to include ``x86_64=q35``, so that all
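The machine type configuration mentioned above is a one-line change in :file:`nova.conf` on each SEV-capable compute host, for example:

.. code-block:: ini

   [libvirt]
   hw_machine_type = x86_64=q35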
@@ -637,24 +667,25 @@ features:
 
 - SEV-encrypted VMs cannot contain directly accessible host devices
   (PCI passthrough). So for example mdev vGPU support will not
-  currently work. However technologies based on vhost-user should
+  currently work. However technologies based on `vhost-user`__ should
   work fine.
 
-- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``. Using
+  __ https://wiki.qemu.org/Features/VirtioVhostUser
+
+- The boot disk of SEV-encrypted VMs can only be ``virtio-blk`` on
+  newer kernels which contain the necessary fixes. Using
   ``virtio-scsi`` or SATA for the boot disk works as expected, as does
   ``virtio-blk`` for non-boot disks.
 
-- Operators will initially be required to manually specify the upper
-  limit of SEV guests for each compute host, via the new
-  :oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration
-  option described above. This is a short-term workaround to the current
-  lack of mechanism for programmatically discovering the SEV guest limit
-  via libvirt.
-
-  This config option may later be demoted to a fallback value for
-  cases where the limit cannot be detected programmatically, or even
-  removed altogether when Nova's minimum QEMU version guarantees that
-  it can always be detected.
+- QEMU and libvirt cannot yet expose the number of slots available for
+  encrypted guests in the memory controller on SEV hardware. Until
+  this is implemented, it is not possible for Nova to programmatically
+  detect the correct value. As a short-term workaround, operators can
+  optionally manually specify the upper limit of SEV guests for each
+  compute host, via the new
+  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  configuration option :ref:`described above
+  <num_memory_encrypted_guests>`.
 
 Permanent limitations
 ---------------------
@@ -671,31 +702,6 @@ The following limitations are expected long-term:
 - The operating system running in an encrypted virtual machine must
   contain SEV support.
 
-- The ``q35`` machine type does not provide an IDE controller,
-  therefore IDE devices are not supported. In particular this means
-  that Nova's libvirt driver's current default behaviour on the x86_64
-  architecture of attaching the config drive as an ``iso9660`` IDE
-  CD-ROM device will not work. There are two potential workarounds:
-
-  #. Change :oslo.config:option:`config_drive_format` in
-     :file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
-     This will result in ``virtio`` being used instead. However this
-     per-host setting could potentially break images with legacy OSes
-     which expect the config drive to be an IDE CD-ROM. It would also
-     not deal with other CD-ROM devices.
-
-  #. Set the (largely `undocumented
-     <https://bugs.launchpad.net/glance/+bug/1808868>`_)
-     ``hw_cdrom_bus`` image property to ``virtio``, which is
-     recommended as a replacement for ``ide``, and ``hw_scsi_model``
-     to ``virtio-scsi``.
-
-  Some potentially cleaner long-term solutions which require code
-  changes have been suggested in the `Work Items section of the SEV
-  spec`__.
-
-__ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items
-
 Non-limitations
 ---------------