diff --git a/doc/source/admin/configuration/hypervisor-kvm.rst b/doc/source/admin/configuration/hypervisor-kvm.rst
index 7639f38cfdcd..364ba4748444 100644
--- a/doc/source/admin/configuration/hypervisor-kvm.rst
+++ b/doc/source/admin/configuration/hypervisor-kvm.rst
@@ -469,6 +469,262 @@ See `the KVM documentation
 `_ for more
 information on these limitations.
 
+.. _amd-sev:
+
+AMD SEV (Secure Encrypted Virtualization)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Secure Encrypted Virtualization (SEV)`__ is a technology from AMD which
+enables the memory for a VM to be encrypted with a key unique to the VM.
+SEV is particularly applicable to cloud computing since it can reduce the
+amount of trust VMs need to place in the hypervisor and administrator of
+their host system.
+
+__ https://developer.amd.com/sev/
+
+Nova supports SEV from the Train release onwards.
+
+Requirements for SEV
+--------------------
+
+First the operator will need to ensure the following prerequisites are met:
+
+- SEV-capable AMD hardware as Nova compute hosts
+
+- An appropriately configured software stack on those compute hosts,
+  so that the various layers are all SEV-ready:
+
+  - kernel >= 4.16
+  - QEMU >= 2.12
+  - libvirt >= 4.5
+  - ovmf >= commit 75b7aa9528bd 2018-07-06
+
+.. _deploying-sev-capable-infrastructure:
+
+Deploying SEV-capable infrastructure
+------------------------------------
+
+In order for users to be able to use SEV, the operator will need to
+perform the following steps:
+
+- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  option in :file:`nova.conf` to represent the number of guests an SEV
+  compute node can host concurrently with memory encrypted at the
+  hardware level. For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     num_memory_encrypted_guests = 15
+
+  Initially this is required because on AMD SEV-capable hardware, the
+  memory controller has a fixed number of slots for holding encryption
+  keys, one per guest. For example, at the time of writing, earlier
+  generations of hardware only have 15 slots, thereby limiting the
+  number of SEV guests which can be run concurrently to 15. Nova
+  needs to track how many slots are available and used in order to
+  avoid attempting to exceed that limit in the hardware.
+
+  At the time of writing (September 2019), work is in progress to allow
+  QEMU and libvirt to expose the number of slots available on SEV
+  hardware; however until this is finished and released, it will not be
+  possible for Nova to programmatically detect the correct value. So this
+  configuration option serves as a stop-gap, allowing the cloud operator
+  to provide this value manually.
+
+- Ensure that sufficient memory is reserved on the SEV compute hosts
+  for host-level services to function correctly at all times. This is
+  particularly important when hosting SEV-enabled guests, since they
+  pin pages in RAM, preventing any memory overcommit which may be in
+  normal operation on other compute hosts.
+
+  It is `recommended`__ to achieve this by configuring an ``rlimit`` at
+  the ``/machine.slice`` top-level ``cgroup`` on the host, with all VMs
+  placed inside that. (For extreme detail, see `this discussion on the
+  spec`__.)
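+
+  For example, on a systemd-based host, a memory limit could be set on
+  ``machine.slice`` along the following lines (a sketch only: the
+  ``200G`` value is purely illustrative and must be sized to leave
+  enough RAM free for host services, and on hosts using cgroup v2 the
+  ``MemoryMax`` property should be used instead of ``MemoryLimit``):
+
+  .. code-block:: console
+
+     # systemctl set-property machine.slice MemoryLimit=200G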
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-reservation-solutions
+  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167
+
+  An alternative approach is to configure the
+  :oslo.config:option:`reserved_host_memory_mb` option in the
+  ``[compute]`` section of :file:`nova.conf`, based on the expected
+  maximum number of SEV guests simultaneously running on the host, and
+  the details provided in `an earlier version of the AMD SEV spec`__
+  regarding memory region sizes, which cover how to calculate it
+  correctly.
+
+  __ https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#proposed-change
+
+  See `the Memory Locking and Accounting section of the AMD SEV spec`__
+  and `previous discussion for further details`__.
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-locking-and-accounting
+  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167
+
+- A cloud administrator will need to define one or more SEV-enabled
+  flavors :ref:`as described in the user guide
+  <extra-specs-memory-encryption>`, unless it is sufficient for users
+  to define SEV-enabled images.
+
+Additionally the cloud operator should consider the following optional
+step:
+
+- Configure :oslo.config:option:`libvirt.hw_machine_type` on all
+  SEV-capable compute hosts to include ``x86_64=q35``, so that all
+  x86_64 images use the ``q35`` machine type by default. (Currently
+  Nova defaults to the ``pc`` machine type for the ``x86_64``
+  architecture, although `it is expected that this will change in the
+  future`__.)
+
+  Changing the default from ``pc`` to ``q35`` makes the creation and
+  configuration of images by users more convenient by removing the
+  need for the ``hw_machine_type`` property to be set to ``q35`` on
+  every image for which SEV booting is desired.
+
+  .. caution::
+
+     Consider carefully whether to set this option. It is
+     particularly important since a limitation of the implementation
+     prevents the user from receiving an error message with a helpful
+     explanation if they try to boot an SEV guest when neither this
+     configuration option nor the image property is set to select
+     a ``q35`` machine type.
+
+     On the other hand, setting it to ``q35`` may have other
+     undesirable side-effects on other images which were expecting to
+     be booted with ``pc``, so it is suggested to set it on a single
+     compute node or aggregate, and perform careful testing of typical
+     images before rolling out the setting to all SEV-capable compute
+     hosts.
+
+  __ https://bugs.launchpad.net/nova/+bug/1780138
+
+Launching SEV instances
+-----------------------
+
+Once an operator has covered the above steps, users can launch SEV
+instances either by requesting a flavor for which the operator set the
+``hw:mem_encryption`` extra spec to ``True``, or by using an image
+with the ``hw_mem_encryption`` property set to ``True``.
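+
+For example, a flavor and an image could be prepared along the
+following lines (the flavor and image names here are purely
+illustrative):
+
+.. code-block:: console
+
+   $ openstack flavor set m1.sev --property hw:mem_encryption=True
+   $ openstack image set sev-image --property hw_mem_encryption=True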
+
+These do not inherently cause a preference for SEV-capable hardware,
+but for now SEV is the only way of fulfilling the requirement for
+memory encryption. However in the future, support for other
+hardware-level guest memory encryption technologies such as Intel
+MKTME may be added. If a guest specifically needs to be booted using
+SEV rather than any other memory encryption technology, it is
+possible to ensure this by adding
+``trait:HW_CPU_X86_AMD_SEV=required`` to the flavor extra specs or
+image properties.
+
+In all cases, SEV instances can only be booted from images which have
+the ``hw_firmware_type`` property set to ``uefi``, and only when the
+machine type is set to ``q35``. This can be set per image by setting
+the image property ``hw_machine_type=q35``, or per compute node by
+the operator via :oslo.config:option:`libvirt.hw_machine_type` as
+explained above.
+
+Impermanent limitations
+-----------------------
+
+The following limitations may be removed in the future as the
+hardware, firmware, and various layers of software receive new
+features:
+
+- SEV-encrypted VMs cannot yet be live-migrated or suspended,
+  therefore they will need to be fully shut down before migrating off
+  an SEV host, e.g. if maintenance is required on the host.
+
+- SEV-encrypted VMs cannot contain directly accessible host devices
+  (PCI passthrough). So for example mdev vGPU support will not
+  currently work. However technologies based on vhost-user should
+  work fine.
+
+- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``. Using
+  ``virtio-scsi`` or SATA for the boot disk works as expected, as does
+  ``virtio-blk`` for non-boot disks.
+
+- Operators will initially be required to manually specify the upper
+  limit of SEV guests for each compute host, via the new
+  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  configuration option described above. This is a short-term
+  workaround for the current lack of any mechanism for programmatically
+  discovering the SEV guest limit via libvirt.
+
+  This config option may later be demoted to a fallback value for
+  cases where the limit cannot be detected programmatically, or even
+  removed altogether when Nova's minimum QEMU version guarantees that
+  it can always be detected.
+
+Permanent limitations
+---------------------
+
+The following limitations are expected to be long-term:
+
+- The number of SEV guests allowed to run concurrently will always be
+  limited. `On the first generation of EPYC machines it will be
+  limited to 15 guests`__; however this limit becomes much higher with
+  the second generation (Rome).
+
+  __ https://www.redhat.com/archives/libvir-list/2019-January/msg00652.html
+
+- The operating system running in an encrypted virtual machine must
+  contain SEV support.
+
+- The ``q35`` machine type does not provide an IDE controller,
+  therefore IDE devices are not supported. In particular this means
+  that Nova's libvirt driver's current default behaviour on the x86_64
+  architecture of attaching the config drive as an ``iso9660`` IDE
+  CD-ROM device will not work. There are two potential workarounds:
+
+  #. Change :oslo.config:option:`config_drive_format` in
+     :file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
+     This will result in ``virtio`` being used instead. However this
+     per-host setting could potentially break images with legacy OSes
+     which expect the config drive to be an IDE CD-ROM. It would also
+     not deal with other CD-ROM devices.
+
+  #. Set the (largely `undocumented
+     `_)
+     ``hw_cdrom_bus`` image property to ``virtio``, which is
+     recommended as a replacement for ``ide``, and ``hw_scsi_model``
+     to ``virtio-scsi``, as shown in the example below.
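+
+     For example, a sketch of this second workaround (the image name
+     is purely illustrative):
+
+     .. code-block:: console
+
+        $ openstack image set sev-image \
+            --property hw_cdrom_bus=virtio \
+            --property hw_scsi_model=virtio-scsi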
+
+  Some potentially cleaner long-term solutions which require code
+  changes have been suggested in the `Work Items section of the SEV
+  spec`__.
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items
+
+Non-limitations
+---------------
+
+For the sake of eliminating any doubt, the following actions are *not*
+expected to be limited when SEV encryption is used:
+
+- Cold migration or shelve, since they power off the VM before the
+  operation, at which point there is no encrypted memory (although this
+  could change since there is work underway to add support for `PMEM
+  `_)
+
+- Snapshot, since it only snapshots the disk
+
+- ``nova evacuate`` (despite the name, more akin to resurrection than
+  evacuation), since this is only initiated when the VM is no longer
+  running
+
+- Attaching any volumes, as long as they do not require attaching via
+  an IDE bus
+
+- Use of spice / VNC / serial / RDP consoles
+
+- `VM guest virtual NUMA (a.k.a. vNUMA)
+  `_
+
+For further technical details, see `the nova spec for SEV support`__.
+
+__ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html
+
 Guest agent support
 ~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/user/flavors.rst b/doc/source/user/flavors.rst
index dbb50ee65994..9e8f0b78c3ca 100644
--- a/doc/source/user/flavors.rst
+++ b/doc/source/user/flavors.rst
@@ -538,6 +538,18 @@ NUMA topology
   greater than the available number of CPUs or memory respectively,
   an exception is raised.
 
+.. _extra-specs-memory-encryption:
+
+Hardware encryption of guest memory
+  If there are compute hosts which support encryption of guest memory
+  at the hardware level, this functionality can be requested via the
+  ``hw:mem_encryption`` extra spec parameter:
+
+  .. code-block:: console
+
+     $ openstack flavor set FLAVOR-NAME \
+       --property hw:mem_encryption=True
+
 .. _extra-specs-realtime-policy:
 
 CPU real-time policy
diff --git a/doc/source/user/support-matrix.ini b/doc/source/user/support-matrix.ini
index e70256302640..1d918d3aafb2 100644
--- a/doc/source/user/support-matrix.ini
+++ b/doc/source/user/support-matrix.ini
@@ -1664,3 +1664,31 @@ driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
 driver.powervm=missing
 driver.zvm=missing
+
+[operation.boot-encrypted-vm]
+title=Boot instance with secure encrypted memory
+status=optional
+notes=The feature allows VMs to be booted with their memory
+  hardware-encrypted with a key specific to the VM, to help
+  protect the data residing in the VM against access from anyone
+  other than the user of the VM. The Configuration and Security
+  Guides specify usage of this feature.
+cli=openstack server create
+driver.xenserver=missing
+driver.libvirt-kvm-x86=partial
+driver-notes.libvirt-kvm-x86=This feature is currently only
+  available with hosts which support the SEV (Secure Encrypted
+  Virtualization) technology from AMD.
+driver.libvirt-kvm-aarch64=missing
+driver.libvirt-kvm-ppc64=missing
+driver.libvirt-kvm-s390x=missing
+driver.libvirt-qemu-x86=missing
+driver.libvirt-lxc=missing
+driver.libvirt-xen=missing
+driver.vmware=missing
+driver.hyperv=missing
+driver.ironic=missing
+driver.libvirt-vz-vm=missing
+driver.libvirt-vz-ct=missing
+driver.powervm=missing
+driver.zvm=missing
diff --git a/nova/conf/libvirt.py b/nova/conf/libvirt.py
index 3d02a0e88768..2e220a3bdf4b 100644
--- a/nova/conf/libvirt.py
+++ b/nova/conf/libvirt.py
@@ -718,10 +718,11 @@ http://man7.org/linux/man-pages/man7/random.7.html.
                help='For qemu or KVM guests, set this option to specify '
                     'a default machine type per host architecture. '
                     'You can find a list of supported machine types '
-                    'in your environment by checking the output of '
-                    'the "virsh capabilities" command. The format of the '
-                    'value for this config option is host-arch=machine-type. '
-                    'For example: x86_64=machinetype1,armv7l=machinetype2'),
+                    'in your environment by checking the output of the '
+                    ':command:`virsh capabilities` command. The format of '
+                    'the value for this config option is '
+                    '``host-arch=machine-type``. For example: '
+                    '``x86_64=machinetype1,armv7l=machinetype2``.'),
     cfg.StrOpt('sysinfo_serial',
                default='unique',
                choices=(
@@ -840,6 +841,57 @@ Related options:
 
 * ``virt_type`` must be set to ``kvm`` or ``qemu``.
 * ``ram_allocation_ratio`` must be set to 1.0.
+"""),
+    cfg.IntOpt('num_memory_encrypted_guests',
+               default=None,
+               min=0,
+               help="""
+Maximum number of guests with encrypted memory which can run
+concurrently on this compute host.
+
+For now this is only relevant for AMD machines which support SEV
+(Secure Encrypted Virtualization). Such machines have a limited
+number of slots in their memory controller for storing encryption
+keys. Each running guest with encrypted memory will consume one of
+these slots.
+
+The option may be reused for other equivalent technologies in the
+future. If the machine does not support memory encryption, the option
+will be ignored and inventory will be set to 0.
+
+If the machine does support memory encryption, *for now* a value of
+``None`` means an effectively unlimited inventory, i.e. no limit will
+be imposed by Nova on the number of SEV guests which can be launched,
+even though the underlying hardware will enforce its own limit.
+However it is expected that in the future, auto-detection of the
+inventory from the hardware will become possible, at which point
+``None`` will cause auto-detection to automatically impose the correct
+limit.
+
+.. note::
+
+   When deciding whether to use the default of ``None`` or manually
+   impose a limit, operators should carefully weigh the benefits
+   vs. the risk. The benefits are a) immediate convenience since
+   nothing needs to be done now, and b) convenience later when
+   upgrading compute hosts to future versions of Nova, since again
+   nothing will need to be done for the correct limit to be
+   automatically imposed. However the risk is that until
+   auto-detection is implemented, users may be able to attempt to
+   launch guests with encrypted memory on hosts which have already
+   reached the maximum number of guests simultaneously running with
+   encrypted memory. This risk may be mitigated by other limitations
+   which operators can impose, for example if the smallest RAM
+   footprint of any flavor imposes a maximum number of simultaneously
+   running guests which is less than or equal to the SEV limit.
+
+Related options:
+
+* :oslo.config:option:`libvirt.virt_type` must be set to ``kvm``.
+
+* It's recommended to consider including ``x86_64=q35`` in
+  :oslo.config:option:`libvirt.hw_machine_type`; see
+  :ref:`deploying-sev-capable-infrastructure` for more on this.
 """),
 ]
diff --git a/nova/tests/functional/libvirt/test_report_cpu_traits.py b/nova/tests/functional/libvirt/test_report_cpu_traits.py
index 6ffe976d7892..44be4391433a 100644
--- a/nova/tests/functional/libvirt/test_report_cpu_traits.py
+++ b/nova/tests/functional/libvirt/test_report_cpu_traits.py
@@ -14,10 +14,12 @@
 #    under the License.
 
 import mock
+import os_resource_classes as orc
 import os_traits as ost
 
 from nova import conf
+from nova.db import constants as db_const
 from nova import test
 from nova.tests.functional.libvirt import integrated_helpers
 from nova.tests.unit.virt.libvirt import fakelibvirt
@@ -30,6 +32,23 @@ class LibvirtReportTraitsTestBase(
         integrated_helpers.LibvirtProviderUsageBaseTestCase):
     pass
 
+    def assertMemEncryptionSlotsEqual(self, slots):
+        inventory = self._get_provider_inventory(self.host_uuid)
+        if slots == 0:
+            self.assertNotIn(orc.MEM_ENCRYPTION_CONTEXT, inventory)
+        else:
+            self.assertEqual(
+                inventory[orc.MEM_ENCRYPTION_CONTEXT],
+                {
+                    'total': slots,
+                    'min_unit': 1,
+                    'max_unit': 1,
+                    'step_size': 1,
+                    'allocation_ratio': 1.0,
+                    'reserved': 0,
+                }
+            )
+
 
 class LibvirtReportTraitsTests(LibvirtReportTraitsTestBase):
     def test_report_cpu_traits(self):
@@ -77,6 +96,10 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
         Then test that if the SEV capability appears (again via mocking),
         after a restart of the compute service, the trait gets registered
         on the compute host.
+
+        Also test that on both occasions, the inventory of the
+        MEM_ENCRYPTION_CONTEXT resource class on the compute host
+        corresponds to the absence or presence of the SEV capability.
         """
         self.assertFalse(self.compute.driver._host.supports_amd_sev)
 
@@ -88,6 +111,8 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
         traits = self._get_provider_traits(self.host_uuid)
         self.assertNotIn(sev_trait, traits)
 
+        self.assertMemEncryptionSlotsEqual(0)
+
         # Now simulate the host gaining SEV functionality. Here we
         # simulate a kernel update or reconfiguration which causes the
         # kvm-amd kernel module's "sev" parameter to become available
@@ -121,6 +146,8 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
             # Sanity check that we've still got the trait globally.
             self.assertIn(sev_trait, self._get_all_traits())
 
+            self.assertMemEncryptionSlotsEqual(db_const.MAX_INT)
+
 
 class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
     STUB_INIT_HOST = False
@@ -132,6 +159,7 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         new=fakelibvirt.virConnect._domain_capability_features_with_SEV)
     def setUp(self):
         super(LibvirtReportSevTraitsTests, self).setUp()
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
         self.start_compute()
 
     def test_sev_trait_on_off(self):
@@ -143,6 +171,10 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         Then test that if the SEV capability disappears (again via
         mocking), after a restart of the compute service, the trait gets
         removed from the compute host.
+
+        Also test that on both occasions, the inventory of the
+        MEM_ENCRYPTION_CONTEXT resource class on the compute host
+        corresponds to the absence or presence of the SEV capability.
         """
         self.assertTrue(self.compute.driver._host.supports_amd_sev)
 
@@ -154,6 +186,8 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         traits = self._get_provider_traits(self.host_uuid)
         self.assertIn(sev_trait, traits)
 
+        self.assertMemEncryptionSlotsEqual(16)
+
         # Now simulate the host losing SEV functionality. Here we
         # simulate a kernel downgrade or reconfiguration which causes
         # the kvm-amd kernel module's "sev" parameter to become
@@ -177,3 +211,5 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
             # Sanity check that we've still got the trait globally.
             self.assertIn(sev_trait, self._get_all_traits())
+
+            self.assertMemEncryptionSlotsEqual(0)
diff --git a/nova/tests/unit/virt/libvirt/test_driver.py b/nova/tests/unit/virt/libvirt/test_driver.py
index 8b8d2aab87b2..c07c49dc6437 100644
--- a/nova/tests/unit/virt/libvirt/test_driver.py
+++ b/nova/tests/unit/virt/libvirt/test_driver.py
@@ -70,6 +70,7 @@ from nova.compute import vm_states
 import nova.conf
 from nova import context
 from nova.db import api as db
+from nova.db import constants as db_const
 from nova import exception
 from nova.network import model as network_model
 from nova import objects
@@ -23679,3 +23680,66 @@ class TestLibvirtMultiattach(test.NoDBTestCase):
     #     calls = [mock.call(lv_ver=libvirt_driver.MIN_LIBVIRT_MULTIATTACH),
     #              mock.call(hv_ver=(2, 10, 0))]
     #     has_min_version.assert_has_calls(calls)
+
+
+vc = fakelibvirt.virConnect
+
+
+class TestLibvirtSEV(test.NoDBTestCase):
+    """Libvirt driver tests for AMD SEV support."""
+
+    def setUp(self):
+        super(TestLibvirtSEV, self).setUp()
+        self.useFixture(fakelibvirt.FakeLibvirtFixture())
+        self.driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
+
+
+@mock.patch.object(os.path, 'exists', new=mock.Mock(return_value=False))
+class TestLibvirtSEVUnsupported(TestLibvirtSEV):
+    def test_get_mem_encrypted_slots_no_config(self):
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+
+    def test_get_mem_encrypted_slots_config_zero(self):
+        self.flags(num_memory_encrypted_guests=0, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+
+    @mock.patch.object(libvirt_driver.LOG, 'warning')
+    def test_get_mem_encrypted_slots_config_non_zero_unsupported(
+            self, mock_log):
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        # Still zero without mocked SEV support
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+        mock_log.assert_called_with(
+            'Host is configured with libvirt.num_memory_encrypted_guests '
+            'set to %d, but is not SEV-capable.', 16)
+
+
+@mock.patch.object(vc, '_domain_capability_features',
+                   new=vc._domain_capability_features_with_SEV)
+class TestLibvirtSEVSupported(TestLibvirtSEV):
+    """Libvirt driver tests for when AMD SEV support is present."""
+
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    def test_get_mem_encrypted_slots_unlimited(self):
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(db_const.MAX_INT,
+                         self.driver._get_memory_encrypted_slots())
+
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    def test_get_mem_encrypted_slots_config_non_zero_supported(self):
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(16, self.driver._get_memory_encrypted_slots())
+
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    @mock.patch.object(libvirt_driver.LOG, 'warning')
+    def test_get_mem_encrypted_slots_config_zero_supported(self, mock_log):
+        self.flags(num_memory_encrypted_guests=0, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+        # A configured limit of zero on an SEV-capable host is a
+        # deliberate operator choice, so no warning should be logged.
+        mock_log.assert_not_called()
diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
index 95c01b42ffc4..a7e5ec5dcabb 100644
--- a/nova/virt/libvirt/driver.py
+++ b/nova/virt/libvirt/driver.py
@@ -84,6 +84,7 @@ from nova.console import serial as serial_console
 from nova.console import type as ctype
 from nova import context as nova_context
 from nova import crypto
+from nova.db import constants as db_const
 from nova import exception
 from nova.i18n import _
 from nova import image
@@ -6875,6 +6876,17 @@ class LibvirtDriver(driver.ComputeDriver):
             },
         }
 
+        memory_enc_slots = self._get_memory_encrypted_slots()
+        if memory_enc_slots > 0:
+            result[orc.MEM_ENCRYPTION_CONTEXT] = {
+                'total': memory_enc_slots,
+                'min_unit': 1,
+                'max_unit': 1,
+                'step_size': 1,
+                'allocation_ratio': 1.0,
+                'reserved': 0,
+            }
+
         # If a sharing DISK_GB provider exists in the provider tree, then our
         # storage is shared, and we should not report the DISK_GB inventory in
         # the compute node provider.
@@ -6915,6 +6927,35 @@ class LibvirtDriver(driver.ComputeDriver):
         # so that spawn() or other methods can access it thru a getter
         self.provider_tree = copy.deepcopy(provider_tree)
 
+    def _get_memory_encrypted_slots(self):
+        slots = CONF.libvirt.num_memory_encrypted_guests
+        if not self._host.supports_amd_sev:
+            if slots and slots > 0:
+                LOG.warning("Host is configured with "
+                            "libvirt.num_memory_encrypted_guests set to "
+                            "%d, but is not SEV-capable.", slots)
+            return 0
+
+        # NOTE(aspiers): Auto-detection of the number of available
+        # slots for AMD SEV is not yet possible, so honor the
+        # configured value, or impose no limit if this is not
+        # specified. This does incur a risk that if operators don't
+        # read the instructions and configure the maximum correctly,
+        # the maximum could be exceeded resulting in SEV guests
+        # failing at launch-time. However at least SEV guests will
+        # launch until the maximum, and when auto-detection code is
+        # added later, an upgrade will magically fix the issue.
+        #
+        # Note also that the configured value can be 0 on an
+        # SEV-capable host, since there might conceivably be good
+        # reasons for the operator to want to disable SEV even when
+        # it's available (e.g. due to performance impact, or
+        # implementation bugs which may surface later).
+        if slots is not None:
+            return slots
+        else:
+            return db_const.MAX_INT
+
     @staticmethod
     def _is_reshape_needed_vgpu_on_root(provider_tree, nodename):
         """Determine if root RP has VGPU inventories.
diff --git a/releasenotes/notes/bp-amd-sev-libvirt-support-4b7cf8f0756d88b8.yaml b/releasenotes/notes/bp-amd-sev-libvirt-support-4b7cf8f0756d88b8.yaml
new file mode 100644
index 000000000000..54dbef2ae4bd
--- /dev/null
+++ b/releasenotes/notes/bp-amd-sev-libvirt-support-4b7cf8f0756d88b8.yaml
@@ -0,0 +1,33 @@
+---
+features:
+  - |
+    The libvirt driver can now support requests for guest RAM to be
+    encrypted at the hardware level, if there are compute hosts which
+    support it. Currently only AMD SEV (Secure Encrypted
+    Virtualization) is supported, and it has certain minimum version
+    requirements regarding the kernel, QEMU, and libvirt.
+
+    Memory encryption can be required either via a flavor which has the
+    ``hw:mem_encryption`` extra spec set to ``True``, or via an image
+    which has the ``hw_mem_encryption`` property set to ``True``.
+    These do not inherently cause a preference for SEV-capable
+    hardware, but for now SEV is the only way of fulfilling the
+    requirement.
+    However in the future, support for other
+    hardware-level guest memory encryption technologies such as Intel
+    MKTME may be added. If a guest specifically needs to be booted
+    using SEV rather than any other memory encryption technology, it
+    is possible to ensure this by adding
+    ``trait:HW_CPU_X86_AMD_SEV=required`` to the flavor extra specs or
+    image properties.
+
+    In all cases, SEV instances can only be booted from images which
+    have the ``hw_firmware_type`` property set to ``uefi``, and only
+    when the machine type is set to ``q35``. This can be set per
+    image by setting the image property ``hw_machine_type=q35``, or
+    per compute node by the operator via the ``hw_machine_type``
+    configuration option in the ``[libvirt]`` section of
+    :file:`nova.conf`.
+
+    For information on how to set up support for AMD SEV, please see
+    the `KVM section of the Configuration Guide
+    `_.