Merge "Enable booting of libvirt guests with AMD SEV memory encryption"

tags/20.0.0.0rc1
Zuul · 1 month ago
parent commit 0575eabffb

+256 −0   doc/source/admin/configuration/hypervisor-kvm.rst

@@ -469,6 +469,262 @@ See `the KVM documentation
 <https://www.linux-kvm.org/page/Nested_Guests#Limitations>`_ for more
 information on these limitations.
 
+.. _amd-sev:
+
+AMD SEV (Secure Encrypted Virtualization)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Secure Encrypted Virtualization (SEV)`__ is a technology from AMD which
+enables the memory for a VM to be encrypted with a key unique to the VM.
+SEV is particularly applicable to cloud computing since it can reduce the
+amount of trust VMs need to place in the hypervisor and administrator of
+their host system.
+
+__ https://developer.amd.com/sev/
+
+Nova supports SEV from the Train release onwards.
+
+Requirements for SEV
+--------------------
+
+First the operator will need to ensure the following prerequisites are met:
+
+- SEV-capable AMD hardware as Nova compute hosts
+
+- An appropriately configured software stack on those compute hosts,
+  so that the various layers are all SEV ready:
+
+  - kernel >= 4.16
+  - QEMU >= 2.12
+  - libvirt >= 4.5
+  - ovmf >= commit 75b7aa9528bd 2018-07-06
+
+.. _deploying-sev-capable-infrastructure:
+
+Deploying SEV-capable infrastructure
+------------------------------------
+
+In order for users to be able to use SEV, the operator will need to
+perform the following steps:
+
+- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  option in :file:`nova.conf` to represent the number of guests an SEV
+  compute node can host concurrently with memory encrypted at the
+  hardware level.  For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     num_memory_encrypted_guests = 15
+
+  Initially this is required because on AMD SEV-capable hardware, the
+  memory controller has a fixed number of slots for holding encryption
+  keys, one per guest.  For example, at the time of writing, earlier
+  generations of hardware only have 15 slots, thereby limiting the
+  number of SEV guests which can be run concurrently to 15.  Nova
+  needs to track how many slots are available and used in order to
+  avoid attempting to exceed that limit in the hardware.
+
+  At the time of writing (September 2019), work is in progress to allow
+  QEMU and libvirt to expose the number of slots available on SEV
+  hardware; however until this is finished and released, it will not be
+  possible for Nova to programmatically detect the correct value.  So this
+  configuration option serves as a stop-gap, allowing the cloud operator
+  to provide this value manually.
+
+- Ensure that sufficient memory is reserved on the SEV compute hosts
+  for host-level services to function correctly at all times.  This is
+  particularly important when hosting SEV-enabled guests, since they
+  pin pages in RAM, preventing any memory overcommit which may be in
+  effect on other compute hosts.
+
+  It is `recommended`__ to achieve this by configuring an ``rlimit`` at
+  the ``/machine.slice`` top-level ``cgroup`` on the host, with all VMs
+  placed inside that.  (For extreme detail, see `this discussion on the
+  spec`__.)
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-reservation-solutions
+  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167
+
+  An alternative approach is to configure the
+  :oslo.config:option:`reserved_host_memory_mb` option in the
+  ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
+  maximum number of SEV guests simultaneously running on the host, and
+  the details provided in `an earlier version of the AMD SEV spec`__
+  regarding memory region sizes, which explain how to calculate the
+  value correctly.
+
+  __ https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#proposed-change
+
+  See `the Memory Locking and Accounting section of the AMD SEV spec`__
+  and `previous discussion for further details`__.
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#memory-locking-and-accounting
+  __ https://review.opendev.org/#/c/641994/2/specs/train/approved/amd-sev-libvirt-support.rst@167
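+
+  For example, to reserve a purely illustrative 16 GiB for the host
+  (the correct value must be calculated as described in the spec):
+
+  .. code-block:: ini
+
+     [DEFAULT]
+     reserved_host_memory_mb = 16384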
+
+- A cloud administrator will need to define one or more SEV-enabled
+  flavors :ref:`as described in the user guide
+  <extra-specs-memory-encryption>`, unless it is sufficient for users
+  to define SEV-enabled images.
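+
+  For example (a sketch; the flavor name and sizings are illustrative):
+
+  .. code-block:: console
+
+     $ openstack flavor create --vcpus 2 --ram 4096 --disk 20 m1.sev
+     $ openstack flavor set m1.sev --property hw:mem_encryption=True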
+
+Additionally the cloud operator should consider the following optional
+step:
+
+- Configure :oslo.config:option:`libvirt.hw_machine_type` on all
+  SEV-capable compute hosts to include ``x86_64=q35``, so that all
+  x86_64 images use the ``q35`` machine type by default.  (Currently
+  Nova defaults to the ``pc`` machine type for the ``x86_64``
+  architecture, although `it is expected that this will change in the
+  future`__.)
+
+  Changing the default from ``pc`` to ``q35`` makes the creation and
+  configuration of images by users more convenient by removing the
+  need for the ``hw_machine_type`` property to be set to ``q35`` on
+  every image for which SEV booting is desired.
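+
+  For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     hw_machine_type = x86_64=q35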
+
+  .. caution::
+
+     Consider carefully whether to set this option.  It is
+     particularly important since a limitation of the implementation
+     prevents the user from receiving an error message with a helpful
+     explanation if they try to boot an SEV guest when neither this
+     configuration option nor the image property is set to select
+     a ``q35`` machine type.
+
+     On the other hand, setting it to ``q35`` may have other
+     undesirable side-effects on other images which were expecting to
+     be booted with ``pc``, so it is suggested to set it on a single
+     compute node or aggregate, and perform careful testing of typical
+     images before rolling out the setting to all SEV-capable compute
+     hosts.
+
+  __ https://bugs.launchpad.net/nova/+bug/1780138
+
+Launching SEV instances
+-----------------------
+
+Once an operator has covered the above steps, users can launch SEV
+instances either by requesting a flavor for which the operator set the
+``hw:mem_encryption`` extra spec to ``True``, or by using an image
+with the ``hw_mem_encryption`` property set to ``True``.
+
+These do not inherently cause a preference for SEV-capable hardware,
+but for now SEV is the only way of fulfilling the requirement for
+memory encryption.  However in the future, support for other
+hardware-level guest memory encryption technology such as Intel MKTME
+may be added.  If a guest specifically needs to be booted using SEV
+rather than any other memory encryption technology, it is possible to
+ensure this by adding ``trait:HW_CPU_X86_AMD_SEV=required`` to the
+flavor extra specs or image properties.
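+
+For example, requiring SEV specifically via a flavor (a sketch; the
+flavor name is illustrative):
+
+.. code-block:: console
+
+   $ openstack flavor set m1.sev \
+       --property trait:HW_CPU_X86_AMD_SEV=required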
619
+
620
+In all cases, SEV instances can only be booted from images which have
621
+the ``hw_firmware_type`` property set to ``uefi``, and only when the
622
+machine type is set to ``q35``.  This can be set per image by setting
623
+the image property ``hw_machine_type=q35``, or per compute node by
624
+the operator via :oslo.config:option:`libvirt.hw_machine_type` as
625
+explained above.
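+
+For example, a sketch of preparing an image for SEV boot (the image
+name is illustrative):
+
+.. code-block:: console
+
+   $ openstack image set sev-capable-image \
+       --property hw_firmware_type=uefi \
+       --property hw_machine_type=q35 \
+       --property hw_mem_encryption=True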
+
+Impermanent limitations
+-----------------------
+
+The following limitations may be removed in the future as the
+hardware, firmware, and various layers of software receive new
+features:
+
+- SEV-encrypted VMs cannot yet be live-migrated or suspended,
+  therefore they will need to be fully shut down before migrating off
+  an SEV host, e.g. if maintenance is required on the host.
+
+- SEV-encrypted VMs cannot contain directly accessible host devices
+  (PCI passthrough).  So for example mdev vGPU support will not
+  currently work.  However technologies based on vhost-user should
+  work fine.
+
+- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``.  Using
+  ``virtio-scsi`` or SATA for the boot disk works as expected, as does
+  ``virtio-blk`` for non-boot disks.
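+
+  For example, one way to request a ``virtio-scsi`` boot disk is via
+  image properties (a sketch; the image name is illustrative):
+
+  .. code-block:: console
+
+     $ openstack image set sev-capable-image \
+         --property hw_disk_bus=scsi \
+         --property hw_scsi_model=virtio-scsi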
+
+- Operators will initially be required to manually specify the upper
+  limit of SEV guests for each compute host, via the new
+  :oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration
+  option described above.  This is a short-term workaround for the current
+  lack of a mechanism for programmatically discovering the SEV guest limit
+  via libvirt.
+
+  This config option may later be demoted to a fallback value for
+  cases where the limit cannot be detected programmatically, or even
+  removed altogether when Nova's minimum QEMU version guarantees that
+  it can always be detected.
+
+Permanent limitations
+---------------------
+
+The following limitations are expected long-term:
+
+- The number of SEV guests allowed to run concurrently will always be
+  limited.  `On the first generation of EPYC machines it will be
+  limited to 15 guests`__; however this limit becomes much higher with
+  the second generation (Rome).
+
+  __ https://www.redhat.com/archives/libvir-list/2019-January/msg00652.html
+
+- The operating system running in an encrypted virtual machine must
+  contain SEV support.
+
+- The ``q35`` machine type does not provide an IDE controller,
+  therefore IDE devices are not supported.  In particular this means
+  that Nova's libvirt driver's current default behaviour on the x86_64
+  architecture of attaching the config drive as an ``iso9660`` IDE
+  CD-ROM device will not work.  There are two potential workarounds:
+
+  #. Change :oslo.config:option:`config_drive_format` in
+     :file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
+     This will result in ``virtio`` being used instead.  However this
+     per-host setting could potentially break images with legacy OSes
+     which expect the config drive to be an IDE CD-ROM.  It would also
+     not deal with other CD-ROM devices.
+
+  #. Set the (largely `undocumented
+     <https://bugs.launchpad.net/glance/+bug/1808868>`_)
+     ``hw_cdrom_bus`` image property to ``virtio``, which is
+     recommended as a replacement for ``ide``, and ``hw_scsi_model``
+     to ``virtio-scsi``.
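+
+     For example (a sketch; the image name is illustrative):
+
+     .. code-block:: console
+
+        $ openstack image set sev-capable-image \
+            --property hw_cdrom_bus=virtio \
+            --property hw_scsi_model=virtio-scsi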
+
+  Some potentially cleaner long-term solutions which require code
+  changes have been suggested in the `Work Items section of the SEV
+  spec`__.
+
+  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items
+
+Non-limitations
+---------------
+
+For the sake of eliminating any doubt, the following actions are *not*
+expected to be limited when SEV encryption is used:
+
+- Cold migration or shelve, since they power off the VM before the
+  operation, at which point there is no encrypted memory (although this
+  could change since there is work underway to add support for `PMEM
+  <https://pmem.io/>`_)
+
+- Snapshot, since it only snapshots the disk
+
+- ``nova evacuate`` (despite the name, more akin to resurrection than
+  evacuation), since this is only initiated when the VM is no longer
+  running
+
+- Attaching any volumes, as long as they do not require attaching via
+  an IDE bus
+
+- Use of spice / VNC / serial / RDP consoles
+
+- `VM guest virtual NUMA (a.k.a. vNUMA)
+  <https://www.suse.com/documentation/sles-12/singlehtml/article_vt_best_practices/article_vt_best_practices.html#sec.vt.best.perf.numa.vmguest>`_
+
+For further technical details, see `the nova spec for SEV support`__.
+
+__ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html
+
 
 Guest agent support
 ~~~~~~~~~~~~~~~~~~~

+12 −0   doc/source/user/flavors.rst

@@ -538,6 +538,18 @@ NUMA topology
     greater than the available number of CPUs or memory respectively, an
     exception is raised.
 
+.. _extra-specs-memory-encryption:
+
+Hardware encryption of guest memory
+  If there are compute hosts which support encryption of guest memory
+  at the hardware level, this functionality can be requested via the
+  ``hw:mem_encryption`` extra spec parameter:
+
+  .. code-block:: console
+
+     $ openstack flavor set FLAVOR-NAME \
+         --property hw:mem_encryption=True
+
 .. _extra-specs-realtime-policy:
 
 CPU real-time policy

+28 −0   doc/source/user/support-matrix.ini

@@ -1664,3 +1664,31 @@ driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
 driver.powervm=missing
 driver.zvm=missing
+
+[operation.boot-encrypted-vm]
+title=Boot instance with secure encrypted memory
+status=optional
+notes=The feature allows VMs to be booted with their memory
+  hardware-encrypted with a key specific to the VM, to help
+  protect the data residing in the VM against access from anyone
+  other than the user of the VM. The Configuration and Security
+  Guides specify usage of this feature.
+cli=openstack server create <usual server create parameters>
+driver.xenserver=missing
+driver.libvirt-kvm-x86=partial
+driver-notes.libvirt-kvm-x86=This feature is currently only
+  available with hosts which support the SEV (Secure Encrypted
+  Virtualization) technology from AMD.
+driver.libvirt-kvm-aarch64=missing
+driver.libvirt-kvm-ppc64=missing
+driver.libvirt-kvm-s390x=missing
+driver.libvirt-qemu-x86=missing
+driver.libvirt-lxc=missing
+driver.libvirt-xen=missing
+driver.vmware=missing
+driver.hyperv=missing
+driver.ironic=missing
+driver.libvirt-vz-vm=missing
+driver.libvirt-vz-ct=missing
+driver.powervm=missing
+driver.zvm=missing

+56 −4   nova/conf/libvirt.py

@@ -718,10 +718,11 @@ http://man7.org/linux/man-pages/man7/random.7.html.
                 help='For qemu or KVM guests, set this option to specify '
                      'a default machine type per host architecture. '
                      'You can find a list of supported machine types '
-                     'in your environment by checking the output of '
-                     'the "virsh capabilities" command. The format of the '
-                     'value for this config option is host-arch=machine-type. '
-                     'For example: x86_64=machinetype1,armv7l=machinetype2'),
+                     'in your environment by checking the output of the '
+                     ':command:`virsh capabilities` command. The format of '
+                     'the value for this config option is '
+                     '``host-arch=machine-type``. For example: '
+                     '``x86_64=machinetype1,armv7l=machinetype2``.'),
     cfg.StrOpt('sysinfo_serial',
                default='unique',
                choices=(
@@ -840,6 +841,57 @@ Related options:
 
 * ``virt_type`` must be set to ``kvm`` or ``qemu``.
 * ``ram_allocation_ratio`` must be set to 1.0.
+"""),
+    cfg.IntOpt('num_memory_encrypted_guests',
+               default=None,
+               min=0,
+               help="""
+Maximum number of guests with encrypted memory which can run
+concurrently on this compute host.
+
+For now this is only relevant for AMD machines which support SEV
+(Secure Encrypted Virtualisation).  Such machines have a limited
+number of slots in their memory controller for storing encryption
+keys.  Each running guest with encrypted memory will consume one of
+these slots.
+
+The option may be reused for other equivalent technologies in the
+future.  If the machine does not support memory encryption, the option
+will be ignored and inventory will be set to 0.
+
+If the machine does support memory encryption, *for now* a value of
+``None`` means an effectively unlimited inventory, i.e. no limit will
+be imposed by Nova on the number of SEV guests which can be launched,
+even though the underlying hardware will enforce its own limit.
+However it is expected that in the future, auto-detection of the
+inventory from the hardware will become possible, at which point
+``None`` will cause auto-detection to automatically impose the correct
+limit.
+
+.. note::
+
+   When deciding whether to use the default of ``None`` or manually
+   impose a limit, operators should carefully weigh the benefits
+   vs. the risk.  The benefits are a) immediate convenience since
+   nothing needs to be done now, and b) convenience later when
+   upgrading compute hosts to future versions of Nova, since again
+   nothing will need to be done for the correct limit to be
+   automatically imposed.  However the risk is that until
+   auto-detection is implemented, users may be able to attempt to
+   launch guests with encrypted memory on hosts which have already
+   reached the maximum number of guests simultaneously running with
+   encrypted memory.  This risk may be mitigated by other limitations
+   which operators can impose, for example if the smallest RAM
+   footprint of any flavor imposes a maximum number of simultaneously
+   running guests which is less than or equal to the SEV limit.
+
+Related options:
+
+* :oslo.config:option:`libvirt.virt_type` must be set to ``kvm``.
+
+* It's recommended to consider including ``x86_64=q35`` in
+  :oslo.config:option:`libvirt.hw_machine_type`; see
+  :ref:`deploying-sev-capable-infrastructure` for more on this.
 """),
 ]
 

+36 −0   nova/tests/functional/libvirt/test_report_cpu_traits.py

@@ -14,10 +14,12 @@
 #    under the License.
 
 import mock
+import os_resource_classes as orc
 import os_traits as ost
 
 
 from nova import conf
+from nova.db import constants as db_const
 from nova import test
 from nova.tests.functional.libvirt import integrated_helpers
 from nova.tests.unit.virt.libvirt import fakelibvirt
@@ -30,6 +32,23 @@ class LibvirtReportTraitsTestBase(
         integrated_helpers.LibvirtProviderUsageBaseTestCase):
     pass
 
+    def assertMemEncryptionSlotsEqual(self, slots):
+        inventory = self._get_provider_inventory(self.host_uuid)
+        if slots == 0:
+            self.assertNotIn(orc.MEM_ENCRYPTION_CONTEXT, inventory)
+        else:
+            self.assertEqual(
+                inventory[orc.MEM_ENCRYPTION_CONTEXT],
+                {
+                    'total': slots,
+                    'min_unit': 1,
+                    'max_unit': 1,
+                    'step_size': 1,
+                    'allocation_ratio': 1.0,
+                    'reserved': 0,
+                }
+            )
+
 
 class LibvirtReportTraitsTests(LibvirtReportTraitsTestBase):
     def test_report_cpu_traits(self):
@@ -77,6 +96,10 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
         Then test that if the SEV capability appears (again via
         mocking), after a restart of the compute service, the trait
         gets registered on the compute host.
+
+        Also test that on both occasions, the inventory of the
+        MEM_ENCRYPTION_CONTEXT resource class on the compute host
+        corresponds to the absence or presence of the SEV capability.
         """
         self.assertFalse(self.compute.driver._host.supports_amd_sev)
 
@@ -88,6 +111,8 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
         traits = self._get_provider_traits(self.host_uuid)
         self.assertNotIn(sev_trait, traits)
 
+        self.assertMemEncryptionSlotsEqual(0)
+
         # Now simulate the host gaining SEV functionality.  Here we
         # simulate a kernel update or reconfiguration which causes the
         # kvm-amd kernel module's "sev" parameter to become available
@@ -121,6 +146,8 @@ class LibvirtReportNoSevTraitsTests(LibvirtReportTraitsTestBase):
             # Sanity check that we've still got the trait globally.
             self.assertIn(sev_trait, self._get_all_traits())
 
+            self.assertMemEncryptionSlotsEqual(db_const.MAX_INT)
+
 
 class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
     STUB_INIT_HOST = False
@@ -132,6 +159,7 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         new=fakelibvirt.virConnect._domain_capability_features_with_SEV)
     def setUp(self):
         super(LibvirtReportSevTraitsTests, self).setUp()
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
         self.start_compute()
 
     def test_sev_trait_on_off(self):
@@ -143,6 +171,10 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         Then test that if the SEV capability disappears (again via
         mocking), after a restart of the compute service, the trait
         gets removed from the compute host.
+
+        Also test that on both occasions, the inventory of the
+        MEM_ENCRYPTION_CONTEXT resource class on the compute host
+        corresponds to the absence or presence of the SEV capability.
         """
         self.assertTrue(self.compute.driver._host.supports_amd_sev)
 
@@ -154,6 +186,8 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
         traits = self._get_provider_traits(self.host_uuid)
         self.assertIn(sev_trait, traits)
 
+        self.assertMemEncryptionSlotsEqual(16)
+
         # Now simulate the host losing SEV functionality.  Here we
         # simulate a kernel downgrade or reconfiguration which causes
         # the kvm-amd kernel module's "sev" parameter to become
@@ -177,3 +211,5 @@ class LibvirtReportSevTraitsTests(LibvirtReportTraitsTestBase):
 
             # Sanity check that we've still got the trait globally.
             self.assertIn(sev_trait, self._get_all_traits())
+
+            self.assertMemEncryptionSlotsEqual(0)

+67 −0   nova/tests/unit/virt/libvirt/test_driver.py

@@ -70,6 +70,7 @@ from nova.compute import vm_states
 import nova.conf
 from nova import context
 from nova.db import api as db
+from nova.db import constants as db_const
 from nova import exception
 from nova.network import model as network_model
 from nova import objects
@@ -23679,3 +23680,69 @@ class TestLibvirtMultiattach(test.NoDBTestCase):
     #     calls = [mock.call(lv_ver=libvirt_driver.MIN_LIBVIRT_MULTIATTACH),
     #              mock.call(hv_ver=(2, 10, 0))]
     #     has_min_version.assert_has_calls(calls)
+
+
+vc = fakelibvirt.virConnect
+
+
+class TestLibvirtSEV(test.NoDBTestCase):
+    """Libvirt driver tests for AMD SEV support."""
+
+    def setUp(self):
+        super(TestLibvirtSEV, self).setUp()
+        self.useFixture(fakelibvirt.FakeLibvirtFixture())
+        self.driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
+
+
+@mock.patch.object(os.path, 'exists', new=mock.Mock(return_value=False))
+class TestLibvirtSEVUnsupported(TestLibvirtSEV):
+    def test_get_mem_encrypted_slots_no_config(self):
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+
+    def test_get_mem_encrypted_slots_config_zero(self):
+        self.flags(num_memory_encrypted_guests=0, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+
+    @mock.patch.object(libvirt_driver.LOG, 'warning')
+    def test_get_mem_encrypted_slots_config_non_zero_unsupported(
+            self, mock_log):
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        # Still zero without mocked SEV support
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+        mock_log.assert_called_with(
+            'Host is configured with libvirt.num_memory_encrypted_guests '
+            'set to %d, but is not SEV-capable.', 16)
+
+    def test_get_mem_encrypted_slots_unsupported(self):
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())
+
+
+@mock.patch.object(vc, '_domain_capability_features',
+                   new=vc._domain_capability_features_with_SEV)
+class TestLibvirtSEVSupported(TestLibvirtSEV):
+    """Libvirt driver tests for when AMD SEV support is present."""
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    def test_get_mem_encrypted_slots_unlimited(self):
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(db_const.MAX_INT,
+                         self.driver._get_memory_encrypted_slots())
+
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    def test_get_mem_encrypted_slots_config_non_zero_supported(self):
+        self.flags(num_memory_encrypted_guests=16, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(16, self.driver._get_memory_encrypted_slots())
+
+    @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
+    @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
+    @mock.patch.object(libvirt_driver.LOG, 'warning')
+    def test_get_mem_encrypted_slots_config_zero_supported(self, mock_log):
+        self.flags(num_memory_encrypted_guests=0, group='libvirt')
+        self.driver._host._set_amd_sev_support()
+        self.assertEqual(0, self.driver._get_memory_encrypted_slots())

+41 −0   nova/virt/libvirt/driver.py

@@ -84,6 +84,7 @@ from nova.console import serial as serial_console
 from nova.console import type as ctype
 from nova import context as nova_context
 from nova import crypto
+from nova.db import constants as db_const
 from nova import exception
 from nova.i18n import _
 from nova import image
@@ -6875,6 +6876,17 @@ class LibvirtDriver(driver.ComputeDriver):
             },
         }
 
+        memory_enc_slots = self._get_memory_encrypted_slots()
+        if memory_enc_slots > 0:
+            result[orc.MEM_ENCRYPTION_CONTEXT] = {
+                'total': memory_enc_slots,
+                'min_unit': 1,
+                'max_unit': 1,
+                'step_size': 1,
+                'allocation_ratio': 1.0,
+                'reserved': 0,
+            }
+
         # If a sharing DISK_GB provider exists in the provider tree, then our
         # storage is shared, and we should not report the DISK_GB inventory in
         # the compute node provider.
@@ -6915,6 +6927,35 @@ class LibvirtDriver(driver.ComputeDriver):
         # so that spawn() or other methods can access it thru a getter
         self.provider_tree = copy.deepcopy(provider_tree)
 
+    def _get_memory_encrypted_slots(self):
+        slots = CONF.libvirt.num_memory_encrypted_guests
+        if not self._host.supports_amd_sev:
+            if slots and slots > 0:
+                LOG.warning("Host is configured with "
+                            "libvirt.num_memory_encrypted_guests set to "
+                            "%d, but is not SEV-capable.", slots)
+            return 0
+
+        # NOTE(aspiers): Auto-detection of the number of available
+        # slots for AMD SEV is not yet possible, so honor the
+        # configured value, or impose no limit if this is not
+        # specified.  This does incur a risk that if operators don't
+        # read the instructions and configure the maximum correctly,
+        # the maximum could be exceeded resulting in SEV guests
+        # failing at launch-time.  However at least SEV guests will
+        # launch until the maximum, and when auto-detection code is
+        # added later, an upgrade will magically fix the issue.
+        #
+        # Note also that the configured value can be 0 on an
+        # SEV-capable host, since there might conceivably be good
+        # reasons for the operator to want to disable SEV even when
+        # it's available (e.g. due to performance impact, or
+        # implementation bugs which may surface later).
+        if slots is not None:
+            return slots
+        else:
+            return db_const.MAX_INT
+
     @staticmethod
     def _is_reshape_needed_vgpu_on_root(provider_tree, nodename):
         """Determine if root RP has VGPU inventories.
+33 −0   releasenotes/notes/bp-amd-sev-libvirt-support-4b7cf8f0756d88b8.yaml

@@ -0,0 +1,33 @@
+---
+features:
+  - |
+    The libvirt driver can now support requests for guest RAM to be
+    encrypted at the hardware level, if there are compute hosts which
+    support it.  Currently only AMD SEV (Secure Encrypted
+    Virtualization) is supported, and it has certain minimum version
+    requirements regarding the kernel, QEMU, and libvirt.
+
+    Memory encryption can be required either via a flavor which has the
+    ``hw:mem_encryption`` extra spec set to ``True``, or via an image
+    which has the ``hw_mem_encryption`` property set to ``True``.
+    These do not inherently cause a preference for SEV-capable
+    hardware, but for now SEV is the only way of fulfilling the
+    requirement.  However in the future, support for other
+    hardware-level guest memory encryption technology such as Intel
+    MKTME may be added.  If a guest specifically needs to be booted
+    using SEV rather than any other memory encryption technology, it
+    is possible to ensure this by adding
+    ``trait:HW_CPU_X86_AMD_SEV=required`` to the flavor extra specs or
+    image properties.
+
+    In all cases, SEV instances can only be booted from images which
+    have the ``hw_firmware_type`` property set to ``uefi``, and only
+    when the machine type is set to ``q35``.  This can be set per
+    image by setting the image property ``hw_machine_type=q35``, or
+    per compute node by the operator via the ``hw_machine_type``
+    configuration option in the ``[libvirt]`` section of
+    :file:`nova.conf`.
+
+    For information on how to set up support for AMD SEV, please see
+    the `KVM section of the Configuration Guide
+    <https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#amd-sev>`_.
