
Merge "Improve SEV documentation and other minor tweaks"

changes/93/671793/24
Zuul 1 week ago
parent commit 690b3ffd38

doc/source/admin/configuration/hypervisor-kvm.rst (+72, -66)

@@ -489,7 +489,12 @@ Requirements for SEV
 
 First the operator will need to ensure the following prerequisites are met:
 
-- SEV-capable AMD hardware as Nova compute hosts
+- At least one of the Nova compute hosts must be AMD hardware capable
+  of supporting SEV.  It is entirely possible for the compute plane to
+  be a mix of hardware which can and cannot support SEV, although as
+  per the section on `Permanent limitations`_ below, the maximum
+  number of simultaneously running guests with SEV will be limited by
+  the quantity and quality of SEV-capable hardware available.
 
 - An appropriately configured software stack on those compute hosts,
   so that the various layers are all SEV ready:
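A quick sanity check for the kernel layer of that stack, as a hedged illustration: the standard ``kvm_amd`` module parameter is the same file the test change further down patches as ``SEV_KERNEL_PARAM_FILE``.

.. code-block:: console

   $ cat /sys/module/kvm_amd/parameters/sev
   1

(On some kernel versions this parameter is a boolean and prints ``Y`` instead of ``1``.)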
@@ -507,31 +512,6 @@ Deploying SEV-capable infrastructure
 In order for users to be able to use SEV, the operator will need to
 perform the following steps:
 
-- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
-  option in :file:`nova.conf` to represent the number of guests an SEV
-  compute node can host concurrently with memory encrypted at the
-  hardware level.  For example:
-
-  .. code-block:: ini
-
-     [libvirt]
-     num_memory_encrypted_guests = 15
-
-  Initially this is required because on AMD SEV-capable hardware, the
-  memory controller has a fixed number of slots for holding encryption
-  keys, one per guest.  For example, at the time of writing, earlier
-  generations of hardware only have 15 slots, thereby limiting the
-  number of SEV guests which can be run concurrently to 15.  Nova
-  needs to track how many slots are available and used in order to
-  avoid attempting to exceed that limit in the hardware.
-
-  At the time of writing (September 2019), work is in progress to allow
-  QEMU and libvirt to expose the number of slots available on SEV
-  hardware; however until this is finished and released, it will not be
-  possible for Nova to programatically detect the correct value.  So this
-  configuration option serves as a stop-gap, allowing the cloud operator
-  to provide this value manually.
-
 - Ensure that sufficient memory is reserved on the SEV compute hosts
   for host-level services to function correctly at all times.  This is
   particularly important when hosting SEV-enabled guests, since they
@@ -548,7 +528,7 @@ perform the following steps:
 
   An alternative approach is to configure the
   :oslo.config:option:`reserved_host_memory_mb` option in the
-  ``[compute]`` section of :file:`nova.conf`, based on the expected
+  ``[DEFAULT]`` section of :file:`nova.conf`, based on the expected
   maximum number of SEV guests simultaneously running on the host, and
   the details provided in `an earlier version of the AMD SEV spec`__
   regarding memory region sizes, which cover how to calculate it
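For illustration, a hypothetical :file:`nova.conf` snippet for this approach; the value shown is a placeholder, not a recommendation, and the real figure must be derived from the memory region sizes referenced above.

.. code-block:: ini

   [DEFAULT]
   # Placeholder value: reserve 16 GiB for host-level services plus
   # per-SEV-guest memory region overhead, calculated per the AMD SEV
   # spec referenced above.
   reserved_host_memory_mb = 16384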
@@ -568,7 +548,57 @@ perform the following steps:
 
   to define SEV-enabled images.
 
 Additionally the cloud operator should consider the following optional
-step:
+steps:
+
+.. _num_memory_encrypted_guests:
+
+- Configure the :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  option in :file:`nova.conf` to represent the number of guests an SEV
+  compute node can host concurrently with memory encrypted at the
+  hardware level.  For example:
+
+  .. code-block:: ini
+
+     [libvirt]
+     num_memory_encrypted_guests = 15
+
+  This option exists because on AMD SEV-capable hardware, the memory
+  controller has a fixed number of slots for holding encryption keys,
+  one per guest.  For example, at the time of writing, earlier
+  generations of hardware only have 15 slots, thereby limiting the
+  number of SEV guests which can be run concurrently to 15.  Nova
+  needs to track how many slots are available and used in order to
+  avoid attempting to exceed that limit in the hardware.
+
+  At the time of writing (September 2019), work is in progress to
+  allow QEMU and libvirt to expose the number of slots available on
+  SEV hardware; however until this is finished and released, it will
+  not be possible for Nova to programmatically detect the correct
+  value.
+
+  So this configuration option serves as a stop-gap, allowing the
+  cloud operator the option of providing this value manually.  It may
+  later be demoted to a fallback value for cases where the limit
+  cannot be detected programmatically, or even removed altogether when
+  Nova's minimum QEMU version guarantees that it can always be
+  detected.
+
+  .. note::
+
+     When deciding whether to use the default of ``None`` or manually
+     impose a limit, operators should carefully weigh the benefits
+     vs. the risk.  The benefits of using the default are a) immediate
+     convenience since nothing needs to be done now, and b) convenience
+     later when upgrading compute hosts to future versions of Nova,
+     since again nothing will need to be done for the correct limit to
+     be automatically imposed.  However the risk is that until
+     auto-detection is implemented, users may be able to attempt to
+     launch guests with encrypted memory on hosts which have already
+     reached the maximum number of guests simultaneously running with
+     encrypted memory.  This risk may be mitigated by other limitations
+     which operators can impose, for example if the smallest RAM
+     footprint of any flavor imposes a maximum number of simultaneously
+     running guests which is less than or equal to the SEV limit.
 
 - Configure :oslo.config:option:`libvirt.hw_machine_type` on all
   SEV-capable compute hosts to include ``x86_64=q35``, so that all
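Taken together, a sketch of the two host-side ``[libvirt]`` settings this hunk adds or references might look like this in :file:`nova.conf` (values illustrative):

.. code-block:: ini

   [libvirt]
   # Default all x86_64 guests on this host to the q35 machine type.
   hw_machine_type = x86_64=q35
   # Optional stop-gap cap on concurrently running SEV guests, as
   # discussed in the note above.
   num_memory_encrypted_guests = 15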
@@ -637,24 +667,25 @@ features:
 
 - SEV-encrypted VMs cannot contain directly accessible host devices
   (PCI passthrough).  So for example mdev vGPU support will not
-  currently work.  However technologies based on vhost-user should
+  currently work.  However technologies based on `vhost-user`__ should
   work fine.
 
-- The boot disk of SEV-encrypted VMs cannot be ``virtio-blk``.  Using
+  __ https://wiki.qemu.org/Features/VirtioVhostUser
+
+- The boot disk of SEV-encrypted VMs can only be ``virtio-blk`` on
+  newer kernels which contain the necessary fixes.  Using
   ``virtio-scsi`` or SATA for the boot disk works as expected, as does
   ``virtio-blk`` for non-boot disks.
 
-- Operators will initially be required to manually specify the upper
-  limit of SEV guests for each compute host, via the new
-  :oslo.config:option:`libvirt.num_memory_encrypted_guests` configuration
-  option described above.  This is a short-term workaround to the current
-  lack of mechanism for programmatically discovering the SEV guest limit
-  via libvirt.
-
-  This config option may later be demoted to a fallback value for
-  cases where the limit cannot be detected programmatically, or even
-  removed altogether when Nova's minimum QEMU version guarantees that
-  it can always be detected.
+- QEMU and libvirt cannot yet expose the number of slots available for
+  encrypted guests in the memory controller on SEV hardware.  Until
+  this is implemented, it is not possible for Nova to programmatically
+  detect the correct value.  As a short-term workaround, operators can
+  optionally manually specify the upper limit of SEV guests for each
+  compute host, via the new
+  :oslo.config:option:`libvirt.num_memory_encrypted_guests`
+  configuration option :ref:`described above
+  <num_memory_encrypted_guests>`.
 
 Permanent limitations
 ---------------------
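As an illustrative way to apply the boot-disk workaround above (the image name is hypothetical; ``hw_disk_bus`` and ``hw_scsi_model`` are existing Glance image properties), an image can be steered onto a ``virtio-scsi`` boot disk like so:

.. code-block:: console

   $ openstack image set \
       --property hw_disk_bus=scsi \
       --property hw_scsi_model=virtio-scsi \
       sev-capable-image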
@@ -671,31 +702,6 @@ The following limitations are expected long-term:
 - The operating system running in an encrypted virtual machine must
   contain SEV support.
 
-- The ``q35`` machine type does not provide an IDE controller,
-  therefore IDE devices are not supported.  In particular this means
-  that Nova's libvirt driver's current default behaviour on the x86_64
-  architecture of attaching the config drive as an ``iso9660`` IDE
-  CD-ROM device will not work.  There are two potential workarounds:
-
-  #. Change :oslo.config:option:`config_drive_format` in
-     :file:`nova.conf` from its default value of ``iso9660`` to ``vfat``.
-     This will result in ``virtio`` being used instead.  However this
-     per-host setting could potentially break images with legacy OSes
-     which expect the config drive to be an IDE CD-ROM.  It would also
-     not deal with other CD-ROM devices.
-
-  #. Set the (largely `undocumented
-     <https://bugs.launchpad.net/glance/+bug/1808868>`_)
-     ``hw_cdrom_bus`` image property to ``virtio``, which is
-     recommended as a replacement for ``ide``, and ``hw_scsi_model``
-     to ``virtio-scsi``.
-
-  Some potentially cleaner long-term solutions which require code
-  changes have been suggested in the `Work Items section of the SEV
-  spec`__.
-
-  __ http://specs.openstack.org/openstack/nova-specs/specs/train/approved/amd-sev-libvirt-support.html#work-items
-
 Non-limitations
 ---------------
 

nova/conf/libvirt.py (+5, -15)

@@ -850,7 +850,7 @@ Maximum number of guests with encrypted memory which can run
 concurrently on this compute host.
 
 For now this is only relevant for AMD machines which support SEV
-(Secure Encrypted Virtualisation).  Such machines have a limited
+(Secure Encrypted Virtualization).  Such machines have a limited
 number of slots in their memory controller for storing encryption
 keys.  Each running guest with encrypted memory will consume one of
 these slots.
@@ -870,20 +870,10 @@ limit.
 
 .. note::
 
-   When deciding whether to use the default of ``None`` or manually
-   impose a limit, operators should carefully weigh the benefits
-   vs. the risk.  The benefits are a) immediate convenience since
-   nothing needs to be done now, and b) convenience later when
-   upgrading compute hosts to future versions of Nova, since again
-   nothing will need to be done for the correct limit to be
-   automatically imposed.  However the risk is that until
-   auto-detection is implemented, users may be able to attempt to
-   launch guests with encrypted memory on hosts which have already
-   reached the maximum number of guests simultaneously running with
-   encrypted memory.  This risk may be mitigated by other limitations
-   which operators can impose, for example if the smallest RAM
-   footprint of any flavor imposes a maximum number of simultaneously
-   running guests which is less than or equal to the SEV limit.
+   It is recommended to read :ref:`the deployment documentation's
+   section on this option <num_memory_encrypted_guests>` before
+   deciding whether to configure this setting or leave it at the
+   default.
 
 Related options:
 

nova/tests/unit/virt/libvirt/test_driver.py (+1, -2)

@@ -23741,8 +23741,7 @@ class TestLibvirtSEVSupported(TestLibvirtSEV):
 
     @test.patch_exists(SEV_KERNEL_PARAM_FILE, True)
     @test.patch_open(SEV_KERNEL_PARAM_FILE, "1\n")
-    @mock.patch.object(libvirt_driver.LOG, 'warning')
-    def test_get_mem_encrypted_slots_config_zero_supported(self, mock_log):
+    def test_get_mem_encrypted_slots_config_zero_supported(self):
         self.flags(num_memory_encrypted_guests=0, group='libvirt')
         self.driver._host._set_amd_sev_support()
         self.assertEqual(0, self.driver._get_memory_encrypted_slots())

releasenotes/notes/bp-amd-sev-libvirt-support-4b7cf8f0756d88b8.yaml (+2, -2)

@@ -7,7 +7,7 @@ features:
     Virtualization) is supported, and it has certain minimum version
     requirements regarding the kernel, QEMU, and libvirt.
 
-    Memory encryption can be required either via flavor which has the
+    Memory encryption can be required either via a flavor which has the
     ``hw:mem_encryption`` extra spec set to ``True``, or via an image
     which has the ``hw_mem_encryption`` property set to ``True``.
     These do not inherently cause a preference for SEV-capable
@@ -22,7 +22,7 @@ features:
 
     In all cases, SEV instances can only be booted from images which
     have the ``hw_firmware_type`` property set to ``uefi``, and only
-    when the machine type is set to ``q35``.  This can be set per
+    when the machine type is set to ``q35``.  The latter can be set per
     image by setting the image property ``hw_machine_type=q35``, or
     per compute node by the operator via the ``hw_machine_type``
     configuration option in the ``[libvirt]`` section of
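For completeness, a hypothetical end-to-end setup using the extra spec and image properties named in this release note (the flavor and image names are illustrative):

.. code-block:: console

   $ openstack flavor set m1.sev --property hw:mem_encryption=True
   $ openstack image set \
       --property hw_mem_encryption=True \
       --property hw_firmware_type=uefi \
       --property hw_machine_type=q35 \
       sev-capable-image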
