===
KVM
===

KVM is configured as the default hypervisor for Compute.

.. note::

   This document contains several sections about hypervisor selection. If you
   are reading this document linearly, you do not want to load the KVM module
   before you install ``nova-compute``. The ``nova-compute`` service depends
   on ``qemu-kvm``, which installs ``/lib/udev/rules.d/45-qemu-kvm.rules``,
   which sets the correct permissions on the ``/dev/kvm`` device node.

The KVM hypervisor supports the following virtual machine image formats:

* Raw
* QEMU Copy-on-write (QCOW2)
* QEMU Enhanced Disk (QED)
* VMware virtual machine disk format (vmdk)

This section describes how to enable KVM on your system. For more information,
see the following distribution-specific documentation:

* `Fedora: Virtualization Getting Started Guide`__
* `Ubuntu: KVM/Installation`__
* `Debian: KVM Guide`__
* `Red Hat Enterprise Linux (RHEL): Getting started with virtualization`__
* `openSUSE: Setting Up a KVM VM Host Server`__
* `SLES: Virtualization with KVM`__

.. __: https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/
.. __: https://help.ubuntu.com/community/KVM/Installation
.. __: https://wiki.debian.org/KVM
.. __: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/getting-started-with-virtualization-in-rhel-8_configuring-and-managing-virtualization
.. __: https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/cha-qemu-host.html
.. __: https://documentation.suse.com/sles/11-SP4/html/SLES-all/book-kvm.html

Configuration
-------------

To enable KVM explicitly, add the following configuration options to the
``/etc/nova/nova.conf`` file:

.. code-block:: ini

   [DEFAULT]
   compute_driver = libvirt.LibvirtDriver

   [libvirt]
   virt_type = kvm
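
After restarting the ``nova-compute`` service, you can sanity-check that
libvirt actually offers KVM on the host. One illustrative check, assuming the
``virsh`` client is installed:

.. code-block:: console

   # virsh domcapabilities --virttype kvm

This prints an XML description of the guest capabilities when KVM is usable,
and reports an error if it is not.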

.. _enable-kvm:

Enable KVM
----------

The following sections outline how to enable KVM-based hardware virtualization
on different architectures and platforms. To perform these steps, you must be
logged in as the ``root`` user.

For x86-based systems
~~~~~~~~~~~~~~~~~~~~~

#. To determine whether the ``svm`` or ``vmx`` CPU extensions are present, run
   this command:

   .. code-block:: console

      # grep -E 'svm|vmx' /proc/cpuinfo

   This command generates output if the CPU is capable of hardware
   virtualization. Even if output is shown, you might still need to enable
   virtualization in the system BIOS for full support.

   If no output appears, consult your system documentation to ensure that your
   CPU and motherboard support hardware virtualization. Verify that any
   relevant hardware virtualization options are enabled in the system BIOS.

   The BIOS for each manufacturer is different. If you must enable
   virtualization in the BIOS, look for an option containing the words
   ``virtualization``, ``VT``, ``VMX``, or ``SVM``.
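
   As an alternative on Debian- and Ubuntu-family hosts, the ``kvm-ok``
   utility from the ``cpu-checker`` package performs a similar check and also
   verifies that ``/dev/kvm`` is usable:

   .. code-block:: console

      # kvm-ok
      INFO: /dev/kvm exists
      KVM acceleration can be used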

#. To list the loaded kernel modules and verify that the ``kvm`` modules are
   loaded, run this command:

   .. code-block:: console

      # lsmod | grep kvm

   If the output includes ``kvm_intel`` or ``kvm_amd``, the ``kvm`` hardware
   virtualization modules are loaded and your kernel meets the module
   requirements for OpenStack Compute.
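
   For illustration only, on an Intel host the output might look similar to
   the following (module sizes and use counts will vary):

   .. code-block:: console

      # lsmod | grep kvm
      kvm_intel             245760  0
      kvm                   737280  1 kvm_intel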

   If the output does not show that the ``kvm`` module is loaded, run this
   command to load it:

   .. code-block:: console

      # modprobe -a kvm

   Run the command for your CPU. For Intel, run this command:

   .. code-block:: console

      # modprobe -a kvm-intel

   For AMD, run this command:

   .. code-block:: console

      # modprobe -a kvm-amd

   Because a KVM installation can change user group membership, you might need
   to log in again for changes to take effect.

   If the kernel modules do not load automatically, use the procedures listed
   in the following subsections.

If the checks indicate that required hardware virtualization support or kernel
modules are disabled or unavailable, you must either enable this support on
the system or find a system with this support.

.. note::

   Some systems require that you enable VT support in the system BIOS. If you
   believe your processor supports hardware acceleration but the previous
   command did not produce output, reboot your machine, enter the system BIOS,
   and enable the VT option.

If KVM acceleration is not supported, configure Compute to use a different
hypervisor, such as :ref:`QEMU <compute_qemu>`.

These procedures help you load the kernel modules for Intel-based and
AMD-based processors if they do not load automatically during KVM
installation.

.. rubric:: Intel-based processors

If your compute host is Intel-based, run these commands as root to load the
kernel modules:

.. code-block:: console

   # modprobe kvm
   # modprobe kvm-intel

Add these lines to the ``/etc/modules`` file so that these modules load on
reboot:

.. code-block:: console

   kvm
   kvm-intel

.. rubric:: AMD-based processors

If your compute host is AMD-based, run these commands as root to load the
kernel modules:

.. code-block:: console

   # modprobe kvm
   # modprobe kvm-amd

Add these lines to the ``/etc/modules`` file so that these modules load on
reboot:

.. code-block:: console

   kvm
   kvm-amd
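
Some distributions do not read ``/etc/modules``. If your host uses systemd's
``systemd-modules-load`` service instead (check your distribution's
documentation), an equivalent drop-in might look like the following sketch;
the file name shown is illustrative:

.. code-block:: none

   # /etc/modules-load.d/kvm.conf
   # use kvm-amd here on AMD hosts
   kvm
   kvm-intel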

For POWER-based systems
~~~~~~~~~~~~~~~~~~~~~~~

KVM is supported as a hypervisor on the PowerNV platform of POWER systems.

#. To determine if your POWER platform supports KVM-based virtualization, run
   the following command:

   .. code-block:: console

      # grep PowerNV /proc/cpuinfo

   If the previous command generates the following output, your CPU supports
   KVM-based virtualization:

   .. code-block:: console

      platform: PowerNV

   If no output is displayed, your POWER platform does not support KVM-based
   hardware virtualization.

#. To list the loaded kernel modules and verify that the ``kvm`` modules are
   loaded, run the following command:

   .. code-block:: console

      # lsmod | grep kvm

   If the output includes ``kvm_hv``, the ``kvm`` hardware virtualization
   modules are loaded and your kernel meets the module requirements for
   OpenStack Compute.

   If the output does not show that the ``kvm`` module is loaded, run the
   following command to load it:

   .. code-block:: console

      # modprobe -a kvm

   For the PowerNV platform, run the following command:

   .. code-block:: console

      # modprobe -a kvm-hv

   Because a KVM installation can change user group membership, you might need
   to log in again for changes to take effect.

For AArch64-based systems
~~~~~~~~~~~~~~~~~~~~~~~~~

.. todo:: Populate this section.

Configure Compute backing storage
---------------------------------

Backing storage is the storage used to provide the expanded operating system
image and any ephemeral storage. Inside the virtual machine, this is normally
presented as two virtual hard disks (for example, ``/dev/vda`` and ``/dev/vdb``
respectively). However, inside OpenStack, this can be derived from one of these
methods: ``lvm``, ``qcow``, ``rbd`` or ``flat``, chosen using the
:oslo.config:option:`libvirt.images_type` option in ``nova.conf`` on the
compute node.

.. note::

   The option ``raw`` is acceptable but deprecated in favor of ``flat``. The
   ``flat`` back end uses either raw or QCOW2 storage. It never uses a backing
   store, so when using QCOW2 it copies an image rather than creating an
   overlay. By default, it creates raw files but will use QCOW2 when creating
   a disk from a QCOW2 image if :oslo.config:option:`force_raw_images` is not
   set in configuration.

QCOW is the default backing store. It uses a copy-on-write philosophy to delay
allocation of storage until it is actually needed. This means that the space
required for the backing of an image can be significantly less on the real
disk than what seems available in the virtual machine operating system.

Flat creates files without any sort of file formatting, effectively creating
files with the plain binary one would normally see on a real disk. This can
increase performance, but means that the entire size of the virtual disk is
reserved on the physical disk.

Local `LVM volumes
<https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)>`__ can also be
used. Set the :oslo.config:option:`libvirt.images_volume_group` configuration
option to the name of the LVM group you have created.
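
For example, a minimal sketch of an LVM-backed configuration, assuming you
have already created a volume group named ``nova-volumes`` (the name here is
illustrative):

.. code-block:: ini

   [libvirt]
   images_type = lvm
   images_volume_group = nova-volumes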

Direct download of images from Ceph
-----------------------------------

When the Glance image service is set up with the Ceph backend and Nova is
using a local ephemeral store (``[libvirt]/images_type!=rbd``), it is possible
to configure Nova to download images directly into the local compute image
cache.

With the following configuration, images are downloaded using the RBD export
command instead of the Glance HTTP API. In some situations, especially for
very large images, this can be substantially faster and can improve the boot
times of instances.

On the Glance API node, in ``glance-api.conf``:

.. code-block:: ini

   [DEFAULT]
   show_image_direct_url = true

On the Nova compute node, in ``nova.conf``:

.. code-block:: ini

   [glance]
   enable_rbd_download = true
   rbd_user = glance
   rbd_pool = images
   rbd_ceph_conf = /etc/ceph/ceph.conf
   rbd_connect_timeout = 5

This assumes that the compute node can reach the Ceph cluster and has access
to the referenced ``ceph.conf`` file along with a keyring for the configured
``rbd_user``.

Nested guest support
--------------------

You may choose to enable support for nested guests, that is, to allow your
Nova instances to themselves run hardware-accelerated virtual machines with
KVM. Doing so requires a module parameter on your KVM kernel module and
corresponding ``nova.conf`` settings.

Host configuration
~~~~~~~~~~~~~~~~~~

To enable nested KVM guests, your compute node must load the ``kvm_intel`` or
``kvm_amd`` module with ``nested=1``. You can enable the ``nested`` parameter
permanently by creating a file named ``/etc/modprobe.d/kvm.conf`` and
populating it with the following content:

.. code-block:: none

   options kvm_intel nested=1
   options kvm_amd nested=1

A reboot may be required for the change to become effective.
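
You can verify whether nesting is active by reading the module parameter back.
For example, on an Intel host:

.. code-block:: console

   # cat /sys/module/kvm_intel/parameters/nested
   Y

Depending on the kernel version, the value is reported as ``Y`` or ``1``; on
AMD hosts, substitute ``kvm_amd`` in the path.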

Nova configuration
~~~~~~~~~~~~~~~~~~

To support nested guests, you must set your
:oslo.config:option:`libvirt.cpu_mode` configuration to one of the following
options:

Host passthrough (``host-passthrough``)
  In this mode, nested virtualization is automatically enabled once the KVM
  kernel module is loaded with nesting support.

  .. code-block:: ini

     [libvirt]
     cpu_mode = host-passthrough

  However, do consider the other implications that
  :doc:`host passthrough </admin/cpu-models>` mode has on compute
  functionality.

Host model (``host-model``)
  In this mode, nested virtualization is automatically enabled once the KVM
  kernel module is loaded with nesting support, **if** the matching CPU model
  exposes the ``vmx`` feature flag to guests by default (you can verify this
  with ``virsh capabilities`` on your compute node). If your CPU model does
  not pass in the ``vmx`` flag, you can force it with
  :oslo.config:option:`libvirt.cpu_model_extra_flags`:

  .. code-block:: ini

     [libvirt]
     cpu_mode = host-model
     cpu_model_extra_flags = vmx
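
  As a quick, illustrative check of whether the host CPU exposes the flag,
  you can search the libvirt capabilities output for it (the exact XML
  formatting may differ between libvirt versions):

  .. code-block:: console

     # virsh capabilities | grep -w vmx
       <feature name='vmx'/>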

  Again, consider the other implications that apply to the
  :doc:`host model </admin/cpu-models>` mode.

Custom (``custom``)
  In custom mode, the same considerations apply as in host-model mode, but you
  may *additionally* want to ensure that libvirt passes not only the ``vmx``,
  but also the ``pcid`` flag to its guests:

  .. code-block:: ini

     [libvirt]
     cpu_mode = custom
     cpu_models = IvyBridge
     cpu_model_extra_flags = vmx,pcid

More information on CPU models can be found in :doc:`/admin/cpu-models`.

Limitations
~~~~~~~~~~~

When enabling nested guests, you should be aware of (and inform your users
about) certain limitations that are currently inherent to nested KVM
virtualization. Most importantly, guests using nested virtualization will,
*while nested guests are running*:

* fail to complete live migration;
* fail to resume from suspend.

See `the KVM documentation
<https://www.linux-kvm.org/page/Nested_Guests#Limitations>`_ for more
information on these limitations.

KVM performance tweaks
----------------------

The `VHostNet <http://www.linux-kvm.org/page/VhostNet>`_ kernel module
improves network performance. To load the kernel module, run the following
command as root:

.. code-block:: console

   # modprobe vhost_net
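
To have ``vhost_net`` loaded automatically at boot, you can register it with
your distribution's module-loading mechanism, for example by adding a line to
the ``/etc/modules`` file discussed above:

.. code-block:: console

   vhost_net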

Troubleshooting
---------------

Trying to launch a new virtual machine instance fails with the ``ERROR``
state, and the following error appears in the
``/var/log/nova/nova-compute.log`` file:

.. code-block:: console

   libvirtError: internal error no supported architecture for os type 'hvm'

This message indicates that the KVM kernel modules were not loaded.
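
To recover, load the modules as described in :ref:`enable-kvm` and restart the
``nova-compute`` service. For example, on an Intel x86 host:

.. code-block:: console

   # modprobe -a kvm kvm-intel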

If you cannot start VMs after installation without rebooting, the permissions
might not be set correctly. This can happen if you load the KVM module before
you install ``nova-compute``. To check whether the group is set to ``kvm``,
run:

.. code-block:: console

   # ls -l /dev/kvm
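
On a correctly configured host, the device node is group-owned by ``kvm``, so
the output should look similar to the following (dates and device numbers will
vary):

.. code-block:: console

   crw-rw---- 1 root kvm 10, 232 Jul 26 12:00 /dev/kvm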

If it is not set to ``kvm``, run:

.. code-block:: console

   # udevadm trigger

This re-applies the udev rules installed by ``qemu-kvm``; afterwards, repeat
the ``ls -l /dev/kvm`` check to confirm the group ownership.
|