Implement file backed memory for instances in libvirt

File backed memory is enabled per Nova compute host. When enabled, the
host will report the capacity configured via 'file_backed_memory' as its
available memory.

When enabled, instances will create memory backing files in the
directory specified by the 'memory_backing_dir' option in libvirt's
qemu.conf.

This feature is not compatible with memory overcommit and requires
'ram_allocation_ratio' to be set to 1.0.

Change-Id: I676291ec0faa1dea0bd5050ef8e3426d171de4c6
Implements: blueprint libvirt-file-backed-memory
Zack Cornelius 2018-06-11 14:31:24 -05:00
parent a0bdebac04
commit cbc28f0d15
18 changed files with 647 additions and 17 deletions


@ -27,3 +27,4 @@ instance for these kind of workloads.
cpu-topologies
huge-pages
virtual-gpu
file-backed-memory


@ -0,0 +1,113 @@
==================
File backed memory
==================
.. important::
As of the 18.0.0 Rocky release, the functionality described below is
only supported by the libvirt/KVM driver.
The file backed memory feature in OpenStack allows a Nova node to serve guest
memory from a file backing store. This mechanism uses the libvirt file memory
source, causing guest instance memory to be allocated as files within the
libvirt memory backing directory.
Since instance performance will be related to the speed of the backing store,
this feature works best when used with very fast block devices or virtual file
systems, such as flash or RAM devices.
When configured, ``nova-compute`` will report the capacity configured for
file backed memory to placement in place of the total system memory capacity.
This allows the node to run more instances than would normally fit
within system memory.
To enable file backed memory, follow the steps below:
#. `Configure the backing store`_
#. `Configure Nova Compute for file backed memory`_
.. important::
It is not possible to live migrate from a node running a version of
OpenStack that does not support file backed memory to a node with file
backed memory enabled. It is recommended that all Nova compute nodes are
upgraded to Rocky before enabling file backed memory.
Prerequisites and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Libvirt
File backed memory requires libvirt version 4.0.0 or newer.
Qemu
File backed memory requires qemu version 2.6.0 or newer.
Memory overcommit
File backed memory is not compatible with memory overcommit.
``ram_allocation_ratio`` must be set to ``1.0`` in ``nova.conf``, and the
host must not be added to a host aggregate with ``ram_allocation_ratio``
set to anything but ``1.0``.
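For example, the corresponding ``nova.conf`` setting:
.. code-block:: ini
[DEFAULT]
ram_allocation_ratio = 1.0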
Huge pages
File backed memory is not compatible with huge pages. Instances with huge
pages configured will not start on a host with file backed memory enabled. It
is recommended to use host aggregates to ensure instances configured for
huge pages are not placed on hosts with file backed memory configured.
Handling these limitations could be optimized with a scheduler filter in the
future.
Configure the backing store
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
``/dev/sdb`` and the ``ext4`` filesystem are used here as an example. This
will differ between environments.
.. note::
``/var/lib/libvirt/qemu/ram`` is the default location. The value can be
set via ``memory_backing_dir`` in ``/etc/libvirt/qemu.conf``, and the
mountpoint must match the value configured there.
By default, Libvirt with qemu/KVM allocates memory within
``/var/lib/libvirt/qemu/ram/``. To utilize this, you need to have the backing
store mounted at (or above) this location.
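For example, the default location can be stated explicitly in
``/etc/libvirt/qemu.conf``:
.. code-block:: ini
memory_backing_dir = "/var/lib/libvirt/qemu/ram"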
#. Create a filesystem on the backing device
.. code-block:: console
# mkfs.ext4 /dev/sdb
#. Mount the backing device
Add the backing device to ``/etc/fstab`` for automatic mounting to
``/var/lib/libvirt/qemu/ram``
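An example ``/etc/fstab`` entry (the device and filesystem are illustrative
and will differ between environments):
.. code-block:: none
/dev/sdb    /var/lib/libvirt/qemu/ram    ext4    defaults    0    0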
Mount the device
.. code-block:: console
# mount /dev/sdb /var/lib/libvirt/qemu/ram
Configure Nova Compute for file backed memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Enable File backed memory in ``nova-compute``
Configure Nova to utilize file backed memory with the capacity of the
backing store in MiB. 1048576 MiB (1 TiB) is used in this example.
Edit ``/etc/nova/nova.conf``
.. code-block:: ini
[libvirt]
file_backed_memory=1048576
#. Restart the ``nova-compute`` service
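The exact command depends on the distribution's init system and service
name; on a systemd host it might be:
.. code-block:: console
# systemctl restart nova-compute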


@ -55,6 +55,10 @@ Transparent Huge Pages (THP)
Enabling huge pages on the host
-------------------------------
.. important::
Huge pages may not be used on a host configured for file backed memory. See
`File backed memory`_ for details
Persistent huge pages are required owing to their guaranteed availability.
However, persistent huge pages are not enabled by default in most environments.
The steps for enabling huge pages differ from platform to platform and only the
@ -236,3 +240,4 @@ guide.
.. _`Linux THP guide`: https://www.kernel.org/doc/Documentation/vm/transhuge.txt
.. _`Linux hugetlbfs guide`: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html
.. _`File backed memory`: https://docs.openstack.org/nova/latest/admin/file-backed-memory.html


@ -1476,3 +1476,26 @@ driver-impl-ironic=missing
driver-impl-libvirt-vz-vm=complete
driver-impl-libvirt-vz-ct=complete
driver-impl-powervm=missing
[operation.file-backed-memory]
title=File backed memory
status=optional
notes=The file backed memory feature in OpenStack allows a Nova node to serve
guest memory from a file backing store. This mechanism uses the libvirt
file memory source, causing guest instance memory to be allocated as files
within the libvirt memory backing directory. This is only supported if
qemu>=2.6.0 and libvirt>=4.0.0
cli=
driver-impl-xenserver=missing
driver-impl-libvirt-kvm-x86=complete
driver-impl-libvirt-kvm-aarch64=complete
driver-impl-libvirt-kvm-ppc64=complete
driver-impl-libvirt-kvm-s390x=complete
driver-impl-libvirt-qemu-x86=complete
driver-impl-libvirt-lxc=missing
driver-impl-libvirt-xen=missing
driver-impl-vmware=missing
driver-impl-hyperv=missing
driver-impl-ironic=missing
driver-impl-libvirt-vz-vm=missing
driver-impl-libvirt-vz-ct=missing
driver-impl-powervm=missing


@ -742,6 +742,33 @@ More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt
Due to QEMU limitations for aarch64/virt maximum value is set to '28'.
Default value '0' moves calculating amount of ports to libvirt.
"""),
cfg.IntOpt('file_backed_memory',
default=0,
min=0,
help="""
Available capacity in MiB for file backed memory.
Set to 0 to disable file backed memory.
When enabled, instances will create memory files in the directory specified
in ``/etc/libvirt/qemu.conf``'s ``memory_backing_dir`` option. The default
location is ``/var/lib/libvirt/qemu/ram``.
When enabled, the value defined for this option is reported as the node memory
capacity. Compute node system memory will be used as a cache for file-backed
memory, via the kernel's pagecache mechanism.
.. note::
This feature is not compatible with hugepages.
.. note::
This feature is not compatible with memory overcommit.
Related options:
* ``virt_type`` must be set to ``kvm`` or ``qemu``.
* ``ram_allocation_ratio`` must be set to 1.0.
"""),
]
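Taken together with the related options above, a minimal (illustrative)
``nova.conf`` enabling the feature might look like:
.. code-block:: ini
[DEFAULT]
ram_allocation_ratio = 1.0
[libvirt]
virt_type = kvm
file_backed_memory = 1048576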


@ -133,7 +133,8 @@ class LibvirtLiveMigrateData(LiveMigrateData):
# Version 1.4: Added old_vol_attachment_ids
# Version 1.5: Added src_supports_native_luks
# Version 1.6: Added wait_for_vif_plugged
VERSION = '1.6'
# Version 1.7: Added dst_wants_file_backed_memory
VERSION = '1.7'
fields = {
'filename': fields.StringField(),
@ -153,12 +154,16 @@ class LibvirtLiveMigrateData(LiveMigrateData):
'target_connect_addr': fields.StringField(nullable=True),
'supported_perf_events': fields.ListOfStringsField(),
'src_supports_native_luks': fields.BooleanField(),
'dst_wants_file_backed_memory': fields.BooleanField(),
}
def obj_make_compatible(self, primitive, target_version):
super(LibvirtLiveMigrateData, self).obj_make_compatible(
primitive, target_version)
target_version = versionutils.convert_version_to_tuple(target_version)
if target_version < (1, 7):
if 'dst_wants_file_backed_memory' in primitive:
del primitive['dst_wants_file_backed_memory']
if target_version < (1, 6) and 'wait_for_vif_plugged' in primitive:
del primitive['wait_for_vif_plugged']
if target_version < (1, 5):
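As a quick sketch of the downgrade path added above (it mirrors the unit
tests later in this change; module path assumed to be
``nova.objects.migrate_data``):
# Sketch: serializing against an older target version strips the new field.
from nova.objects import migrate_data
obj = migrate_data.LibvirtLiveMigrateData(dst_wants_file_backed_memory=True)
primitive = obj.obj_to_primitive(target_version='1.6')
# Pre-1.7 consumers never see 'dst_wants_file_backed_memory'.
assert 'dst_wants_file_backed_memory' not in primitive['nova_object.data']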


@ -31,7 +31,7 @@ LOG = logging.getLogger(__name__)
# NOTE(danms): This is the global service version counter
SERVICE_VERSION = 31
SERVICE_VERSION = 32
# NOTE(danms): This is our SERVICE_VERSION history. The idea is that any
@ -133,6 +133,11 @@ SERVICE_VERSION_HISTORY = (
{'compute_rpc': '5.0'},
# Version 31: The compute manager checks if 'trusted_certs' are supported
{'compute_rpc': '5.0'},
# Version 32: Add 'file_backed_memory' support. The service version bump is
# needed to allow the destination of a live migration to reject the
# migration if 'file_backed_memory' is enabled and the source does not
# support 'file_backed_memory'
{'compute_rpc': '5.0'},
)


@ -233,7 +233,8 @@ class _TestLibvirtLiveMigrateData(object):
old_vol_attachment_ids={uuids.volume: uuids.attachment},
supported_perf_events=[],
serial_listen_addr='127.0.0.1',
target_connect_addr='127.0.0.1')
target_connect_addr='127.0.0.1',
dst_wants_file_backed_memory=False)
data = lambda x: x['nova_object.data']
@ -244,6 +245,7 @@ class _TestLibvirtLiveMigrateData(object):
self.assertNotIn('supported_perf_events', primitive)
self.assertNotIn('old_vol_attachment_ids', primitive)
self.assertNotIn('src_supports_native_luks', primitive)
self.assertNotIn('dst_wants_file_backed_memory', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.1'))
self.assertNotIn('serial_listen_ports', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.2'))
@ -252,6 +254,8 @@ class _TestLibvirtLiveMigrateData(object):
self.assertNotIn('old_vol_attachment_ids', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.4'))
self.assertNotIn('src_supports_native_luks', primitive)
primitive = data(obj.obj_to_primitive(target_version='1.6'))
self.assertNotIn('dst_wants_file_backed_memory', primitive)
def test_bdm_obj_make_compatible(self):
obj = migrate_data.LibvirtLiveMigrateBDMInfo(


@ -1114,7 +1114,7 @@ object_data = {
'InstancePCIRequest': '1.2-6344dd8bd1bf873e7325c07afe47f774',
'InstancePCIRequests': '1.1-65e38083177726d806684cb1cc0136d2',
'LibvirtLiveMigrateBDMInfo': '1.1-5f4a68873560b6f834b74e7861d71aaf',
'LibvirtLiveMigrateData': '1.6-9c8e7200a6f80fa7a626b8855c5b394b',
'LibvirtLiveMigrateData': '1.7-746b3163f022f8811da62afa035ecf66',
'KeyPair': '1.4-1244e8d1b103cc69d038ed78ab3a8cc6',
'KeyPairList': '1.3-94aad3ac5c938eef4b5e83da0212f506',
'MemoryDiagnostics': '1.0-2c995ae0f2223bb0f8e523c5cc0b83da',


@ -3405,6 +3405,19 @@ class LibvirtConfigGuestMemoryBackingTest(LibvirtConfigBaseTest):
<locked/>
</memoryBacking>""")
def test_config_memory_backing_source_all(self):
obj = config.LibvirtConfigGuestMemoryBacking()
obj.sharedaccess = True
obj.allocateimmediate = True
obj.filesource = True
xml = obj.to_xml()
self.assertXmlEqual(xml, """
<memoryBacking>
<source type="file"/>
<access mode="shared"/>
<allocation mode="immediate"/>
</memoryBacking>""")
class LibvirtConfigGuestMemoryTuneTest(LibvirtConfigBaseTest):
def test_config_memory_backing_none(self):


@ -1069,6 +1069,65 @@ class LibvirtConnTestCase(test.NoDBTestCase,
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
drvr.init_host("dummyhost")
@mock.patch.object(
libvirt_driver.LibvirtDriver, "_check_file_backed_memory_support",)
def test_file_backed_memory_support_called(self, mock_file_backed_support):
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
drvr.init_host("dummyhost")
self.assertTrue(mock_file_backed_support.called)
@mock.patch.object(fakelibvirt.Connection, 'getLibVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_LIBVIRT_FILE_BACKED_VERSION))
@mock.patch.object(fakelibvirt.Connection, 'getVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_QEMU_FILE_BACKED_VERSION))
def test_min_version_file_backed_ok(self, mock_libv, mock_qemu):
self.flags(file_backed_memory=1024, group='libvirt')
self.flags(ram_allocation_ratio=1.0)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
drvr._check_file_backed_memory_support()
@mock.patch.object(fakelibvirt.Connection, 'getLibVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_LIBVIRT_FILE_BACKED_VERSION) - 1)
@mock.patch.object(fakelibvirt.Connection, 'getVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_QEMU_FILE_BACKED_VERSION))
def test_min_version_file_backed_old_libvirt(self, mock_libv, mock_qemu):
self.flags(file_backed_memory=1024, group="libvirt")
self.flags(ram_allocation_ratio=1.0)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
self.assertRaises(exception.InternalError,
drvr._check_file_backed_memory_support)
@mock.patch.object(fakelibvirt.Connection, 'getLibVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_LIBVIRT_FILE_BACKED_VERSION))
@mock.patch.object(fakelibvirt.Connection, 'getVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_QEMU_FILE_BACKED_VERSION) - 1)
def test_min_version_file_backed_old_qemu(self, mock_libv, mock_qemu):
self.flags(file_backed_memory=1024, group="libvirt")
self.flags(ram_allocation_ratio=1.0)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
self.assertRaises(exception.InternalError,
drvr._check_file_backed_memory_support)
@mock.patch.object(fakelibvirt.Connection, 'getLibVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_LIBVIRT_FILE_BACKED_VERSION))
@mock.patch.object(fakelibvirt.Connection, 'getVersion',
return_value=versionutils.convert_version_to_int(
libvirt_driver.MIN_QEMU_FILE_BACKED_VERSION))
def test_min_version_file_backed_bad_ram_allocation_ratio(self, mock_libv,
mock_qemu):
self.flags(file_backed_memory=1024, group="libvirt")
self.flags(ram_allocation_ratio=1.5)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
self.assertRaises(exception.InternalError,
drvr._check_file_backed_memory_support)
def _do_test_parse_migration_flags(self, lm_expected=None,
bm_expected=None):
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
@ -2377,6 +2436,43 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.assertTrue(membacking.locked)
self.assertFalse(membacking.sharedpages)
def test_get_guest_memory_backing_config_file_backed(self):
self.flags(file_backed_memory=1024, group="libvirt")
result = self._test_get_guest_memory_backing_config(
None, None, None
)
self.assertTrue(result.sharedaccess)
self.assertTrue(result.filesource)
self.assertTrue(result.allocateimmediate)
def test_get_guest_memory_backing_config_file_backed_hugepages(self):
self.flags(file_backed_memory=1024, group="libvirt")
host_topology = objects.NUMATopology(
cells=[
objects.NUMACell(
id=3, cpuset=set([1]), siblings=[set([1])], memory=1024,
mempages=[
objects.NUMAPagesTopology(size_kb=4, total=2000,
used=0),
objects.NUMAPagesTopology(size_kb=2048, total=512,
used=0),
objects.NUMAPagesTopology(size_kb=1048576, total=0,
used=0),
])])
inst_topology = objects.InstanceNUMATopology(cells=[
objects.InstanceNUMACell(
id=3, cpuset=set([0, 1]), memory=1024, pagesize=2048)])
numa_tune = vconfig.LibvirtConfigGuestNUMATune()
numa_tune.memnodes = [vconfig.LibvirtConfigGuestNUMATuneMemNode()]
numa_tune.memnodes[0].cellid = 0
numa_tune.memnodes[0].nodeset = [3]
self.assertRaises(exception.MemoryPagesUnsupported,
self._test_get_guest_memory_backing_config,
host_topology, inst_topology, numa_tune)
@mock.patch.object(
host.Host, "is_cpu_control_policy_capable", return_value=True)
def test_get_guest_config_numa_host_instance_pci_no_numa_info(
@ -8464,7 +8560,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'disk_available_mb': 409600,
"disk_over_commit": False,
"block_migration": True,
"is_volume_backed": False},
"is_volume_backed": False,
"dst_wants_file_backed_memory": False},
matchers.DictMatches(return_value.to_legacy_dict()))
@mock.patch.object(objects.Service, 'get_by_compute_host')
@ -8497,7 +8594,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'disk_available_mb': 102400,
"disk_over_commit": True,
"block_migration": True,
"is_volume_backed": False},
"is_volume_backed": False,
"dst_wants_file_backed_memory": False},
matchers.DictMatches(return_value.to_legacy_dict()))
@mock.patch.object(objects.Service, 'get_by_compute_host')
@ -8527,7 +8625,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
"block_migration": False,
"disk_over_commit": False,
"disk_available_mb": 409600,
"is_volume_backed": False},
"is_volume_backed": False,
"dst_wants_file_backed_memory": False},
matchers.DictMatches(return_value.to_legacy_dict()))
@mock.patch.object(objects.Service, 'get_by_compute_host')
@ -8599,7 +8698,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
"block_migration": False,
"disk_over_commit": False,
"disk_available_mb": 1024,
"is_volume_backed": False}
"is_volume_backed": False,
"dst_wants_file_backed_memory": False}
self.assertEqual(expected_result, result.to_legacy_dict())
@mock.patch.object(objects.Service, 'get_by_compute_host')
@ -8633,9 +8733,43 @@ class LibvirtConnTestCase(test.NoDBTestCase,
"block_migration": False,
"disk_over_commit": False,
"disk_available_mb": 1024,
"is_volume_backed": False},
"is_volume_backed": False,
"dst_wants_file_backed_memory": False},
matchers.DictMatches(return_value.to_legacy_dict()))
@mock.patch.object(objects.Service, 'get_by_compute_host')
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_file_backed(
self, mock_cpu, mock_test_file, mock_svc):
self.flags(file_backed_memory=1024, group='libvirt')
instance_ref = objects.Instance(**self.test_instance)
instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
compute_info = {'disk_available_least': 400,
'cpu_info': 'asdf',
}
filename = "file"
svc = objects.Service()
svc.version = 32
mock_svc.return_value = svc
# _check_cpu_match
mock_cpu.return_value = 1
# mounted_on_same_shared_storage
mock_test_file.return_value = filename
# No need for the src_compute_info
return_value = drvr.check_can_live_migrate_destination(self.context,
instance_ref, None, compute_info, False)
self.assertTrue(return_value.dst_wants_file_backed_memory)
@mock.patch.object(objects.Service, 'get_by_compute_host')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_incompatible_cpu_raises(
@ -8645,12 +8779,40 @@ class LibvirtConnTestCase(test.NoDBTestCase,
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}
svc = objects.Service(host="old")
svc.version = 32
mock_svc.return_value = svc
mock_cpu.side_effect = exception.InvalidCPUInfo(reason='foo')
self.assertRaises(exception.InvalidCPUInfo,
drvr.check_can_live_migrate_destination,
self.context, instance_ref,
compute_info, compute_info, False)
@mock.patch.object(objects.Service, 'get_by_compute_host')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
@mock.patch('nova.objects.Service.version', 30)
def test_check_can_live_migrate_dest_incompatible_file_backed(
self, mock_cpu, mock_svc):
self.flags(file_backed_memory=1024, group='libvirt')
instance_ref = objects.Instance(**self.test_instance)
# _check_cpu_match
mock_cpu.return_value = 1
svc = objects.Service(host="old")
svc.version = 31
mock_svc.return_value = svc
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}
self.assertRaises(exception.MigrationPreCheckError,
drvr.check_can_live_migrate_destination,
self.context, instance_ref,
compute_info, compute_info, False)
@mock.patch.object(host.Host, 'compare_cpu')
@mock.patch.object(nova.virt.libvirt, 'config')
def test_compare_cpu_compatible_host_cpu(self, mock_vconfig, mock_compare):
@ -17516,6 +17678,13 @@ class TestUpdateProviderTree(test.NoDBTestCase):
self.assertEqual(shared_rp_inv,
(self.pt.data(self.shared_rp.uuid)).inventory)
def test_update_provider_tree_with_file_backed_memory(self):
self.flags(file_backed_memory=1024,
group="libvirt")
self._test_update_provider_tree()
self.assertEqual(self._get_inventory(),
(self.pt.data(self.cn_rp.uuid)).inventory)
class LibvirtDriverTestCase(test.NoDBTestCase):
"""Test for nova.virt.libvirt.libvirt_driver.LibvirtDriver."""


@ -675,6 +675,12 @@ class HostTestCase(test.NoDBTestCase):
mock_conn().getInfo.return_value = ['zero', 'one', 'two']
self.assertEqual('one', self.host.get_memory_mb_total())
def test_get_memory_total_file_backed(self):
self.flags(file_backed_memory=1048576,
group="libvirt")
self.assertEqual(1048576, self.host.get_memory_mb_total())
def test_get_memory_used(self):
m = mock.mock_open(read_data="""
MemTotal: 16194180 kB
@ -750,7 +756,51 @@ Active: 8381604 kB
) as (mock_sumDomainMemory, mock_platform):
mock_sumDomainMemory.return_value = 8192
self.assertEqual(8192, self.host.get_memory_mb_used())
mock_sumDomainMemory.assert_called_once_with()
mock_sumDomainMemory.assert_called_once_with(include_host=True)
def test_sum_domain_memory_mb_file_backed(self):
class DiagFakeDomain(object):
def __init__(self, id, memmb):
self.id = id
self.memmb = memmb
def info(self):
return [0, 0, self.memmb * 1024]
def ID(self):
return self.id
def name(self):
return "instance000001"
def UUIDString(self):
return uuids.fake
with test.nested(
mock.patch.object(host.Host,
"list_guests"),
mock.patch('sys.platform', 'linux2'),
) as (mock_list, mock_platform):
mock_list.return_value = [
libvirt_guest.Guest(DiagFakeDomain(0, 4096)),
libvirt_guest.Guest(DiagFakeDomain(1, 2048)),
libvirt_guest.Guest(DiagFakeDomain(2, 1024)),
libvirt_guest.Guest(DiagFakeDomain(3, 1024))]
self.assertEqual(8192,
self.host._sum_domain_memory_mb(include_host=False))
def test_get_memory_used_file_backed(self):
self.flags(file_backed_memory=1048576,
group='libvirt')
with test.nested(
mock.patch.object(self.host, "_sum_domain_memory_mb"),
mock.patch('sys.platform', 'linux2')
) as (mock_sumDomainMemory, mock_platform):
mock_sumDomainMemory.return_value = 8192
self.assertEqual(8192, self.host.get_memory_mb_used())
mock_sumDomainMemory.assert_called_once_with(include_host=False)
def test_get_cpu_stats(self):
stats = self.host.get_cpu_stats()


@ -92,13 +92,14 @@ class UtilityMigrationTestCase(test.NoDBTestCase):
self.assertEqual([], ports)
@mock.patch('lxml.etree.tostring')
@mock.patch.object(migration, '_update_memory_backing_xml')
@mock.patch.object(migration, '_update_perf_events_xml')
@mock.patch.object(migration, '_update_graphics_xml')
@mock.patch.object(migration, '_update_serial_xml')
@mock.patch.object(migration, '_update_volume_xml')
def test_get_updated_guest_xml(
self, mock_volume, mock_serial, mock_graphics,
mock_perf_events_xml, mock_tostring):
mock_perf_events_xml, mock_memory_backing, mock_tostring):
data = objects.LibvirtLiveMigrateData()
mock_guest = mock.Mock(spec=libvirt_guest.Guest)
get_volume_config = mock.MagicMock()
@ -109,6 +110,7 @@ class UtilityMigrationTestCase(test.NoDBTestCase):
mock_serial.assert_called_once_with(mock.ANY, data)
mock_volume.assert_called_once_with(mock.ANY, data, get_volume_config)
mock_perf_events_xml.assert_called_once_with(mock.ANY, data)
mock_memory_backing.assert_called_once_with(mock.ANY, data)
self.assertEqual(1, mock_tostring.called)
def test_update_serial_xml_serial(self):
@ -536,6 +538,63 @@ class UtilityMigrationTestCase(test.NoDBTestCase):
</perf>
</domain>"""))
def test_update_memory_backing_xml_remove(self):
data = objects.LibvirtLiveMigrateData(
dst_wants_file_backed_memory=False)
xml = """<domain>
<memoryBacking>
<source type="file"/>
<access mode="shared"/>
<allocation mode="immediate"/>
</memoryBacking>
</domain>"""
doc = etree.fromstring(xml)
res = etree.tostring(migration._update_memory_backing_xml(doc, data),
encoding='unicode')
self.assertThat(res, matchers.XMLMatches("""<domain>
<memoryBacking/>
</domain>"""))
def test_update_memory_backing_xml_add(self):
data = objects.LibvirtLiveMigrateData(
dst_wants_file_backed_memory=True)
xml = """<domain/>"""
doc = etree.fromstring(xml)
res = etree.tostring(migration._update_memory_backing_xml(doc, data),
encoding='unicode')
self.assertThat(res, matchers.XMLMatches("""<domain>
<memoryBacking>
<source type="file"/>
<access mode="shared"/>
<allocation mode="immediate"/>
</memoryBacking>
</domain>"""))
def test_update_memory_backing_xml_keep(self):
data = objects.LibvirtLiveMigrateData(
dst_wants_file_backed_memory=True)
xml = """<domain>
<memoryBacking>
<source type="file"/>
<access mode="shared"/>
<allocation mode="immediate"/>
</memoryBacking>
</domain>"""
doc = etree.fromstring(xml)
res = etree.tostring(migration._update_memory_backing_xml(doc, data),
encoding='unicode')
self.assertThat(res, matchers.XMLMatches("""<domain>
<memoryBacking>
<source type="file"/>
<access mode="shared"/>
<allocation mode="immediate"/>
</memoryBacking>
</domain>"""))
class MigrationMonitorTestCase(test.NoDBTestCase):
def setUp(self):


@ -2051,6 +2051,9 @@ class LibvirtConfigGuestMemoryBacking(LibvirtConfigObject):
self.hugepages = []
self.sharedpages = True
self.locked = False
self.filesource = False
self.sharedaccess = False
self.allocateimmediate = False
def format_dom(self):
root = super(LibvirtConfigGuestMemoryBacking, self).format_dom()
@ -2064,6 +2067,12 @@ class LibvirtConfigGuestMemoryBacking(LibvirtConfigObject):
root.append(etree.Element("nosharepages"))
if self.locked:
root.append(etree.Element("locked"))
if self.filesource:
root.append(etree.Element("source", type="file"))
if self.sharedaccess:
root.append(etree.Element("access", mode="shared"))
if self.allocateimmediate:
root.append(etree.Element("allocation", mode="immediate"))
return root


@ -285,6 +285,9 @@ MIN_LIBVIRT_MULTIATTACH = (3, 10, 0)
MIN_LIBVIRT_LUKS_VERSION = (2, 2, 0)
MIN_QEMU_LUKS_VERSION = (2, 6, 0)
MIN_LIBVIRT_FILE_BACKED_VERSION = (4, 0, 0)
MIN_QEMU_FILE_BACKED_VERSION = (2, 6, 0)
VGPU_RESOURCE_SEMAPHORE = "vgpu_resources"
@ -476,6 +479,8 @@ class LibvirtDriver(driver.ComputeDriver):
self._set_multiattach_support()
self._check_file_backed_memory_support()
if (CONF.libvirt.virt_type == 'lxc' and
not (CONF.libvirt.uid_maps and CONF.libvirt.gid_maps)):
LOG.warning("Running libvirt-lxc without user namespaces is "
@ -580,6 +585,36 @@ class LibvirtDriver(driver.ComputeDriver):
'versions of QEMU and libvirt. QEMU must be less than '
'2.10 or libvirt must be greater than or equal to 3.10.')
def _check_file_backed_memory_support(self):
if CONF.libvirt.file_backed_memory:
# file_backed_memory is only compatible with qemu/kvm virts
if CONF.libvirt.virt_type not in ("qemu", "kvm"):
raise exception.InternalError(
_('Running Nova with file_backed_memory and virt_type '
'%(type)s is not supported. file_backed_memory is only '
'supported with qemu and kvm types.') %
{'type': CONF.libvirt.virt_type})
# Check needed versions for file_backed_memory
if not self._host.has_min_version(
MIN_LIBVIRT_FILE_BACKED_VERSION,
MIN_QEMU_FILE_BACKED_VERSION):
raise exception.InternalError(
_('Running Nova with file_backed_memory requires libvirt '
'version %(libvirt)s and qemu version %(qemu)s') %
{'libvirt': libvirt_utils.version_to_string(
MIN_LIBVIRT_FILE_BACKED_VERSION),
'qemu': libvirt_utils.version_to_string(
MIN_QEMU_FILE_BACKED_VERSION)})
# file backed memory doesn't work with memory overcommit.
# Block service startup if file backed memory is enabled and
# ram_allocation_ratio is not 1.0
if CONF.ram_allocation_ratio != 1.0:
raise exception.InternalError(
'Running Nova with file_backed_memory requires '
'ram_allocation_ratio configured to 1.0')
def _prepare_migration_flags(self):
migration_flags = 0
@ -4675,6 +4710,14 @@ class LibvirtDriver(driver.ComputeDriver):
wantsrealtime = hardware.is_realtime_enabled(flavor)
wantsfilebacked = CONF.libvirt.file_backed_memory > 0
if wantsmempages and wantsfilebacked:
# Can't use file backed memory with hugepages
LOG.warning("Instance requested huge pages, but file-backed "
"memory is enabled, which is incompatible with huge pages")
raise exception.MemoryPagesUnsupported()
membacking = None
if wantsmempages:
pages = self._get_memory_backing_hugepages_support(
@ -4687,6 +4730,12 @@ class LibvirtDriver(driver.ComputeDriver):
membacking = vconfig.LibvirtConfigGuestMemoryBacking()
membacking.locked = True
membacking.sharedpages = False
if wantsfilebacked:
if not membacking:
membacking = vconfig.LibvirtConfigGuestMemoryBacking()
membacking.filesource = True
membacking.sharedaccess = True
membacking.allocateimmediate = True
return membacking
@ -6501,6 +6550,22 @@ class LibvirtDriver(driver.ComputeDriver):
:param disk_over_commit: if true, allow disk over commit
:returns: a LibvirtLiveMigrateData object
"""
# TODO(zcornelius): Remove this check in Stein, as we'll only support
# Rocky and newer computes.
# If file_backed_memory is enabled on this host, we have to make sure
# the source is new enough to support it. Since the source generates
# the XML for the destination, we depend on the source generating a
# file-backed XML for us, so fail if it won't do that.
if CONF.libvirt.file_backed_memory > 0:
srv = objects.Service.get_by_compute_host(context, instance.host)
if srv.version < 32:
msg = ("Cannot migrate instance %(uuid)s from node %(node)s. "
"Node %(node)s is not compatible with "
"file_backed_memory" % {"uuid": instance.uuid,
"node": srv.host})
raise exception.MigrationPreCheckError(reason=msg)
if disk_over_commit:
disk_available_gb = dst_compute_info['local_gb']
else:
@ -6534,6 +6599,9 @@ class LibvirtDriver(driver.ComputeDriver):
if disk_over_commit is not None:
data.disk_over_commit = disk_over_commit
data.disk_available_mb = disk_available_mb
data.dst_wants_file_backed_memory = \
CONF.libvirt.file_backed_memory > 0
return data
def cleanup_live_migration_destination_check(self, context,


@ -765,13 +765,16 @@ class Host(object):
:returns: the total amount of memory(MB).
"""
return self._get_hardware_info()[1]
if CONF.libvirt.file_backed_memory > 0:
return CONF.libvirt.file_backed_memory
else:
return self._get_hardware_info()[1]
def _sum_domain_memory_mb(self):
def _sum_domain_memory_mb(self, include_host=True):
"""Get the total memory consumed by guest domains
Subtract available host memory from dom0 to get real used memory
within dom0
If include_host is True, subtract available host memory from domain 0
to get the real memory used within dom0 (the host domain under xen)
"""
used = 0
for guest in self.list_guests(only_guests=False):
@ -783,7 +786,7 @@ class Host(object):
" %(uuid)s, exception: %(ex)s",
{"uuid": guest.uuid, "ex": e})
continue
if guest.id == 0:
if include_host and guest.id == 0:
# Memory usage for the host domain (dom0 in xen) is the
# reported memory minus available memory
used += (dom_mem - self._get_avail_memory_kb())
@ -814,7 +817,11 @@ class Host(object):
if CONF.libvirt.virt_type == 'xen':
# For xen, report the sum of all domains, with dom0 included
return self._sum_domain_memory_mb()
return self._sum_domain_memory_mb(include_host=True)
elif CONF.libvirt.file_backed_memory > 0:
# For file_backed_memory, report the total usage of guests,
# ignoring host memory
return self._sum_domain_memory_mb(include_host=False)
else:
return (self.get_memory_mb_total() -
(self._get_avail_memory_kb() // units.Ki))
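A worked example of the accounting above, using the guest sizes from this
change's unit tests (illustrative values):
# With file_backed_memory=1048576 (MiB), the configured capacity is
# reported as total memory instead of the hardware total.
total_mb = 1048576
# Used memory is the sum of all guest allocations; the host domain is
# excluded (include_host=False) because host RAM only serves as pagecache.
used_mb = sum([4096, 2048, 1024, 1024])  # 8192 MiB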


@ -83,6 +83,7 @@ def get_updated_guest_xml(guest, migrate_data, get_volume_config):
xml_doc = _update_serial_xml(xml_doc, migrate_data)
xml_doc = _update_volume_xml(xml_doc, migrate_data, get_volume_config)
xml_doc = _update_perf_events_xml(xml_doc, migrate_data)
xml_doc = _update_memory_backing_xml(xml_doc, migrate_data)
return etree.tostring(xml_doc, encoding='unicode')
@ -222,6 +223,52 @@ def _update_perf_events_xml(xml_doc, migrate_data):
return xml_doc
def _update_memory_backing_xml(xml_doc, migrate_data):
"""Update libvirt domain XML for file backed memory
If the incoming XML has a memoryBacking element, remove its access,
source, and allocation child elements to get it to a known, consistent
state. If there is no memoryBacking element, create one.
If the destination wants file backed memory, add source, access,
and allocation child elements.
"""
old_xml_has_memory_backing = True
file_backed = False
memory_backing = xml_doc.findall('./memoryBacking')
if 'dst_wants_file_backed_memory' in migrate_data:
file_backed = migrate_data.dst_wants_file_backed_memory
if not memory_backing:
# Create memoryBacking element
memory_backing = etree.Element("memoryBacking")
old_xml_has_memory_backing = False
else:
memory_backing = memory_backing[0]
# Remove existing file backed memory tags, if they exist.
for name in ("access", "source", "allocation"):
tag = memory_backing.findall(name)
if tag:
memory_backing.remove(tag[0])
# Leave empty memoryBacking element
if not file_backed:
return xml_doc
# Add file_backed memoryBacking children
memory_backing.append(etree.Element("source", type="file"))
memory_backing.append(etree.Element("access", mode="shared"))
memory_backing.append(etree.Element("allocation", mode="immediate"))
if not old_xml_has_memory_backing:
xml_doc.append(memory_backing)
return xml_doc
def find_job_type(guest, instance):
"""Determine the (likely) current migration job type


@ -0,0 +1,25 @@
---
features:
- |
The libvirt driver now allows utilizing file backed memory for qemu/KVM
virtual machines, via a new configuration attribute
``[libvirt]/file_backed_memory``, defaulting to 0 (disabled).
``[libvirt]/file_backed_memory`` specifies the available capacity in MiB
for file backed memory, at the directory configured for
``memory_backing_dir`` in libvirt's ``qemu.conf``. When enabled, the
libvirt driver will report the configured value for the total memory
capacity of the node, and will report used memory as the sum of all
configured guest memory.
Live migration from nodes not compatible with file backed memory to nodes
with file backed memory is not allowed, and will result in an error. It is
recommended to upgrade all nodes before enabling file backed memory.
Note that file backed memory is not compatible with hugepages, and is not
compatible with memory overcommit. If file backed memory is enabled,
``ram_allocation_ratio`` must be configured to ``1.0``.
For more details, see the admin guide documentation:
https://docs.openstack.org/nova/latest/admin/file-backed-memory.html