From 1d5c45d703b8343d6683f30056fc2fb01c4405ca Mon Sep 17 00:00:00 2001
From: Jay Faulkner
Date: Mon, 11 Mar 2024 17:29:58 +0100
Subject: [PATCH] Inspect non-raw images for safety

This is a backport of two changes merged together to facilitate
backporting.

The first is a refactor of disk utilities:

    Import disk_{utils,partitioner} from ironic-lib

    With the iscsi deploy long gone, these modules are only used in IPA
    and in fact represent a large part of its critical logic. Having
    them separate sometimes makes fixing issues tricky if the interface
    of a function needs changing.

    This change imports the code mostly as it is, just removing
    run_as_root and a deprecated function, as well as moving
    configuration options to config.py. It also migrates one relevant
    function from ironic_lib.utils.

The second is the fix for the security issue:

    Inspect non-raw images for safety

    When IPA gets a non-raw image, it performs an on-the-fly conversion
    using qemu-img convert, and also runs qemu-img frequently to get
    basic information about the image before validating it. Now we
    ensure that, before any qemu-img calls are made, we have inspected
    the image for safety, and we pass the detected format through.

    If given a disk_format=raw image and image streaming is enabled
    (the default), we retain the existing behavior of not inspecting
    the image in any way and streaming it bit-perfect to the device. In
    this case, we never use qemu-based tools on the image at all.

    If given a disk_format=raw image and image streaming is disabled,
    this change fixes a bug where the image may have been converted if
    it was not actually raw in the first place. We now stream these
    bit-perfect to the device as well.

    Adds two config options:

    - [DEFAULT]/disable_deep_image_inspection, which can be set to
      "True" in order to disable all security features. Do not do this.
    - [DEFAULT]/permitted_image_formats, default raw,qcow2, for the
      image types IPA should accept.
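The image-handling policy described above can be sketched as a small decision function. This is an editor's illustration only, not the agent's real API: `choose_write_path` and its return values are invented names; only `disk_format`, the streaming behavior, and the two `[DEFAULT]` options come from the patch.

```python
def choose_write_path(disk_format, stream_raw_images=True,
                      deep_inspection=True,
                      permitted_formats=('raw', 'qcow2')):
    """Return which code path would handle an incoming image.

    Sketch of the policy in the commit message; names are illustrative.
    """
    if disk_format == 'raw':
        # Raw images are streamed bit-perfect to the device and never
        # touched by qemu-based tools. With streaming enabled this was
        # always the behavior; with streaming disabled it is the bug
        # fix: the image is still never converted.
        return 'stream'
    if not deep_inspection:
        # [DEFAULT]/disable_deep_image_inspection=True restores the old
        # unchecked behavior. Do not do this.
        return 'convert-unchecked'
    if disk_format not in permitted_formats:
        # [DEFAULT]/permitted_image_formats gates which formats are
        # ever handed to qemu-img.
        raise ValueError('Security: format %s not permitted'
                         % disk_format)
    # Inspect first, then pass the detected format through to
    # qemu-img convert.
    return 'inspect-then-convert'
```

Note that a raw image short-circuits before any of the inspection options are consulted, which matches the "never use qemu-based tools on the image at all" guarantee above.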
Both of these configuration options are wired up to be set by the
lookup data returned by Ironic at lookup time.

This uses an image format inspection module imported from Nova; this
inspector will eventually live in oslo.utils, at which point we'll
migrate our usage of the inspector to it.

Closes-Bug: #2071740
Co-Authored-By: Dmitry Tantsur
Change-Id: I5254b80717cb5a7f9084e3eff32a00b968f987b7
---
 ironic_python_agent/agent.py                   |    6 +
 ironic_python_agent/config.py                  |   76 +-
 ironic_python_agent/disk_partitioner.py        |  125 ++
 ironic_python_agent/disk_utils.py              |  787 ++++++++++
 ironic_python_agent/errors.py                  |    9 +
 ironic_python_agent/extensions/standby.py      |   86 +-
 ironic_python_agent/format_inspector.py        | 1044 ++++++++++++++++
 ironic_python_agent/partition_utils.py         |   11 +-
 ironic_python_agent/qemu_img.py                |  153 +++
 ironic_python_agent/tests/unit/base.py         |    2 +
 .../tests/unit/extensions/test_standby.py      |  187 ++-
 .../tests/unit/test_disk_partitioner.py        |  202 +++
 .../tests/unit/test_disk_utils.py              | 1088 +++++++++++++++++
 .../tests/unit/test_partition_utils.py         |   53 +-
 .../tests/unit/test_qemu_img.py                |  332 +++++
 .../image-security-5c23b890409101c9.yaml       |   48 +
 16 files changed, 4101 insertions(+), 108 deletions(-)
 create mode 100644 ironic_python_agent/disk_partitioner.py
 create mode 100644 ironic_python_agent/disk_utils.py
 create mode 100644 ironic_python_agent/format_inspector.py
 create mode 100644 ironic_python_agent/qemu_img.py
 create mode 100644 ironic_python_agent/tests/unit/test_disk_partitioner.py
 create mode 100644 ironic_python_agent/tests/unit/test_disk_utils.py
 create mode 100644 ironic_python_agent/tests/unit/test_qemu_img.py
 create mode 100644 releasenotes/notes/image-security-5c23b890409101c9.yaml

diff --git a/ironic_python_agent/agent.py b/ironic_python_agent/agent.py
index d41a03b61..fbf6edb8c 100644
--- a/ironic_python_agent/agent.py
+++ b/ironic_python_agent/agent.py
@@ -467,6 +467,12 @@ class IronicPythonAgent(base.ExecuteCommandMixin):
         if
config.get('metrics_statsd'): for opt, val in config.items(): setattr(cfg.CONF.metrics_statsd, opt, val) + if config.get('disable_deep_image_inspection') is not None: + cfg.CONF.set_override('disable_deep_image_inspection', + config['disable_deep_image_inspection']) + if config.get('permitted_image_formats') is not None: + cfg.CONF.set_override('permitted_image_formats', + config['permitted_image_formats']) md5_allowed = config.get('agent_md5_checksum_enable') if md5_allowed is not None: cfg.CONF.set_override('md5_enabled', md5_allowed) diff --git a/ironic_python_agent/config.py b/ironic_python_agent/config.py index 35cde2729..346cd2400 100644 --- a/ironic_python_agent/config.py +++ b/ironic_python_agent/config.py @@ -369,13 +369,82 @@ cli_opts = [ help='If the agent should rebuild the configuration drive ' 'using a local filesystem, instead of letting Ironic ' 'determine if this action is necessary.'), + cfg.BoolOpt('disable_deep_image_inspection', + default=False, + help='This disables the additional deep image inspection ' + 'the agent does before converting and writing an image. ' + 'Generally, this should remain enabled for maximum ' + 'security, but this option allows disabling it if there ' + 'is a compatability concern.'), + cfg.ListOpt('permitted_image_formats', + default='raw,qcow2', + help='The supported list of image formats which are ' + 'permitted for deployment with Ironic Python Agent. If ' + 'an image format outside of this list is detected, the ' + 'image validation logic will fail the deployment ' + 'process. This check is skipped if deep image ' + 'inspection is disabled.'), ] -CONF.register_cli_opts(cli_opts) +disk_utils_opts = [ + cfg.IntOpt('efi_system_partition_size', + default=550, + help='Size of EFI system partition in MiB when configuring ' + 'UEFI systems for local boot. 
A common minimum is ~200 ' + 'megabytes, however OS driven firmware updates and ' + 'unikernel usage generally requires more space on the ' + 'efi partition.'), + cfg.IntOpt('bios_boot_partition_size', + default=1, + help='Size of BIOS Boot partition in MiB when configuring ' + 'GPT partitioned systems for local boot in BIOS.'), + cfg.StrOpt('dd_block_size', + default='1M', + help='Block size to use when writing to the nodes disk.'), + cfg.IntOpt('partition_detection_attempts', + default=3, + min=1, + help='Maximum attempts to detect a newly created partition.'), + cfg.IntOpt('partprobe_attempts', + default=10, + help='Maximum number of attempts to try to read the ' + 'partition.'), + cfg.IntOpt('image_convert_memory_limit', + default=2048, + help='Memory limit for "qemu-img convert" in MiB. Implemented ' + 'via the address space resource limit.'), + cfg.IntOpt('image_convert_attempts', + default=3, + help='Number of attempts to convert an image.'), +] + +disk_part_opts = [ + cfg.IntOpt('check_device_interval', + default=1, + help='After Ironic has completed creating the partition table, ' + 'it continues to check for activity on the attached iSCSI ' + 'device status at this interval prior to copying the image' + ' to the node, in seconds'), + cfg.IntOpt('check_device_max_retries', + default=20, + help='The maximum number of times to check that the device is ' + 'not accessed by another process. If the device is still ' + 'busy after that, the disk partitioning will be treated as' + ' having failed.') +] def list_opts(): - return [('DEFAULT', cli_opts)] + return [('DEFAULT', cli_opts), + ('disk_utils', disk_utils_opts), + ('disk_partitioner', disk_part_opts)] + + +def populate_config(): + """Populate configuration. 
In a method so tests can easily utilize it.""" + CONF.register_cli_opts(cli_opts) + CONF.register_opts(disk_utils_opts, group='disk_utils') + CONF.register_opts(disk_part_opts, group='disk_partitioner') def override(params): @@ -402,3 +471,6 @@ def override(params): LOG.warning('Unable to override configuration option %(key)s ' 'with %(value)r: %(exc)s', {'key': key, 'value': value, 'exc': exc}) + + +populate_config() diff --git a/ironic_python_agent/disk_partitioner.py b/ironic_python_agent/disk_partitioner.py new file mode 100644 index 000000000..762c75606 --- /dev/null +++ b/ironic_python_agent/disk_partitioner.py @@ -0,0 +1,125 @@ +# Copyright 2014 Red Hat, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +Code for creating partitions on a disk. + +Imported from ironic-lib's disk_utils as of the following commit: +https://opendev.org/openstack/ironic-lib/commit/42fa5d63861ba0f04b9a4f67212173d7013a1332 +""" + +import logging + +from ironic_lib.common.i18n import _ +from ironic_lib import exception +from ironic_lib import utils +from oslo_config import cfg + +CONF = cfg.CONF + +LOG = logging.getLogger(__name__) + + +class DiskPartitioner(object): + + def __init__(self, device, disk_label='msdos', alignment='optimal'): + """A convenient wrapper around the parted tool. + + :param device: The device path. + :param disk_label: The type of the partition table. 
Valid types are: + "bsd", "dvh", "gpt", "loop", "mac", "msdos", + "pc98", or "sun". + :param alignment: Set alignment for newly created partitions. + Valid types are: none, cylinder, minimal and + optimal. + + """ + self._device = device + self._disk_label = disk_label + self._alignment = alignment + self._partitions = [] + + def _exec(self, *args): + # NOTE(lucasagomes): utils.execute() is already a wrapper on top + # of processutils.execute() which raises specific + # exceptions. It also logs any failure so we don't + # need to log it again here. + utils.execute('parted', '-a', self._alignment, '-s', self._device, + '--', 'unit', 'MiB', *args, use_standard_locale=True, + run_as_root=True) + + def add_partition(self, size, part_type='primary', fs_type='', + boot_flag=None, extra_flags=None): + """Add a partition. + + :param size: The size of the partition in MiB. + :param part_type: The type of the partition. Valid values are: + primary, logical, or extended. + :param fs_type: The filesystem type. Valid types are: ext2, fat32, + fat16, HFS, linux-swap, NTFS, reiserfs, ufs. + If blank (''), it will create a Linux native + partition (83). + :param boot_flag: Boot flag that needs to be configured on the + partition. Ignored if None. It can take values + 'bios_grub', 'boot'. + :param extra_flags: List of flags to set on the partition. Ignored + if None. + :returns: The partition number. + + """ + self._partitions.append({'size': size, + 'type': part_type, + 'fs_type': fs_type, + 'boot_flag': boot_flag, + 'extra_flags': extra_flags}) + return len(self._partitions) + + def get_partitions(self): + """Get the partitioning layout. + + :returns: An iterator with the partition number and the + partition layout. 
+ + """ + return enumerate(self._partitions, 1) + + def commit(self): + """Write to the disk.""" + LOG.debug("Committing partitions to disk.") + cmd_args = ['mklabel', self._disk_label] + # NOTE(lucasagomes): Lead in with 1MiB to allow room for the + # partition table itself. + start = 1 + for num, part in self.get_partitions(): + end = start + part['size'] + cmd_args.extend(['mkpart', part['type'], part['fs_type'], + str(start), str(end)]) + if part['boot_flag']: + cmd_args.extend(['set', str(num), part['boot_flag'], 'on']) + if part['extra_flags']: + for flag in part['extra_flags']: + cmd_args.extend(['set', str(num), flag, 'on']) + start = end + + self._exec(*cmd_args) + + try: + from ironic_python_agent import disk_utils # circular dependency + disk_utils.wait_for_disk_to_become_available(self._device) + except exception.IronicException as e: + raise exception.InstanceDeployFailure( + _('Disk partitioning failed on device %(device)s. ' + 'Error: %(error)s') + % {'device': self._device, 'error': e}) diff --git a/ironic_python_agent/disk_utils.py b/ironic_python_agent/disk_utils.py new file mode 100644 index 000000000..66b8f5ac2 --- /dev/null +++ b/ironic_python_agent/disk_utils.py @@ -0,0 +1,787 @@ +# Copyright 2014 Red Hat, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +Various utilities related to disk handling. 
+ +Imported from ironic-lib's disk_utils as of the following commit: +https://opendev.org/openstack/ironic-lib/commit/42fa5d63861ba0f04b9a4f67212173d7013a1332 +""" + +import logging +import os +import re +import stat +import time + +from ironic_lib.common.i18n import _ +from ironic_lib import exception +from ironic_lib import utils +from oslo_concurrency import processutils +from oslo_config import cfg +from oslo_utils import excutils +import tenacity + +from ironic_python_agent import disk_partitioner +from ironic_python_agent import errors +from ironic_python_agent import format_inspector +from ironic_python_agent import qemu_img + +CONF = cfg.CONF + +LOG = logging.getLogger(__name__) + +_PARTED_PRINT_RE = re.compile(r"^(\d+):([\d\.]+)MiB:" + r"([\d\.]+)MiB:([\d\.]+)MiB:(\w*):(.*):(.*);") +_PARTED_TABLE_TYPE_RE = re.compile(r'^.*partition\s+table\s*:\s*(gpt|msdos)', + re.IGNORECASE | re.MULTILINE) + +CONFIGDRIVE_LABEL = "config-2" +MAX_CONFIG_DRIVE_SIZE_MB = 64 + +GPT_SIZE_SECTORS = 33 + +# Maximum disk size supported by MBR is 2TB (2 * 1024 * 1024 MB) +MAX_DISK_SIZE_MB_SUPPORTED_BY_MBR = 2097152 + + +def list_partitions(device): + """Get partitions information from given device. + + :param device: The device path. + :returns: list of dictionaries (one per partition) with keys: + number, start, end, size (in MiB), filesystem, partition_name, + flags, path. 
+ """ + output = utils.execute( + 'parted', '-s', '-m', device, 'unit', 'MiB', 'print', + use_standard_locale=True, run_as_root=True)[0] + if isinstance(output, bytes): + output = output.decode("utf-8") + lines = [line for line in output.split('\n') if line.strip()][2:] + # Example of line: 1:1.00MiB:501MiB:500MiB:ext4::boot + fields = ('number', 'start', 'end', 'size', 'filesystem', 'partition_name', + 'flags') + result = [] + for line in lines: + match = _PARTED_PRINT_RE.match(line) + if match is None: + LOG.warning("Partition information from parted for device " + "%(device)s does not match " + "expected format: %(line)s", + dict(device=device, line=line)) + continue + # Cast int fields to ints (some are floats and we round them down) + groups = [int(float(x)) if i < 4 else x + for i, x in enumerate(match.groups())] + item = dict(zip(fields, groups)) + item['path'] = partition_index_to_path(device, item['number']) + result.append(item) + return result + + +def count_mbr_partitions(device): + """Count the number of primary and logical partitions on a MBR + + :param device: The device path. + :returns: A tuple with the number of primary partitions and logical + partitions. + :raise: ValueError if the device does not have a valid MBR partition + table. 
+ """ + # -d do not update the kernel table + # -s print a summary of the partition table + output, err = utils.execute('partprobe', '-d', '-s', device, + use_standard_locale=True, run_as_root=True) + if 'msdos' not in output: + raise ValueError('The device %s does not have a valid MBR ' + 'partition table' % device) + # Sample output: /dev/vdb: msdos partitions 1 2 3 <5 6 7> + # The partitions with number > 4 (and inside <>) are logical partitions + output = output.replace('<', '').replace('>', '') + partitions = [int(s) for s in output.split() if s.isdigit()] + + return (sum(i < 5 for i in partitions), sum(i > 4 for i in partitions)) + + +def get_disk_identifier(dev): + """Get the disk identifier from the disk being exposed by the ramdisk. + + This disk identifier is appended to the pxe config which will then be + used by chain.c32 to detect the correct disk to chainload. This is helpful + in deployments to nodes with multiple disks. + + http://www.syslinux.org/wiki/index.php/Comboot/chain.c32#mbr: + + :param dev: Path for the already populated disk device. + :raises OSError: When the hexdump binary is unavailable. + :returns: The Disk Identifier. + """ + disk_identifier = utils.execute('hexdump', '-s', '440', '-n', '4', + '-e', '''\"0x%08x\"''', + dev, attempts=5, delay_on_retry=True, + run_as_root=True) + return disk_identifier[0] + + +def get_partition_table_type(device): + """Get partition table type, msdos or gpt. 
+ + :param device: the name of the device + :return: dos, gpt or None + """ + out = utils.execute('parted', '--script', device, '--', 'print', + use_standard_locale=True, run_as_root=True)[0] + m = _PARTED_TABLE_TYPE_RE.search(out) + if m: + return m.group(1) + + LOG.warning("Unable to get partition table type for device %s", device) + return 'unknown' + + +def _blkid(device, probe=False, fields=None): + args = [] + if probe: + args.append('-p') + if fields: + args += sum((['-s', field] for field in fields), []) + + output, err = utils.execute('blkid', device, *args, + use_standard_locale=True, run_as_root=True) + if output.strip(): + return output.split(': ', 1)[1] + else: + return "" + + +def _lsblk(device, deps=True, fields=None): + args = ['--pairs', '--bytes', '--ascii'] + if not deps: + args.append('--nodeps') + if fields: + args.extend(['--output', ','.join(fields)]) + else: + args.append('--output-all') + + output, err = utils.execute('lsblk', device, *args, + use_standard_locale=True, run_as_root=True) + return output.strip() + + +def get_device_information(device, fields=None): + """Get information about a device using blkid. + + Can be applied to all block devices: disks, RAID, partitions. + + :param device: Device name. + :param fields: A list of fields to request (all by default). + :return: A dictionary with requested fields as keys. + :raises: ProcessExecutionError + """ + output = _lsblk(device, fields=fields, deps=False) + if output: + return next(utils.parse_device_tags(output)) + else: + return {} + + +def find_efi_partition(device): + """Looks for the EFI partition on a given device. + + A boot partition on a GPT disk is assumed to be an EFI partition as well. 
+ + :param device: the name of the device + :return: the EFI partition record from `list_partitions` or None + """ + is_gpt = get_partition_table_type(device) == 'gpt' + for part in list_partitions(device): + flags = {x.strip() for x in part['flags'].split(',')} + if 'esp' in flags or ('boot' in flags and is_gpt): + LOG.debug("Found EFI partition %s on device %s", part, device) + return part + else: + LOG.debug("No efi partition found on device %s", device) + + +_ISCSI_PREFIX = "iqn.2008-10.org.openstack:" + + +def is_last_char_digit(dev): + """check whether device name ends with a digit""" + if len(dev) >= 1: + return dev[-1].isdigit() + return False + + +def partition_index_to_path(device, index): + """Guess a partition path based on its device and index. + + :param device: Device path. + :param index: Partition index. + """ + # the actual device names in the baremetal are like /dev/sda, /dev/sdb etc. + # While for the iSCSI device, the naming convention has a format which has + # iqn also embedded in it. + # When this function is called by ironic-conductor, the iSCSI device name + # should be appended by "part%d". While on the baremetal, it should name + # the device partitions as /dev/sda1 and not /dev/sda-part1. + if _ISCSI_PREFIX in device: + part_template = '%s-part%d' + elif is_last_char_digit(device): + part_template = '%sp%d' + else: + part_template = '%s%d' + return part_template % (device, index) + + +def make_partitions(dev, root_mb, swap_mb, ephemeral_mb, + configdrive_mb, node_uuid, commit=True, + boot_option="netboot", boot_mode="bios", + disk_label=None, cpu_arch=""): + """Partition the disk device. + + Create partitions for root, swap, ephemeral and configdrive on a + disk device. + + :param dev: Path for the device to work on. + :param root_mb: Size of the root partition in mebibytes (MiB). + :param swap_mb: Size of the swap partition in mebibytes (MiB). If 0, + no partition will be created. 
+ :param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB). + If 0, no partition will be created. + :param configdrive_mb: Size of the configdrive partition in + mebibytes (MiB). If 0, no partition will be created. + :param commit: True/False. Default for this setting is True. If False + partitions will not be written to disk. + :param boot_option: Can be "local" or "netboot". "netboot" by default. + :param boot_mode: Can be "bios" or "uefi". "bios" by default. + :param node_uuid: Node's uuid. Used for logging. + :param disk_label: The disk label to be used when creating the + partition table. Valid values are: "msdos", "gpt" or None; If None + Ironic will figure it out according to the boot_mode parameter. + :param cpu_arch: Architecture of the node the disk device belongs to. + When using the default value of None, no architecture specific + steps will be taken. This default should be used for x86_64. When + set to ppc64*, architecture specific steps are taken for booting a + partition image locally. + :returns: A dictionary containing the partition type as Key and partition + path as Value for the partitions created by this method. + + """ + LOG.debug("Starting to partition the disk device: %(dev)s " + "for node %(node)s", + {'dev': dev, 'node': node_uuid}) + part_dict = {} + + if disk_label is None: + disk_label = 'gpt' if boot_mode == 'uefi' else 'msdos' + + dp = disk_partitioner.DiskPartitioner(dev, disk_label=disk_label) + + # For uefi localboot, switch partition table to gpt and create the efi + # system partition as the first partition. 
+ if boot_mode == "uefi" and boot_option == "local": + part_num = dp.add_partition(CONF.disk_utils.efi_system_partition_size, + fs_type='fat32', + boot_flag='boot') + part_dict['efi system partition'] = partition_index_to_path( + dev, part_num) + + if (boot_mode == "bios" and boot_option == "local" and disk_label == "gpt" + and not cpu_arch.startswith('ppc64')): + part_num = dp.add_partition(CONF.disk_utils.bios_boot_partition_size, + boot_flag='bios_grub') + part_dict['BIOS Boot partition'] = partition_index_to_path( + dev, part_num) + + # NOTE(mjturek): With ppc64* nodes, partition images are expected to have + # a PrEP partition at the start of the disk. This is an 8 MiB partition + # with the boot and prep flags set. The bootloader should be installed + # here. + if (cpu_arch.startswith("ppc64") and boot_mode == "bios" + and boot_option == "local"): + LOG.debug("Add PReP boot partition (8 MB) to device: " + "%(dev)s for node %(node)s", + {'dev': dev, 'node': node_uuid}) + boot_flag = 'boot' if disk_label == 'msdos' else None + part_num = dp.add_partition(8, part_type='primary', + boot_flag=boot_flag, extra_flags=['prep']) + part_dict['PReP Boot partition'] = partition_index_to_path( + dev, part_num) + if ephemeral_mb: + LOG.debug("Add ephemeral partition (%(size)d MB) to device: %(dev)s " + "for node %(node)s", + {'dev': dev, 'size': ephemeral_mb, 'node': node_uuid}) + part_num = dp.add_partition(ephemeral_mb) + part_dict['ephemeral'] = partition_index_to_path(dev, part_num) + if swap_mb: + LOG.debug("Add Swap partition (%(size)d MB) to device: %(dev)s " + "for node %(node)s", + {'dev': dev, 'size': swap_mb, 'node': node_uuid}) + part_num = dp.add_partition(swap_mb, fs_type='linux-swap') + part_dict['swap'] = partition_index_to_path(dev, part_num) + if configdrive_mb: + LOG.debug("Add config drive partition (%(size)d MB) to device: " + "%(dev)s for node %(node)s", + {'dev': dev, 'size': configdrive_mb, 'node': node_uuid}) + part_num = 
dp.add_partition(configdrive_mb) + part_dict['configdrive'] = partition_index_to_path(dev, part_num) + + # NOTE(lucasagomes): Make the root partition the last partition. This + # enables tools like cloud-init's growroot utility to expand the root + # partition until the end of the disk. + LOG.debug("Add root partition (%(size)d MB) to device: %(dev)s " + "for node %(node)s", + {'dev': dev, 'size': root_mb, 'node': node_uuid}) + + boot_val = 'boot' if (not cpu_arch.startswith("ppc64") + and boot_mode == "bios" + and boot_option == "local" + and disk_label == "msdos") else None + + part_num = dp.add_partition(root_mb, boot_flag=boot_val) + + part_dict['root'] = partition_index_to_path(dev, part_num) + + if commit: + # write to the disk + dp.commit() + trigger_device_rescan(dev) + return part_dict + + +def is_block_device(dev): + """Check whether a device is block or not.""" + attempts = CONF.disk_utils.partition_detection_attempts + for attempt in range(attempts): + try: + s = os.stat(dev) + except OSError as e: + LOG.debug("Unable to stat device %(dev)s. Attempt %(attempt)d " + "out of %(total)d. 
Error: %(err)s", + {"dev": dev, "attempt": attempt + 1, + "total": attempts, "err": e}) + time.sleep(1) + else: + return stat.S_ISBLK(s.st_mode) + msg = _("Unable to stat device %(dev)s after attempting to verify " + "%(attempts)d times.") % {'dev': dev, 'attempts': attempts} + LOG.error(msg) + raise exception.InstanceDeployFailure(msg) + + +def dd(src, dst, conv_flags=None): + """Execute dd from src to dst.""" + if conv_flags: + extra_args = ['conv=%s' % conv_flags] + else: + extra_args = [] + + utils.dd(src, dst, 'bs=%s' % CONF.disk_utils.dd_block_size, 'oflag=direct', + *extra_args) + + +def _image_inspection(filename): + try: + inspector_cls = format_inspector.detect_file_format(filename) + if (not inspector_cls + or not hasattr(inspector_cls, 'safety_check') + or not inspector_cls.safety_check()): + err = "Security: Image failed safety check" + LOG.error(err) + raise errors.InvalidImage(details=err) + + except (format_inspector.ImageFormatError, AttributeError): + # NOTE(JayF): Because we already validated the format is OK and matches + # expectation, it should be impossible for us to get an + # ImageFormatError or AttributeError. We handle it anyway + # for completeness. + msg = "Security: Unable to safety check image" + LOG.error(msg) + raise errors.InvalidImage(details=msg) + + return inspector_cls + + +def get_and_validate_image_format(filename, ironic_disk_format): + """Get the format of a given image file and ensure it's allowed. + + This method uses the format inspector originally written for glance to + safely detect the image format. It also sanity checks to ensure any + specified format matches the provided one (except raw; which in some + cases is a request to convert to raw) and that the format is in the + allowed list of formats. + + It also performs a basic safety check on the image. + + This entire process can be bypassed, and the older code path used, + by setting CONF.disable_deep_image_inspection to True. 
+ + See https://bugs.launchpad.net/ironic/+bug/2071740 for full details on + why this must always happen. + + :param filename: The name of the image file to validate. + :param ironic_disk_format: The ironic-provided expected format of the image + :returns: tuple of validated img_format and size + """ + if CONF.disable_deep_image_inspection: + data = qemu_img.image_info(filename) + img_format = data.file_format + size = data.virtual_size + else: + if ironic_disk_format == 'raw': + # NOTE(JayF): IPA unconditionally writes raw images to disk without + # conversion with dd or raw python, not qemu-img, it's + # not required to safety check raw images. + img_format = ironic_disk_format + size = os.path.getsize(filename) + else: + img_format_cls = _image_inspection(filename) + img_format = str(img_format_cls) + size = img_format_cls.virtual_size + if img_format not in CONF.permitted_image_formats: + msg = ("Security: Detected image format was %s, but only %s " + "are allowed") + fmts = ', '.join(CONF.permitted_image_formats) + LOG.error(msg, img_format, fmts) + raise errors.InvalidImage( + details=msg % (img_format, fmts) + ) + elif ironic_disk_format and ironic_disk_format != img_format: + msg = ("Security: Expected format was %s, but image was " + "actually %s" % (ironic_disk_format, img_format)) + LOG.error(msg) + raise errors.InvalidImage(details=msg) + + return img_format, size + + +def populate_image(src, dst, conv_flags=None, + source_format=None, is_raw=False): + """Populate a provided destination device with the image + + :param src: An image already security checked in format disk_format + :param dst: A location, usually a partition or block device, + to write the image + :param conv_flags: Conversion flags to pass to dd if provided + :param source_format: format of the image + :param is_raw: Ironic indicates image is raw; do not convert! 
+ """ + if is_raw: + dd(src, dst, conv_flags=conv_flags) + else: + qemu_img.convert_image(src, dst, 'raw', True, + sparse_size='0', source_format=source_format) + + +def block_uuid(dev): + """Get UUID of a block device. + + Try to fetch the UUID, if that fails, try to fetch the PARTUUID. + """ + info = get_device_information(dev, fields=['UUID', 'PARTUUID']) + if info.get('UUID'): + return info['UUID'] + else: + LOG.debug('Falling back to partition UUID as the block device UUID ' + 'was not found while examining %(device)s', + {'device': dev}) + return info.get('PARTUUID', '') + + +def get_dev_block_size(dev): + """Get the device size in 512 byte sectors.""" + block_sz, cmderr = utils.execute('blockdev', '--getsz', dev, + run_as_root=True) + return int(block_sz) + + +def destroy_disk_metadata(dev, node_uuid): + """Destroy metadata structures on node's disk. + + Ensure that node's disk magic strings are wiped without zeroing the + entire drive. To do this we use the wipefs tool from util-linux. + + :param dev: Path for the device to work on. + :param node_uuid: Node's uuid. Used for logging. + """ + # NOTE(NobodyCam): This is needed to work around bug: + # https://bugs.launchpad.net/ironic/+bug/1317647 + LOG.debug("Start destroy disk metadata for node %(node)s.", + {'node': node_uuid}) + try: + utils.execute('wipefs', '--force', '--all', dev, + use_standard_locale=True, run_as_root=True) + except processutils.ProcessExecutionError as e: + with excutils.save_and_reraise_exception() as ctxt: + # NOTE(zhenguo): Check if --force option is supported for wipefs, + # if not, we should try without it. + if '--force' in str(e): + ctxt.reraise = False + utils.execute('wipefs', '--all', dev, + use_standard_locale=True, run_as_root=True) + # NOTE(TheJulia): sgdisk attempts to load and make sense of the + # partition tables in advance of wiping the partition data. + # This means when a CRC error is found, sgdisk fails before + # erasing partition data. 
+ # This is the same bug as + # https://bugs.launchpad.net/ironic-python-agent/+bug/1737556 + + # Overwrite the Primary GPT, catch very small partitions (like EBRs) + dd_device = 'of=%s' % dev + dd_count = 'count=%s' % GPT_SIZE_SECTORS + dev_size = get_dev_block_size(dev) + if dev_size < GPT_SIZE_SECTORS: + dd_count = 'count=%s' % dev_size + utils.execute('dd', 'bs=512', 'if=/dev/zero', dd_device, dd_count, + 'oflag=direct', use_standard_locale=True, run_as_root=True) + + # Overwrite the Secondary GPT, do this only if there could be one + if dev_size > GPT_SIZE_SECTORS: + gpt_backup = dev_size - GPT_SIZE_SECTORS + dd_seek = 'seek=%i' % gpt_backup + dd_count = 'count=%s' % GPT_SIZE_SECTORS + utils.execute('dd', 'bs=512', 'if=/dev/zero', dd_device, dd_count, + 'oflag=direct', dd_seek, use_standard_locale=True, + run_as_root=True) + + # Go ahead and let sgdisk run as well. + utils.execute('sgdisk', '-Z', dev, use_standard_locale=True, + run_as_root=True) + + try: + wait_for_disk_to_become_available(dev) + except exception.IronicException as e: + raise exception.InstanceDeployFailure( + _('Destroying metadata failed on device %(device)s. ' + 'Error: %(error)s') + % {'device': dev, 'error': e}) + + LOG.info("Disk metadata on %(dev)s successfully destroyed for node " + "%(node)s", {'dev': dev, 'node': node_uuid}) + + +def _fix_gpt_structs(device, node_uuid): + """Checks backup GPT data structures and moves them to end of the device + + :param device: The device path. + :param node_uuid: UUID of the Node. Used for logging. + :raises: InstanceDeployFailure, if any disk partitioning related + commands fail. 
+ """ + try: + output, _err = utils.execute('sgdisk', '-v', device, run_as_root=True) + + search_str = "it doesn't reside\nat the end of the disk" + if search_str in output: + utils.execute('sgdisk', '-e', device, run_as_root=True) + except (processutils.UnknownArgumentError, + processutils.ProcessExecutionError, OSError) as e: + msg = (_('Failed to fix GPT data structures on disk %(disk)s ' + 'for node %(node)s. Error: %(error)s') % + {'disk': device, 'node': node_uuid, 'error': e}) + LOG.error(msg) + raise exception.InstanceDeployFailure(msg) + + +def fix_gpt_partition(device, node_uuid): + """Fix GPT partition + + Fix GPT table information when image is written to a disk which + has a bigger extend (e.g. 30GB image written on a 60Gb physical disk). + + :param device: The device path. + :param node_uuid: UUID of the Node. + :raises: InstanceDeployFailure if exception is caught. + """ + try: + disk_is_gpt_partitioned = (get_partition_table_type(device) == 'gpt') + if disk_is_gpt_partitioned: + _fix_gpt_structs(device, node_uuid) + except Exception as e: + msg = (_('Failed to fix GPT partition on disk %(disk)s ' + 'for node %(node)s. Error: %(error)s') % + {'disk': device, 'node': node_uuid, 'error': e}) + LOG.error(msg) + raise exception.InstanceDeployFailure(msg) + + +def udev_settle(): + """Wait for the udev event queue to settle. + + Wait for the udev event queue to settle to make sure all devices + are detected once the machine boots up. + + :return: True on success, False otherwise. + """ + LOG.debug('Waiting until udev event queue is empty') + try: + utils.execute('udevadm', 'settle') + except processutils.ProcessExecutionError as e: + LOG.warning('Something went wrong when waiting for udev ' + 'to settle. Error: %s', e) + return False + else: + return True + + +def partprobe(device, attempts=None): + """Probe partitions on the given device. + + :param device: The block device containing partitions that is attempting + to be updated. 
+ :param attempts: Number of attempts to run partprobe, the default is read + from the configuration. + :return: True on success, False otherwise. + """ + if attempts is None: + attempts = CONF.disk_utils.partprobe_attempts + + try: + utils.execute('partprobe', device, attempts=attempts, run_as_root=True) + except (processutils.UnknownArgumentError, + processutils.ProcessExecutionError, OSError) as e: + LOG.warning("Unable to probe for partitions on device %(device)s, " + "the partitioning table may be broken. Error: %(error)s", + {'device': device, 'error': e}) + return False + else: + return True + + + def trigger_device_rescan(device, attempts=None): + """Sync and trigger device rescan. + + Disk partitioning performed via parted, when performed on a ramdisk, + does not have to honor the fsync mechanism. In essence, fsync is used + on the file representing the block device, which falls to the kernel + filesystem layer to trigger a sync event. On a ramdisk using ramfs, + this is an explicit non-operation. + + As a result of this, we need to trigger a system-wide sync operation + which will trigger cache to flush to disk, after which partition changes + should be visible upon re-scan. + + When ramdisks are not in use, this also helps ensure that data has + been safely flushed across the wire, such as on iSCSI connections. + + :param device: The block device containing partitions that is attempting + to be updated. + :param attempts: Number of attempts to run partprobe, the default is read + from the configuration. + :return: True on success, False otherwise. + """ + LOG.debug('Explicitly calling sync to force buffer/cache flush') + utils.execute('sync') + # Make sure any additions to the partitioning are reflected in the + # kernel. + udev_settle() + partprobe(device, attempts=attempts) + udev_settle() + try: + # Also verify that the partitioning is correct now.
+ utils.execute('sgdisk', '-v', device, run_as_root=True) + except processutils.ProcessExecutionError as exc: + LOG.warning('Failed to verify partition tables on device %(dev)s: ' + '%(err)s', {'dev': device, 'err': exc}) + return False + else: + return True + + + # NOTE(dtantsur): this function was in ironic_lib.utils before migration + # (presumably to avoid a circular dependency with disk_partitioner) + def wait_for_disk_to_become_available(device): + """Wait for a disk device to become available. + + Waits for a disk device to become available for use by + waiting until all process locks on the device have been + released. + + Timeout and iteration settings come from the configuration + options used by the in-library disk_partitioner: + ``check_device_interval`` and ``check_device_max_retries``. + + :param device: The path to the device. + :raises: IronicException if the disk fails to become + available. + """ + pids = [''] + stderr = [''] + interval = CONF.disk_partitioner.check_device_interval + max_retries = CONF.disk_partitioner.check_device_max_retries + + def _wait_for_disk(): + # A regex is likely overkill here, but variations in fuser + # mean we should likely use it. + fuser_pids_re = re.compile(r'\d+') + + # There are 'psmisc' and 'busybox' versions of the 'fuser' program. The + # 'fuser' programs differ in how they output data to stderr. The + # busybox version does not output the filename to stderr, while the + # standard 'psmisc' version does output the filename to stderr. How + # they output to stdout is almost identical in that only the PIDs are + # output to stdout, with the 'psmisc' version adding a leading space + # character to the list of PIDs. + try: + # NOTE(ifarkas): fuser returns a non-zero return code if none of + # the specified files is accessed. + # NOTE(TheJulia): fuser does not report LVM devices as in use + # unless the LVM device-mapper device is the + # device that is directly polled.
+ # NOTE(TheJulia): The -m flag allows fuser to reveal data about + # mounted filesystems, which should be considered + # busy/locked. That being said, it is not used + # because busybox fuser has a different behavior. + # NOTE(TheJulia): fuser outputs a list of found PIDs to stdout. + # All other text is returned via stderr, and the + # output to a terminal is merged as a result. + out, err = utils.execute('fuser', device, check_exit_code=[0, 1], + run_as_root=True) + + if not out and not err: + return True + + stderr[0] = err + # NOTE: findall() returns a list of matches, or an empty list if no + # matches + pids[0] = fuser_pids_re.findall(out) + + except processutils.ProcessExecutionError as exc: + LOG.warning('Failed to check the device %(device)s with fuser:' + ' %(err)s', {'device': device, 'err': exc}) + return False + + retry = tenacity.retry( + retry=tenacity.retry_if_result(lambda r: not r), + stop=tenacity.stop_after_attempt(max_retries), + wait=tenacity.wait_fixed(interval), + reraise=True) + try: + retry(_wait_for_disk)() + except tenacity.RetryError: + if pids[0]: + raise exception.IronicException( + _('Processes with the following PIDs are holding ' + 'device %(device)s: %(pids)s. ' + 'Timed out waiting for completion.') + % {'device': device, 'pids': ', '.join(pids[0])}) + else: + raise exception.IronicException( + _('Fuser exited with "%(fuser_err)s" while checking ' + 'locks for device %(device)s. 
Timed out waiting for ' + 'completion.') + % {'device': device, 'fuser_err': stderr[0]}) diff --git a/ironic_python_agent/errors.py b/ironic_python_agent/errors.py index d004e90b3..0a678c29f 100644 --- a/ironic_python_agent/errors.py +++ b/ironic_python_agent/errors.py @@ -376,3 +376,12 @@ class ProtectedDeviceError(CleaningError): self.message = details super(CleaningError, self).__init__(details) + + +class InvalidImage(DeploymentError): + """Error raised when an image fails validation for any reason.""" + + message = 'The provided image is not valid for use' + + def __init__(self, details=None): + super(InvalidImage, self).__init__(details) diff --git a/ironic_python_agent/extensions/standby.py b/ironic_python_agent/extensions/standby.py index 14b1d1223..e4ded5dba 100644 --- a/ironic_python_agent/extensions/standby.py +++ b/ironic_python_agent/extensions/standby.py @@ -19,17 +19,19 @@ import tempfile import time from urllib import parse as urlparse -from ironic_lib import disk_utils from ironic_lib import exception from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log +from oslo_utils import units import requests +from ironic_python_agent import disk_utils from ironic_python_agent import errors from ironic_python_agent.extensions import base from ironic_python_agent import hardware from ironic_python_agent import partition_utils +from ironic_python_agent import qemu_img from ironic_python_agent import utils CONF = cfg.CONF @@ -276,7 +278,8 @@ def _fetch_checksum(checksum, image_info): checksum, "Checksum file does not contain name %s" % expected_fname) -def _write_partition_image(image, image_info, device, configdrive=None): +def _write_partition_image(image, image_info, device, configdrive=None, + source_format=None, is_raw=False, size=0): """Call disk_util to create partition and write the partition image. :param image: Local path to image file to be written to the partition. 
@@ -287,6 +290,10 @@ def _write_partition_image(image, image_info, device, configdrive=None): :param configdrive: A string containing the location of the config drive as a URL OR the contents (as gzip/base64) of the configdrive. Optional, defaults to None. + :param source_format: The actual format of the partition image. + Must be provided if deep image inspection is enabled. + :param is_raw: Ironic indicates the image is raw; do not convert it + :param size: Virtual size, in MB, of provided image. :raises: InvalidCommandParamsError if the partition is too small for the provided image. @@ -306,10 +313,9 @@ def _write_partition_image(image, image_info, device, configdrive=None): cpu_arch = hardware.dispatch_to_managers('get_cpus').architecture if image is not None: - image_mb = disk_utils.get_image_mb(image) - if image_mb > int(root_mb): + if size > int(root_mb): msg = ('Root partition is too small for requested image. Image ' - 'virtual size: {} MB, Root size: {} MB').format(image_mb, + 'virtual size: {} MB, Root size: {} MB').format(size, root_mb) raise errors.InvalidCommandParamsError(msg) @@ -323,12 +329,15 @@ def _write_partition_image(image, image_info, device, configdrive=None): configdrive=configdrive, boot_mode=boot_mode, disk_label=disk_label, - cpu_arch=cpu_arch) + cpu_arch=cpu_arch, + source_format=source_format, + is_raw=is_raw) except processutils.ProcessExecutionError as e: raise errors.ImageWriteError(device, e.exit_code, e.stdout, e.stderr) -def _write_whole_disk_image(image, image_info, device): +def _write_whole_disk_image(image, image_info, device, source_format=None, + is_raw=False): """Writes a whole disk image to the specified device. :param image: Local path to image file to be written to the disk. @@ -336,22 +345,40 @@ def _write_whole_disk_image(image, image_info, device): This parameter is currently unused by the function. :param device: The device name, as a string, on which to store the image. 
Example: '/dev/sda' - + :param source_format: The format of the whole disk image to be written. + :param is_raw: Ironic indicates the image is raw; do not convert it :raises: ImageWriteError if the command to write the image encounters an error. + :raises: InvalidImage if asked to write an image without a format when + not permitted """ # FIXME(dtantsur): pass the real node UUID for logging disk_utils.destroy_disk_metadata(device, '') disk_utils.udev_settle() - command = ['qemu-img', 'convert', - '-t', 'directsync', '-S', '0', '-O', 'host_device', '-W', - image, device] - LOG.info('Writing image with command: %s', ' '.join(command)) try: - disk_utils.convert_image(image, device, out_format='host_device', - cache='directsync', out_of_order=True, - sparse_size='0') + if is_raw: + # TODO(JayF): We should unify all these dd/convert_image calls + # into disk_utils.populate_image(). + # NOTE(JayF): Since we do not safety check raw images, we must use + # dd to write them to ensure maximum security. This may cause + # failures in situations where images are configured as raw but + # are actually in need of conversion. Those cases can no longer + # be transparently handled safely. + LOG.info('Writing raw image %s to device %s', image, device) + disk_utils.dd(image, device) + else: + command = ['qemu-img', 'convert', + '-t', 'directsync', '-S', '0', '-O', 'host_device', + '-W'] + if source_format: + command += ['-f', source_format] + command += [image, device] + LOG.info('Writing image with command: %s', ' '.join(command)) + qemu_img.convert_image(image, device, out_format='host_device', + cache='directsync', out_of_order=True, + sparse_size='0', + source_format=source_format) except processutils.ProcessExecutionError as e: raise errors.ImageWriteError(device, e.exit_code, e.stdout, e.stderr) @@ -369,14 +396,28 @@ def _write_image(image_info, device, configdrive=None): of the configdrive. Optional, defaults to None. 
:raises: ImageWriteError if the command to write the image encounters an error. + :raises: InvalidImage if the image does not pass security inspection """ starttime = time.time() image = _image_location(image_info) + ironic_disk_format = image_info.get('disk_format') + is_raw = ironic_disk_format == 'raw' + # NOTE(JayF): The below method call performs a required security check + # and must remain in place. See bug #2071740 + source_format, size = disk_utils.get_and_validate_image_format( + image, ironic_disk_format) + size_mb = int((size + units.Mi - 1) / units.Mi) + uuids = {} if image_info.get('image_type') == 'partition': - uuids = _write_partition_image(image, image_info, device, configdrive) + uuids = _write_partition_image(image, image_info, device, + configdrive, + source_format=source_format, + is_raw=is_raw, size=size_mb) else: - _write_whole_disk_image(image, image_info, device) + _write_whole_disk_image(image, image_info, device, + source_format=source_format, + is_raw=is_raw) totaltime = time.time() - starttime LOG.info('Image %(image)s written to device %(device)s in %(totaltime)s ' 'seconds', {'image': image, 'device': device, @@ -916,16 +957,20 @@ class StandbyExtension(base.BaseAgentExtension): device = hardware.dispatch_to_managers('get_os_install_device', permit_refresh=True) - disk_format = image_info.get('disk_format') + requested_disk_format = image_info.get('disk_format') + stream_raw_images = image_info.get('stream_raw_images', False) + # don't write image again if already cached if self.cached_image_id != image_info['id']: if self.cached_image_id is not None: LOG.debug('Already had %s cached, overwriting', self.cached_image_id) - if stream_raw_images and disk_format == 'raw': + if stream_raw_images and requested_disk_format == 'raw': if image_info.get('image_type') == 'partition': + # NOTE(JayF): This only creates partitions due to image + # being None self.partition_uuids = _write_partition_image(None, image_info, device, @@ -935,6 +980,9 
@@ class StandbyExtension(base.BaseAgentExtension): self.partition_uuids = {} stream_to = device + # NOTE(JayF): Images that claim to be raw are not inspected at + # all, as they never interact with qemu-img and are + # streamed directly to disk unmodified. self._stream_raw_image_onto_device(image_info, stream_to) else: self._cache_and_write_image(image_info, device, configdrive) diff --git a/ironic_python_agent/format_inspector.py b/ironic_python_agent/format_inspector.py new file mode 100644 index 000000000..44cdd4ae7 --- /dev/null +++ b/ironic_python_agent/format_inspector.py @@ -0,0 +1,1044 @@ +# Copyright 2020 Red Hat, Inc +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +This is a python implementation of virtual disk format inspection routines +gathered from various public specification documents, as well as qemu disk +driver code. It attempts to store and parse the minimum amount of data +required, and in a streaming-friendly manner to collect metadata about +complex-format images. + +This was imported from the Ironic fix. A copy of this inspector +exists in multiple projects, including Ironic, Nova, and Cinder. Do not +modify this version without modifying all versions. 
+ +TODO(JayF): Remove this module, replace with oslo_utils version once released +""" + +import struct + +from oslo_log import log as logging +from oslo_utils import units + +LOG = logging.getLogger(__name__) + + +def chunked_reader(fileobj, chunk_size=512): + while True: + chunk = fileobj.read(chunk_size) + if not chunk: + break + yield chunk + + +class CaptureRegion(object): + """Represents a region of a file we want to capture. + + A region of a file we want to capture requires a byte offset into + the file and a length. This is expected to be used by a data + processing loop, calling capture() with the most recently-read + chunk. This class handles the task of grabbing the desired region + of data across potentially multiple fractional and unaligned reads. + + :param offset: Byte offset into the file starting the region + :param length: The length of the region + """ + + def __init__(self, offset, length): + self.offset = offset + self.length = length + self.data = b'' + + @property + def complete(self): + """Returns True when we have captured the desired data.""" + return self.length == len(self.data) + + def capture(self, chunk, current_position): + """Process a chunk of data. + + This should be called for each chunk in the read loop, at least + until complete returns True. + + :param chunk: A chunk of bytes in the file + :param current_position: The position of the file processed by the + read loop so far. Note that this will be + the position in the file *after* the chunk + being presented. 
+ """ + read_start = current_position - len(chunk) + if (read_start <= self.offset <= current_position + or self.offset <= read_start <= (self.offset + self.length)): + if read_start < self.offset: + lead_gap = self.offset - read_start + else: + lead_gap = 0 + self.data += chunk[lead_gap:] + self.data = self.data[:self.length] + + +class ImageFormatError(Exception): + """An unrecoverable image format error that aborts the process.""" + pass + + +class TraceDisabled(object): + """A logger-like thing that swallows tracing when we do not want it.""" + + def debug(self, *a, **k): + pass + + info = debug + warning = debug + error = debug + + +class FileInspector(object): + """A stream-based disk image inspector. + + This base class works on raw images and is subclassed for more + complex types. It is to be presented with the file to be examined + one chunk at a time, during read processing and will only store + as much data as necessary to determine required attributes of + the file. + """ + + def __init__(self, tracing=False): + self._total_count = 0 + + # NOTE(danms): The logging in here is extremely verbose for a reason, + # but should never really be enabled at that level at runtime. To + # retain all that work and assist in future debug, we have a separate + # debug flag that can be passed from a manual tool to turn it on. 
+ if tracing: + self._log = logging.getLogger(str(self)) + else: + self._log = TraceDisabled() + self._capture_regions = {} + + def _capture(self, chunk, only=None): + for name, region in self._capture_regions.items(): + if only and name not in only: + continue + if not region.complete: + region.capture(chunk, self._total_count) + + def eat_chunk(self, chunk): + """Call this to present chunks of the file to the inspector.""" + pre_regions = set(self._capture_regions.keys()) + + # Increment our position-in-file counter + self._total_count += len(chunk) + + # Run through the regions we know of to see if they want this + # data + self._capture(chunk) + + # Let the format do some post-read processing of the stream + self.post_process() + + # Check to see if the post-read processing added new regions + # which may require the current chunk. + new_regions = set(self._capture_regions.keys()) - pre_regions + if new_regions: + self._capture(chunk, only=new_regions) + + def post_process(self): + """Post-read hook to process what has been read so far. + + This will be called after each chunk is read and potentially captured + by the defined regions. If any regions are defined by this call, + those regions will be presented with the current chunk in case it + is within one of the new regions. 
+ """ + pass + + def region(self, name): + """Get a CaptureRegion by name.""" + return self._capture_regions[name] + + def new_region(self, name, region): + """Add a new CaptureRegion by name.""" + if self.has_region(name): + # This is a bug, we tried to add the same region twice + raise ImageFormatError('Inspector re-added region %s' % name) + self._capture_regions[name] = region + + def has_region(self, name): + """Returns True if named region has been defined.""" + return name in self._capture_regions + + @property + def format_match(self): + """Returns True if the file appears to be the expected format.""" + return True + + @property + def virtual_size(self): + """Returns the virtual size of the disk image, or zero if unknown.""" + return self._total_count + + @property + def actual_size(self): + """Returns the total size of the file, usually smaller than virtual_size. + + NOTE: this will only be accurate if the entire file is read and processed. + """ # noqa + return self._total_count + + @property + def complete(self): + """Returns True if we have all the information needed.""" + return all(r.complete for r in self._capture_regions.values()) + + def __str__(self): + """The string name of this file format.""" + return 'raw' + + @property + def context_info(self): + """Return info on amount of data held in memory for auditing. + + This is a dict of region:sizeinbytes items that the inspector + uses to examine the file. + """ + return {name: len(region.data) for name, region in + self._capture_regions.items()} + + @classmethod + def from_file(cls, filename): + """Read as much of a file as necessary to complete inspection. + + NOTE: Because we only read as much of the file as necessary, the + actual_size property will not reflect the size of the file, but the + amount of data we read before we satisfied the inspector. + + Raises ImageFormatError if we cannot parse the file. 
+ """ + inspector = cls() + with open(filename, 'rb') as f: + for chunk in chunked_reader(f): + inspector.eat_chunk(chunk) + if inspector.complete: + # No need to eat any more data + break + if not inspector.complete or not inspector.format_match: + raise ImageFormatError('File is not in requested format') + return inspector + + def safety_check(self): + """Perform some checks to determine if this file is safe. + + Returns True if safe, False otherwise. It may raise ImageFormatError + if safety cannot be guaranteed because of parsing or other errors. + """ + return True + + +# The qcow2 format consists of a big-endian 72-byte header, of which +# only a small portion has information we care about: +# +# Dec Hex Name +# 0 0x00 Magic 4-bytes 'QFI\xfb' +# 4 0x04 Version (uint32_t, should always be 2 for modern files) +# . . . +# 8 0x08 Backing file offset (uint64_t) +# 24 0x18 Size in bytes (unint64_t) +# . . . +# 72 0x48 Incompatible features bitfield (6 bytes) +# +# https://gitlab.com/qemu-project/qemu/-/blob/master/docs/interop/qcow2.txt +class QcowInspector(FileInspector): + """QEMU QCOW2 Format + + This should only require about 32 bytes of the beginning of the file + to determine the virtual size, and 104 bytes to perform the safety check. 
+ """ + + BF_OFFSET = 0x08 + BF_OFFSET_LEN = 8 + I_FEATURES = 0x48 + I_FEATURES_LEN = 8 + I_FEATURES_DATAFILE_BIT = 3 + I_FEATURES_MAX_BIT = 4 + + def __init__(self, *a, **k): + super(QcowInspector, self).__init__(*a, **k) + self.new_region('header', CaptureRegion(0, 512)) + + def _qcow_header_data(self): + magic, version, bf_offset, bf_sz, cluster_bits, size = ( + struct.unpack('>4sIQIIQ', self.region('header').data[:32])) + return magic, size + + @property + def has_header(self): + return self.region('header').complete + + @property + def virtual_size(self): + if not self.region('header').complete: + return 0 + if not self.format_match: + return 0 + magic, size = self._qcow_header_data() + return size + + @property + def format_match(self): + if not self.region('header').complete: + return False + magic, size = self._qcow_header_data() + return magic == b'QFI\xFB' + + @property + def has_backing_file(self): + if not self.region('header').complete: + return None + if not self.format_match: + return False + bf_offset_bytes = self.region('header').data[ + self.BF_OFFSET:self.BF_OFFSET + self.BF_OFFSET_LEN] + # nonzero means "has a backing file" + bf_offset, = struct.unpack('>Q', bf_offset_bytes) + return bf_offset != 0 + + @property + def has_unknown_features(self): + if not self.region('header').complete: + return None + if not self.format_match: + return False + i_features = self.region('header').data[ + self.I_FEATURES:self.I_FEATURES + self.I_FEATURES_LEN] + + # This is the maximum byte number we should expect any bits to be set + max_byte = self.I_FEATURES_MAX_BIT // 8 + + # The flag bytes are in big-endian ordering, so if we process + # them in index-order, they're reversed + for i, byte_num in enumerate(reversed(range(self.I_FEATURES_LEN))): + if byte_num == max_byte: + # If we're in the max-allowed byte, allow any bits less than + # the maximum-known feature flag bit to be set + allow_mask = ((1 << self.I_FEATURES_MAX_BIT) - 1) + elif byte_num > max_byte: + 
# If we're above the byte with the maximum known feature flag + # bit, then we expect all zeroes + allow_mask = 0x0 + else: + # Any earlier-than-the-maximum byte can have any of the flag + # bits set + allow_mask = 0xFF + + if i_features[i] & ~allow_mask: + LOG.warning('Found unknown feature bit in byte %i: %s/%s', + byte_num, bin(i_features[byte_num] & ~allow_mask), + bin(allow_mask)) + return True + + return False + + @property + def has_data_file(self): + if not self.region('header').complete: + return None + if not self.format_match: + return False + i_features = self.region('header').data[ + self.I_FEATURES:self.I_FEATURES + self.I_FEATURES_LEN] + + # First byte of bitfield, which is i_features[7] + byte = self.I_FEATURES_LEN - 1 - self.I_FEATURES_DATAFILE_BIT // 8 + # Third bit of bitfield, which is 0x04 + bit = 1 << (self.I_FEATURES_DATAFILE_BIT - 1 % 8) + return bool(i_features[byte] & bit) + + def __str__(self): + return 'qcow2' + + def safety_check(self): + return (not self.has_backing_file + and not self.has_data_file + and not self.has_unknown_features) + + +class QEDInspector(FileInspector): + def __init__(self, tracing=False): + super().__init__(tracing) + self.new_region('header', CaptureRegion(0, 512)) + + @property + def format_match(self): + if not self.region('header').complete: + return False + return self.region('header').data.startswith(b'QED\x00') + + def safety_check(self): + # QED format is not supported by anyone, but we want to detect it + # and mark it as just always unsafe. 
+ return False + + + # The VHD (or VPC as QEMU calls it) format consists of a big-endian + # 512-byte "footer" at the beginning of the file with various + # information, most of which does not matter to us: + # + # Dec Hex Name + # 0 0x00 Magic string (8-bytes, always 'conectix') + # 40 0x28 Disk size (uint64_t) + # + # https://github.com/qemu/qemu/blob/master/block/vpc.c + class VHDInspector(FileInspector): + """Connectix/MS VPC VHD Format + + This should only require about 512 bytes of the beginning of the file + to determine the virtual size. + """ + + def __init__(self, *a, **k): + super(VHDInspector, self).__init__(*a, **k) + self.new_region('header', CaptureRegion(0, 512)) + + @property + def format_match(self): + return self.region('header').data.startswith(b'conectix') + + @property + def virtual_size(self): + if not self.region('header').complete: + return 0 + + if not self.format_match: + return 0 + + return struct.unpack('>Q', self.region('header').data[40:48])[0] + + def __str__(self): + return 'vhd' + + + # The VHDX format consists of a complex dynamic little-endian + # structure with multiple regions of metadata and data, linked by + # offsets within the file (and within regions), identified by MSFT + # GUID strings. The header is a 320KiB structure, only a few pieces of + # which we actually need to capture and interpret: + # + # Dec Hex Name + # 0 0x00000 Identity (Technically 9-bytes, padded to 64KiB, the first + # 8 bytes of which are 'vhdxfile') + # 196608 0x30000 The Region table (64KiB of a 32-byte header, followed + # by up to 2047 36-byte region table entry structures) + # + # The region table header includes two items we need to read and parse, + # which are: + # + # 196608 0x30000 4-byte signature ('regi') + # 196616 0x30008 Entry count (uint32_t) + # + # The region table entries follow the region table header immediately + # and are identified by a 16-byte GUID, and provide an offset of the + # start of that region.
We care about the "metadata region", identified +# by the METAREGION class variable. The region table entry is (offsets +# from the beginning of the entry, since it could be in multiple places): +# +# 0 0x00000 16-byte MSFT GUID +# 16 0x00010 Offset of the actual metadata region (uint64_t) +# +# When we find the METAREGION table entry, we need to grab that offset +# and start examining the region structure at that point. That +# consists of a metadata table of structures, which point to places in +# the data in an unstructured space that follows. The header is +# (offsets relative to the region start): +# +# 0 0x00000 8-byte signature ('metadata') +# . . . +# 16 0x00010 2-byte entry count (up to 2047 entries max) +# +# This header is followed by the specified number of metadata entry +# structures, identified by GUID: +# +# 0 0x00000 16-byte MSFT GUID +# 16 0x00010 4-byte offset (uint32_t, relative to the beginning of +# the metadata region) +# +# We need to find the "Virtual Disk Size" metadata item, identified by +# the GUID in the VIRTUAL_DISK_SIZE class variable, grab the offset, +# add it to the offset of the metadata region, and examine that 8-byte +# chunk of data that follows. +# +# The "Virtual Disk Size" is a naked uint64_t which contains the size +# of the virtual disk, and is our ultimate target here. +# +# https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-vhdx/83e061f8-f6e2-4de1-91bd-5d518a43d477 +class VHDXInspector(FileInspector): + """MS VHDX Format + + This requires some complex parsing of the stream. The first 256KiB + of the image is stored to get the header and region information, + and then we capture the first metadata region to read those + records, find the location of the virtual size data and parse + it. This needs to store the metadata table entries up until the + VDS record, which may consist of up to 2047 32-byte entries at + max. Finally, it must store a chunk of data at the offset of the + actual VDS uint64. 
+ + """ + METAREGION = '8B7CA206-4790-4B9A-B8FE-575F050F886E' + VIRTUAL_DISK_SIZE = '2FA54224-CD1B-4876-B211-5DBED83BF4B8' + VHDX_METADATA_TABLE_MAX_SIZE = 32 * 2048 # From qemu + + def __init__(self, *a, **k): + super(VHDXInspector, self).__init__(*a, **k) + self.new_region('ident', CaptureRegion(0, 32)) + self.new_region('header', CaptureRegion(192 * 1024, 64 * 1024)) + + def post_process(self): + # After reading a chunk, we may have the following conditions: + # + # 1. We may have just completed the header region, and if so, + # we need to immediately read and calculate the location of + # the metadata region, as it may be starting in the same + # read we just did. + # 2. We may have just completed the metadata region, and if so, + # we need to immediately calculate the location of the + # "virtual disk size" record, as it may be starting in the + # same read we just did. + if self.region('header').complete and not self.has_region('metadata'): + region = self._find_meta_region() + if region: + self.new_region('metadata', region) + elif self.has_region('metadata') and not self.has_region('vds'): + region = self._find_meta_entry(self.VIRTUAL_DISK_SIZE) + if region: + self.new_region('vds', region) + + @property + def format_match(self): + return self.region('ident').data.startswith(b'vhdxfile') + + @staticmethod + def _guid(buf): + """Format a MSFT GUID from the 16-byte input buffer.""" + guid_format = '= 2048: + raise ImageFormatError('Region count is %i (limit 2047)' % count) + + # Process the regions until we find the metadata one; grab the + # offset and return + self._log.debug('Region entry first is %x', region_entry_first) + self._log.debug('Region entries %i', count) + meta_offset = 0 + for i in range(0, count): + entry_start = region_entry_first + (i * 32) + entry_end = entry_start + 32 + entry = self.region('header').data[entry_start:entry_end] + self._log.debug('Entry offset is %x', entry_start) + + # GUID is the first 16 bytes + guid = 
self._guid(entry[:16]) + if guid == self.METAREGION: + # This entry is the metadata region entry + meta_offset, meta_len, meta_req = struct.unpack( + '= 2048: + raise ImageFormatError( + 'Metadata item count is %i (limit 2047)' % count) + + for i in range(0, count): + entry_offset = 32 + (i * 32) + guid = self._guid(meta_buffer[entry_offset:entry_offset + 16]) + if guid == desired_guid: + # Found the item we are looking for by id. + # Stop our region from capturing + item_offset, item_length, _reserved = struct.unpack( + ' 1: + all_formats = [str(inspector) for inspector in detections] + raise ImageFormatError( + 'Multiple formats detected: %s' % ', '.join(all_formats)) + + return inspectors['raw'] if not detections else detections[0] diff --git a/ironic_python_agent/partition_utils.py b/ironic_python_agent/partition_utils.py index bfc01c9cb..7e8b87d50 100644 --- a/ironic_python_agent/partition_utils.py +++ b/ironic_python_agent/partition_utils.py @@ -26,7 +26,6 @@ import shutil import stat import tempfile -from ironic_lib import disk_utils from ironic_lib import exception from ironic_lib import utils from oslo_concurrency import processutils @@ -37,6 +36,7 @@ from oslo_utils import units from oslo_utils import uuidutils import requests +from ironic_python_agent import disk_utils from ironic_python_agent import errors from ironic_python_agent import hardware from ironic_python_agent import utils as ipa_utils @@ -187,7 +187,8 @@ def get_labelled_partition(device_path, label, node_uuid): def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, image_path, node_uuid, preserve_ephemeral=False, configdrive=None, boot_mode="bios", - tempdir=None, disk_label=None, cpu_arch="", conv_flags=None): + tempdir=None, disk_label=None, cpu_arch="", conv_flags=None, + source_format=None, is_raw=False): """Create partitions and copy an image to the root partition. :param dev: Path for the device to work on. 
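The format_inspector module above keys off on-disk signatures rather than the disk_format the caller claims. As a standalone illustration only (hypothetical helper names, not the inspector's real API), the mislabeled-image case this patch guards against boils down to a file uploaded as "raw" whose first bytes carry a real container-format signature:

```python
# Illustration only: hypothetical helpers, not the patch's format_inspector
# API. Shows the core idea of deep image inspection -- trust on-disk
# signatures, never the user-supplied disk_format.

QCOW2_MAGIC = b'QFI\xfb'      # first 4 bytes of every qcow/qcow2 image
VHDX_SIGNATURE = b'vhdxfile'  # first 8 bytes of a VHDX identity header


def sniff_format(path):
    """Best-effort detection from the first bytes of the file."""
    with open(path, 'rb') as f:
        head = f.read(8)
    if head.startswith(QCOW2_MAGIC):
        return 'qcow2'
    if head.startswith(VHDX_SIGNATURE):
        return 'vhdx'
    return 'raw'  # no known signature: treat as opaque raw data


def check_claimed_format(path, claimed):
    """Reject images whose content contradicts their claimed format."""
    detected = sniff_format(path)
    if claimed == 'raw' and detected != 'raw':
        # A "raw" image carrying a qcow2 header could smuggle in a
        # backing-file reference that qemu-img would happily follow.
        raise ValueError('image claims raw but looks like %s' % detected)
    return detected
```

The real inspector goes much further (virtual size, backing-file and data-file references, capture regions), but the magic numbers shown are the actual qcow2 and VHDX signatures the patch's inspectors match on.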
@@ -218,6 +219,9 @@ def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, :param conv_flags: Flags that need to be sent to the dd command, to control the conversion of the original file when copying to the host. It can contain several options separated by commas. + :param source_format: The format of the disk image to be written. + If set, must be "raw" or the actual disk format of the image. + :param is_raw: Ironic indicator that the image is raw; not to be converted :returns: a dictionary containing the following keys: 'root uuid': UUID of root partition 'efi system partition uuid': UUID of the uefi system partition @@ -295,7 +299,8 @@ def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format, utils.unlink_without_raise(configdrive_file) if image_path is not None: - disk_utils.populate_image(image_path, root_part, conv_flags=conv_flags) + disk_utils.populate_image(image_path, root_part, conv_flags=conv_flags, + source_format=source_format, is_raw=is_raw) LOG.info("Image for %(node)s successfully populated", {'node': node_uuid}) else: diff --git a/ironic_python_agent/qemu_img.py b/ironic_python_agent/qemu_img.py new file mode 100644 index 000000000..7ce38a09a --- /dev/null +++ b/ironic_python_agent/qemu_img.py @@ -0,0 +1,153 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License.
+ +import logging +import os + +from ironic_lib import utils +from oslo_concurrency import processutils +from oslo_config import cfg +from oslo_utils import imageutils +from oslo_utils import units +import tenacity + +from ironic_python_agent import errors + +""" +Imported from ironic_lib/qemu-img.py from commit +c3d59dfffc9804273b49c0556ee09419a35917c1 + +See https://bugs.launchpad.net/ironic/+bug/2071740 for more details as to why +it moved. + +This module also exists in the Ironic repo. Do not modify this module +without also modifying that module. +""" + +CONF = cfg.CONF +LOG = logging.getLogger(__name__) + +# Limit the memory address space to 1 GiB when running qemu-img +QEMU_IMG_LIMITS = None + + +def _qemu_img_limits(): + global QEMU_IMG_LIMITS + if QEMU_IMG_LIMITS is None: + QEMU_IMG_LIMITS = processutils.ProcessLimits( + address_space=CONF.disk_utils.image_convert_memory_limit + * units.Mi) + return QEMU_IMG_LIMITS + + +def _retry_on_res_temp_unavailable(exc): + if (isinstance(exc, processutils.ProcessExecutionError) + and ('Resource temporarily unavailable' in exc.stderr + or 'Cannot allocate memory' in exc.stderr)): + return True + return False + + +def image_info(path, source_format=None): + """Return an object containing the parsed output from qemu-img info. + + This must only be called on images already validated as safe by the + format inspector. + + :param path: The path to an image you need information on + :param source_format: The format of the source image. If this is omitted + when deep inspection is enabled, this will raise + InvalidImage. + """ + # NOTE(JayF): This serves as a final exit hatch: if we have deep + # image inspection enabled, but someone calls this method without an + # explicit disk_format, there's no way for us to do the call securely. + if not source_format and not CONF.disable_deep_image_inspection: + msg = ("Security: qemu_img.image_info called unsafely while deep " + "image inspection is enabled. 
This should not be possible, " + "please contact Ironic developers.") + raise errors.InvalidImage(details=msg) + + if not os.path.exists(path): + raise FileNotFoundError("File %s does not exist" % path) + + cmd = [ + 'env', 'LC_ALL=C', 'LANG=C', + 'qemu-img', 'info', path, + '--output=json' + ] + + if source_format: + cmd += ['-f', source_format] + + out, err = utils.execute(cmd, prlimit=_qemu_img_limits()) + return imageutils.QemuImgInfo(out, format='json') + + +@tenacity.retry( + retry=tenacity.retry_if_exception(_retry_on_res_temp_unavailable), + stop=tenacity.stop_after_attempt(CONF.disk_utils.image_convert_attempts), + reraise=True) +def convert_image(source, dest, out_format, run_as_root=False, cache=None, + out_of_order=False, sparse_size=None, source_format=None): + """Convert image to other format. + + This method is only to be run against images who have passed + format_inspector's safety check, and with the format reported by it + passed in. Any other usage is a major security risk. + """ + cmd = ['qemu-img', 'convert', '-O', out_format] + if cache is not None: + cmd += ['-t', cache] + if sparse_size is not None: + cmd += ['-S', sparse_size] + + if source_format is not None: + cmd += ['-f', source_format] + elif not CONF.disable_deep_image_inspection: + # NOTE(JayF): This serves as a final exit hatch: if we have deep + # image inspection enabled, but someone calls this method without an + # explicit disk_format, there's no way for us to do the conversion + # securely. + msg = ("Security: qemu_img.convert_image called unsafely while deep " + "image inspection is enabled. This should not be possible, " + "please notify Ironic developers.") + LOG.error(msg) + raise errors.InvalidImage(details=msg) + + if out_of_order: + cmd.append('-W') + cmd += [source, dest] + # NOTE(TheJulia): Statically set the MALLOC_ARENA_MAX to prevent leaking + # and the creation of new malloc arenas which will consume the system + # memory. 
If limited to 1, qemu-img consumes ~250 MB of RAM, but when + # another thread tries to access a locked section of memory in use with + # another thread, then by default a new malloc arena is created, + # which essentially balloons the memory requirement of the machine. + # Default for qemu-img is 8 * nCPU * ~250MB (based on defaults + + # thread/code/process/library overhead). In other words, 64 GB. Limiting + # this to 3 keeps the memory utilization in happy cases below the overall + # threshold which is in place in case a malicious image is attempted to + # be passed through qemu-img. + env_vars = {'MALLOC_ARENA_MAX': '3'} + try: + utils.execute(*cmd, run_as_root=run_as_root, + prlimit=_qemu_img_limits(), + use_standard_locale=True, + env_variables=env_vars) + except processutils.ProcessExecutionError as e: + if ('Resource temporarily unavailable' in e.stderr + or 'Cannot allocate memory' in e.stderr): + LOG.debug('Failed to convert image, retrying. Error: %s', e) + # Sync disk caches before the next attempt + utils.execute('sync') + raise diff --git a/ironic_python_agent/tests/unit/base.py b/ironic_python_agent/tests/unit/base.py index 22da1ac8f..f6920f4b3 100644 --- a/ironic_python_agent/tests/unit/base.py +++ b/ironic_python_agent/tests/unit/base.py @@ -25,6 +25,7 @@ from oslo_log import log from oslo_service import sslutils from oslotest import base as test_base +from ironic_python_agent import config from ironic_python_agent.extensions import base as ext_base from ironic_python_agent import hardware @@ -40,6 +41,7 @@ class IronicAgentTest(test_base.BaseTestCase): def setUp(self): super(IronicAgentTest, self).setUp() + config.populate_config() self._set_config() # Ban running external processes via 'execute' like functions.
If the diff --git a/ironic_python_agent/tests/unit/extensions/test_standby.py b/ironic_python_agent/tests/unit/extensions/test_standby.py index dda3e7ce7..b20b919f2 100644 --- a/ironic_python_agent/tests/unit/extensions/test_standby.py +++ b/ironic_python_agent/tests/unit/extensions/test_standby.py @@ -20,6 +20,7 @@ from unittest import mock from ironic_lib import exception from oslo_concurrency import processutils from oslo_config import cfg +from oslo_utils import units import requests from ironic_python_agent import errors @@ -33,6 +34,11 @@ from ironic_python_agent import utils CONF = cfg.CONF +def _virtual_size(size=1): + """Convert a virtual size in mb to bytes""" + return (size * units.Mi) + 1 - units.Mi + + def _build_fake_image_info(url='http://example.org'): return { 'id': 'fake_id', @@ -41,6 +47,7 @@ def _build_fake_image_info(url='http://example.org'): 'image_type': 'whole-disk-image', 'os_hash_algo': 'sha256', 'os_hash_value': 'fake-checksum', + 'disk_format': 'qcow2' } @@ -60,7 +67,9 @@ def _build_fake_partition_image_info(): 'disk_label': 'msdos', 'deploy_boot_mode': 'bios', 'os_hash_algo': 'sha256', - 'os_hash_value': 'fake-checksum'} + 'os_hash_value': 'fake-checksum', + 'disk_format': 'qcow2' + } class TestStandbyExtension(base.IronicAgentTest): @@ -279,15 +288,23 @@ class TestStandbyExtension(base.IronicAgentTest): None, image_info['id']) - @mock.patch('ironic_lib.disk_utils.fix_gpt_partition', autospec=True) - @mock.patch('ironic_lib.disk_utils.trigger_device_rescan', autospec=True) - @mock.patch('ironic_lib.disk_utils.convert_image', autospec=True) - @mock.patch('ironic_lib.disk_utils.udev_settle', autospec=True) - @mock.patch('ironic_lib.disk_utils.destroy_disk_metadata', autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) + @mock.patch('ironic_python_agent.disk_utils.fix_gpt_partition', + autospec=True) + @mock.patch('ironic_python_agent.disk_utils.trigger_device_rescan', + 
autospec=True) + @mock.patch('ironic_python_agent.qemu_img.convert_image', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.udev_settle', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.destroy_disk_metadata', + autospec=True) def test_write_image(self, wipe_mock, udev_mock, convert_mock, - rescan_mock, fix_gpt_mock): + rescan_mock, fix_gpt_mock, validate_mock): image_info = _build_fake_image_info() device = '/dev/sda' + source_format = image_info['disk_format'] + validate_mock.return_value = (source_format, 0) location = standby._image_location(image_info) standby._write_image(image_info, device) @@ -296,30 +313,45 @@ class TestStandbyExtension(base.IronicAgentTest): out_format='host_device', cache='directsync', out_of_order=True, - sparse_size='0') + sparse_size='0', + source_format=source_format) + validate_mock.assert_called_once_with(location, source_format) wipe_mock.assert_called_once_with(device, '') udev_mock.assert_called_once_with() rescan_mock.assert_called_once_with(device) fix_gpt_mock.assert_called_once_with(device, node_uuid=None) - @mock.patch('ironic_lib.disk_utils.fix_gpt_partition', autospec=True) - @mock.patch('ironic_lib.disk_utils.trigger_device_rescan', autospec=True) - @mock.patch('ironic_lib.disk_utils.convert_image', autospec=True) - @mock.patch('ironic_lib.disk_utils.udev_settle', autospec=True) - @mock.patch('ironic_lib.disk_utils.destroy_disk_metadata', autospec=True) - def test_write_image_gpt_fails(self, wipe_mock, udev_mock, convert_mock, - rescan_mock, fix_gpt_mock): - image_info = _build_fake_image_info() + @mock.patch('ironic_python_agent.disk_utils.fix_gpt_partition', + autospec=True) + @mock.patch('ironic_python_agent.disk_utils.trigger_device_rescan', + autospec=True) + @mock.patch('ironic_python_agent.qemu_img.convert_image', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.udev_settle', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.destroy_disk_metadata', + autospec=True) + 
@mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) + def test_write_image_gpt_fails(self, validate_mock, wipe_mock, udev_mock, + convert_mock, rescan_mock, fix_gpt_mock): device = '/dev/sda' + image_info = _build_fake_image_info() + validate_mock.return_value = (image_info['disk_format'], 0) fix_gpt_mock.side_effect = exception.InstanceDeployFailure standby._write_image(image_info, device) - @mock.patch('ironic_lib.disk_utils.convert_image', autospec=True) - @mock.patch('ironic_lib.disk_utils.udev_settle', autospec=True) - @mock.patch('ironic_lib.disk_utils.destroy_disk_metadata', autospec=True) - def test_write_image_fails(self, wipe_mock, udev_mock, convert_mock): + @mock.patch('ironic_python_agent.qemu_img.convert_image', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.udev_settle', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.destroy_disk_metadata', + autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) + def test_write_image_fails(self, validate_mock, wipe_mock, udev_mock, + convert_mock): image_info = _build_fake_image_info() + validate_mock.return_value = (image_info['disk_format'], 0) device = '/dev/sda' convert_mock.side_effect = processutils.ProcessExecutionError @@ -332,10 +364,12 @@ class TestStandbyExtension(base.IronicAgentTest): @mock.patch.object(hardware, 'dispatch_to_managers', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('ironic_python_agent.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.get_image_mb', autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) def test_write_partition_image_exception(self, work_on_disk_mock, - image_mb_mock, + validate_mock, execute_mock, open_mock, dispatch_mock): image_info = _build_fake_partition_image_info() @@ 
-348,11 +382,13 @@ class TestStandbyExtension(base.IronicAgentTest): pr_ep = image_info['preserve_ephemeral'] boot_mode = image_info['deploy_boot_mode'] disk_label = image_info['disk_label'] + source_format = image_info['disk_format'] cpu_arch = self.fake_cpu.architecture image_path = standby._image_location(image_info) - image_mb_mock.return_value = 1 + validate_mock.return_value = (image_info['disk_format'], + _virtual_size(1)) dispatch_mock.return_value = self.fake_cpu exc = errors.ImageWriteError Exception_returned = processutils.ProcessExecutionError @@ -360,7 +396,7 @@ class TestStandbyExtension(base.IronicAgentTest): self.assertRaises(exc, standby._write_image, image_info, device, 'configdrive') - image_mb_mock.assert_called_once_with(image_path) + validate_mock.assert_called_once_with(image_path, source_format) work_on_disk_mock.assert_called_once_with(device, root_mb, swap_mb, ephemeral_mb, ephemeral_format, @@ -370,16 +406,20 @@ class TestStandbyExtension(base.IronicAgentTest): preserve_ephemeral=pr_ep, boot_mode=boot_mode, disk_label=disk_label, - cpu_arch=cpu_arch) + cpu_arch=cpu_arch, + source_format=source_format, + is_raw=False) @mock.patch.object(utils, 'get_node_boot_mode', lambda self: 'bios') @mock.patch.object(hardware, 'dispatch_to_managers', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('ironic_python_agent.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.get_image_mb', autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) def test_write_partition_image_no_node_uuid(self, work_on_disk_mock, - image_mb_mock, + validate_mock, execute_mock, open_mock, dispatch_mock): image_info = _build_fake_partition_image_info() @@ -393,19 +433,19 @@ class TestStandbyExtension(base.IronicAgentTest): pr_ep = image_info['preserve_ephemeral'] boot_mode = image_info['deploy_boot_mode'] disk_label = 
image_info['disk_label'] + source_format = image_info['disk_format'] cpu_arch = self.fake_cpu.architecture image_path = standby._image_location(image_info) - image_mb_mock.return_value = 1 + validate_mock.return_value = (source_format, _virtual_size(1)) dispatch_mock.return_value = self.fake_cpu uuids = {'root uuid': 'root_uuid'} expected_uuid = {'root uuid': 'root_uuid'} - image_mb_mock.return_value = 1 work_on_disk_mock.return_value = uuids standby._write_image(image_info, device, 'configdrive') - image_mb_mock.assert_called_once_with(image_path) + validate_mock.assert_called_once_with(image_path, source_format) work_on_disk_mock.assert_called_once_with(device, root_mb, swap_mb, ephemeral_mb, ephemeral_format, @@ -415,7 +455,9 @@ class TestStandbyExtension(base.IronicAgentTest): preserve_ephemeral=pr_ep, boot_mode=boot_mode, disk_label=disk_label, - cpu_arch=cpu_arch) + cpu_arch=cpu_arch, + source_format=source_format, + is_raw=False) self.assertEqual(expected_uuid, work_on_disk_mock.return_value) self.assertIsNone(node_uuid) @@ -423,26 +465,29 @@ class TestStandbyExtension(base.IronicAgentTest): @mock.patch.object(hardware, 'dispatch_to_managers', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('ironic_python_agent.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.get_image_mb', autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) def test_write_partition_image_exception_image_mb(self, work_on_disk_mock, - image_mb_mock, + validate_mock, execute_mock, open_mock, dispatch_mock): dispatch_mock.return_value = self.fake_cpu image_info = _build_fake_partition_image_info() device = '/dev/sda' + source_format = image_info['disk_format'] image_path = standby._image_location(image_info) - image_mb_mock.return_value = 20 + validate_mock.return_value = (source_format, _virtual_size(20)) exc = 
errors.InvalidCommandParamsError self.assertRaises(exc, standby._write_image, image_info, device) - image_mb_mock.assert_called_once_with(image_path) + validate_mock.assert_called_once_with(image_path, source_format) self.assertFalse(work_on_disk_mock.called) @mock.patch.object(utils, 'get_node_boot_mode', lambda self: 'bios') @@ -450,8 +495,10 @@ class TestStandbyExtension(base.IronicAgentTest): @mock.patch('builtins.open', autospec=True) @mock.patch('ironic_python_agent.utils.execute', autospec=True) @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) - @mock.patch('ironic_lib.disk_utils.get_image_mb', autospec=True) - def test_write_partition_image(self, image_mb_mock, work_on_disk_mock, + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) + def test_write_partition_image(self, validate_mock, work_on_disk_mock, execute_mock, open_mock, dispatch_mock): image_info = _build_fake_partition_image_info() device = '/dev/sda' @@ -463,17 +510,18 @@ class TestStandbyExtension(base.IronicAgentTest): pr_ep = image_info['preserve_ephemeral'] boot_mode = image_info['deploy_boot_mode'] disk_label = image_info['disk_label'] + source_format = image_info['disk_format'] cpu_arch = self.fake_cpu.architecture image_path = standby._image_location(image_info) uuids = {'root uuid': 'root_uuid'} expected_uuid = {'root uuid': 'root_uuid'} - image_mb_mock.return_value = 1 + validate_mock.return_value = (source_format, _virtual_size(1)) dispatch_mock.return_value = self.fake_cpu work_on_disk_mock.return_value = uuids standby._write_image(image_info, device, 'configdrive') - image_mb_mock.assert_called_once_with(image_path) + validate_mock.assert_called_once_with(image_path, source_format) work_on_disk_mock.assert_called_once_with(device, root_mb, swap_mb, ephemeral_mb, ephemeral_format, @@ -483,7 +531,9 @@ class TestStandbyExtension(base.IronicAgentTest): preserve_ephemeral=pr_ep, boot_mode=boot_mode, disk_label=disk_label, - 
cpu_arch=cpu_arch) + cpu_arch=cpu_arch, + source_format=source_format, + is_raw=False) self.assertEqual(expected_uuid, work_on_disk_mock.return_value) @@ -837,11 +887,10 @@ class TestStandbyExtension(base.IronicAgentTest): standby.ImageDownload, image_info) - @mock.patch('ironic_lib.disk_utils.get_disk_identifier', + @mock.patch('ironic_python_agent.disk_utils.get_disk_identifier', lambda dev: 'ROOT') - @mock.patch('ironic_python_agent.utils.execute', - autospec=True) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_lib.utils.execute', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.list_partitions', autospec=True) @mock.patch.object(partition_utils, 'create_config_drive_partition', autospec=True) @@ -892,8 +941,8 @@ class TestStandbyExtension(base.IronicAgentTest): self.assertEqual({'root uuid': 'ROOT'}, self.agent_extension.partition_uuids) - @mock.patch('ironic_python_agent.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_lib.utils.execute', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.list_partitions', autospec=True) @mock.patch.object(partition_utils, 'create_config_drive_partition', autospec=True) @@ -964,12 +1013,12 @@ class TestStandbyExtension(base.IronicAgentTest): self.assertEqual({'root uuid': 'root_uuid'}, self.agent_extension.partition_uuids) - @mock.patch('ironic_lib.disk_utils.get_disk_identifier', + @mock.patch('ironic_python_agent.disk_utils.get_disk_identifier', lambda dev: 'ROOT') - @mock.patch('ironic_python_agent.utils.execute', autospec=True) + @mock.patch('ironic_lib.utils.execute', autospec=True) @mock.patch.object(partition_utils, 'create_config_drive_partition', autospec=True) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_python_agent.disk_utils.list_partitions', autospec=True) @mock.patch('ironic_python_agent.hardware.dispatch_to_managers', autospec=True) @@ -1009,12 +1058,12 @@ class 
TestStandbyExtension(base.IronicAgentTest): 'root_uuid=ROOT').format(image_info['id'], 'manager') self.assertEqual(cmd_result, async_result.command_result['result']) - @mock.patch('ironic_lib.disk_utils.get_disk_identifier', + @mock.patch('ironic_python_agent.disk_utils.get_disk_identifier', lambda dev: 'ROOT') @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) @mock.patch.object(partition_utils, 'create_config_drive_partition', autospec=True) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_python_agent.disk_utils.list_partitions', autospec=True) @mock.patch('ironic_python_agent.hardware.dispatch_to_managers', autospec=True) @@ -1056,11 +1105,11 @@ class TestStandbyExtension(base.IronicAgentTest): self.assertFalse(configdrive_copy_mock.called) self.assertEqual('FAILED', async_result.command_status) - @mock.patch('ironic_lib.disk_utils.get_disk_identifier', + @mock.patch('ironic_python_agent.disk_utils.get_disk_identifier', side_effect=OSError, autospec=True) - @mock.patch('ironic_python_agent.utils.execute', + @mock.patch('ironic_lib.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_python_agent.disk_utils.list_partitions', autospec=True) @mock.patch.object(partition_utils, 'create_config_drive_partition', autospec=True) @@ -1111,10 +1160,10 @@ class TestStandbyExtension(base.IronicAgentTest): attempts=mock.ANY) self.assertEqual({}, self.agent_extension.partition_uuids) - @mock.patch('ironic_python_agent.utils.execute', mock.Mock()) - @mock.patch('ironic_lib.disk_utils.list_partitions', + @mock.patch('ironic_lib.utils.execute', mock.Mock()) + @mock.patch('ironic_python_agent.disk_utils.list_partitions', lambda _dev: [mock.Mock()]) - @mock.patch('ironic_lib.disk_utils.get_disk_identifier', + @mock.patch('ironic_python_agent.disk_utils.get_disk_identifier', lambda dev: 'ROOT') @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) 
@mock.patch.object(partition_utils, 'create_config_drive_partition', @@ -1349,8 +1398,9 @@ class TestStandbyExtension(base.IronicAgentTest): 'configdrive_data') @mock.patch('ironic_python_agent.extensions.standby.LOG', autospec=True) - @mock.patch('ironic_lib.disk_utils.block_uuid', autospec=True) - @mock.patch('ironic_lib.disk_utils.fix_gpt_partition', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.block_uuid', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.fix_gpt_partition', + autospec=True) @mock.patch('hashlib.new', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('requests.get', autospec=True) @@ -1447,7 +1497,8 @@ class TestStandbyExtension(base.IronicAgentTest): mock.call(b'some')] file_mock.write.assert_has_calls(write_calls) - @mock.patch('ironic_lib.disk_utils.fix_gpt_partition', autospec=True) + @mock.patch('ironic_python_agent.disk_utils.fix_gpt_partition', + autospec=True) @mock.patch('hashlib.new', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('requests.get', autospec=True) @@ -1573,11 +1624,13 @@ class TestStandbyExtension(base.IronicAgentTest): @mock.patch.object(hardware, 'dispatch_to_managers', autospec=True) @mock.patch('builtins.open', autospec=True) @mock.patch('ironic_python_agent.utils.execute', autospec=True) - @mock.patch('ironic_lib.disk_utils.get_image_mb', autospec=True) + @mock.patch( + 'ironic_python_agent.disk_utils.get_and_validate_image_format', + autospec=True) @mock.patch.object(partition_utils, 'work_on_disk', autospec=True) def test_write_partition_image_no_node_uuid_uefi( self, work_on_disk_mock, - image_mb_mock, + validate_mock, execute_mock, open_mock, dispatch_mock): image_info = _build_fake_partition_image_info() @@ -1589,19 +1642,19 @@ class TestStandbyExtension(base.IronicAgentTest): ephemeral_format = image_info['ephemeral_format'] node_uuid = image_info['node_uuid'] pr_ep = image_info['preserve_ephemeral'] + source_format = 
image_info['disk_format'] + validate_mock.return_value = (source_format, _virtual_size(1)) cpu_arch = self.fake_cpu.architecture image_path = standby._image_location(image_info) - image_mb_mock.return_value = 1 dispatch_mock.return_value = self.fake_cpu uuids = {'root uuid': 'root_uuid'} expected_uuid = {'root uuid': 'root_uuid'} - image_mb_mock.return_value = 1 work_on_disk_mock.return_value = uuids standby._write_image(image_info, device, 'configdrive') - image_mb_mock.assert_called_once_with(image_path) + validate_mock.assert_called_once_with(image_path, source_format) work_on_disk_mock.assert_called_once_with(device, root_mb, swap_mb, ephemeral_mb, ephemeral_format, @@ -1611,7 +1664,9 @@ class TestStandbyExtension(base.IronicAgentTest): preserve_ephemeral=pr_ep, boot_mode='uefi', disk_label='gpt', - cpu_arch=cpu_arch) + cpu_arch=cpu_arch, + source_format=source_format, + is_raw=False) self.assertEqual(expected_uuid, work_on_disk_mock.return_value) self.assertIsNone(node_uuid) diff --git a/ironic_python_agent/tests/unit/test_disk_partitioner.py b/ironic_python_agent/tests/unit/test_disk_partitioner.py new file mode 100644 index 000000000..c94ad3225 --- /dev/null +++ b/ironic_python_agent/tests/unit/test_disk_partitioner.py @@ -0,0 +1,202 @@ +# Copyright 2014 Red Hat, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from unittest import mock + +from ironic_lib import exception +from ironic_lib.tests import base +from ironic_lib import utils + +from ironic_python_agent import disk_partitioner + + +CONF = disk_partitioner.CONF + + +class DiskPartitionerTestCase(base.IronicLibTestCase): + + def test_add_partition(self): + dp = disk_partitioner.DiskPartitioner('/dev/fake') + dp.add_partition(1024) + dp.add_partition(512, fs_type='linux-swap') + dp.add_partition(2048, boot_flag='boot') + dp.add_partition(2048, boot_flag='bios_grub') + expected = [(1, {'boot_flag': None, + 'extra_flags': None, + 'fs_type': '', + 'type': 'primary', + 'size': 1024}), + (2, {'boot_flag': None, + 'extra_flags': None, + 'fs_type': 'linux-swap', + 'type': 'primary', + 'size': 512}), + (3, {'boot_flag': 'boot', + 'extra_flags': None, + 'fs_type': '', + 'type': 'primary', + 'size': 2048}), + (4, {'boot_flag': 'bios_grub', + 'extra_flags': None, + 'fs_type': '', + 'type': 'primary', + 'size': 2048})] + partitions = [(n, p) for n, p in dp.get_partitions()] + self.assertEqual(4, len(partitions)) + self.assertEqual(expected, partitions) + + @mock.patch.object(disk_partitioner.DiskPartitioner, '_exec', + autospec=True) + @mock.patch.object(utils, 'execute', autospec=True) + def test_commit(self, mock_utils_exc, mock_disk_partitioner_exec): + dp = disk_partitioner.DiskPartitioner('/dev/fake') + fake_parts = [(1, {'boot_flag': None, + 'extra_flags': None, + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1}), + (2, {'boot_flag': 'boot', + 'extra_flags': None, + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1}), + (3, {'boot_flag': 'bios_grub', + 'extra_flags': None, + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1}), + (4, {'boot_flag': 'boot', + 'extra_flags': ['prep', 'fake-flag'], + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1})] + with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp: + mock_gp.return_value = fake_parts + 
mock_utils_exc.return_value = ('', '') + dp.commit() + + mock_disk_partitioner_exec.assert_called_once_with( + mock.ANY, 'mklabel', 'msdos', + 'mkpart', 'fake-type', 'fake-fs-type', '1', '2', + 'mkpart', 'fake-type', 'fake-fs-type', '2', '3', + 'set', '2', 'boot', 'on', + 'mkpart', 'fake-type', 'fake-fs-type', '3', '4', + 'set', '3', 'bios_grub', 'on', + 'mkpart', 'fake-type', 'fake-fs-type', '4', '5', + 'set', '4', 'boot', 'on', 'set', '4', 'prep', 'on', + 'set', '4', 'fake-flag', 'on') + mock_utils_exc.assert_called_once_with( + 'fuser', '/dev/fake', check_exit_code=[0, 1], run_as_root=True) + + @mock.patch.object(disk_partitioner.DiskPartitioner, '_exec', + autospec=True) + @mock.patch.object(utils, 'execute', autospec=True) + def test_commit_with_device_is_busy_once(self, mock_utils_exc, + mock_disk_partitioner_exec): + CONF.set_override('check_device_interval', 0, group='disk_partitioner') + dp = disk_partitioner.DiskPartitioner('/dev/fake') + fake_parts = [(1, {'boot_flag': None, + 'extra_flags': None, + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1}), + (2, {'boot_flag': 'boot', + 'extra_flags': None, + 'fs_type': 'fake-fs-type', + 'type': 'fake-type', + 'size': 1})] + # Test as if the 'psmisc' version of 'fuser' which has stderr output + fuser_outputs = iter([(" 10000 10001", '/dev/fake:\n'), ('', '')]) + + with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp: + mock_gp.return_value = fake_parts + mock_utils_exc.side_effect = fuser_outputs + dp.commit() + + mock_disk_partitioner_exec.assert_called_once_with( + mock.ANY, 'mklabel', 'msdos', + 'mkpart', 'fake-type', 'fake-fs-type', '1', '2', + 'mkpart', 'fake-type', 'fake-fs-type', '2', '3', + 'set', '2', 'boot', 'on') + mock_utils_exc.assert_called_with( + 'fuser', '/dev/fake', check_exit_code=[0, 1], run_as_root=True) + self.assertEqual(2, mock_utils_exc.call_count) + + @mock.patch.object(disk_partitioner.DiskPartitioner, '_exec', + autospec=True) + 
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_commit_with_device_is_always_busy(self, mock_utils_exc,
+                                               mock_disk_partitioner_exec):
+        CONF.set_override('check_device_interval', 0, group='disk_partitioner')
+        dp = disk_partitioner.DiskPartitioner('/dev/fake')
+        fake_parts = [(1, {'boot_flag': None,
+                           'extra_flags': None,
+                           'fs_type': 'fake-fs-type',
+                           'type': 'fake-type',
+                           'size': 1}),
+                      (2, {'boot_flag': 'boot',
+                           'extra_flags': None,
+                           'fs_type': 'fake-fs-type',
+                           'type': 'fake-type',
+                           'size': 1})]
+
+        with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
+            mock_gp.return_value = fake_parts
+            # Test as if the 'busybox' version of 'fuser' which does not have
+            # stderr output
+            mock_utils_exc.return_value = ("10000 10001", '')
+            self.assertRaises(exception.InstanceDeployFailure, dp.commit)
+
+        mock_disk_partitioner_exec.assert_called_once_with(
+            mock.ANY, 'mklabel', 'msdos',
+            'mkpart', 'fake-type', 'fake-fs-type', '1', '2',
+            'mkpart', 'fake-type', 'fake-fs-type', '2', '3',
+            'set', '2', 'boot', 'on')
+        mock_utils_exc.assert_called_with(
+            'fuser', '/dev/fake', check_exit_code=[0, 1], run_as_root=True)
+        self.assertEqual(20, mock_utils_exc.call_count)
+
+    @mock.patch.object(disk_partitioner.DiskPartitioner, '_exec',
+                       autospec=True)
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_commit_with_device_disconnected(self, mock_utils_exc,
+                                             mock_disk_partitioner_exec):
+        CONF.set_override('check_device_interval', 0, group='disk_partitioner')
+        dp = disk_partitioner.DiskPartitioner('/dev/fake')
+        fake_parts = [(1, {'boot_flag': None,
+                           'extra_flags': None,
+                           'fs_type': 'fake-fs-type',
+                           'type': 'fake-type',
+                           'size': 1}),
+                      (2, {'boot_flag': 'boot',
+                           'extra_flags': None,
+                           'fs_type': 'fake-fs-type',
+                           'type': 'fake-type',
+                           'size': 1})]
+
+        with mock.patch.object(dp, 'get_partitions', autospec=True) as mock_gp:
+            mock_gp.return_value = fake_parts
+            mock_utils_exc.return_value = ('', "Specified filename /dev/fake"
" does not exist.") + self.assertRaises(exception.InstanceDeployFailure, dp.commit) + + mock_disk_partitioner_exec.assert_called_once_with( + mock.ANY, 'mklabel', 'msdos', + 'mkpart', 'fake-type', 'fake-fs-type', '1', '2', + 'mkpart', 'fake-type', 'fake-fs-type', '2', '3', + 'set', '2', 'boot', 'on') + mock_utils_exc.assert_called_with( + 'fuser', '/dev/fake', check_exit_code=[0, 1], run_as_root=True) + self.assertEqual(20, mock_utils_exc.call_count) diff --git a/ironic_python_agent/tests/unit/test_disk_utils.py b/ironic_python_agent/tests/unit/test_disk_utils.py new file mode 100644 index 000000000..866440018 --- /dev/null +++ b/ironic_python_agent/tests/unit/test_disk_utils.py @@ -0,0 +1,1088 @@ +# Copyright 2014 Red Hat, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+
+import json
+import os
+import stat
+from unittest import mock
+
+from ironic_lib import exception
+from ironic_lib.tests import base
+from ironic_lib import utils
+from oslo_concurrency import processutils
+from oslo_config import cfg
+from oslo_utils.imageutils import QemuImgInfo
+from oslo_utils import units
+
+from ironic_python_agent import disk_utils
+from ironic_python_agent.errors import InvalidImage
+from ironic_python_agent import format_inspector
+from ironic_python_agent import qemu_img
+
+CONF = cfg.CONF
+
+
+class MockFormatInspectorCls(object):
+    def __init__(self, img_format='qcow2', virtual_size_mb=0, safe=False):
+        self.img_format = img_format
+        self.virtual_size_mb = virtual_size_mb
+        self.safe = safe
+
+    def __str__(self):
+        return self.img_format
+
+    @property
+    def virtual_size(self):
+        # NOTE(JayF): Allow the mock-user to input MBs but
+        # backwards-calculate so code in _write_image can still work
+        if self.virtual_size_mb == 0:
+            return 0
+        else:
+            return (self.virtual_size_mb * units.Mi) + 1 - units.Mi
+
+    def safety_check(self):
+        return self.safe
+
+
+def _get_fake_qemu_image_info(file_format='qcow2', virtual_size=0):
+    fake_data = {'format': file_format, 'virtual-size': virtual_size, }
+    return QemuImgInfo(cmd_output=json.dumps(fake_data), format='json')
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class ListPartitionsTestCase(base.IronicLibTestCase):
+
+    def test_correct(self, execute_mock):
+        output = """
+BYT;
+/dev/sda:500107862016B:scsi:512:4096:msdos:ATA HGST HTS725050A7:;
+1:1.00MiB:501MiB:500MiB:ext4::boot;
+2:501MiB:476940MiB:476439MiB:::;
+"""
+        expected = [
+            {'number': 1, 'start': 1, 'end': 501, 'size': 500,
+             'filesystem': 'ext4', 'partition_name': '', 'flags': 'boot',
+             'path': '/dev/fake1'},
+            {'number': 2, 'start': 501, 'end': 476940, 'size': 476439,
+             'filesystem': '', 'partition_name': '', 'flags': '',
+             'path': '/dev/fake2'},
+        ]
+        execute_mock.return_value = (output, '')
+        result = disk_utils.list_partitions('/dev/fake')
+        self.assertEqual(expected, result)
+        execute_mock.assert_called_once_with(
+            'parted', '-s', '-m', '/dev/fake', 'unit', 'MiB', 'print',
+            use_standard_locale=True, run_as_root=True)
+
+    @mock.patch.object(disk_utils.LOG, 'warning', autospec=True)
+    def test_incorrect(self, log_mock, execute_mock):
+        output = """
+BYT;
+/dev/sda:500107862016B:scsi:512:4096:msdos:ATA HGST HTS725050A7:;
+1:XX1076MiB:---:524MiB:ext4::boot;
+"""
+        execute_mock.return_value = (output, '')
+        self.assertEqual([], disk_utils.list_partitions('/dev/fake'))
+        self.assertEqual(1, log_mock.call_count)
+
+    def test_correct_gpt_nvme(self, execute_mock):
+        output = """
+BYT;
+/dev/vda:40960MiB:virtblk:512:512:gpt:Virtio Block Device:;
+2:1.00MiB:2.00MiB:1.00MiB::Bios partition:bios_grub;
+1:4.00MiB:5407MiB:5403MiB:ext4:Root partition:;
+3:5407MiB:5507MiB:100MiB:fat16:Boot partition:boot, esp;
+"""
+        expected = [
+            {'end': 2, 'number': 2, 'start': 1, 'flags': 'bios_grub',
+             'filesystem': '', 'partition_name': 'Bios partition', 'size': 1,
+             'path': '/dev/fake0p2'},
+            {'end': 5407, 'number': 1, 'start': 4, 'flags': '',
+             'filesystem': 'ext4', 'partition_name': 'Root partition',
+             'size': 5403, 'path': '/dev/fake0p1'},
+            {'end': 5507, 'number': 3, 'start': 5407,
+             'flags': 'boot, esp', 'filesystem': 'fat16',
+             'partition_name': 'Boot partition', 'size': 100,
+             'path': '/dev/fake0p3'},
+        ]
+        execute_mock.return_value = (output, '')
+        result = disk_utils.list_partitions('/dev/fake0')
+        self.assertEqual(expected, result)
+        execute_mock.assert_called_once_with(
+            'parted', '-s', '-m', '/dev/fake0', 'unit', 'MiB', 'print',
+            use_standard_locale=True, run_as_root=True)
+
+    @mock.patch.object(disk_utils.LOG, 'warning', autospec=True)
+    def test_incorrect_gpt(self, log_mock, execute_mock):
+        output = """
+BYT;
+/dev/vda:40960MiB:virtblk:512:512:gpt:Virtio Block Device:;
+2:XX1.00MiB:---:1.00MiB::primary:bios_grub;
+"""
+        execute_mock.return_value = (output, '')
+        self.assertEqual([], disk_utils.list_partitions('/dev/fake'))
+        self.assertEqual(1, log_mock.call_count)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class MakePartitionsTestCase(base.IronicLibTestCase):
+
+    def setUp(self):
+        super(MakePartitionsTestCase, self).setUp()
+        self.dev = 'fake-dev'
+        self.root_mb = 1024
+        self.swap_mb = 512
+        self.ephemeral_mb = 0
+        self.configdrive_mb = 0
+        self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
+        self.efi_size = CONF.disk_utils.efi_system_partition_size
+        self.bios_size = CONF.disk_utils.bios_boot_partition_size
+
+    def _get_parted_cmd(self, dev, label=None):
+        if label is None:
+            label = 'msdos'
+
+        return ['parted', '-a', 'optimal', '-s', dev,
+                '--', 'unit', 'MiB', 'mklabel', label]
+
+    def _add_efi_sz(self, x):
+        return str(x + self.efi_size)
+
+    def _add_bios_sz(self, x):
+        return str(x + self.bios_size)
+
+    def _test_make_partitions(self, mock_exc, boot_option, boot_mode='bios',
+                              disk_label=None, cpu_arch=""):
+        mock_exc.return_value = ('', '')
+        disk_utils.make_partitions(self.dev, self.root_mb, self.swap_mb,
+                                   self.ephemeral_mb, self.configdrive_mb,
+                                   self.node_uuid, boot_option=boot_option,
+                                   boot_mode=boot_mode, disk_label=disk_label,
+                                   cpu_arch=cpu_arch)
+
+        if boot_option == "local" and boot_mode == "uefi":
+            expected_mkpart = ['mkpart', 'primary', 'fat32', '1',
+                               self._add_efi_sz(1),
+                               'set', '1', 'boot', 'on',
+                               'mkpart', 'primary', 'linux-swap',
+                               self._add_efi_sz(1), self._add_efi_sz(513),
+                               'mkpart', 'primary', '', self._add_efi_sz(513),
+                               self._add_efi_sz(1537)]
+        else:
+            if boot_option == "local":
+                if disk_label == "gpt":
+                    if cpu_arch.startswith('ppc64'):
+                        expected_mkpart = ['mkpart', 'primary', '', '1', '9',
+                                           'set', '1', 'prep', 'on',
+                                           'mkpart', 'primary', 'linux-swap',
+                                           '9', '521', 'mkpart', 'primary',
+                                           '', '521', '1545']
+                    else:
+                        expected_mkpart = ['mkpart', 'primary', '', '1',
+                                           self._add_bios_sz(1),
+                                           'set', '1', 'bios_grub', 'on',
+                                           'mkpart', 'primary', 'linux-swap',
+                                           self._add_bios_sz(1),
+                                           self._add_bios_sz(513),
+                                           'mkpart', 'primary', '',
+                                           self._add_bios_sz(513),
+                                           self._add_bios_sz(1537)]
+                elif cpu_arch.startswith('ppc64'):
+                    expected_mkpart = ['mkpart', 'primary', '', '1', '9',
+                                       'set', '1', 'boot', 'on',
+                                       'set', '1', 'prep', 'on',
+                                       'mkpart', 'primary', 'linux-swap',
+                                       '9', '521', 'mkpart', 'primary',
+                                       '', '521', '1545']
+                else:
+                    expected_mkpart = ['mkpart', 'primary', 'linux-swap', '1',
+                                       '513', 'mkpart', 'primary', '', '513',
+                                       '1537', 'set', '2', 'boot', 'on']
+            else:
+                expected_mkpart = ['mkpart', 'primary', 'linux-swap', '1',
+                                   '513', 'mkpart', 'primary', '', '513',
+                                   '1537']
+        self.dev = 'fake-dev'
+        parted_cmd = (self._get_parted_cmd(self.dev, disk_label)
+                      + expected_mkpart)
+        parted_call = mock.call(*parted_cmd, use_standard_locale=True,
+                                run_as_root=True)
+        fuser_cmd = ['fuser', 'fake-dev']
+        fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1],
+                               run_as_root=True)
+
+        sync_calls = [mock.call('sync'),
+                      mock.call('udevadm', 'settle'),
+                      mock.call('partprobe', self.dev, attempts=10,
+                                run_as_root=True),
+                      mock.call('udevadm', 'settle'),
+                      mock.call('sgdisk', '-v', self.dev, run_as_root=True)]
+
+        mock_exc.assert_has_calls([parted_call, fuser_call] + sync_calls)
+
+    def test_make_partitions(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="netboot")
+
+    def test_make_partitions_local_boot(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="local")
+
+    def test_make_partitions_local_boot_uefi(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="local",
+                                   boot_mode="uefi", disk_label="gpt")
+
+    def test_make_partitions_local_boot_gpt_bios(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="local",
+                                   disk_label="gpt")
+
+    def test_make_partitions_disk_label_gpt(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="netboot",
+                                   disk_label="gpt")
+
+    def test_make_partitions_mbr_with_prep(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="local",
+                                   disk_label="msdos", cpu_arch="ppc64le")
+
+    def test_make_partitions_gpt_with_prep(self, mock_exc):
+        self._test_make_partitions(mock_exc, boot_option="local",
+                                   disk_label="gpt", cpu_arch="ppc64le")
+
+    def test_make_partitions_with_ephemeral(self, mock_exc):
+        self.ephemeral_mb = 2048
+        expected_mkpart = ['mkpart', 'primary', '', '1', '2049',
+                           'mkpart', 'primary', 'linux-swap', '2049', '2561',
+                           'mkpart', 'primary', '', '2561', '3585']
+        self.dev = 'fake-dev'
+        cmd = self._get_parted_cmd(self.dev) + expected_mkpart
+        mock_exc.return_value = ('', '')
+        disk_utils.make_partitions(self.dev, self.root_mb, self.swap_mb,
+                                   self.ephemeral_mb, self.configdrive_mb,
+                                   self.node_uuid)
+
+        parted_call = mock.call(*cmd, use_standard_locale=True,
+                                run_as_root=True)
+        mock_exc.assert_has_calls([parted_call])
+
+    def test_make_partitions_with_iscsi_device(self, mock_exc):
+        self.ephemeral_mb = 2048
+        expected_mkpart = ['mkpart', 'primary', '', '1', '2049',
+                           'mkpart', 'primary', 'linux-swap', '2049', '2561',
+                           'mkpart', 'primary', '', '2561', '3585']
+        self.dev = '/dev/iqn.2008-10.org.openstack:%s.fake-9' % self.node_uuid
+        ep = '/dev/iqn.2008-10.org.openstack:%s.fake-9-part1' % self.node_uuid
+        swap = ('/dev/iqn.2008-10.org.openstack:%s.fake-9-part2'
+                % self.node_uuid)
+        root = ('/dev/iqn.2008-10.org.openstack:%s.fake-9-part3'
+                % self.node_uuid)
+        expected_result = {'ephemeral': ep,
+                           'swap': swap,
+                           'root': root}
+        cmd = self._get_parted_cmd(self.dev) + expected_mkpart
+        mock_exc.return_value = ('', '')
+        result = disk_utils.make_partitions(
+            self.dev, self.root_mb, self.swap_mb, self.ephemeral_mb,
+            self.configdrive_mb, self.node_uuid)
+
+        parted_call = mock.call(*cmd, use_standard_locale=True,
+                                run_as_root=True)
+        mock_exc.assert_has_calls([parted_call])
+        self.assertEqual(expected_result, result)
+
+    def test_make_partitions_with_nvme_device(self, mock_exc):
+        self.ephemeral_mb = 2048
+        expected_mkpart = ['mkpart', 'primary', '', '1', '2049',
+                           'mkpart', 'primary', 'linux-swap', '2049', '2561',
+                           'mkpart', 'primary', '', '2561', '3585']
+        self.dev = '/dev/nvmefake-9'
+        ep = '/dev/nvmefake-9p1'
+        swap = '/dev/nvmefake-9p2'
+        root = '/dev/nvmefake-9p3'
+        expected_result = {'ephemeral': ep,
+                           'swap': swap,
+                           'root': root}
+        cmd = self._get_parted_cmd(self.dev) + expected_mkpart
+        mock_exc.return_value = ('', '')
+        result = disk_utils.make_partitions(
+            self.dev, self.root_mb, self.swap_mb, self.ephemeral_mb,
+            self.configdrive_mb, self.node_uuid)
+
+        parted_call = mock.call(*cmd, use_standard_locale=True,
+                                run_as_root=True)
+        mock_exc.assert_has_calls([parted_call])
+        self.assertEqual(expected_result, result)
+
+    def test_make_partitions_with_local_device(self, mock_exc):
+        self.ephemeral_mb = 2048
+        expected_mkpart = ['mkpart', 'primary', '', '1', '2049',
+                           'mkpart', 'primary', 'linux-swap', '2049', '2561',
+                           'mkpart', 'primary', '', '2561', '3585']
+        self.dev = 'fake-dev'
+        expected_result = {'ephemeral': 'fake-dev1',
+                           'swap': 'fake-dev2',
+                           'root': 'fake-dev3'}
+        cmd = self._get_parted_cmd(self.dev) + expected_mkpart
+        mock_exc.return_value = ('', '')
+        result = disk_utils.make_partitions(
+            self.dev, self.root_mb, self.swap_mb, self.ephemeral_mb,
+            self.configdrive_mb, self.node_uuid)
+
+        parted_call = mock.call(*cmd, use_standard_locale=True,
+                                run_as_root=True)
+        mock_exc.assert_has_calls([parted_call])
+        self.assertEqual(expected_result, result)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class DestroyMetaDataTestCase(base.IronicLibTestCase):
+
+    def setUp(self):
+        super(DestroyMetaDataTestCase, self).setUp()
+        self.dev = 'fake-dev'
+        self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
+
+    def test_destroy_disk_metadata(self, mock_exec):
+        # Note(TheJulia): This list will get-reused, but only the second
+        # execution returning a string is needed for the test as otherwise
+        # command output is not used.
+        mock_exec.side_effect = iter([
+            (None, None),
+            ('1024\n', None),
+            (None, None),
+            (None, None),
+            (None, None),
+            (None, None)])
+
+        expected_calls = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('blockdev', '--getsz', 'fake-dev',
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    'seek=991', use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('sgdisk', '-Z', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('fuser', self.dev, check_exit_code=[0, 1],
+                                    run_as_root=True)]
+        disk_utils.destroy_disk_metadata(self.dev, self.node_uuid)
+        mock_exec.assert_has_calls(expected_calls)
+
+    def test_destroy_disk_metadata_wipefs_fail(self, mock_exec):
+        mock_exec.side_effect = processutils.ProcessExecutionError
+
+        expected_call = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                   use_standard_locale=True, run_as_root=True)]
+        self.assertRaises(processutils.ProcessExecutionError,
+                          disk_utils.destroy_disk_metadata,
+                          self.dev,
+                          self.node_uuid)
+        mock_exec.assert_has_calls(expected_call)
+
+    def test_destroy_disk_metadata_sgdisk_fail(self, mock_exec):
+        expected_calls = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('blockdev', '--getsz', 'fake-dev',
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    'seek=991', use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('sgdisk', '-Z', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True)]
+        mock_exec.side_effect = iter([
+            (None, None),
+            ('1024\n', None),
+            (None, None),
+            (None, None),
+            processutils.ProcessExecutionError()])
+        self.assertRaises(processutils.ProcessExecutionError,
+                          disk_utils.destroy_disk_metadata,
+                          self.dev,
+                          self.node_uuid)
+        mock_exec.assert_has_calls(expected_calls)
+
+    def test_destroy_disk_metadata_wipefs_not_support_force(self, mock_exec):
+        mock_exec.side_effect = iter([
+            processutils.ProcessExecutionError(description='--force'),
+            (None, None),
+            ('1024\n', None),
+            (None, None),
+            (None, None),
+            (None, None),
+            (None, None)])
+
+        expected_call = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                   use_standard_locale=True, run_as_root=True),
+                         mock.call('wipefs', '--all', 'fake-dev',
+                                   use_standard_locale=True, run_as_root=True),
+                         mock.call('blockdev', '--getsz', 'fake-dev',
+                                   run_as_root=True),
+                         mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                   'of=fake-dev', 'count=33', 'oflag=direct',
+                                   use_standard_locale=True,
+                                   run_as_root=True),
+                         mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                   'of=fake-dev', 'count=33', 'oflag=direct',
+                                   'seek=991', use_standard_locale=True,
+                                   run_as_root=True),
+                         mock.call('sgdisk', '-Z', 'fake-dev',
+                                   use_standard_locale=True,
+                                   run_as_root=True),
+                         mock.call('fuser', 'fake-dev',
+                                   check_exit_code=[0, 1], run_as_root=True)
+                         ]
+        disk_utils.destroy_disk_metadata(self.dev, self.node_uuid)
+        mock_exec.assert_has_calls(expected_call)
+
+    def test_destroy_disk_metadata_ebr(self, mock_exec):
+        expected_calls = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('blockdev', '--getsz', 'fake-dev',
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=2', 'oflag=direct',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('sgdisk', '-Z', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True)]
+        mock_exec.side_effect = iter([
+            (None, None),
+            ('2\n', None),  # an EBR is 2 sectors
+            (None, None),
+            (None, None),
+            (None, None),
+            (None, None)])
+        disk_utils.destroy_disk_metadata(self.dev, self.node_uuid)
+        mock_exec.assert_has_calls(expected_calls)
+
+    def test_destroy_disk_metadata_tiny_partition(self, mock_exec):
+        expected_calls = [mock.call('wipefs', '--force', '--all', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('blockdev', '--getsz', 'fake-dev',
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('dd', 'bs=512', 'if=/dev/zero',
+                                    'of=fake-dev', 'count=33', 'oflag=direct',
+                                    'seek=9', use_standard_locale=True,
+                                    run_as_root=True),
+                          mock.call('sgdisk', '-Z', 'fake-dev',
+                                    use_standard_locale=True,
+                                    run_as_root=True)]
+        mock_exec.side_effect = iter([
+            (None, None),
+            ('42\n', None),
+            (None, None),
+            (None, None),
+            (None, None),
+            (None, None)])
+        disk_utils.destroy_disk_metadata(self.dev, self.node_uuid)
+        mock_exec.assert_has_calls(expected_calls)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class GetDeviceBlockSizeTestCase(base.IronicLibTestCase):
+
+    def setUp(self):
+        super(GetDeviceBlockSizeTestCase, self).setUp()
+        self.dev = 'fake-dev'
+        self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
+
+    def test_get_dev_block_size(self, mock_exec):
+        mock_exec.return_value = ("64", "")
+        expected_call = [mock.call('blockdev', '--getsz', self.dev,
+                                   run_as_root=True)]
+        disk_utils.get_dev_block_size(self.dev)
+        mock_exec.assert_has_calls(expected_call)
+
+
+@mock.patch.object(disk_utils, 'dd', autospec=True)
+@mock.patch.object(qemu_img, 'convert_image', autospec=True)
+class PopulateImageTestCase(base.IronicLibTestCase):
+
+    def test_populate_raw_image(self, mock_cg, mock_dd):
+        source_format = 'raw'
+        disk_utils.populate_image('src', 'dst',
+                                  source_format=source_format,
+                                  is_raw=True)
+        mock_dd.assert_called_once_with('src', 'dst', conv_flags=None)
+        self.assertFalse(mock_cg.called)
+
+    def test_populate_qcow2_image(self, mock_cg, mock_dd):
+        source_format = 'qcow2'
+        disk_utils.populate_image('src', 'dst',
+                                  source_format=source_format, is_raw=False)
+        mock_cg.assert_called_once_with('src', 'dst', 'raw', True,
+                                        sparse_size='0',
+                                        source_format=source_format)
+        self.assertFalse(mock_dd.called)
+
+
+@mock.patch('time.sleep', lambda sec: None)
+class OtherFunctionTestCase(base.IronicLibTestCase):
+
+    @mock.patch.object(os, 'stat', autospec=True)
+    @mock.patch.object(stat, 'S_ISBLK', autospec=True)
+    def test_is_block_device_works(self, mock_is_blk, mock_os):
+        device = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9'
+        mock_is_blk.return_value = True
+        mock_os().st_mode = 10000
+        self.assertTrue(disk_utils.is_block_device(device))
+        mock_is_blk.assert_called_once_with(mock_os().st_mode)
+
+    @mock.patch.object(os, 'stat', autospec=True)
+    def test_is_block_device_raises(self, mock_os):
+        device = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9'
+        mock_os.side_effect = OSError
+        self.assertRaises(exception.InstanceDeployFailure,
+                          disk_utils.is_block_device, device)
+        mock_os.assert_has_calls([mock.call(device)] * 3)
+
+    @mock.patch.object(os, 'stat', autospec=True)
+    def test_is_block_device_attempts(self, mock_os):
+        CONF.set_override('partition_detection_attempts', 2,
+                          group='disk_utils')
+        device = '/dev/disk/by-path/ip-1.2.3.4:5678-iscsi-iqn.fake-lun-9'
+        mock_os.side_effect = OSError
+        self.assertRaises(exception.InstanceDeployFailure,
+                          disk_utils.is_block_device, device)
+        mock_os.assert_has_calls([mock.call(device)] * 2)
+
+    def _test_count_mbr_partitions(self, output, mock_execute):
+        mock_execute.return_value = (output, '')
+        out = disk_utils.count_mbr_partitions('/dev/fake')
+        mock_execute.assert_called_once_with('partprobe', '-d', '-s',
+                                             '/dev/fake',
+                                             use_standard_locale=True,
+                                             run_as_root=True)
+        return out
+
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_count_mbr_partitions(self, mock_execute):
+        output = "/dev/fake: msdos partitions 1 2 3 <5 6>"
+        pp, lp = self._test_count_mbr_partitions(output, mock_execute)
+        self.assertEqual(3, pp)
+        self.assertEqual(2, lp)
+
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_count_mbr_partitions_no_logical_partitions(self, mock_execute):
+        output = "/dev/fake: msdos partitions 1 2"
+        pp, lp = self._test_count_mbr_partitions(output, mock_execute)
+        self.assertEqual(2, pp)
+        self.assertEqual(0, lp)
+
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_count_mbr_partitions_wrong_partition_table(self, mock_execute):
+        output = "/dev/fake: gpt partitions 1 2 3 4 5 6"
+        mock_execute.return_value = (output, '')
+        self.assertRaises(ValueError, disk_utils.count_mbr_partitions,
+                          '/dev/fake')
+        mock_execute.assert_called_once_with('partprobe', '-d', '-s',
+                                             '/dev/fake',
+                                             use_standard_locale=True,
+                                             run_as_root=True)
+
+    @mock.patch.object(disk_utils, 'get_device_information', autospec=True)
+    def test_block_uuid(self, mock_get_device_info):
+        mock_get_device_info.return_value = {'UUID': '123',
+                                             'PARTUUID': '123456'}
+        self.assertEqual('123', disk_utils.block_uuid('/dev/fake'))
+        mock_get_device_info.assert_called_once_with(
+            '/dev/fake', fields=['UUID', 'PARTUUID'])
+
+    @mock.patch.object(disk_utils, 'get_device_information', autospec=True)
+    def test_block_uuid_fallback_to_uuid(self, mock_get_device_info):
+        mock_get_device_info.return_value = {'PARTUUID': '123456'}
+        self.assertEqual('123456', disk_utils.block_uuid('/dev/fake'))
+        mock_get_device_info.assert_called_once_with(
+            '/dev/fake', fields=['UUID', 'PARTUUID'])
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class FixGptStructsTestCases(base.IronicLibTestCase):
+
+    def setUp(self):
+        super(FixGptStructsTestCases, self).setUp()
+        self.dev = "/dev/fake"
+        self.config_part_label = "config-2"
+        self.node_uuid = "12345678-1234-1234-1234-1234567890abcxyz"
+
+    def test_fix_gpt_structs_fix_required(self, mock_execute):
+        sgdisk_v_output = """
+Problem: The secondary header's self-pointer indicates that it doesn't reside
+at the end of the disk. If you've added a disk to a RAID array, use the 'e'
+option on the experts' menu to adjust the secondary header's and partition
+table's locations.
+
+Identified 1 problems!
+"""
+        mock_execute.return_value = (sgdisk_v_output, '')
+        execute_calls = [
+            mock.call('sgdisk', '-v', '/dev/fake', run_as_root=True),
+            mock.call('sgdisk', '-e', '/dev/fake', run_as_root=True),
+        ]
+        disk_utils._fix_gpt_structs('/dev/fake', self.node_uuid)
+        mock_execute.assert_has_calls(execute_calls)
+
+    def test_fix_gpt_structs_fix_not_required(self, mock_execute):
+        mock_execute.return_value = ('', '')
+
+        disk_utils._fix_gpt_structs('/dev/fake', self.node_uuid)
+        mock_execute.assert_called_once_with('sgdisk', '-v', '/dev/fake',
+                                             run_as_root=True)
+
+    @mock.patch.object(disk_utils.LOG, 'error', autospec=True)
+    def test_fix_gpt_structs_exc(self, mock_log, mock_execute):
+        mock_execute.side_effect = processutils.ProcessExecutionError
+        self.assertRaisesRegex(exception.InstanceDeployFailure,
+                               'Failed to fix GPT data structures on disk',
+                               disk_utils._fix_gpt_structs,
+                               self.dev, self.node_uuid)
+        mock_execute.assert_called_once_with('sgdisk', '-v', '/dev/fake',
+                                             run_as_root=True)
+        self.assertEqual(1, mock_log.call_count)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class TriggerDeviceRescanTestCase(base.IronicLibTestCase):
+    def test_trigger(self, mock_execute):
+        self.assertTrue(disk_utils.trigger_device_rescan('/dev/fake'))
+        mock_execute.assert_has_calls([
+            mock.call('sync'),
+            mock.call('udevadm', 'settle'),
+            mock.call('partprobe', '/dev/fake', attempts=10,
+                      run_as_root=True),
+            mock.call('udevadm', 'settle'),
+            mock.call('sgdisk', '-v', '/dev/fake',
+                      run_as_root=True),
+        ])
+
+    def test_custom_attempts(self, mock_execute):
+        self.assertTrue(
+            disk_utils.trigger_device_rescan('/dev/fake', attempts=1))
+        mock_execute.assert_has_calls([
+            mock.call('sync'),
+            mock.call('udevadm', 'settle'),
+            mock.call('partprobe', '/dev/fake', attempts=1,
+                      run_as_root=True),
+            mock.call('udevadm', 'settle'),
+            mock.call('sgdisk', '-v', '/dev/fake',
+                      run_as_root=True),
+        ])
+
+    def test_fails(self, mock_execute):
+        mock_execute.side_effect = [('', '')] * 4 + [
+            processutils.ProcessExecutionError
+        ]
+        self.assertFalse(disk_utils.trigger_device_rescan('/dev/fake'))
+        mock_execute.assert_has_calls([
+            mock.call('sync'),
+            mock.call('udevadm', 'settle'),
+            mock.call('partprobe', '/dev/fake', attempts=10,
+                      run_as_root=True),
+            mock.call('udevadm', 'settle'),
+            mock.call('sgdisk', '-v', '/dev/fake',
+                      run_as_root=True),
+        ])
+
+
+BLKID_PROBE = ("""
+/dev/disk/by-path/ip-10.1.0.52:3260-iscsi-iqn.2008-10.org.openstack: """
+               """PTUUID="123456" PTTYPE="gpt"
+               """)
+
+LSBLK_NORMAL = (
+    'UUID="123" BLOCK_SIZE="512" TYPE="vfat" '
+    'PARTLABEL="EFI System Partition" PARTUUID="123456"'
+)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class GetDeviceInformationTestCase(base.IronicLibTestCase):
+
+    def test_normal(self, mock_execute):
+        mock_execute.return_value = LSBLK_NORMAL, ""
+        result = disk_utils.get_device_information('/dev/fake')
+        self.assertEqual(
+            {'UUID': '123', 'BLOCK_SIZE': '512', 'TYPE': 'vfat',
+             'PARTLABEL': 'EFI System Partition', 'PARTUUID': '123456'},
+            result
+        )
+        mock_execute.assert_called_once_with(
+            'lsblk', '/dev/fake', '--pairs', '--bytes', '--ascii', '--nodeps',
+            '--output-all', use_standard_locale=True, run_as_root=True)
+
+    def test_fields(self, mock_execute):
+        mock_execute.return_value = LSBLK_NORMAL, ""
+        result = disk_utils.get_device_information('/dev/fake',
+                                                   fields=['UUID', 'LABEL'])
+        # No filtering on our side, so returning all fake fields
+        self.assertEqual(
+            {'UUID': '123', 'BLOCK_SIZE': '512', 'TYPE': 'vfat',
+             'PARTLABEL': 'EFI System Partition', 'PARTUUID': '123456'},
+            result
+        )
+        mock_execute.assert_called_once_with(
+            'lsblk', '/dev/fake', '--pairs', '--bytes', '--ascii', '--nodeps',
+            '--output', 'UUID,LABEL',
+            use_standard_locale=True, run_as_root=True)
+
+    def test_empty(self, mock_execute):
+        mock_execute.return_value = "\n", ""
+        result = disk_utils.get_device_information('/dev/fake')
+        self.assertEqual({}, result)
+        mock_execute.assert_called_once_with(
+            'lsblk', '/dev/fake', '--pairs', '--bytes', '--ascii', '--nodeps',
+            '--output-all', use_standard_locale=True, run_as_root=True)
+
+
+@mock.patch.object(utils, 'execute', autospec=True)
+class GetPartitionTableTypeTestCase(base.IronicLibTestCase):
+    def test_gpt(self, mocked_execute):
+        self._test_by_type(mocked_execute, 'gpt', 'gpt')
+
+    def test_msdos(self, mocked_execute):
+        self._test_by_type(mocked_execute, 'msdos', 'msdos')
+
+    def test_unknown(self, mocked_execute):
+        self._test_by_type(mocked_execute, 'whatever', 'unknown')
+
+    def _test_by_type(self, mocked_execute, table_type_output,
+                      expected_table_type):
+        parted_ret = PARTED_OUTPUT_UNFORMATTED.format(table_type_output)
+
+        mocked_execute.side_effect = [
+            (parted_ret, None),
+        ]
+
+        ret = disk_utils.get_partition_table_type('hello')
+        mocked_execute.assert_called_once_with(
+            'parted', '--script', 'hello', '--', 'print',
+            use_standard_locale=True, run_as_root=True)
+        self.assertEqual(expected_table_type, ret)
+
+
+PARTED_OUTPUT_UNFORMATTED = '''Model: whatever
+Disk /dev/sda: 450GB
+Sector size (logical/physical): 512B/512B
+Partition Table: {}
+Disk Flags:
+
+Number  Start   End     Size    File system  Name  Flags
+14      1049kB  5243kB  4194kB                     bios_grub
+15      5243kB  116MB   111MB   fat32              boot, esp
+ 1      116MB   2361MB  2245MB  ext4
+'''
+
+
+@mock.patch.object(disk_utils, 'list_partitions', autospec=True)
+@mock.patch.object(disk_utils, 'get_partition_table_type', autospec=True)
+class FindEfiPartitionTestCase(base.IronicLibTestCase):
+
+    def test_find_efi_partition(self, mocked_type, mocked_parts):
+        mocked_parts.return_value = [
+            {'number': '1', 'flags': ''},
+            {'number': '14', 'flags': 'bios_grub'},
+            {'number': '15', 'flags': 'esp, boot'},
+        ]
+        ret = disk_utils.find_efi_partition('/dev/sda')
+        self.assertEqual({'number': '15', 'flags': 'esp, boot'}, ret)
+
+    def test_find_efi_partition_only_boot_flag_gpt(self, mocked_type,
+                                                   mocked_parts):
+        mocked_type.return_value = 'gpt'
+        mocked_parts.return_value = [
+            {'number': '1', 'flags': ''},
+            {'number': '14', 'flags': 'bios_grub'},
+            {'number': '15', 'flags': 'boot'},
+        ]
+        ret = disk_utils.find_efi_partition('/dev/sda')
+        self.assertEqual({'number': '15', 'flags': 'boot'}, ret)
+
+    def test_find_efi_partition_only_boot_flag_mbr(self, mocked_type,
+                                                   mocked_parts):
+        mocked_type.return_value = 'msdos'
+        mocked_parts.return_value = [
+            {'number': '1', 'flags': ''},
+            {'number': '14', 'flags': 'bios_grub'},
+            {'number': '15', 'flags': 'boot'},
+        ]
+        self.assertIsNone(disk_utils.find_efi_partition('/dev/sda'))
+
+    def test_find_efi_partition_not_found(self, mocked_type, mocked_parts):
+        mocked_parts.return_value = [
+            {'number': '1', 'flags': ''},
+            {'number': '14', 'flags': 'bios_grub'},
+        ]
+        self.assertIsNone(disk_utils.find_efi_partition('/dev/sda'))
+
+
+class WaitForDisk(base.IronicLibTestCase):
+
+    def setUp(self):
+        super(WaitForDisk, self).setUp()
+        CONF.set_override('check_device_interval', .01,
+                          group='disk_partitioner')
+        CONF.set_override('check_device_max_retries', 2,
+                          group='disk_partitioner')
+
+    @mock.patch.object(utils, 'execute', autospec=True)
+    def test_wait_for_disk_to_become_available(self, mock_exc):
+        mock_exc.return_value = ('', '')
+        disk_utils.wait_for_disk_to_become_available('fake-dev')
+        fuser_cmd = ['fuser', 'fake-dev']
+        fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1],
+                               run_as_root=True)
+        self.assertEqual(1, mock_exc.call_count)
+        mock_exc.assert_has_calls([fuser_call])
+
+    @mock.patch.object(utils, 'execute', autospec=True,
+                       side_effect=processutils.ProcessExecutionError(
+                           stderr='fake'))
+    def test_wait_for_disk_to_become_available_no_fuser(self, mock_exc):
+        self.assertRaises(exception.IronicException,
disk_utils.wait_for_disk_to_become_available, + 'fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_wait_for_disk_to_become_available_device_in_use_psmisc( + self, mock_exc): + # Test that the device is not available. This version has the 'psmisc' + # version of 'fuser' values for stdout and stderr. + # NOTE(TheJulia): Looks like fuser returns the actual list of pids + # in the stdout output, whereas all other text is returned in + # stderr. + # The 'psmisc' version has a leading space character in stdout. The + # filename is output to stderr + mock_exc.side_effect = [(' 1234 ', 'fake-dev: '), + (' 15503 3919 15510 15511', 'fake-dev:')] + expected_error = ('Processes with the following PIDs are ' + 'holding device fake-dev: 15503, 3919, 15510, ' + '15511. Timed out waiting for completion.') + self.assertRaisesRegex( + exception.IronicException, + expected_error, + disk_utils.wait_for_disk_to_become_available, + 'fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_wait_for_disk_to_become_available_device_in_use_busybox( + self, mock_exc): + # Test that the device is not available. This version has the 'busybox' + # version of 'fuser' values for stdout and stderr. + # NOTE(TheJulia): Looks like fuser returns the actual list of pids + # in the stdout output, whereas all other text is returned in + # stderr. + # The 'busybox' version does not have a leading space character in + # stdout. Also nothing is output to stderr.
+ mock_exc.side_effect = [('1234', ''), + ('15503 3919 15510 15511', '')] + expected_error = ('Processes with the following PIDs are ' + 'holding device fake-dev: 15503, 3919, 15510, ' + '15511. Timed out waiting for completion.') + self.assertRaisesRegex( + exception.IronicException, + expected_error, + disk_utils.wait_for_disk_to_become_available, + 'fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_wait_for_disk_to_become_available_no_device(self, mock_exc): + # NOTE(TheJulia): Looks like fuser returns the actual list of pids + # in the stdout output, whereas all other text is returned in + # stderr. + + mock_exc.return_value = ('', 'Specified filename /dev/fake ' + 'does not exist.') + expected_error = ('Fuser exited with "Specified filename ' + '/dev/fake does not exist." while checking ' + 'locks for device fake-dev. Timed out waiting ' + 'for completion.') + self.assertRaisesRegex( + exception.IronicException, + expected_error, + disk_utils.wait_for_disk_to_become_available, + 'fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_wait_for_disk_to_become_available_dev_becomes_avail_psmisc( + self, mock_exc): + # Test that initially device is not available but then becomes + # available. This version has the 'psmisc' version of 'fuser' values + # for stdout and stderr. + # The 'psmisc' version has a leading space character in stdout.
The + # filename is output to stderr + mock_exc.side_effect = [(' 1234 ', 'fake-dev: '), + ('', '')] + disk_utils.wait_for_disk_to_become_available('fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_wait_for_disk_to_become_available_dev_becomes_avail_busybox( + self, mock_exc): + # Test that initially device is not available but then becomes + # available. This version has the 'busybox' version of 'fuser' values + # for stdout and stderr. + # The 'busybox' version does not have a leading space character in + # stdout. Also nothing is output to stderr. + mock_exc.side_effect = [('1234 5895', ''), + ('', '')] + disk_utils.wait_for_disk_to_become_available('fake-dev') + fuser_cmd = ['fuser', 'fake-dev'] + fuser_call = mock.call(*fuser_cmd, check_exit_code=[0, 1], + run_as_root=True) + self.assertEqual(2, mock_exc.call_count) + mock_exc.assert_has_calls([fuser_call, fuser_call]) + + +class GetAndValidateImageFormat(base.IronicLibTestCase): + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + @mock.patch('os.path.getsize', autospec=True) + def test_happy_raw(self, mock_size, mock_ii): + """Valid raw image""" + CONF.set_override('disable_deep_image_inspection', False) + mock_size.return_value = 13 + fmt = 'raw' + self.assertEqual( + (fmt, 13), + disk_utils.get_and_validate_image_format('/fake/path', fmt)) + mock_ii.assert_not_called() + mock_size.assert_called_once_with('/fake/path') + + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + def test_happy_qcow2(self, mock_ii): + """Valid qcow2 image""" + CONF.set_override('disable_deep_image_inspection', False) + fmt = 'qcow2' + mock_ii.return_value = MockFormatInspectorCls(fmt, 0, True) + self.assertEqual( + (fmt, 0), + 
disk_utils.get_and_validate_image_format('/fake/path', fmt) + ) + mock_ii.assert_called_once_with('/fake/path') + + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + def test_format_type_disallowed(self, mock_ii): + """qcow3 images are not allowed in default config""" + CONF.set_override('disable_deep_image_inspection', False) + fmt = 'qcow3' + mock_ii.return_value = MockFormatInspectorCls(fmt, 0, True) + self.assertRaises(InvalidImage, + disk_utils.get_and_validate_image_format, + '/fake/path', fmt) + mock_ii.assert_called_once_with('/fake/path') + + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + def test_format_mismatch(self, mock_ii): + """ironic_disk_format=qcow2, but we detect it as a qcow3""" + CONF.set_override('disable_deep_image_inspection', False) + fmt = 'qcow2' + mock_ii.return_value = MockFormatInspectorCls('qcow3', 0, True) + self.assertRaises(InvalidImage, + disk_utils.get_and_validate_image_format, + '/fake/path', fmt) + + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + @mock.patch.object(qemu_img, 'image_info', autospec=True) + def test_format_mismatch_but_disabled(self, mock_info, mock_ii): + """qcow3 ironic_disk_format ignored because deep inspection disabled""" + CONF.set_override('disable_deep_image_inspection', True) + fmt = 'qcow2' + fake_info = _get_fake_qemu_image_info(file_format=fmt, virtual_size=0) + qemu_img.image_info.return_value = fake_info + # note the input is qcow3, the output is qcow2: this mismatch is + # forbidden if CONF.disable_deep_image_inspection is False + self.assertEqual( + (fmt, 0), + disk_utils.get_and_validate_image_format('/fake/path', 'qcow3')) + mock_ii.assert_not_called() + mock_info.assert_called_once() + + @mock.patch.object(disk_utils, '_image_inspection', autospec=True) + @mock.patch.object(qemu_img, 'image_info', autospec=True) + def test_safety_check_fail_but_disabled(self, mock_info, mock_ii): + """unsafe image ignored because inspection is 
disabled""" + CONF.set_override('disable_deep_image_inspection', True) + fmt = 'qcow2' + fake_info = _get_fake_qemu_image_info(file_format=fmt, virtual_size=0) + qemu_img.image_info.return_value = fake_info + # note the input is qcow3, the output is qcow2: this mismatch is + # forbidden if CONF.disable_deep_image_inspection is False + self.assertEqual( + (fmt, 0), + disk_utils.get_and_validate_image_format('/fake/path', 'qcow3')) + mock_ii.assert_not_called() + mock_info.assert_called_once() + + +class ImageInspectionTest(base.IronicLibTestCase): + @mock.patch.object(format_inspector, 'detect_file_format', autospec=True) + def test_image_inspection_pass(self, mock_fi): + inspector = MockFormatInspectorCls('qcow2', 0, True) + mock_fi.return_value = inspector + self.assertEqual(inspector, disk_utils._image_inspection('/fake/path')) + + @mock.patch.object(format_inspector, 'detect_file_format', autospec=True) + def test_image_inspection_fail_safety_check(self, mock_fi): + inspector = MockFormatInspectorCls('qcow2', 0, False) + mock_fi.return_value = inspector + self.assertRaises(InvalidImage, disk_utils._image_inspection, + '/fake/path') + + @mock.patch.object(format_inspector, 'detect_file_format', autospec=True) + def test_image_inspection_fail_format_error(self, mock_fi): + mock_fi.side_effect = format_inspector.ImageFormatError + self.assertRaises(InvalidImage, disk_utils._image_inspection, + '/fake/path') diff --git a/ironic_python_agent/tests/unit/test_partition_utils.py b/ironic_python_agent/tests/unit/test_partition_utils.py index 055f72159..1253d025a 100644 --- a/ironic_python_agent/tests/unit/test_partition_utils.py +++ b/ironic_python_agent/tests/unit/test_partition_utils.py @@ -15,17 +15,18 @@ import shutil import tempfile from unittest import mock -from ironic_lib import disk_partitioner -from ironic_lib import disk_utils from ironic_lib import exception from ironic_lib import utils from oslo_concurrency import processutils from oslo_config import cfg 
import requests +from ironic_python_agent import disk_partitioner +from ironic_python_agent import disk_utils from ironic_python_agent import errors from ironic_python_agent import hardware from ironic_python_agent import partition_utils +from ironic_python_agent import qemu_img from ironic_python_agent.tests.unit import base @@ -452,13 +453,15 @@ class WorkOnDiskTestCase(base.IronicAgentTest): @mock.patch.object(utils, 'mkfs', lambda fs, path, label=None: None) @mock.patch.object(disk_utils, 'block_uuid', lambda p: 'uuid') @mock.patch.object(disk_utils, 'populate_image', lambda image_path, - root_path, conv_flags=None: None) + root_path, conv_flags=None, source_format=None, + is_raw=False: None) def test_gpt_disk_label(self): ephemeral_part = '/dev/fake-part1' swap_part = '/dev/fake-part2' root_part = '/dev/fake-part3' ephemeral_mb = 256 ephemeral_format = 'exttest' + source_format = 'raw' self.mock_mp.return_value = {'ephemeral': ephemeral_part, 'swap': swap_part, @@ -471,7 +474,8 @@ class WorkOnDiskTestCase(base.IronicAgentTest): self.swap_mb, ephemeral_mb, ephemeral_format, self.image_path, self.node_uuid, - disk_label='gpt', conv_flags=None) + disk_label='gpt', conv_flags=None, + source_format=source_format, is_raw=True) self.assertEqual(self.mock_ibd.call_args_list, calls) self.mock_mp.assert_called_once_with(self.dev, self.root_mb, self.swap_mb, ephemeral_mb, @@ -491,6 +495,8 @@ class WorkOnDiskTestCase(base.IronicAgentTest): """Test that we create a fat filesystem with UEFI localboot.""" root_part = '/dev/fake-part1' efi_part = '/dev/fake-part2' + source_format = 'format' + self.mock_mp.return_value = {'root': root_part, 'efi system partition': efi_part} self.mock_ibd.return_value = True @@ -501,7 +507,8 @@ class WorkOnDiskTestCase(base.IronicAgentTest): self.swap_mb, self.ephemeral_mb, self.ephemeral_format, self.image_path, self.node_uuid, - boot_mode="uefi") + boot_mode="uefi", + source_format=source_format, is_raw=False) 
self.mock_mp.assert_called_once_with(self.dev, self.root_mb, self.swap_mb, self.ephemeral_mb, @@ -514,8 +521,9 @@ class WorkOnDiskTestCase(base.IronicAgentTest): self.assertEqual(self.mock_ibd.call_args_list, mock_ibd_calls) mock_mkfs.assert_called_once_with(fs='vfat', path=efi_part, label='efi-part') - mock_populate_image.assert_called_once_with(self.image_path, - root_part, conv_flags=None) + mock_populate_image.assert_called_once_with( + self.image_path, root_part, conv_flags=None, + source_format=source_format, is_raw=False) mock_block_uuid.assert_any_call(root_part) mock_block_uuid.assert_any_call(efi_part) mock_trigger_device_rescan.assert_called_once_with(self.dev) @@ -594,6 +602,7 @@ class WorkOnDiskTestCase(base.IronicAgentTest): root_part = '/dev/fake-part3' ephemeral_mb = 256 ephemeral_format = 'exttest' + fmt = 'format' self.mock_mp.return_value = {'ephemeral': ephemeral_part, 'swap': swap_part, @@ -603,11 +612,15 @@ class WorkOnDiskTestCase(base.IronicAgentTest): self.swap_mb, ephemeral_mb, ephemeral_format, self.image_path, self.node_uuid, - disk_label='gpt', conv_flags='sparse') + disk_label='gpt', conv_flags='sparse', + source_format=fmt, + is_raw=False) mock_populate_image.assert_called_once_with(self.image_path, root_part, - conv_flags='sparse') + conv_flags='sparse', + source_format=fmt, + is_raw=False) class CreateConfigDriveTestCases(base.IronicAgentTest): @@ -700,9 +713,9 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): self.dev, run_as_root=True), mock.call('sync'), mock.call('udevadm', 'settle'), - mock.call('partprobe', self.dev, attempts=10, run_as_root=True), + mock.call('partprobe', self.dev, run_as_root=True, attempts=10), + mock.call('udevadm', 'settle'), mock.call('sgdisk', '-v', self.dev, run_as_root=True), - mock.call('udevadm', 'settle'), mock.call('test', '-e', expected_part, attempts=15, delay_on_retry=True) @@ -761,7 +774,8 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): self.dev, run_as_root=True), 
mock.call('sync'), mock.call('udevadm', 'settle'), - mock.call('partprobe', self.dev, attempts=10, run_as_root=True), + mock.call('partprobe', self.dev, run_as_root=True, attempts=10), + mock.call('udevadm', 'settle'), mock.call('sgdisk', '-v', self.dev, run_as_root=True), mock.call('udevadm', 'settle'), @@ -826,9 +840,9 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): self.dev, run_as_root=True), mock.call('sync'), mock.call('udevadm', 'settle'), - mock.call('partprobe', self.dev, attempts=10, run_as_root=True), + mock.call('partprobe', self.dev, run_as_root=True, attempts=10), + mock.call('udevadm', 'settle'), mock.call('sgdisk', '-v', self.dev, run_as_root=True), - mock.call('udevadm', 'settle'), mock.call('test', '-e', expected_part, attempts=15, delay_on_retry=True) @@ -929,7 +943,8 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): parted_call, mock.call('sync'), mock.call('udevadm', 'settle'), - mock.call('partprobe', self.dev, attempts=10, run_as_root=True), + mock.call('partprobe', self.dev, run_as_root=True, attempts=10), + mock.call('udevadm', 'settle'), mock.call('sgdisk', '-v', self.dev, run_as_root=True), mock.call('udevadm', 'settle'), mock.call('test', '-e', expected_part, attempts=15, @@ -1029,7 +1044,8 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): run_as_root=True), mock.call('sync'), mock.call('udevadm', 'settle'), - mock.call('partprobe', self.dev, attempts=10, run_as_root=True), + mock.call('partprobe', self.dev, run_as_root=True, attempts=10), + mock.call('udevadm', 'settle'), mock.call('sgdisk', '-v', self.dev, run_as_root=True), ]) @@ -1224,11 +1240,12 @@ class CreateConfigDriveTestCases(base.IronicAgentTest): # NOTE(TheJulia): trigger_device_rescan is systemwide thus pointless # to execute in the file test case. Also, CI unit test jobs lack sgdisk. 
@mock.patch.object(disk_utils, 'trigger_device_rescan', autospec=True) -@mock.patch.object(utils, 'wait_for_disk_to_become_available', autospec=True) +@mock.patch.object(disk_utils, 'wait_for_disk_to_become_available', + autospec=True) @mock.patch.object(disk_utils, 'is_block_device', autospec=True) @mock.patch.object(disk_utils, 'block_uuid', autospec=True) @mock.patch.object(disk_utils, 'dd', autospec=True) -@mock.patch.object(disk_utils, 'convert_image', autospec=True) +@mock.patch.object(qemu_img, 'convert_image', autospec=True) @mock.patch.object(utils, 'mkfs', autospec=True) # NOTE(dtantsur): destroy_disk_metadata resets file size, disabling it @mock.patch.object(disk_utils, 'destroy_disk_metadata', autospec=True) diff --git a/ironic_python_agent/tests/unit/test_qemu_img.py b/ironic_python_agent/tests/unit/test_qemu_img.py new file mode 100644 index 000000000..8645eb8c6 --- /dev/null +++ b/ironic_python_agent/tests/unit/test_qemu_img.py @@ -0,0 +1,332 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import os +from unittest import mock + +from ironic_lib.tests import base +from ironic_lib import utils +from oslo_concurrency import processutils +from oslo_config import cfg +from oslo_utils import imageutils + +from ironic_python_agent import errors +from ironic_python_agent import qemu_img + + +CONF = cfg.CONF + + +class ImageInfoTestCase(base.IronicLibTestCase): + + @mock.patch.object(os.path, 'exists', return_value=False, autospec=True) + def test_image_info_path_doesnt_exist_disabled(self, path_exists_mock): + CONF.set_override('disable_deep_image_inspection', True) + self.assertRaises(FileNotFoundError, qemu_img.image_info, 'noimg') + path_exists_mock.assert_called_once_with('noimg') + + @mock.patch.object(utils, 'execute', return_value=('out', 'err'), + autospec=True) + @mock.patch.object(imageutils, 'QemuImgInfo', autospec=True) + @mock.patch.object(os.path, 'exists', return_value=True, autospec=True) + def test_image_info_path_exists_disabled(self, path_exists_mock, + image_info_mock, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + qemu_img.image_info('img') + path_exists_mock.assert_called_once_with('img') + execute_mock.assert_called_once_with( + ['env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', 'img', + '--output=json'], prlimit=mock.ANY) + image_info_mock.assert_called_once_with('out', format='json') + + @mock.patch.object(utils, 'execute', return_value=('out', 'err'), + autospec=True) + @mock.patch.object(imageutils, 'QemuImgInfo', autospec=True) + @mock.patch.object(os.path, 'exists', return_value=True, autospec=True) + def test_image_info_path_exists_safe( + self, path_exists_mock, image_info_mock, execute_mock): + qemu_img.image_info('img', source_format='qcow2') + path_exists_mock.assert_called_once_with('img') + execute_mock.assert_called_once_with( + ['env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', 'img', + '--output=json', '-f', 'qcow2'], + prlimit=mock.ANY + ) + image_info_mock.assert_called_once_with('out', 
format='json') + + @mock.patch.object(utils, 'execute', return_value=('out', 'err'), + autospec=True) + @mock.patch.object(imageutils, 'QemuImgInfo', autospec=True) + @mock.patch.object(os.path, 'exists', return_value=True, autospec=True) + def test_image_info_path_exists_unsafe( + self, path_exists_mock, image_info_mock, execute_mock): + # Call without source_format raises + self.assertRaises(errors.InvalidImage, + qemu_img.image_info, 'img') + # safety valve! Don't run **anything** against the image without + # source_format unless specifically permitted + path_exists_mock.assert_not_called() + execute_mock.assert_not_called() + image_info_mock.assert_not_called() + + +class ConvertImageTestCase(base.IronicLibTestCase): + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_disabled(self, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + qemu_img.convert_image('source', 'dest', 'out_format') + execute_mock.assert_called_once_with( + 'qemu-img', 'convert', '-O', + 'out_format', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_flags_disabled(self, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + qemu_img.convert_image('source', 'dest', 'out_format', + cache='directsync', out_of_order=True, + sparse_size='0') + execute_mock.assert_called_once_with( + 'qemu-img', 'convert', '-O', + 'out_format', '-t', 'directsync', + '-S', '0', '-W', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries_disabled(self, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + ret_err = 'qemu: qemu_thread_create: Resource temporarily unavailable' + 
execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + ('', ''), + ] + + qemu_img.convert_image('source', 'dest', 'out_format') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries_alternate_error_disabled(self, exe_mock): + CONF.set_override('disable_deep_image_inspection', True) + ret_err = 'Failed to allocate memory: Cannot allocate memory\n' + exe_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + ('', ''), + ] + + qemu_img.convert_image('source', 'dest', 'out_format') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + exe_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries_and_fails_disabled(self, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + ret_err = 'qemu: qemu_thread_create: Resource temporarily unavailable' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), + ] + + self.assertRaises(processutils.ProcessExecutionError, + 
qemu_img.convert_image, + 'source', 'dest', 'out_format') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_just_fails_disabled(self, execute_mock): + CONF.set_override('disable_deep_image_inspection', True) + ret_err = 'Aliens' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), + ] + + self.assertRaises(processutils.ProcessExecutionError, + qemu_img.convert_image, + 'source', 'dest', 'out_format') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image(self, execute_mock): + qemu_img.convert_image('source', 'dest', 'out_format', + source_format='fmt') + execute_mock.assert_called_once_with( + 'qemu-img', 'convert', '-O', + 'out_format', '-f', 'fmt', + 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_flags(self, execute_mock): + qemu_img.convert_image('source', 'dest', 'out_format', + cache='directsync', out_of_order=True, + sparse_size='0', source_format='fmt') + execute_mock.assert_called_once_with( + 'qemu-img', 'convert', '-O', + 'out_format', '-t', 'directsync', + '-S', '0', '-f', 'fmt', '-W', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + + 
@mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries(self, execute_mock): + ret_err = 'qemu: qemu_thread_create: Resource temporarily unavailable' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + ('', ''), + ] + + qemu_img.convert_image('source', 'dest', 'out_format', + source_format='fmt') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', '-f', 'fmt', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries_alternate_error(self, execute_mock): + ret_err = 'Failed to allocate memory: Cannot allocate memory\n' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + ('', ''), + ] + + qemu_img.convert_image('source', 'dest', 'out_format', + source_format='fmt') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', '-f', 'fmt', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_retries_and_fails(self, execute_mock): + ret_err = 'qemu: qemu_thread_create: Resource temporarily unavailable' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + processutils.ProcessExecutionError(stderr=ret_err), ('', ''), + 
processutils.ProcessExecutionError(stderr=ret_err), + ] + + self.assertRaises(processutils.ProcessExecutionError, + qemu_img.convert_image, + 'source', 'dest', 'out_format', source_format='fmt') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', '-f', 'fmt', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + mock.call('sync'), + convert_call, + mock.call('sync'), + convert_call, + ]) + + @mock.patch.object(utils, 'execute', autospec=True) + def test_convert_image_just_fails(self, execute_mock): + ret_err = 'Aliens' + execute_mock.side_effect = [ + processutils.ProcessExecutionError(stderr=ret_err), + ] + + self.assertRaises(processutils.ProcessExecutionError, + qemu_img.convert_image, + 'source', 'dest', 'out_format', source_format='fmt') + convert_call = mock.call('qemu-img', 'convert', '-O', + 'out_format', '-f', 'fmt', 'source', 'dest', + run_as_root=False, + prlimit=mock.ANY, + use_standard_locale=True, + env_variables={'MALLOC_ARENA_MAX': '3'}) + execute_mock.assert_has_calls([ + convert_call, + ]) diff --git a/releasenotes/notes/image-security-5c23b890409101c9.yaml b/releasenotes/notes/image-security-5c23b890409101c9.yaml new file mode 100644 index 000000000..3bb9a66eb --- /dev/null +++ b/releasenotes/notes/image-security-5c23b890409101c9.yaml @@ -0,0 +1,48 @@ +--- +security: + - | + Ironic-Python-Agent now checks any supplied image format value against + the detected format of the image file and will prevent deployment if + the two values do not match. + - | + In some non-default configurations, images misconfigured as raw despite + actually being in another format may have been mistakenly converted. + Ironic-Python-Agent will no longer perform any conversion of images + whose metadata indicates a raw format.
+ - | + Ironic-Python-Agent *always* inspects any non-raw user image content for + safety before running any qemu-based utilities on the image. This + inspection identifies the format of the image and verifies the overall + safety of the image. Any image using unknown or unsafe features is + explicitly rejected. This can be disabled in both IPA and Ironic by setting + ``[conductor]disable_deep_image_inspection`` to ``True`` for the Ironic + deployment. Image inspection is the primary mitigation for CVE-2024-44082 + being tracked in + `bug 2071740 `_. + Operators may desire to set + ``[conductor]conductor_always_validates_images`` on Ironic conductors to + mitigate the issue before they have upgraded their Ironic-Python-Agent. + - | + Ironic-Python-Agent now explicitly enforces a list of permitted image + types for deployment, defaulting to "raw" and "qcow2". Other image types + may work, but are not explicitly supported and must be enabled. This can + be modified by setting ``[conductor]permitted_image_formats`` for all + Ironic services. +fixes: + - | + Fixes multiple issues in the handling of images related to the + execution of the ``qemu-img`` utility. When using this utility to convert + an unsafe image, a malicious user can extract information from a node + while Ironic-Python-Agent is deploying or converting an image. + Ironic-Python-Agent now inspects all non-raw images for safety, and never + runs qemu-based utilities on raw images. This fix is tracked as + CVE-2024-44082 and `bug 2071740 `_. + - | + Images with metadata indicating a "raw" disk format may have been + transparently converted from another format. Now, these images will have + their exact contents imaged to disk without modification. +upgrade: + - | + Deployers implementing their own ``HardwareManagers`` must audit + their code for unsafe uses of ``qemu-img`` and related methods.
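For reviewers, the validation flow that the ``GetAndValidateImageFormat`` tests and the release notes above describe can be sketched roughly as follows. This is a simplified illustration, not the actual ``disk_utils.get_and_validate_image_format`` implementation: the helper name ``validate_image_format`` and its arguments are stand-ins, and the permitted list simply mirrors the documented ``permitted_image_formats`` default of ``raw,qcow2``.

```python
# Simplified sketch of the deep-image-inspection decision logic described
# in the release notes. Names here are illustrative stand-ins, not the
# real IPA API.

PERMITTED_IMAGE_FORMATS = ['raw', 'qcow2']  # mirrors the documented default


class InvalidImage(Exception):
    """Raised when an image fails format or safety validation."""


def validate_image_format(detected_format, claimed_format,
                          deep_inspection=True):
    """Return the format to pass to qemu-img, or raise InvalidImage.

    detected_format: what deep inspection found in the file itself.
    claimed_format: the disk_format supplied with the image metadata.
    """
    if not deep_inspection:
        # Inspection disabled (disable_deep_image_inspection=True):
        # the claimed format is trusted as-is. The notes above warn
        # against ever doing this.
        return claimed_format
    if detected_format not in PERMITTED_IMAGE_FORMATS:
        # e.g. qcow3 is rejected under the default configuration
        raise InvalidImage('format %s is not permitted' % detected_format)
    if detected_format != claimed_format:
        # metadata/content mismatch blocks the deployment
        raise InvalidImage('claimed %s but detected %s'
                           % (claimed_format, detected_format))
    return detected_format
```

Under this sketch, a qcow2 image claimed as qcow2 passes, a qcow3 image is rejected outright, and a qcow2 image claimed as raw is rejected as a mismatch; only disabling deep inspection bypasses both checks.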