This is an automatically generated patch to ensure unit testing
is in place for all of the tested runtimes for Antelope. Also,
update the template name to a generic one.
See also the PTI in governance [1].
[1]: https://governance.openstack.org/tc/reference/project-testing-interface.html
Change-Id: I91232b32f26842802fc42c1d9e28a6ea791ecb7b
Add file to the reno documentation build to show release notes for
stable/zed.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/zed.
Sem-Ver: feature
Change-Id: Iff9b5efee0b436357d5cae3909a89cd09d5e6070
Extend the ability to skip disks to RAID devices
This allows users to specify the volume name of
a logical device in the skip list, which is then not cleaned
or created again during the create/apply configuration phase.
The volume name can be specified in the target RAID config, provided
the change https://review.opendev.org/c/openstack/ironic-python-agent/+/853182/
passes.
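As a rough illustration, such a skip entry could be expressed in the
node properties like this (the volume name "large" and the exact
structure are assumptions for illustration, not taken from this
change):

    # Hypothetical skip entry: 'volume_name' refers to a logical disk
    # defined in the node's target_raid_config.
    properties = {
        'skip_block_devices': [{'volume_name': 'large'}]
    }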
Story: 2010233
Change-Id: Ib9290a97519bc48e585e1bafb0b60cc14e621e0f
Use the 'volume_name' field from 'target_raid_config' to create logical
disks if it is present.
Do not allow two logical disks to have the same volume name.
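For illustration, a target_raid_config carrying the field might look
like this (sizes and names are made up; only the presence and
uniqueness of 'volume_name' matter here):

    # Hypothetical target_raid_config; each logical disk may carry a
    # 'volume_name', and two disks sharing one are rejected.
    target_raid_config = {
        'logical_disks': [
            {'size_gb': 100, 'raid_level': '1', 'volume_name': 'root_volume'},
            {'size_gb': 500, 'raid_level': '0', 'volume_name': 'data_volume'},
        ]
    }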
Change-Id: If3e4e9f8698ec3e0cb49717f8ed2087d2ba03f2c
In the event a device name is set to contain a RAID device path,
it is possible for the Name and Events field values of mdadm's
detailed output to contain text which inadvertently gets captured and
mapped as component data for the "holder" devices of the RAID set.
Invalid values would then be passed to UEFI methods, causing a
deployment to fail under these circumstances.
We now ignore the Name and Events fields in mdadm output.
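Roughly, the parsing change amounts to something like the sketch below
(function and pattern are illustrative, not the actual IPA code):

    import re

    IGNORED_FIELDS = ('Name', 'Events')

    def parse_component_devices(mdadm_detail_output):
        # Collect /dev/... component paths from `mdadm --detail`,
        # skipping the Name and Events lines, whose values may embed a
        # RAID device path that is not a real holder device.
        components = []
        for line in mdadm_detail_output.splitlines():
            stripped = line.strip()
            if any(stripped.startswith('%s :' % f) for f in IGNORED_FIELDS):
                continue
            match = re.search(r'(/dev/\w+)\s*$', stripped)
            if match:
                components.append(match.group(1))
        return components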
Change-Id: If721dfe1caa5915326482969e55fbf4697538231
Fix minor issues suggested by dtantsur
Add an example of a skip list specification to the documentation
A follow-up patch to I3bdad3cca8acb3e0a69ebb218216e8c8419e9d65
Change-Id: Ic94a33b7bc0572a1cc8f92b330474ec63a173e81
Introduce a field skip_block_devices in properties - this is a list of dictionaries (see the sketch below).
Create a helper function list_block_devices_check_skip_list.
Update tests of erase_devices_express to use the node when calling _list_erasable_devices.
Add tests covering various options of the skip list definition.
Use the helper function in get_os_install_device when the node is cached.
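For illustration, the property might be set like this (the device
hints shown are examples; any hint supported by the implementation
could be used):

    # Hypothetical node properties: block devices matching any entry in
    # the list are excluded from cleaning and from
    # get_os_install_device().
    node.properties['skip_block_devices'] = [
        {'name': '/dev/vdb'},
        {'volume_name': 'large'},
    ]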
Story: 2009914
Change-Id: I3bdad3cca8acb3e0a69ebb218216e8c8419e9d65
The 5 lines of code were extracted from erase_devices_metadata to
_list_erasable_devices, but are now duplicated in both functions.
The variable block_devices is not used in erase_devices_metadata.
Change-Id: I89f56c69d90fb0eb61907d6667266fbd57d333af
Certain filesystems are sometimes used in specialty computing
environments where a shared storage infrastructure or fabric exists.
These filesystems allow for multi-host shared concurrent read/write
access to the underlying block device by *not* locking the entire
device for exclusive use. Generally ranges of the disk are reserved
for each interacting node to write to, and locking schemes are used
to prevent collisions.
These filesystems are common for use cases where high availability
is required or the ability for individual computers to collaborate on
a given workload is critical, such as a group of hypervisors supporting
virtual machines, because this allows for nearly seamless transfer
of workload from one machine to another.
Similar technologies are also used for cluster quorum and cluster
durable state sharing; however, that is not specifically considered
in scope.
Where things get difficult is that the entire device is not
exclusively locked with the storage fabrics, and in some cases locking
is handled by a Distributed Lock Manager on the network, or via special
sector interactions amongst the cluster members which understand
and support the filesystem.
As a result of this IO/interaction model, an Ironic-Python-Agent
performing cleaning can effectively destroy the cluster just by
attempting to clean storage which it perceives as attached locally.
This is not IPA's fault; often this case occurs when a Storage
Administrator forgot to update LUN masking or volume settings on
a SAN as it relates to an individual host in the overall
computing environment. The net result of one node cleaning the
shared volume may include restoration from snapshot, backup
storage, or may ultimately cause permanent data loss, depending
on the environment and the usage of that environment.
Included in this patch:
- IBM GPFS - Can be used on a shared block device... apparently according
to IBM's documentation. The standard use of GPFS is more Ceph
like in design... however GPFS is also a specially licensed
commercial offering, so it is a red flag if this is
encountered, and should be investigated by the environment's
systems operator.
- Red Hat GFS2 - Is used with shared common block devices in clusters.
- VMware VMFS - Is used with shared SAN block devices, as well as
local block devices. With shared block devices,
ranges of the disk are locked instead of the whole
disk, and the ranges are mapped to virtual machine
disk interfaces.
It is unknown, due to lack of information, if this
will detect and prevent erasure of VMFS logical
extent volumes.
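Conceptually, the safeguard behaves roughly like the sketch below (the
filesystem identifiers and helper name are illustrative and may not
match the real implementation):

    CLUSTER_FS_TYPES = {'gpfs', 'gfs2', 'VMFS'}

    def guard_shared_device(device, detected_fs_types):
        # Refuse to erase a device carrying a shared-storage filesystem
        # signature and point the operator at the likely cause.
        found = CLUSTER_FS_TYPES.intersection(detected_fs_types)
        if found:
            raise RuntimeError(
                'Shared-storage filesystem(s) %s detected on %s; refusing '
                'to erase. Check SAN LUN masking/volume settings for this '
                'host before cleaning.'
                % (', '.join(sorted(found)), device))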
Co-Authored-by: Jay Faulkner <jay@jvf.cc>
Change-Id: Ic8cade008577516e696893fdbdabf70999c06a5b
Story: 2009978
Task: 44985
IPA dropped support for Python2 long ago,
and now Python2 is not even available in newer distros,
breaking installation of IPA binary dependencies.
Depends-On: https://review.opendev.org/c/openstack/ironic/+/848345
Change-Id: I75a618f94de58f6de2bd96b37de1894bb0e61998
Currently, if smartctl is not found by IPA, it will silently skip ATA
secure erase and proceed to shred (if enabled). This is supposedly for
backwards compatibility, but is quite hard to diagnose.
This change adds a warning message to make it more obvious what is
happening.
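Roughly, the behaviour becomes (an illustrative sketch, not the exact
IPA code):

    import logging
    import shutil

    LOG = logging.getLogger(__name__)

    def _ata_erase_available():
        # Warn loudly instead of silently falling through to shred.
        if shutil.which('smartctl') is None:
            LOG.warning('smartctl is not available; skipping ATA secure '
                        'erase and falling back to shred if it is enabled.')
            return False
        return True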
TrivialFix
Change-Id: I03a381e99de79f201ec7e9a388777c3d48457e93
We don't need it anymore as we don't support Python < 3.8.
Also, it was removed from global requirements, so it breaks the
requirements check.
Change-Id: Ia12cbef3515f823fdd627a36020cf7801bf6d734
Use pure json instead of jsonutils.
Borrow the encode function from oslo.serialization to be used in the
utils module.
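The helper is presumably a small text-to-bytes shim along these lines
(an assumption about its shape, not the copied code):

    def encode(text, encoding='utf-8'):
        # Return bytes for str input; pass bytes through unchanged.
        if isinstance(text, str):
            return text.encode(encoding)
        return text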
Change-Id: Ied9a2259a4329a86b4f0853bd1fb187563c0a036
The udev prefix is DM_, not ID_, for device-mapper devices. On top of
that, they don't have short serials (or at least don't always have them).
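A hypothetical lookup reflecting this might look like:

    def get_device_serial(udev_properties, is_devicemapper):
        # Device-mapper devices expose DM_* properties and usually lack
        # a short serial, so query them differently (illustrative only).
        if is_devicemapper:
            return udev_properties.get('DM_SERIAL')
        return (udev_properties.get('ID_SERIAL_SHORT')
                or udev_properties.get('ID_SERIAL'))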
Change-Id: I5b6075fbff72201a2fd620f789978acceafc417b
The lsblk output is available in JSON format since version 2.27 of
util-linux [1].
[1]: https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.27/v2.27-ReleaseNotes
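For illustration, the JSON form can be consumed without any custom
parsing (the column list here is an example, not the exact set IPA
requests):

    import json
    import subprocess

    def list_block_devices():
        # -J (JSON output) requires util-linux >= 2.27; -b reports
        # sizes in bytes.
        out = subprocess.check_output(
            ['lsblk', '-J', '-b', '-o', 'NAME,TYPE,SIZE,ROTA'])
        return json.loads(out)['blockdevices']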
Change-Id: I0c5812736b7a320cc4ecc333f80db70eb78cc76d
It has been removed from Zed and we're now testing with Python 3.9 in
tinyipa and Python 3.8 in CentOS Stream 9.
Change-Id: I028121d593b910e585f44e464c7fcb3e635420e8
Previously IPA was set to enforce a minimum version of 3.36.0, which
was a Python2-era build that did not support more recent
versions of Python. Given that relationship is realistically impossible,
move the minimum to something released a bit more recently, in this
case 4.6.1.
Change-Id: Ibfbcc1196eb9f583ba9d79bae7988d64de514f6d
Removes multipath base devices from consideration by
default, and instead allows the device-mapper device
managed by multipath to be picked up and utilized
instead.
In effect, allowing us to ignore standby paths *and*
leverage multiple concurrent IO paths if so offered
via ALUA.
In reality, anyone who has previously built IPA with
multipath tooling might not have encountered issues
because they used Active/Active SAN storage
environments. Those would have worked because the IO lock
would have been exchanged between controllers and paths.
However, Active/Passive environments will block passive
paths from access, ultimately preventing new locks from
being established without proper negotiation. This ultimately
requires multipathing *and* the agent to be smart enough
to know to disqualify underlying paths to backend storage
volumes.
An additional benefit of this is that active/active MPIO devices
will, as long as ``multipath`` is present inside the ramdisk,
no longer possibly result in duplicate IO wipes occurring
across numerous devices.
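Conceptually, base path devices are disqualified with a check along
these lines (a sketch; the real code and its error handling may
differ):

    import subprocess

    def is_multipath_path_member(device):
        # `multipath -c <dev>` exits 0 when the device is a path in a
        # multipath map; such base devices are skipped so only the
        # device-mapper device on top of them is used for IO.
        try:
            subprocess.check_call(['multipath', '-c', device])
            return True
        except (subprocess.CalledProcessError, FileNotFoundError):
            return False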
Story: #2010003
Task: #45108
Resolves: rhbz#2076622
Resolves: rhbz#2070519
Change-Id: I0fd6356f036d5ff17510fb838eaf418164cdfc92