A Python agent for provisioning and deprovisioning Bare Metal servers.
Julia Kreger beb7484858 Guard shared device/cluster filesystems
Certain filesystems are sometimes used in specialty computing
environments where a shared storage infrastructure or fabric exists.
These filesystems allow for multi-host shared concurrent read/write
access to the underlying block device by *not* locking the entire
device for exclusive use. Generally ranges of the disk are reserved
for each interacting node to write to, and locking schemes are used
to prevent collisions.

These filesystems are common in use cases where high availability is
required, or where the ability of individual computers to collaborate
on a given workload is critical, such as a group of hypervisors
supporting virtual machines, because they allow nearly seamless
transfer of a workload from one machine to another.

Similar technologies are also used for cluster quorum and durable
cluster state sharing; however, that is not specifically considered
in scope here.

Things get difficult because the entire device is not exclusively
locked on these storage fabrics; in some cases locking is handled by a
Distributed Lock Manager on the network, or via special sector
interactions among the cluster members which understand and support
the filesystem.

As a result of this I/O and interaction model, an Ironic-Python-Agent
performing cleaning can effectively destroy the cluster just by
attempting to clean storage which it perceives as attached locally.
This is not IPA's fault; this case often occurs when a storage
administrator forgot to update LUN masking or volume settings on
a SAN as they relate to an individual host in the overall
computing environment. The net result of one node cleaning the
shared volume may require restoration from snapshot or backup
storage, or may ultimately cause permanent data loss, depending
on the environment and the usage of that environment.

Included in this patch:
- IBM GPFS - Can be used on a shared block device, apparently, according
             to IBM's documentation. The standard use of GPFS is more
             Ceph-like in design; however, GPFS is also a specially
             licensed commercial offering, so it is a red flag if
             encountered, and should be investigated by the environment's
             systems operator.
- Red Hat GFS2 - Is used with shared common block devices in clusters.
- VMware VMFS - Is used with shared SAN block devices, as well as
                local block devices. With shared block devices,
                ranges of the disk are locked instead of the whole
                disk, and the ranges are mapped to virtual machine
                disk interfaces.
                It is unknown, due to lack of information, if this
                will detect and prevent erasure of VMFS logical
                extent volumes.
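The guard itself can be pictured as a pre-flight check before any erase step: read the filesystem signature from the block device and abort cleaning when a known cluster filesystem is found. A minimal sketch, assuming `blkid` is available on the ramdisk; the signature names, function names, and error handling here are illustrative, not the exact strings or code paths IPA uses:

```python
import subprocess

# Filesystem signatures that indicate shared/cluster storage. The names
# follow common blkid FSTYPE output and are illustrative; they are not
# necessarily the exact strings IPA matches.
GUARDED_FS_TYPES = {'gpfs', 'gfs2', 'VMFS_volume_member'}


def is_guarded_fs(fs_type):
    """Return True when fs_type names a shared/cluster filesystem."""
    return fs_type in GUARDED_FS_TYPES


def get_fs_type(device):
    """Return the filesystem type reported by blkid, or None."""
    try:
        out = subprocess.check_output(
            ['blkid', '-o', 'value', '-s', 'TYPE', device])
        return out.decode().strip() or None
    except subprocess.CalledProcessError:
        # blkid exits non-zero when no signature is found.
        return None


def guard_against_cluster_fs(device):
    """Abort cleaning rather than erase a shared/cluster filesystem."""
    fs_type = get_fs_type(device)
    if is_guarded_fs(fs_type):
        raise RuntimeError(
            'Refusing to erase %s: cluster filesystem %r detected. '
            'Verify SAN LUN masking for this host before retrying.'
            % (device, fs_type))
```

An operator who genuinely intends to wipe such a device would clear the signature out-of-band, or via an explicit override, before cleaning runs.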

Co-Authored-by: Jay Faulkner <jay@jvf.cc>
Change-Id: Ic8cade008577516e696893fdbdabf70999c06a5b
Story: 2009978
Task: 44985
2022-07-19 13:24:03 -07:00

Ironic Python Agent

Team and repository tags

Overview

An agent for controlling and deploying Ironic-controlled bare metal nodes.

The ironic-python-agent works with the agent driver in Ironic to provision the node. Starting with ironic-python-agent running on a ramdisk on the unprovisioned node, Ironic makes API calls to ironic-python-agent to provision the machine. This allows for greater control and flexibility of the entire deployment process.
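Those API calls amount to JSON-over-HTTP command dispatch against the agent running in the ramdisk. A hedged sketch, assuming the agent accepts commands as a JSON body with `name` and `params` fields at a `/v1/commands` endpoint; the address and command name below are hypothetical placeholders:

```python
import json
from urllib import request

AGENT_URL = 'http://192.0.2.10:9999'  # ramdisk address: hypothetical


def build_command(name, params):
    """Serialize an agent command body: {'name': ..., 'params': ...}."""
    return json.dumps({'name': name, 'params': params})


def send_command(name, params):
    """POST a command to the agent and return the decoded response."""
    req = request.Request(
        AGENT_URL + '/v1/commands',
        data=build_command(name, params).encode('utf-8'),
        headers={'Content-Type': 'application/json'})
    with request.urlopen(req) as resp:
        return json.load(resp)
```

In the real flow it is the Ironic conductor, not an operator script, that issues these commands and polls for their completion.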

The ironic-python-agent may also be used with the original Ironic pxe drivers as of the Kilo OpenStack release.

Building the IPA deployment ramdisk

For more information see the Image Builder section of the Ironic Python Agent developer guide.

Using IPA with devstack

This is covered in the Deploying Ironic with DevStack section of the Ironic dev-quickstart guide.

Project Resources

Project status, features, and bugs are tracked on StoryBoard:

https://storyboard.openstack.org/#!/project/947

Developer documentation can be found here:

https://docs.openstack.org/ironic-python-agent/latest/

Release notes for the project are available at:

https://docs.openstack.org/releasenotes/ironic-python-agent/

Source code repository for the project is located at:

https://opendev.org/openstack/ironic-python-agent/

IRC channel:

#openstack-ironic

To contribute, start here: OpenStack: How to contribute.