Changes a neutron call to be project scoped, as system scope can't
create the resource, and removes the unset which no longer makes sense
now that I86ffa9cd52454f1c1c72d29b3a0e0caa3e44b829 has merged, removing
the legacy vars from devstack.
Also renames the internal-use setting of OS_CLOUD to IRONIC_OS_CLOUD.
Some services were previously still working with system scope, or in
some sort of mixed state, because some of the environment variables
were still present; those have now been removed from devstack.
This change *does* explicitly set an OS_CLOUD variable as well on
the base ironic job. This is because things like grenade for Xena
will expect the variable to be present.
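For illustration only, a minimal sketch of how the base job could carry
the variable (the job fragment and cloud name below are assumptions,
not the literal change):
- job:
    name: ironic-base
    vars:
      devstack_localrc:
        # Assumed value; kept so older branches (e.g. grenade for
        # Xena) still find OS_CLOUD in the environment.
        OS_CLOUD: devstack-system-admin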
Depends-On: https://review.opendev.org/c/openstack/devstack/+/818449
Change-Id: I912527d7396a9c6d8ee7e90f0c3fd84461d443c1
Change the default boot mode to UEFI, as discussed at the end of the
Wallaby release cycle and agreed upon a very long time ago by the
Ironic community.
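As a rough sketch, assuming the boot mode ends up in ironic.conf via
the [deploy] default_boot_mode option, the new default looks roughly
like the following job fragment (the fragment itself is illustrative):
- job:
    name: ironic-base
    vars:
      devstack_local_conf:
        post-config:
          $IRONIC_CONF_FILE:
            deploy:
              # uefi instead of the previous bios default
              default_boot_mode: uefi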
Change-Id: I6d735604d56d1687f42d0573a2eed765cbb08aec
Neutron's firewall initialization with OVS seems to be the source of
our pain with ports not being found by ironic jobs. This is because
firewall startup errors crash the agent with a RuntimeError while it
is deep in its initial __init__ sequence.
This ultimately seems to be rooted in communication with OVS itself,
but perhaps the easiest solution is to just disable the firewall.
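A hedged sketch of what disabling it could look like, assuming the
noop firewall driver is acceptable for these jobs and that the ml2/OVS
agent config file is reachable as /$Q_PLUGIN_CONF_FILE (both are
assumptions about the exact mechanism):
- job:
    name: ironic-base
    vars:
      devstack_local_conf:
        post-config:
          /$Q_PLUGIN_CONF_FILE:
            securitygroup:
              # Skip the OVS firewall init path that crashes the agent
              firewall_driver: noop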
Related: https://bugs.launchpad.net/neutron/+bug/1944201
Change-Id: I303989a825a7e35f1cb7b401134fd63553f6791c
Observed an OOM incident causing
ironic-tempest-ipa-partition-pxe_ipmitool to fail.
One VM started; the other appeared to try to start twice, but both
times it stopped shortly into the run, and the base OS recorded an OOM
failure.
It appears the actual QEMU memory footprint when configured at 3GB is
upwards of 4GB, which is obviously too big to fit in our 8GB VM
instance.
Dialing back slightly, in hopes it stabilizes the job.
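The knob involved is the devstack VM RAM size; a sketch of dialing it
back might look like this (2800 is only a placeholder, not the value
actually chosen):
- job:
    name: ironic-base
    vars:
      devstack_localrc:
        # Placeholder: somewhat below the previous 3072 MB so the
        # extra QEMU overhead still fits on an 8GB test node.
        IRONIC_VM_SPECS_RAM: 2800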
Change-Id: Id8cef722ed305e96d89b9960a8f60f751f900221
This is part of the work to add jobs which confirm ironic works with
FIPS enabled, but this change is also appropriate for non-FIPS jobs.
Change-Id: I4af4e811104088d28d7be6df53c26e72db039e08
The lower-constraints test was removed because of an issue where pip
could not correctly determine the required package versions to install,
ending in an almost infinite loop that would result in timeout, failure,
and general mayhem.
Recently the issue has been fixed and, if properly configured, the
lower-constraints test can provide a good indication of which minimum
versions are required to support the current code.
This patch adds the test back to the current development branch.
The long term goal is to keep the lower-constraints file in the stable
branches, but remove the test job to avoid issues during backports.
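A sketch of what re-adding the job could look like in the project
definition (the pipeline layout here is assumed, not copied from the
change):
- project:
    check:
      jobs:
        - openstack-tox-lower-constraints
    gate:
      jobs:
        - openstack-tox-lower-constraints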
Change-Id: I5fff32ec5dd1a045116bcf02349650b1f5e3a196
The devstack default limit enforcement for glance is 1GB, and
unfortunately this is too small for larger images such as CentOS, which
include hardware firmware blobs that drivers need in order to load/run
on baremetal.
Sets ironic-base to 5GB, and updates examples accordingly.
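Assuming the limit is exposed to jobs as a devstack variable expressed
in megabytes (the variable name here is an assumption about what the
depended-on devstack change provides), the setting is roughly:
- job:
    name: ironic-base
    vars:
      devstack_localrc:
        # Assumed variable name; 5000 MB ~= the 5GB mentioned above
        GLANCE_LIMIT_IMAGE_SIZE_TOTAL: 5000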
Depends-On: https://review.opendev.org/c/openstack/devstack/+/801309
Change-Id: I41294eb571d07a270a69e5b816cdbad530749a94
Adds support to the ironic devstack plugin to configure
ironic to be used in a scope-enforcing mode in line with
the Secure RBAC effort. This change also defines two new
integration jobs *and* changes one of the existing
integration jobs.
In these cases, we're testing functional CRUD interactions,
integration with nova, and integration with ironic-inspector.
As other services come online with their plugins and
devstack code able to set the appropriate scope
enforcement configuration, we will be able to change the
overall operating default for all of ironic's jobs and
remove the differences.
This effort identified issues in ironic-tempest-plugin,
tempest, and devstack, and required plugin support in
ironic-inspector as well; it is ultimately required
to ensure we do not break Secure RBAC.
Luckily, it all works.
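The enforcement itself comes down to the standard oslo.policy switches
in ironic.conf; a minimal sketch of what a scope-enforcing job turns on
(the job name below is illustrative only):
- job:
    name: ironic-tempest-rbac-scope-enforced
    vars:
      devstack_local_conf:
        post-config:
          $IRONIC_CONF_FILE:
            oslo_policy:
              # Reject tokens with the wrong scope and use the new
              # secure-RBAC default policies.
              enforce_scope: true
              enforce_new_defaults: true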
Change-Id: Ic40e47cb11a6b6e9915efcb12e7912861f25cae7
Utilizes a simple bifrost job to stand up a basic ironic deployment
where fake nodes will be created, and a simple benchmark will then be
executed.
Change-Id: I33e29ee303b2cf4987b36c7aad2482bc3673f669
Currently, the Zuul jobs in zuul.d/ironic-jobs.yaml list their
required-projects items like this (without a leading hostname)
required-projects:
- openstack/ironic
- openstack/ABCD
but not like this (with a leading hostname)
required-projects:
- opendev.org/openstack/ironic
- opendev.org/openstack/ABCD
With the first format, if we have two openstack/ironic entries in
Zuul's tenant configuration file (a Zuul tenant config file in a 3rd
party CI environment usually has 2 entries: one to fetch upstream
code, another for the Gerrit event stream that triggers Zuul jobs),
we'll see a warning in the zuul-scheduler log
Project name 'openstack/ironic' is ambiguous,
please fully qualify the project with a hostname
With the second format, that warning doesn't appear, and a Zuul
running in a 3rd party CI environment can reuse the Zuul jobs in
zuul.d/ironic-jobs.yaml in its own jobs.
This commit modifies all Zuul jobs in zuul.d/ironic-jobs.yaml
to use the second format.
Story: 2008724
Task: 42068
Change-Id: I85adf3c8b3deaf0d1b2d58dcd82724c7e412e2db
A recent magnum bug (https://storyboard.openstack.org/#!/story/2008494)
when running with uwsgi has raised an interesting question of whether
Ironic is orphaning rpc clients or not. Ironic's code is slightly
different but also very similar, and similar bugs have been observed in
the past where the python garbage collection never cleans up the old
connection objects and effectively orphans the connection.
We should likely at least try to collect some of this information in
our CI jobs so we can determine whether this is the case. Hence this
change, which attempts to collect that data after running CI.
Change-Id: I4e80c56449c0b3d07b160ae6c933a7a16c63c5c5
As discussed during the upstream ironic community meeting on
Monday Dec 14 2020, the lower-constraints job is being removed.
Change-Id: I116d99014a7bf77ca77b796ea3b759800dd808ce
The non-base job is designed for the integrated gate and may have
unnecessary side effects. It has recently been overriding the OVS agent
bridge settings, breaking our job.
Make the job voting again.
Change-Id: Ied8cafd32c3e634d498467ebe878a411f0b24e6d
* move pep8 dependencies from test-requirements to tox.ini,
they're not needed there and are hard to constrain properly.
* add oslo.cache to l-c to avoid bump of dependencies
Change-Id: Ia5330f3d5778ee62811da081c28a16965e512b55
All the tox jobs are based on openstack-tox; we should convert
ironic-tox-unit-with-driver-libs too.
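A sketch of the converted job, assuming the tox environment keeps its
current name:
- job:
    name: ironic-tox-unit-with-driver-libs
    parent: openstack-tox
    description: Unit tests with driver-specific libraries installed.
    vars:
      # Assumed to match the existing tox env name
      tox_envlist: unit-with-driver-libs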
Change-Id: I20836d586edccfb8cd8fed1f3a89f1497ff96943
The standalone job at present has a high chance of failure
due to two separate things occurring:
1) The deployed nodes from raid tests can be left in a dirty state
as the raid configuration remains and is chosen as the root
device for the next deployment. If this is chosen by any job,
such as rescue or a deployment test that attempts to log in,
then the job fails with an inability to ssh. The fix for this is
in the ironic-tempest-plugin, but we need to get other fixes
in to stabilize the gate first.
https://review.opendev.org/#/c/757141/
2) Long-running scenarios run in cleaning, such as deployment with
RAID in the standalone suite, can encounter conditions where
the conductor tries to send the next command along before the
present configuration command has completed. An example is when
downloading the image is still running while a heartbeat
has occurred in the background, and the conductor then seeks
to perform a second action. This then causes the entire
deployment to fail, even though the condition was transitory.
This should be a relatively easy fix.
https://review.opendev.org/759906
Change-Id: I6b02be0fa353daac90abf2b1576800c0710f651e
We're seeing cases where cleaning barely manages to finish after
a 2nd PXE retry, failing a job.
Also make the PXE retry timeout consistent between the CI and
local devstack installations.
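Assuming the knob in question is ironic's [pxe]boot_retry_timeout, a
sketch of keeping CI and local devstack consistent (600 is a
placeholder, not the value chosen by this change):
- job:
    name: ironic-base
    vars:
      devstack_local_conf:
        post-config:
          $IRONIC_CONF_FILE:
            pxe:
              # Placeholder: seconds before a PXE boot is retried
              boot_retry_timeout: 600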
Change-Id: I6dc7a91d1a482008cf4ec855a60a95ec0a1abe28
As per the Victoria cycle testing runtime and community goal,
we need to migrate upstream CI/CD to Ubuntu Focal (20.04).
We are keeping a few jobs running on the bionic nodeset until
https://storyboard.openstack.org/#!/story/2008185 is fixed,
otherwise the base devstack jobs switching to Focal would block
the gate.
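In job terms this is mostly a nodeset swap; a sketch using the shared
nodesets (the job names below are illustrative):
- job:
    name: ironic-base
    nodeset: openstack-single-node-focal

- job:
    # Held back until story 2008185 is resolved
    name: ironic-grenade
    nodeset: openstack-single-node-bionic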
Change-Id: I1106c5c2b400e7db899959550eb1dc92577b319d
Story: #2007865
Task: #40188
It was supposed to be made voting shortly after the split, but we
sort of forgot. It provides coverage for things (like ansible deploy)
that we used to have voting jobs for.
Change-Id: Id99586d5e01b940089d55c133d9181db05bfdc7e
This change marks the iscsi deploy interface as deprecated and
stops enabling it by default.
An online data migration is provided for iscsi->direct, provided that:
1) the direct deploy is enabled,
2) image_download_source!=swift.
The CI coverage for iscsi deploy is left only on standalone jobs.
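For the standalone coverage, the relevant knob is the list of enabled
deploy interfaces; a sketch of keeping iscsi alongside direct there
(the devstack variable name and job fragment are assumptions):
- job:
    name: ironic-standalone
    vars:
      devstack_localrc:
        # Assumed plugin variable feeding
        # [DEFAULT]enabled_deploy_interfaces in ironic.conf
        IRONIC_ENABLED_DEPLOY_INTERFACES: "iscsi,direct"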
Story: #2008114
Task: #40830
Change-Id: I4a66401b24c49c705861e0745867b7fc706a7509
The minimum amount of disk space on CI test nodes
may be approximately 60GB on /opt, now with only 1GB
of available swap space by default.
This means we're constrained on the number of VMs and
their disk storage capacity in some cases.
Change-Id: Ia6dac22081c92bbccc803f233dd53740f6b48abb
Infra's disk/swap availability has apparently been reduced with the
new focal node sets, such that we have ~60GB of disk space and only
1GB of swap.
If we configure more swap, then naturally we take away from the space
available for VMs as well.
As such, we should be able to complete grenade with only four
instances, I hope.
Change-Id: I36f8fc8130ed914e8a2c2a11c9679144d931ad73
Currently ironic-base defaults to 2 VMs and our tests try to
introspect all of them. This puts unnecessary strain on the CI
systems; return the number back to 1.
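A sketch of the change, using the devstack plugin's VM count variable:
- job:
    name: ironic-base
    vars:
      devstack_localrc:
        # One test VM is enough for the introspection tests and keeps
        # resource usage down on CI nodes.
        IRONIC_VM_COUNT: 1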
Change-Id: I820bba1347954b659fd7469ed542f98ef0a6eaf0