None of the existing ironic-python-agent integer config options included
min or max values. Added appropriate min/max values for the integer
config options.
Two of the integer options are for ports (listen_port and
advertise_port). These were changed to use the more appropriate
oslo_config cfg.PortOpt instead of cfg.IntOpt. PortOpt has the proper
min and max values built in.
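For illustration, the 0-65535 bounds that PortOpt enforces can be sketched with the stdlib alone (a hypothetical helper, not oslo_config code; oslo_config performs this check internally, so the options need no explicit min/max):

```python
# Hypothetical stdlib sketch of the range check cfg.PortOpt applies
# to values such as listen_port and advertise_port.
def validate_port(value):
    port = int(value)
    if not 0 <= port <= 65535:
        raise ValueError('port %d is outside the range 0-65535' % port)
    return port
```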
Change-Id: I98709a45d099aea62c9973beb6817591cb445a9c
Story: 1731950
You can generate this error if, after having provisioned a node
using GPT partitioning, you clean its MBR with, say:
dd if=/dev/zero bs=1024 count=1 of=/dev/sda
and then clean up all Ironic/Bifrost information to get the node
reprovisioned.
In this case sgdisk -Z returns an error, and the last_error field
in Ironic contains:
Error writing image to device: Writing image to device
/dev/sda failed with exit code 2
Caution: invalid main GPT header, but valid backup;
regenerating main header\nfrom backup!\n
\nInvalid partition data!\
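One way to tolerate this failure mode can be sketched as follows. This is a hypothetical illustration only (the retry logic and function name are assumptions, not the actual IPA change); the injectable runner lets the flow be exercised without touching a real disk:

```python
import subprocess

def zap_partition_table(device, runner=subprocess.run):
    """Destroy partition data on *device*, retrying once when sgdisk
    reports a corrupt main GPT header (exit code 2)."""
    result = runner(['sgdisk', '-Z', device])
    if result.returncode == 2:
        # sgdisk regenerates the main header from the backup on the
        # first pass; a second zap normally succeeds.
        result = runner(['sgdisk', '-Z', device])
    return result.returncode == 0
```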
Change-Id: Ib617737fff5e40cb376edda0232e0726d9c71231
hdparm versions prior to 9.51 interpret the value NULL as a
password with the string value "NULL".
Example output of hdparm with NULL password:
[root@localhost ~]# hdparm --user-master u --security-unlock NULL /dev/sda
security_password="NULL"
/dev/sda:
Issuing SECURITY_UNLOCK command, password="NULL", user=user
SECURITY_UNLOCK: Input/output error
Example output of hdparm with "" as password:
[root@localhost ~]# hdparm --user-master u --security-unlock "" /dev/sda
security_password=""
/dev/sda:
Issuing SECURITY_UNLOCK command, password="", user=user
Note the values of security_password in the output above. The output
was observed on a CentOS 7 system, which ships hdparm 9.43 in the
official repositories.
This change attempts to unlock the drive with the empty string if an
unlock with NULL was unsuccessful.
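The fallback can be sketched like this (a hypothetical illustration, not the literal IPA code; the runner argument exists so the flow can be tested without real hardware):

```python
import subprocess

def security_unlock(device, runner=subprocess.run):
    """Try to unlock *device* with the NULL master password first,
    then fall back to the empty string for old hdparm builds that
    send the literal string "NULL"."""
    for password in ('NULL', ''):
        result = runner(['hdparm', '--user-master', 'u',
                         '--security-unlock', password, device])
        if result.returncode == 0:
            return True
    return False
```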
Issuing a security-unlock will cause a state transition from SEC4
(security enabled, locked, not frozen) to SEC5 (security enabled,
unlocked, not frozen). In order to check that a password unlock attempt
was successful it makes sense to check that the drive is in the unlocked
state (a necessary condition for SEC5). Only after all unlock attempts
fail, do we consider the drive out of our control.
The conditions to check the drive is in the right state have been
adjusted to ensure that the drive is in the SEC5 state prior to issuing
a secure erase. Previously, on the "recovery from previous fail" path,
the security state was asserted to be "not enabled" after an unlock -
this could never have been the case.
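The unlocked-state check can be illustrated by parsing the security section of `hdparm -I` output (a sketch only; the exact layout varies between hdparm versions, and the sample used below is an assumption):

```python
def security_flags(hdparm_i_output):
    """Parse the ATA security flags out of `hdparm -I` output."""
    flags = {}
    for line in hdparm_i_output.splitlines():
        token = line.strip()
        for name in ('enabled', 'locked', 'frozen'):
            # Lines read either "enabled" or "not  enabled", etc.
            if token.endswith(name):
                flags[name] = not token.startswith('not')
    return flags

def is_sec5(hdparm_i_output):
    """SEC5: security enabled, unlocked, not frozen."""
    flags = security_flags(hdparm_i_output)
    return (flags.get('enabled') is True
            and flags.get('locked') is False
            and flags.get('frozen') is False)
```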
A good overview of the ATA security states can be found here:
http://www.admin-magazine.com/Archive/2014/19/Using-the-ATA-security-features-of-modern-hard-disks-and-SSDs
Change-Id: Ic24b706a04ff6c08d750b9e3d79eb79eab2952ad
Story: 2001762
Task: 12161
Story: 2001763
Task: 12162
It seems the udhcpc script is not executable, and the lack of a
sleep causes tinyipa to fail to acquire an IP in multi-tenant
environments.
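A sketch of the two fixes (the script path is an assumption, not the real tinyipa layout): make the udhcpc hook script executable and give the interface time to obtain a lease.

```shell
# Assumed path for the udhcpc hook script; a temp file stands in here
# so the sketch can run anywhere.
script="${UDHCPC_SCRIPT:-$(mktemp)}"
chmod +x "$script"
# udhcpc -i eth0 -s "$script"   # real invocation, needs busybox udhcpc
sleep 1                         # brief pause so the lease can arrive
```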
Story: #2002024
Change-Id: I3a693d75bfa54fe905bd3cd0587bb139934c087c
These tests exercise Ironic API with the fake driver, thus they provide
no coverage for IPA and can be excluded.
Change-Id: I02eb41b112f1da413178cbdc5834d2904e9d26e9
Increases the amount of RAM for CoreOS IPA to 2GB,
as the base CoreOS image is now 310MB.
Bumps the CPU count for CoreOS runs to 2 CPUs, as the
concurrency helps boot times for the CoreOS ramdisk.
Adds netbase, udev, and open-iscsi to debian jessie container
as they are no longer present in the default container.
Explicitly set path variable for execution in the debian
container as udevadm is in /sbin, and we may not have
/sbin on the path that is passed through to the
chroot.
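The explicit PATH for the chroot can be sketched like this (ROOT and the PATH value are assumptions; the chroot call itself is shown commented out since it needs root and an unpacked image):

```shell
# Run udevadm inside the image chroot with an explicit PATH, since
# /sbin may be missing from the PATH inherited by the chroot.
ROOT="${ROOT:-/tmp/jessie-chroot}"
CHROOT_PATH=/usr/sbin:/usr/bin:/sbin:/bin
# sudo chroot "$ROOT" /usr/bin/env PATH="$CHROOT_PATH" udevadm settle
```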
Also fixed new pep8 test failures.
Story: #1600228
Task: #16287
Change-Id: I488445dfd261b7bca322a0be7b4d8ca6105750a3
hacking is not capped in g-r, and it is on the blacklist for
requirements, as a new hacking version can break the gate jobs.
Hacking can break gate jobs for various reasons:
- There might be new rules added in hacking
- Some rules become enabled by default
- Updates in pycodestyle, etc.
That was the main reason it was not added to the g-r
auto-sync either. Most projects cap the hacking version in
test-requirements.txt and update to a new version when the
project is ready. Bumping to a new version might also need
code fixes on the project side, depending on what is new in
that version.
If a project does not cap the hacking version, there is a
possibility of gate failure whenever a new hacking version
is released by the QA team.
Example of such a failure with the recent release of hacking 1.1.0:
- http://lists.openstack.org/pipermail/openstack-dev/2018-May/130282.html
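For illustration, a typical cap in test-requirements.txt looks like this (the version bounds here are an example, not taken from this change):

```
hacking>=1.0.0,<1.1.0 # Apache-2.0
```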
Change-Id: I2c84d3368bd6675c28ebba695e0c1afdd2867588
Migrate the legacy job to start using our bindep role from zuul-jobs.
This will allow openstack-infra to delete
slave_scripts/install-distro-packages.sh in the future.
Change-Id: If4a5b5c1d85e1491c1544378479c0fc82ad2af03
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Fix the lower constraints settings to match the expected values.
We will manage the eventlet version using constraints now. See the
thread starting at
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129096.html
for more details.
Change-Id: I66b4e20bb565ac7fa9ca5cf48410f29161ef7b3a
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
W503, according to the pycodestyle docs [1], is not supposed to be
enabled. But it is.
W503 is something we will likely never enable, as it is a personal
style decision and can change depending on the code. There is no one
right answer. Interestingly, there is also a W504, which is the
opposite check.
[1] http://pycodestyle.pycqa.org/en/latest/intro.html#error-codes
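For illustration, a project that wants W503 kept off even when it overrides the default ignore list can name it explicitly in its flake8 configuration (a sketch; the section usually lives in tox.ini or setup.cfg):

```
[flake8]
# Keep W503 (line break before binary operator) disabled; it is off by
# default in pycodestyle and conflicts with its opposite check, W504.
ignore = W503
```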
Change-Id: I1025f21a57837e97280f82baba50fdd823a190cc
Create a tox environment for running the unit tests against the lower
bounds of the dependencies.
Create a lower-constraints.txt to be used to enforce the lower bounds
in those tests.
Add openstack-tox-lower-constraints job to the zuul configuration.
See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html
for more details.
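The tox environment usually follows this well-known pattern (a sketch of the common layout, not necessarily the exact hunk from this change):

```
[testenv:lower-constraints]
basepython = python3
deps =
  -c{toxinidir}/lower-constraints.txt
  -r{toxinidir}/test-requirements.txt
  -r{toxinidir}/requirements.txt
```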
Change-Id: I8d773d4ee3d0835fb2a9183fe2154e82db085bd5
Depends-On: https://review.openstack.org/555034
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
This change modifies the playbooks to use the 'ipmi' hardware type.
It also removes redundant conditions. The job names are not changed
to simplify the patch.
Change-Id: Ie7609ab4cceb5c01806a7a0728a4087f790a590e
This is a follow-up patch addressing outstanding comments for
commit 689dbf6b5c6ec1dcaf1fa37d288518c91eedf4ec.
Change-Id: I72c189988c5c274c32d61a2b9aea5a84da2b2c6e
Related-Bug: #1526449
This adds documentation for rescue mode, including the finalize_rescue
command as well as upstream support in agent images.
Change-Id: Id0834941ee4dacf2e7c0feaa65126d63e8a97c39
Partial-Bug: 1526449
Even though it was working when opening the file in 'read' mode, it
really should be opened in 'write' mode, since we are redirecting the
output of the command to the file.
Interestingly, it does fail in 'read' mode if the command is:
echo something
but passes in 'write' mode.
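This can be demonstrated with a short, self-contained example (file names are arbitrary):

```python
import os
import subprocess
import sys
import tempfile

# The file receiving the command's output must be opened for writing;
# a descriptor opened in 'read' mode cannot be written to by the
# child process.
path = os.path.join(tempfile.mkdtemp(), 'out.log')
with open(path, 'w') as f:
    subprocess.run([sys.executable, '-c', 'print("something")'],
                   stdout=f, check=True)
with open(path) as f:
    content = f.read()
```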
Change-Id: Ic67091881e0be377e527b78d270ab48962881ae0
In Python 2.7, functools.wraps() does not provide the '__wrapped__'
attribute. This attribute is used by
oslo_utils.reflection.get_signature() when getting the signature of a
function. If a function is decorated without the '__wrapped__'
attribute then the signature will be of the decorator rather than the
underlying function.
From the six documentation for six.wraps():
This is exactly the functools.wraps() decorator, but it sets the
__wrapped__ attribute on what it decorates as functools.wraps()
does on Python versions after 3.2.
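A minimal Python 3 demonstration of the attribute in question, using functools.wraps directly since Python 3 already sets __wrapped__ (six.wraps provides the same behaviour on 2.7):

```python
import functools
import inspect

def logged(func):
    # functools.wraps sets __wrapped__ on Python 3; inspect.signature
    # follows it to report the underlying function's parameters.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def power(base, exponent):
    return base ** exponent
```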
Change-Id: Ic0f7a6be9bc3e474a0229b264d1bfe6c8f7e6d85