Update the zed release notes configuration to build from
zed-eom, which is the last point at which the cinder project
team had responsibility for the branch.
Change-Id: Idc31718defa028fe6067d3e37afd9a85bdc2d61b
The only change is to remove the py38 classifier from setup.cfg;
the python_requires setting (currently >=3.8) is not modified.
The brick tox.ini doesn't pin specific python versions, so
no modifications there. We rely on the openstack-python3-jobs
template to specify what versions of python should run in the
gate.
Change-Id: Id27841ba03b915fe3c1cde86273c76e24c521b88
Add file to the reno documentation build to show release notes for
stable/2024.1.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: I8c58db8c491fb9c66933ea44be60cadadd7b66b1
When fetching the target value (T in HCTL) for the storage HBAs,
we use the /sys/class/fc_transport path to find available targets.
However, this path only contains targets that already have a LUN
attached to the host.
Scenario:
Suppose we have 2 controllers on the backend side with 4 target HBAs each (8 in total).
For the first LUN mapping from controller1, we will do a wildcard
scan and find the 4 targets from controller1 which will get
populated in the /fc_transport path.
If we try mapping a LUN from controller2, we try to find targets in the
fc_transport path but the path only contains targets from controller1 so
we will not be able to discover the LUN from controller2 and fail with
NoFibreChannelVolumeDeviceFound exception.
Solution:
In each rescan attempt, we will first search for targets in the
fc_transport path: "/sys/class/fc_transport/target<host>*".
If the target is not found there, we then search in the fc_remote_ports
path: "/sys/class/fc_remote_ports/rport-<host>*".
If a [c,t,l] combination is found from either path, we add it to
the list of ctls that we later use for scanning.
This way, we don't alter the current "working" mechanism of scanning,
but add an additional way of discovering targets, improving each rescan
attempt so that the failure scenario above is avoided.
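The two-path lookup described above can be sketched as follows. This is a simplified illustration, not the actual os-brick code: the function name is hypothetical, `host` stands for the SCSI host number, and the real implementation goes on to parse C:T:L values out of the matched entries.

```python
import glob
import os


def find_fc_targets(host, sysfs_root="/sys/class"):
    """Collect candidate target entries for a SCSI host.

    First look in fc_transport, which only lists targets that already
    have a LUN attached; if nothing is found there, fall back to
    fc_remote_ports, which also lists targets without any LUN yet.
    """
    targets = glob.glob(
        os.path.join(sysfs_root, "fc_transport", "target%s*" % host))
    if not targets:
        targets = glob.glob(
            os.path.join(sysfs_root, "fc_remote_ports", "rport-%s*" % host))
    return targets
```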
Closes-Bug: #2051237
Change-Id: Ia74b0fc24e0cf92453e65d15b4a76e565ed04d16
As per the current release tested runtimes, we test
up to Python 3.11, so update the Python classifiers
in setup.cfg accordingly.
Change-Id: I06e453b6d02ac8c7b615d3d61b06173eb249dc27
The nvme CLI has changed its behavior: it no longer differentiates
between errors by returning different exit codes.
Exit code 1 is for errors and 0 for success.
This patch fixes the detection of race conditions to also look for the
message in case it's a newer CLI version.
Together with change I318f167baa0ba7789f4ca2c7c12a8de5568195e0 we are
ready for nvme CLI v2.
Closes-Bug: #1961222
Change-Id: Idf4d79527e1f03cec754ad708d069b2905b90d3f
Attaching NVMe-oF volumes no longer works in CentOS 9 Stream using nvme 2.4
and libnvme 1.4.
The reason is that the 'address' file in sysfs now has the 'src_addr'
information.
Before we had:
traddr=127.0.0.1,trsvcid=4420
Now we have:
traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1
This patch fixes the issue and future-proofs against any additional
information that may be added, by parsing the contents and searching for
the parts we care about: destination address and port.
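The parsing approach can be illustrated with a minimal sketch (the function name is hypothetical and this is not the actual os-brick helper): split the comma separated key=value pairs and compare only the destination address and port, so extra fields such as src_addr are ignored.

```python
def address_matches(address_contents, traddr, trsvcid):
    """Check whether a sysfs 'address' file refers to a given portal.

    The file holds comma separated key=value pairs; newer kernels add
    extra fields such as src_addr, so we only compare the parts we
    care about: destination address and port.
    """
    parts = dict(part.split("=", 1)
                 for part in address_contents.strip().split(",")
                 if "=" in part)
    return (parts.get("traddr") == traddr and
            parts.get("trsvcid") == trsvcid)
```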
Closes-Bug: #2035811
Change-Id: I7a33f38fb1b215dd23e2cff3ffa79025cf19def7
When an nvme subsystem has all portals in connecting state and we try
to attach a new volume to that same subsystem, the attach will fail.
We can reproduce it with LVM+nvmet if we configure it to share targets
and then:
- Create instance
- Attach 2 volumes
- Delete instance (this leaves the subsystem in connecting state [1])
- Create instance
- Attach volume <== FAILS
The problem comes from the '_connect_target' method that ignores
subsystems in 'connecting' state, so if they are all in that state it
considers it equivalent to all portals being inaccessible.
This patch changes this behavior: if we cannot connect to a target
but have portals in 'connecting' state, we wait for the next retry of
the nvme Linux driver. Specifically, we wait 10 seconds more than the
interval between retries.
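The decision logic can be sketched roughly as below. This is only an illustration of the idea, not the os-brick code: the function name and the retry-interval parameter are assumptions, while the 'connecting' state and the "interval plus 10 seconds" wait come from the description above.

```python
def retry_delay(portal_states, nvme_retry_interval=10):
    """Decide how to proceed when no portal could be connected.

    Return the number of seconds to wait before the next attempt if
    any portal is still in 'connecting' state (the nvme Linux driver
    may bring it up on its own), or None when every portal is truly
    inaccessible and retrying is pointless.
    """
    if any(state == 'connecting' for state in portal_states):
        # Wait 10 seconds longer than the driver's own retry interval
        # so it has a chance to complete a reconnect cycle.
        return nvme_retry_interval + 10
    return None
```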
[1]: https://bugs.launchpad.net/nova/+bug/2035375
Closes-Bug: #2035695
Change-Id: Ife710f52c339d67f2dcb160c20ad0d75480a1f48
Dell PowerFlex 4.x changed the error code of VOLUME_NOT_MAPPED_ERROR
to 4039. This patch adds that error code.
Closes-Bug: #2046810
Change-Id: I76aa9e353747b1651480efb0f3de11c707fe5abe
The mypy job complains about the 'exc' variable [1] since it was used
both for the ExceptionChainer and for the TargetPortalNotFound exception.
Changing the variable name for the TargetPortalNotFound exception from
'exc' to 'target_exc' makes the 'type: ignore' comments unnecessary.
[1] Trying to read deleted variable 'exc'
Change-Id: I4b10db0754f0e00bb02d3a60f9aaf88b90466a8f
This patch improves the creation of the /etc/nvme/hostnqn file by using
the system UUID value we usually already know.
This saves us one or two calls to the nvme-cli command, and it also
allows older nvme-cli versions that don't have the `show-hostnqn`
command, or have it but can only read the value from a file, to
generate the same value every time, which may be useful when running
inside a container under some circumstances.
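A minimal sketch of building the host NQN from a known system UUID follows. The function name is hypothetical, but the UUID-based NQN shape (`nqn.2014-08.org.nvmexpress:uuid:<uuid>`) is the standard format that `nvme gen-hostnqn` produces.

```python
def hostnqn_from_uuid(system_uuid):
    """Build an NVMe host NQN from a system UUID.

    Uses the UUID-based NQN format defined by the NVMe spec (the same
    shape `nvme gen-hostnqn` emits), so the value stays stable across
    runs as long as the system UUID does.
    """
    return "nqn.2014-08.org.nvmexpress:uuid:%s" % system_uuid.lower()
```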
Change-Id: Ib250d213295695390dbdbb3506cb297a86e95218
The Dell PowerFlex (scaleio) connector maintains a token cache
for PowerFlex OS.
The cache was overwritten with None by mistake
in Change-ID I6f01a178616b74ed9a86876ca46e7e46eb360518.
This patch fixes the broken cache to avoid unnecessary login.
Closes-Bug: #2004630
Change-Id: I2399b0b2af8254cd5697b44dcfcec553c2845bec
This reverts commit 33661ece808a6c32ad36aee0acb46a3c0624d7ce.
Reason for revert: breaks reading password from the config file
Change-Id: I840d8c4d66daf0ab8636617b42cdb47dd4313cc9
This patch fixes the issue of the password getting written in plain text
in the logs while creating a new volume from an image. It creates a new
logger with the default log level set to error.
Closes-Bug: #2003179
Change-Id: I0292a30f402e5acddd8bbc31dfaef12ce24bf0b9
Dell PowerFlex 4.x changed the error code of VOLUME_ALREADY_MAPPED_ERROR
to 4037. This patch adds that error code.
Closes-Bug: #2013749
Change-Id: I928c97ea977f6d0a0b654f15c80c00523c141406
In some old nvme-cli versions the NVMe-oF create_hostnqn method fails.
This happens specifically on versions between not having the
show-hostnqn command and having it always return a value. On those
versions the command only returns the value present in the file and
never tries to return an idempotent or random value.
This patch adds handling for that specific case, which is identified by the
stderr message:
hostnqn is not available -- use nvme gen-hostnqn
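The fallback can be sketched as below. This is an illustration, not the os-brick implementation: the function name and the injected `run` callable are assumptions made for testability, while the stderr message and the `nvme show-hostnqn` / `nvme gen-hostnqn` commands come from the description above.

```python
def hostnqn_with_fallback(run):
    """Get the host NQN, generating one when show-hostnqn cannot.

    `run` executes a command and returns (exit_code, stdout, stderr).
    Some nvme-cli versions fail show-hostnqn with 'hostnqn is not
    available -- use nvme gen-hostnqn' when /etc/nvme/hostnqn is
    missing; in that case we fall back to gen-hostnqn.
    """
    code, out, err = run(['nvme', 'show-hostnqn'])
    if code == 0 and out.strip():
        return out.strip()
    if 'hostnqn is not available' in err:
        code, out, err = run(['nvme', 'gen-hostnqn'])
        if code == 0:
            return out.strip()
    raise RuntimeError('could not determine hostnqn: %s' % err)
```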
Closes-Bug: #2035606
Change-Id: Ic57d0fd85daf358e2b23326022fc471f034b0a2f