Currently os-brick is using in-process locks that will only prevent concurrent access to critical sections by threads within a single process. But based on a comment in the iSCSI code, it seems the code assumed these were file-based locks that prevented concurrent access from multiple processes. That iSCSI comment is being removed, because it is not correct that our current retry mechanism handles connect and disconnect concurrency issues.

The reason we haven't seen errors in Nova is that it runs a single process, so the locks are effective. This is probably also not an issue for some transport protocols, such as FC and RBD, and it wouldn't be an issue for iSCSI connections that don't share targets. But for others, such as iSCSI with shared targets and NVMe-oF, not using file locks will create race conditions in the following cases:

- More than one Cinder backend, because one backend can be doing a detach in a create-volume-from-image operation while the other does an attach for an offline migration.
- Backup/restore, if the backup and volume services are running on the same host.
- HCI scenarios where cinder-volume and nova-compute are running on the same host, even if the same lock path is configured.
- Glance using Cinder as a backend and running on the same node as cinder-volume or cinder-backup.

The problematic race conditions happen because the disconnect will do a logout of the iSCSI target after the connect call has already confirmed that the session to the target exists.

We could just add the file locks to iSCSI and NVMe, but it is safer to add them to all the connectors and then, after proper testing, change back the locks that can be changed, and remove or reduce the critical sections in others.

Closes-Bug: #1947370
Change-Id: I6f7f7d19540361204d4ae3ead2bd6dcddb8fcd68
(cherry picked from commit
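To illustrate the distinction the commit relies on: a `threading.Lock` only serializes threads inside one process, while an advisory file lock (`fcntl.flock`) also serializes independent processes that agree on the same lock path, which is the property needed for iSCSI shared targets and NVMe-oF. This is a minimal stdlib sketch only; `with_file_lock` and `LOCK_DIR` are hypothetical names, and the actual change uses oslo.concurrency's external locks rather than raw `fcntl`.

```python
import fcntl
import os

# Stand-in for the lock_path configuration option; hypothetical value.
LOCK_DIR = "/tmp"

def with_file_lock(name, critical_section):
    """Run critical_section while holding an exclusive file lock.

    Unlike an in-process lock, this blocks any other process that
    tries to flock() the same file, not just other threads.
    """
    path = os.path.join(LOCK_DIR, f"{name}.lock")
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks across processes
        try:
            return critical_section()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

# Example: serialize a (mock) connect against a concurrent disconnect
# that could otherwise log out of a shared iSCSI target underneath it.
result = with_file_lock("connect_volume", lambda: "attached")
print(result)  # -> attached
```

Two cooperating processes (e.g. cinder-volume and cinder-backup on the same host) only get this protection when they are configured with the same lock directory, which is why the HCI and backup/restore cases above depend on a shared lock path.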