
---
ceph-osd:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency to >= 0.1.0
  - 0.1.2 Wait for only OSD pods from post-apply job
  - 0.1.3 Search for complete logical volume name for OSD data volumes
  - 0.1.4 Don't try to prepare OSD disks that are already deployed
  - 0.1.5 Fix the sync issue between OSDs when using a shared disk for metadata
  - 0.1.6 Logic improvement for used OSD disk detection
  - 0.1.7 Synchronization audit for the ceph-volume osd-init script
  - 0.1.8 Update post-apply job
  - 0.1.9 Check inactive PGs multiple times
  - 0.1.10 Fix typo in check inactive PGs logic
  - 0.1.11 Fix post-apply job failure related to fault tolerance
  - 0.1.12 Add a check for misplaced objects to the post-apply job
  - 0.1.13 Remove default OSD configuration
  - 0.1.14 Alias synchronized commands and fix descriptor leak
  - 0.1.15 Correct naming convention for logical volumes in disk_zap()
  - 0.1.16 dmsetup remove logical devices using correct device names
  - 0.1.17 Fix a bug with DB orphan volume removal
  - 0.1.18 Uplift from Nautilus to Octopus release
  - 0.1.19 Update RBAC API version
  - 0.1.20 Update directory-based OSD deployment for image changes
  - 0.1.21 Refactor Ceph OSD Init Scripts - First PS
  - 0.1.22 Refactor Ceph OSD Init Scripts - Second PS
...