Restore libvirtd cgroupfs mount

It was removed in [1] as part of the cgroupsv2 cleanup.
However, testing did not catch that the legacy cgroups
behaviour was still broken, even with the latest Docker and
the container configured to use the host's cgroup namespace.

[1] 286a03bad2

Closes-Bug: #1941706
Change-Id: I629bb9e70a3fd6bd1e26b2ca22ffcff5e9e8c731
Radosław Piliszek 2021-08-30 09:33:31 +00:00
parent d04eb75a2a
commit 34c49b9dbe
2 changed files with 14 additions and 0 deletions


@@ -346,6 +346,7 @@ nova_libvirt_default_volumes:
   - "/lib/modules:/lib/modules:ro"
   - "/run/:/run/:shared"
   - "/dev:/dev"
+  - "/sys/fs/cgroup:/sys/fs/cgroup"
   - "kolla_logs:/var/log/kolla/"
   - "libvirtd:/var/lib/libvirt"
   - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
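For context, the restored entry simply bind-mounts the host's cgroupfs into the ``nova_libvirt`` container. As a hedged illustration only (assuming the usual Kolla Ansible per-service ``_extra_volumes`` override is available in the affected release), an operator could have restored the mount from ``/etc/kolla/globals.yml`` without patching the role:

```yaml
# Hypothetical operator workaround in /etc/kolla/globals.yml:
# bind-mount the host's cgroupfs back into the nova_libvirt container.
nova_libvirt_extra_volumes:
  - "/sys/fs/cgroup:/sys/fs/cgroup"
```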


@@ -0,0 +1,13 @@
+---
+critical:
+  - |
+    Fixes a critical bug which caused Nova instances (VMs) using libvirtd
+    (the default/usual choice) to get killed on libvirtd (``nova_libvirt``)
+    container stop (and thus any restart - either manual or done by running
+    Kolla Ansible). It was affecting Wallaby+ on CentOS, Ubuntu and Debian
+    Buster (not Bullseye). If your deployment is also affected, please read the
+    referenced Launchpad bug report, comment #22, for how to fix it without
+    risking data loss. In short: fixing requires redeploying and this will
+    trigger the bug so one has to first migrate important VMs away and only
+    then redeploy empty compute nodes.
+    `LP#1941706 <https://launchpad.net/bugs/1941706>`__
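To judge whether a given compute host is exposed to this class of problem, it helps to know which cgroup hierarchy it runs; a small diagnostic sketch (not part of the fix) using GNU coreutils' ``stat`` to inspect the filesystem type mounted at ``/sys/fs/cgroup``:

```shell
# Print the filesystem type mounted at /sys/fs/cgroup.
# cgroup2fs -> unified cgroups v2; tmpfs -> legacy v1 hierarchy.
fstype=$(stat -fc %T /sys/fs/cgroup)
echo "cgroup mount type: $fstype"
case "$fstype" in
  cgroup2fs) echo "host is on cgroups v2 (unified hierarchy)" ;;
  tmpfs)     echo "host is on the legacy cgroups v1 hierarchy" ;;
  *)         echo "unrecognised mount type: $fstype" ;;
esac
```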