Sysctl rules defined in the charm config fail to be
applied in OVN environments because the rules are
applied before nf_conntrack is loaded.
Adding nf_conntrack to the /etc/modules file
guarantees that it is loaded before the
rules are applied.
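A minimal sketch of the idea (the helper name is hypothetical):

    # Append the module to /etc/modules if missing, so it is loaded
    # at boot before any sysctl rules reference conntrack keys.
    def ensure_boot_module(module='nf_conntrack', path='/etc/modules'):
        with open(path, 'r+') as f:
            if module not in f.read().splitlines():
                f.write(module + '\n')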
Closes-Bug: #1922778
Change-Id: I51dae65cdc06e35230160bcaedda99710a72617d
This change reworks previous changes [1] and [2] that had
been respectively reverted and abandoned.
When using the config libvirt-image-backend=rbd, VMs
created from image have their disk data stored in ceph
instead of on the compute node itself.
When performing live migrations, both nodes need to
access the same ceph credentials to reach the VM's
disk in ceph, but this is currently not possible
if the nodes involved belong to different
nova-compute charm apps.
This patch changes the app name sent to ceph to
'nova-compute-ceph-auth-c91ce26f', a unique name common to
all nova-compute apps, allowing all nova-compute apps to
use the same ceph auth.
This change also ensures newly deployed nodes install
the old credentials first on the ceph-joined hook,
and then supersede them with the new credentials
on the ceph-changed hook, thereby also retaining
the old credentials.
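A sketch of the credential handling described above (helper names and
paths illustrative, not the charm's actual API):

    CEPH_AUTH_APP = 'nova-compute-ceph-auth-c91ce26f'

    def install_keyring(entity, key):
        # Write a ceph keyring for the given auth entity name.
        path = '/etc/ceph/ceph.client.{}.keyring'.format(entity)
        with open(path, 'w') as f:
            f.write('[client.{}]\n\tkey = {}\n'.format(entity, key))

    def ceph_joined(old_entity, old_key):
        # Newly deployed nodes install the old per-app credentials first...
        install_keyring(old_entity, old_key)

    def ceph_changed(new_key):
        # ...then the shared credentials supersede them; the old keyring
        # file is retained so running workloads are not disrupted.
        install_keyring(CEPH_AUTH_APP, new_key)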
This patch also includes the charmhelpers sync
from PR: #840
[1] https://review.opendev.org/889642
[2] https://review.opendev.org/896155
Closes-bug: #2028559
Related-bug: #2037003
Func-Test-Pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/1149
Change-Id: I1ae12d787a1f8e7761ca06b5a80049c1c62e9e90
This reverts commit c3c2cf0349c086dad7f23b180c3ee9ea0f865e8f.
Reason for revert: This introduces an undesired behavior when scaling-out that needs to be addressed in a complementary patch.
Change-Id: I21c127aa565e489ba4d93a1efc8ddba63ef32e87
When using the config libvirt-image-backend=rbd, VMs
created from image have their disk data stored in ceph
instead of on the compute node itself.
When performing live migrations, both nodes need to
access the same ceph credentials to reach the VM's
disk in ceph, but this is currently not possible
if the nodes involved belong to different
nova-compute charm apps.
This patch changes the app name sent to ceph to 'nova-compute',
allowing all nova-compute apps to use the same ceph auth.
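The gist of the change (the relation key name is assumed here for
illustration):

    from charmhelpers.core import hookenv

    def ceph_joined():
        # Send a fixed name instead of hookenv.application_name() so
        # every nova-compute app shares one ceph auth entity.
        hookenv.relation_set(application_name='nova-compute')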
This patch also includes the charmhelpers sync
from PR: #840
Closes-bug: #2028559
Change-Id: I7222661017655fd7225db0c677f1a8f5ebb7984d
Subordinate charms should manage the services that
they deploy and configure, not the principal charm they are related to.
This change switches the approach for restarting services:
instead of the nova-compute charm restarting them directly,
nova-compute triggers the restart by requesting it down
the existing relations.
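The restart request amounts to poking the relation data, along these
lines (key and relation names illustrative):

    import uuid
    from charmhelpers.core import hookenv

    def request_restart_down_relations():
        # Changing a nonce on the relation fires -changed hooks on the
        # subordinates, which restart the services they manage.
        for rid in hookenv.relation_ids('neutron-plugin'):
            hookenv.relation_set(relation_id=rid,
                                 restart_nonce=str(uuid.uuid4()))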
Closes-Bug: #1947585
Change-Id: I7419e39d68c70d21a11d03deeff9699421b0571e
OVS introduced a new service, ovs-record-hostname.service, which
records the hostname in the ovs database on first start to identify
the ovn chassis; this is how it achieves a stable hostname and stays
resilient to changes in the FQDN when DNS becomes available.
This change introduces the same approach for the nova-compute charm. On
the first run of the NovaComputeHostInfoContext the value passed in the
context as host_fqdn is stored in the unit's kv db, and re-used on every
subsequent call.
This change affects only new installs since the hint to store (or not)
the host fqdn is set in the install hook.
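A minimal sketch of the kv-backed stable FQDN (key name illustrative;
the real context also handles the install-time hint mentioned above):

    import socket
    from charmhelpers.core import unitdata

    def stable_host_fqdn():
        db = unitdata.kv()
        fqdn = db.get('host-fqdn')
        if not fqdn:
            # First run: record the current FQDN and reuse it on every
            # subsequent call, even if DNS changes it later.
            fqdn = socket.getfqdn()
            db.set('host-fqdn', fqdn)
            db.flush()
        return fqdn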
Change-Id: I2aa74442ec25b21201a47070077df27899465814
Closes-Bug: #1896630
It was found that the modules test_actions_openstack_upgrade and
test_actions_package_upgrade were mocking different classes and
functions right before importing the modules under test
(openstack_upgrade and package_upgrade respectively), although these
mocks weren't being reset, so test executions coming after them
benefitted from (or were impacted by) the mocks left in memory.
This patch takes advantage of the mock.patch() decorator at the class
level and importlib.reload() to make sure the mocks don't outlive the
module (see the sketch after the summary below).
Once the teardown was in place, a different set of functions was found
to rely on that mocking, so they were patched to let the tests run in
the expected (mocked) environment.
Summary of changes:
- Move get_availability_zone() to the contexts module; nova_compute_utils
  depends on nova_compute_context, and the latter shouldn't be importing
  code from the former since it breaks the layering, even when the
  import is being done within a function's body.
- Mock the env variable JUJU_UNIT_NAME per test case; the tests defined
  in test_nova_compute_utils and test_nova_compute_contexts were
  relying on the leakage of mocks set by other test modules, and this
  makes them run in an isolated fashion.
- Move update_nrpe_config testing to its own class; the main class
  NovaComputeRelationsTests mocks the function update_nrpe_config(),
  making it difficult to test in a test method. With the test in its
  own class it's possible to leave the function unmocked and exercise
  its real implementation.
- Teardown mocks made at import level.
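A rough shape of that pattern (the patch target is illustrative; the
module name is from this repo):

    import importlib
    import unittest
    from unittest import mock

    with mock.patch('charmhelpers.core.hookenv.config'):
        import openstack_upgrade  # imported while the mock is active

    class TestOpenstackUpgrade(unittest.TestCase):
        @classmethod
        def tearDownClass(cls):
            # Re-import the module so later test modules see it bound
            # to the real objects, not the import-time mocks.
            importlib.reload(openstack_upgrade)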
Func-Test-Pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/997
Change-Id: I4468ef1a0619befc75c6af2bad8df316125a7cf5
Enable vTPM support in the nova-compute charm. This adds new packages
to be installed (swtpm and swtpm-tools) and updates the nova-compute.conf
and qemu.conf files to set the appropriate user/group for swtpm.
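The qemu.conf part amounts to something like (values illustrative):

    # /etc/libvirt/qemu.conf
    swtpm_user = "swtpm"
    swtpm_group = "swtpm"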
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/696
Change-Id: Idf0d19d75b9231f029fa6a7dc557d2a9ee04915b
Add an extra-repositories config option to nova-compute in order to
allow configuring additional apt repositories. This is useful when some
packages are not available in the distro or cloud archive.
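Example usage (repository line illustrative):

    juju config nova-compute \
        extra-repositories='deb http://archive.example.com/ubuntu focal main'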
Change-Id: Ie3b76ff3bc07b83e416c80fab1da2560d48df498
When re-writing files, the apparmor configuration is not written;
this change fixes that to ensure that aa-profile-mode is respected.
Closes-Bug: 1947389
Change-Id: I79e9625ab4d261845822d4825ba1ed7e31d0b1e0
Fix use of ephemeral-device with instances-path to ensure that
the configured block device is mounted in the desired location.
Ensure the instances-path directory actually exists.
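A sketch of the intended behaviour using charm-helpers' mount helper
(device and filesystem values illustrative):

    import os
    from charmhelpers.core.host import mount

    def ensure_ephemeral_mount(device,
                               instances_path='/var/lib/nova/instances'):
        # Make sure the target directory exists before mounting the
        # configured block device onto it.
        os.makedirs(instances_path, exist_ok=True)
        mount(device, instances_path, persist=True, filesystem='ext4')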
Change-Id: I81725f602ba3086bc142d59104e4bfc80918d8cf
Closes-Bug: 1909141
Commit 9f4369d9 added a feature to set the availability zone of
the nova-compute unit on the cloud-compute relation. This uses the
value of the JUJU_AVAILABILITY_ZONE environment variable, which is
not consistent with how the nova-compute service sets its availability
zone.
Use the nova_compute_utils.get_availability_zone() method instead.
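A simplified sketch of that helper's logic (config keys from the charm):

    import os
    from charmhelpers.core.hookenv import config

    def get_availability_zone():
        # Honour the Juju AZ only when the operator opted in via
        # customize-failure-domain; otherwise use the configured
        # default, matching what the nova-compute service itself does.
        juju_az = os.environ.get('JUJU_AVAILABILITY_ZONE')
        if config('customize-failure-domain') and juju_az:
            return juju_az
        return config('default-availability-zone')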
Closes-Bug: #1925412
Change-Id: Ie68ecd808a60baf0d5bfe526f4355ce3c7ae5c77
A new Juju action will be added to the nova-cloud-controller charm
to sync the nova-compute units' Juju availability zones with
the availability zones from OpenStack.
It is useful in the context of a MAAS deployment, in order to map
MAAS AZs to OpenStack AZs.
Change-Id: I62f68f0c0c97aeca20a8afb32095d2972abd8473
If more than a single Ceph key is set as part of the relation data,
make sure that all of them are configured.
It also makes sure that the previous relation data is handled
in order to maintain backwards compatibility.
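A sketch of the idea (relation key naming illustrative):

    def ceph_keys(relation_data):
        # Yield every key present, including the legacy single 'key'
        # entry that is kept for backwards compatibility.
        for name, value in relation_data.items():
            if name == 'key' or name.endswith('_key'):
                yield name, value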
Co-authored-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
Change-Id: I24be0ed48edd5af517e1699df77ef0d96ef20aa2
Enable support for use of Erasure Coded (EC) pools for
nova disks when RBD is used to back ephemeral storage volumes.
Add the standard set of EC based configuration options to the
charm.
Update Ceph broker request to create a replicated pool, an erasure
coding profile and an erasure coded pool (using the profile) when
pool-type == erasure-coded is specified.
Resync charm-helpers to pick up changes to the standard ceph.conf
template and associated contexts for the rbd default data pool mangling
required due to the lack of explicit support in OpenStack services.
Update context to use metadata pool name in nova configuration
when erasure-coding is enabled.
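Roughly what the broker request looks like (pool/profile names and
parameters illustrative; assumes a current charm-helpers API):

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq = CephBrokerRq()
    # The metadata pool stays replicated; only the data pool is EC.
    rq.add_op_create_replicated_pool(name='nova-metadata')
    rq.add_op_create_erasure_profile(name='nova-profile',
                                     erasure_type='jerasure', k=4, m=2)
    rq.add_op_create_erasure_pool(name='nova',
                                  erasure_profile='nova-profile',
                                  allow_ec_overwrites=True)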
Change-Id: Ida0b9c889ddf9fcc0847a9cee01b3206239d9318
Depends-On: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
Ensure that the 'migration' network space binding or the fallback
configuration option is passed to the nova-cloud-controller application
so that the correct IP address is SSH host scanned during setup of
live migration between hypervisors.
Change-Id: I6e20cd0b03f564ee9c110cf58fb0466f6a1f6c82
Closes-Bug: 1874235
This patch adds focal-ussuri and bionic-ussuri bundles to the tests
for the charm. The linked bug is concerned with installing
nova-network, which is not available on Ussuri.
Closes-Bug: #1872770
Change-Id: Iea5a682aaebeb6f6941cf9d8f5780473f457e455
The current implementation's use of a specific interface to build
the FQDN from has the undesired side effect of the ``nova-compute`` and
``neutron-openvswitch`` charms ending up using different
hostnames in some situations. It may also lead to the use of an
identifier that is mutable throughout the lifetime of a deployment.
Use of a specific interface was chosen due to ``socket.getfqdn()``
not giving reliable results (https://bugs.python.org/issue5004).
This patch gets the FQDN by mimicking the behaviour of a call to
``hostname -f`` with fallback to the short hostname on failure.
Add relevant update from c-h.
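An approximation of that behaviour:

    import socket

    def get_fqdn():
        hostname = socket.gethostname()
        try:
            # Resolve the canonical name, as ``hostname -f`` does.
            addrinfo = socket.getaddrinfo(
                hostname, None, 0, socket.SOCK_DGRAM, 0,
                socket.AI_CANONNAME)
            return addrinfo[0][3] or hostname
        except OSError:
            # Fall back to the short hostname on resolution failure.
            return hostname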
Needed-By: Ic8f8742261b773484687985aa0a366391cd2737a
Change-Id: I82db81937e5a46dc6bd222b7160ca1fa5b190c10
Closes-Bug: #1839300
This fixes the referenced bug by ensuring that the action does initiate
remote restarts for container-scoped related units.
Change-Id: I149b753355b64113adfd8fd4eea972978b7ed20b
Closes-Bug: #1835557
The referenced bug indicates that if the charm upgrades from a previous
py2 version of the charm then the upgrade fails due to a missing package
(in this case python3-yaml). This patchset ensures that the python3
versions of the dependency packages are installed.
Change-Id: I8b07de3b2f950237518c0555db1288f9b2c0aabf
Closes-Bug: #1746650
If a new version of the charm is used to install a version of
OpenStack prior to Stein, do not enable the FQDN registration.
Change-Id: Ib86d6a48ee34c9efc42b4455ac669ef0cb8dc3bb
Closes-Bug: #1846781
The change of behaviour will only affect newly installed
deployments on OpenStack Train and onwards.
Also set upper constraint for ``python-cinderclient`` in the
functional test requirements as it relies on the v1 client
which has been removed. We will not fix this in Amulet; the charm
is pending migration to the Zaza framework.
Change-Id: Ia73ed6b76fc7f18014d4fa913397cc069e51ff07
Depends-On: Iee73164358745628a4b8658614608bc872771fd1
Closes-Bug: #1839300
The function 'is_broker_action_done' should return False when it
finds a response from the ceph broker not marked done, in order
to trigger a nova restart. However, it also returns False if
there is no response data from the ceph broker, triggering an
unnecessary restart.
The function 'ceph_changed' is invoked under different remote
unit contexts when there are updates to the relation. When
querying the broker response, only the context of the remote
unit that is the broker can see the response, unless
specifically queried for that given unit.
The 'ceph_changed' invocations under a remote context that
is not the broker's end up returning False in
'is_broker_action_done', causing restarts even after
the action is already marked done. This also happens on
'config-changed' hooks.
To fix this problem, the logic is now changed to have each
'ceph_changed' invocation loop through units and process
the broker response, regardless of remote context.
This is an initial change to address the issue locally
in the nova-compute charm. A later change will be worked on
to move the new helper methods to charmhelpers,
refactoring the existing ones there.
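The shape of the new logic (simplified; the per-unit response key
follows the ceph broker protocol):

    from charmhelpers.core import hookenv

    def broker_responses(rid=None):
        # Look at every related unit, not just the one whose context
        # this hook happens to execute in; only the mon that answered
        # carries the response addressed to this unit.
        key = 'broker-rsp-' + hookenv.local_unit().replace('/', '-')
        for unit in hookenv.related_units(rid):
            rdata = hookenv.relation_get(rid=rid, unit=unit) or {}
            rsp = rdata.get(key) or rdata.get('broker_rsp')
            if rsp:
                yield rsp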
Change-Id: I2b41f8b252f4ccb68830e90c5e68456e15372bcf
Closes-bug: #1835045
A recent change (commit ceab1e91dc2e3948f6ba7c121c1801ad1641643c)
removed libvirt from the pause/resume list of services. This affected
series upgrade. Libvirt stayed down and did not start back up on resume.
This change checks the hook being executed and, if it is
post-series-upgrade, includes libvirt as a service to start on resume.
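The gist of the check (service name illustrative, it varies by
release):

    from charmhelpers.core.hookenv import hook_name

    def services_to_resume(base_services):
        services = list(base_services)
        if hook_name() == 'post-series-upgrade':
            # libvirt is excluded from normal pause/resume but must be
            # started again when resuming after a series upgrade.
            services.append('libvirtd')
        return services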
Closes-Bug: #1839495
Change-Id: Ibfa24b678c1077b441464da8c38114ce23d14963
This patch reverts the non-charmhelpers changes from
the patch landed for bug 1835045, which is causing a
regression whereby new deployments that relate to
ceph-mon are prevented from sending broker requests
to e.g. create the pool needed by
libvirt-image-backend=rbd.
Change-Id: I29421ce240a3810d945b76e662a743b4b8497ac8
Related-Bug: 1835045
Closes-Bug: 1839297
Nova-compute service restarts make sense only on relation
config updates received from the ceph-mon leader.
Also synced charm-helpers, as PR #347 is used as
part of the fix.
Change-Id: I406f369b1e376db82b8683fd48fbe4de106da16f
Closes-bug: #1835045
Currently, the charm does not verify whether iscsid is installed or
running. This patch adds those verifications to the install and
update hooks.
Closes-bug: #1816435
Change-Id: I23992832a82557f406999427fe8d151f6a2b63af
Take advantage of a new feature in charmhelpers to ignore missing keys
when running sysctl. This addresses the issue where we want to include
conntrack settings for nova-compute with a kvm driver, but don't want
to include those settings for lxd. And it addresses it in a generic
way to handle similar situations going forward.
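The feature in question, roughly (key and file illustrative):

    from charmhelpers.core.sysctl import create

    # ignore=True maps to ``sysctl -e``: keys that don't exist on the
    # host (e.g. conntrack keys under lxd) no longer raise an error.
    create('{net.netfilter.nf_conntrack_max: 1048576}',
           '/etc/sysctl.d/50-nova-compute.conf', ignore=True)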
Change-Id: I29c36c11345b8876c4c39651c124a64860dd35db
Closes-Bug: 1820635
Some testing use cases make use of containers to host nova-compute;
sysctl options can't be tweaked in this use case so detect and
skip if executing in a container.
Change-Id: I9ff894e728e46b229068e91a290b84cde73eb09c
When clouds have a large number of hosts, the default size of the ARP
cache is too small. The cache can overflow, which means that the
system has no way to reach some IP addresses.
Setting the threshold limits higher addresses the situation, in a
reasonably safe way (the maximum impact is 5MB or so of additional RAM
used). Docs on ARP at http://man7.org/linux/man-pages/man7/arp.7.html,
and more discussion of the issue in the bug.
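The thresholds in question (values here are just an example of
"higher than the defaults"):

    net.ipv4.neigh.default.gc_thresh1 = 128
    net.ipv4.neigh.default.gc_thresh2 = 28672
    net.ipv4.neigh.default.gc_thresh3 = 32768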
Change-Id: Iaf8382ee0b42e1444cfea589bb05a687cd0c23fa
Closes-Bug: 1780348
This change ensures that the multipath dependencies are installed
on the compute node when the use-multipath config flag is enabled.
Change-Id: I39b017398b95f5901d9bc57ffa0c59ff59f3a359
Closes-Bug: #1806830
The upgrade-charm hook installs any new packages required for the
new charm version; however, this list needs to be filtered against
previously installed packages to ensure that pending package updates
don't get applied to the system as a side effect of upgrading the charm.
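The filtering uses charm-helpers, along these lines
(determine_packages() stands in for the charm's package-list helper):

    from charmhelpers.fetch import apt_install, filter_installed_packages

    def upgrade_charm():
        packages = determine_packages()  # full list for current config
        # Install only what is missing; already-installed packages are
        # left at their current version.
        apt_install(filter_installed_packages(packages), fatal=True)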
Change-Id: I6c490f9af2312dc42f2b56e0b7ce8c802e3aac1d
Closes-Bug: 1812982
On charm upgrade the charm may switch to py3 packages. If so, ensure
the old py2 packages are purged. If the purge occurs then restart
services.
Change-Id: I17abef16afbb8c62dae1b725c74c39cec414f4f8
Closes-Bug: 1803451
Send a trigger to ceilometer-agent after an upgrade so that it
upgrades as well. This follows the same pattern as other OpenStack
subordinates.
Change-Id: Ic07d53c02b84d210aefd47e49a42002fac801ff4
Closes-Bug: #1802400