A new Juju action will be added to the nova-cloud-controller charm
to sync the nova-compute units' Juju availability zones with the
availability zones from OpenStack.
It is useful in the context of a MAAS deployment, in order to map
MAAS AZs to OpenStack AZs.
Change-Id: I62f68f0c0c97aeca20a8afb32095d2972abd8473
If more than a single Ceph key is set as part of the relation data,
make sure that all of them are configured.
Also make sure that the previous relation data is handled, in
order to maintain backwards compatibility.
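As a hedged sketch only (the key-matching rule and the
configure_ceph_key helper are illustrative, not the charm's actual
code), the handler might iterate over every key attribute present
in the relation data rather than just the default 'key':

    from charmhelpers.core.hookenv import relation_get

    def configure_all_ceph_keys():
        # Illustrative: configure every cephx key found in the
        # relation data, not only the default 'key' attribute.
        data = relation_get() or {}
        for attr, value in data.items():
            if attr == 'key' or attr.endswith('-key'):
                configure_ceph_key(attr, value)  # hypothetical helper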
Co-authored-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
Change-Id: I24be0ed48edd5af517e1699df77ef0d96ef20aa2
Enable support for use of Erasure Coded (EC) pools for
nova disks when RBD is used to back ephemeral storage volumes.
Add the standard set of EC based configuration options to the
charm.
Update Ceph broker request to create a replicated pool, an erasure
coding profile and an erasure coded pool (using the profile) when
pool-type == erasure-coded is specified.
Resync charm-helpers to pick up changes to the standard ceph.conf
template and associated contexts for 'rbd default data pool'
mangling, needed due to the lack of explicit support in OpenStack
services.
Update context to use metadata pool name in nova configuration
when erasure-coding is enabled.
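A minimal sketch of the resulting broker request, assuming the
current charm-helpers CephBrokerRq operations; pool/profile names
and the k/m values are illustrative only:

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq = CephBrokerRq()
    # Replicated metadata pool, an EC profile, then the EC data
    # pool using that profile.
    rq.add_op_create_replicated_pool(name='nova-metadata', weight=1)
    rq.add_op_create_erasure_profile(name='nova-profile',
                                     erasure_type='jerasure',
                                     k=3, m=2)
    rq.add_op_create_erasure_pool(name='nova',
                                  erasure_profile='nova-profile',
                                  allow_ec_overwrites=True)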
Change-Id: Ida0b9c889ddf9fcc0847a9cee01b3206239d9318
Depends-On: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
Ensure that the 'migration' network space binding or the fallback
configuration option is passed to the nova-cloud-controller application
so that the correct IP address is scanned for SSH host keys during
setup of live migration between hypervisors.
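As a sketch only (the 'migration-network' config option name here
is hypothetical), space-aware address selection with a config
fallback could look like:

    from charmhelpers.contrib.network.ip import get_relation_ip
    from charmhelpers.core.hookenv import config

    # Prefer the 'migration' binding; fall back to a (hypothetical)
    # CIDR config option when one is set.
    migration_ip = get_relation_ip(
        'migration', cidr_network=config('migration-network'))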
Change-Id: I6e20cd0b03f564ee9c110cf58fb0466f6a1f6c82
Closes-Bug: 1874235
The current implementation's use of a specific interface to build
the FQDN from has the undesired side effect of the ``nova-compute``
and ``neutron-openvswitch`` charms ending up using different
hostnames in some situations. It may also lead to the use of an
identifier that is mutable throughout the lifetime of a deployment.
Use of a specific interface was chosen due to ``socket.getfqdn()``
not giving reliable results (https://bugs.python.org/issue5004).
This patch gets the FQDN by mimicking the behaviour of a call to
``hostname -f``, with fallback to the shortname on failure.
Add relevant update from c-h.
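A minimal sketch of the approach (not the exact charm-helpers
code): resolve the shortname with AI_CANONNAME and fall back to
the shortname when resolution fails:

    import socket

    def get_fqdn():
        shortname = socket.gethostname()
        try:
            # Mimic `hostname -f`: ask the resolver for the
            # canonical name of the shortname.
            addrinfo = socket.getaddrinfo(
                shortname, None, 0, socket.SOCK_DGRAM, 0,
                socket.AI_CANONNAME)
            return addrinfo[0][3] or shortname
        except (socket.gaierror, IndexError):
            return shortname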
Needed-By: Ic8f8742261b773484687985aa0a366391cd2737a
Change-Id: I82db81937e5a46dc6bd222b7160ca1fa5b190c10
Closes-Bug: #1839300
If a new version of the charm is used to install a version of
OpenStack prior to Stein, do not enable the FQDN registration.
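A hedged sketch of the guard, assuming the standard charm-helpers
release comparison helpers:

    from charmhelpers.contrib.openstack.utils import (
        CompareOpenStackReleases,
        os_release,
    )

    # Only enable FQDN registration from Stein onwards
    # (sketch only, not the charm's exact code).
    use_fqdn = CompareOpenStackReleases(
        os_release('nova-common')) >= 'stein'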
Change-Id: Ib86d6a48ee34c9efc42b4455ac669ef0cb8dc3bb
Closes-Bug: #1846781
Function 'is_broker_action_done' should return False when it
finds a response from the ceph broker not marked done, in order
to trigger a nova restart. However, it also returns False if
there is no response data from the ceph broker, triggering an
unnecessary restart.
The function 'ceph_changed' is invoked under different remote
unit contexts when there are updates to the relation. When
querying the broker response, only the context of the remote
unit that is the broker can see the response, unless
specifically queried for that given unit.
The 'ceph_changed' invocations under a remote context that is
not the broker's end up returning False from
'is_broker_action_done', causing restarts even after the action
is already marked done. This also happens on 'config-changed'
hooks.
To fix this problem, the logic is changed to have each
'ceph_changed' invocation loop through the units and process
the broker response, regardless of the remote context.
This is an initial change to address the issue locally
in nova-compute charm. A later change will be worked on
to move the new helper methods to charmhelpers,
refactoring the existing ones there.
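A minimal sketch of the new approach (the helper name is
illustrative): look for a broker response across all related
units instead of relying on the current remote context:

    from charmhelpers.core.hookenv import (
        related_units,
        relation_get,
        relation_ids,
    )

    def get_broker_rsp():
        # Scan every unit on every ceph relation for a broker
        # response, regardless of the remote context.
        for rid in relation_ids('ceph'):
            for unit in related_units(rid):
                rsp = relation_get(attribute='broker_rsp',
                                   rid=rid, unit=unit)
                if rsp:
                    return rsp
        return None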
Change-Id: I2b41f8b252f4ccb68830e90c5e68456e15372bcf
Closes-bug: #1835045
This patch reverts the non-charmhelpers changes from
the patch landed for bug 1835045, which is causing a
regression whereby new deployments that relate to
ceph-mon are prevented from sending broker requests
to, e.g., create the pool needed by
libvirt-image-backend=rbd.
Change-Id: I29421ce240a3810d945b76e662a743b4b8497ac8
Related-Bug: 1835045
Closes-Bug: 1839297
Nova-compute service restarts make sense only on relation
config updates received from the ceph-mon leader.
Also sync charm-helpers, as PR #347 is used as
part of the fix.
Change-Id: I406f369b1e376db82b8683fd48fbe4de106da16f
Closes-bug: #1835045
Currently, the charm does not verify whether iscsid is installed
or running. This patch adds those verifications in the install
and update hooks.
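A hedged sketch of the check (package and service names as on
Ubuntu; the exact hook wiring is omitted):

    from charmhelpers.core.host import service_running, service_start
    from charmhelpers.fetch import apt_install, filter_installed_packages

    # Ensure the iSCSI initiator is installed and iscsid running.
    apt_install(filter_installed_packages(['open-iscsi']), fatal=True)
    if not service_running('iscsid'):
        service_start('iscsid')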
Closes-bug: #1816435
Change-Id: I23992832a82557f406999427fe8d151f6a2b63af
Take advantage of a new feature in charmhelpers to ignore missing keys
when running sysctl. This addresses the issue where we want to include
conntrack settings for nova-compute with a kvm driver, but don't want
to include those settings for lxd. And it addresses it in a generic
way to handle similar situations going forward.
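A sketch of the call, assuming the charm-helpers sysctl helper
with its (then new) ignore flag; the file path and value are
illustrative:

    from charmhelpers.core.sysctl import create as sysctl_create

    # ignore=True skips keys the running kernel does not expose
    # (e.g. conntrack settings inside an LXD container).
    sysctl_create(
        "{net.nf_conntrack_max: 1000000}",
        "/etc/sysctl.d/50-nova-compute.conf",
        ignore=True)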
Change-Id: I29c36c11345b8876c4c39651c124a64860dd35db
Closes-Bug: 1820635
Some testing use cases make use of containers to host nova-compute;
sysctl options can't be tweaked in this use case so detect and
skip if executing in a container.
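As a sketch, using the charm-helpers container detection helper
(the apply_sysctl_settings function is hypothetical):

    from charmhelpers.core.host import is_container

    # Skip kernel tunables entirely when running in a container.
    if not is_container():
        apply_sysctl_settings()  # hypothetical helper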
Change-Id: I9ff894e728e46b229068e91a290b84cde73eb09c
When clouds have a large number of hosts, the default size of the ARP
cache is too small. The cache can overflow, which means that the
system has no way to reach some IP addresses.
Setting the threshold limits higher addresses the situation, in a
reasonably safe way (the maximum impact is 5MB or so of additional RAM
used). Docs on ARP at http://man7.org/linux/man-pages/man7/arp.7.html,
and more discussion of the issue in the bug.
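Applied via the charm-helpers sysctl helper, the raised thresholds
might look as follows (values here are illustrative; the charm's
actual defaults may differ):

    from charmhelpers.core.sysctl import create as sysctl_create

    # Raise ARP cache thresholds for large clouds.
    sysctl_create(
        "{net.ipv4.neigh.default.gc_thresh1: 128, "
        "net.ipv4.neigh.default.gc_thresh2: 28672, "
        "net.ipv4.neigh.default.gc_thresh3: 32768}",
        "/etc/sysctl.d/50-nova-compute.conf")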
Change-Id: Iaf8382ee0b42e1444cfea589bb05a687cd0c23fa
Closes-Bug: 1780348
This change ensures that the multipath dependencies are installed
on the compute node when the use-multipath config flag is enabled.
Change-Id: I39b017398b95f5901d9bc57ffa0c59ff59f3a359
Closes-Bug: #1806830
On charm upgrade the charm may switch to py3 packages. If so, ensure
the old py2 packages are purged. If the purge occurs then restart
services.
Change-Id: I17abef16afbb8c62dae1b725c74c39cec414f4f8
Closes-Bug: 1803451
As well as destroying the network we should also undefine it
to ensure it does not return.
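A minimal sketch of the combined cleanup:

    import subprocess

    # Stop the default libvirt network now and undefine it so
    # that it does not come back on reboot.
    subprocess.check_call(['virsh', 'net-destroy', 'default'])
    subprocess.check_call(['virsh', 'net-undefine', 'default'])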
Change-Id: I57738856adb25c190357b1b23dd8a1245798cb14
Closes-Bug: #1800160
The change adds an option to the charm to use the JUJU_AVAILABILITY_ZONE
environment variable set by Juju for the hook environment, based on the
underlying provider's availability zone information for a given machine.
This information is used to configure default_availability_zone for nova
and availability_zone for subordinate networking charms.
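A hedged sketch of how the hook environment variable might be
consumed (the 'nova' fallback shown is only the usual OpenStack
default zone name, not necessarily the charm's behaviour):

    import os

    # JUJU_AVAILABILITY_ZONE is set by Juju in the hook
    # environment from the provider's AZ for this machine.
    az = os.environ.get('JUJU_AVAILABILITY_ZONE') or 'nova'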
Change-Id: Idc7112e7fe7b76d15cf9c4896b702b8ffd8c0e8e
Closes-Bug: #1796068
In a cells deployment the credentials for the nova-compute
application will no longer be available via the
nova-cloud-controller in the local cell. This change adds the
scaffolding for a cell to utilise a new cloud-credentials relation
to allow it to retrieve credentials directly from keystone.
Change-Id: I9d1a7353d730f7cb8e93cc9eea5b788f7c956c3d
Since Icehouse, nova-compute does not need access to a database; all
accesses are made through nova-conductor.
This patch removes the shared-db relation and its hook handlers.
Change-Id: I7c4f6a70785d7dad1727d52cf86508209849ca35
Closes-Bug: 1713807
When using ceph as a backend, request the additional privilege
class-read on rbd_children. This fixes bug 1696073.
Change-Id: I468cfb5026751b96feba013b4e6ae74ff8da38ca
Closes-Bug: #1696073
Add support for encryption of the underlying block device providing
storage for local instances.
This commit introduces a new juju storage binding and configuration
option to provide a single block device for use for local instance
storage; this block device is formatted and mounted at
/var/lib/nova/instances. In a MAAS deployment, this could be a
bcache fronted device.
The configuration option is preferred over the Juju storage binding
if both are supplied.
This block device can optionally be encrypted using dm-crypt/LUKS
with encryption keys stored in Hashicorp Vault using vaultlocker.
vaultlocker ensures that keys are never persisted to local storage,
providing assurance around security of data at rest in the event
that disks/server are stolen.
Charm support is implemented using a new configuration option,
'encrypt', which, when set, enforces a mandatory relationship to
an instance of the vault application.
Copy the 'ephemeral-unmount' config option and associated code from
the ceph-osd and swift-storage charms to enable testing in cloudy
environments.
Change-Id: I772baa61f45ff430f706ec4864f3018488026148
Drop support for deployment from Git repositories, as deprecated
in the 17.02 charm release. This feature is unmaintained and has
no known users.
Change-Id: I44a7a92d5d4ae493bab4d5b81e9757cb12149a66
Remove PostgreSQL DB support; this feature is untested as part
of the charms, is not in use, and was deprecated as part of
the 17.08 charms release.
Change-Id: I327a686a35edf9b6ff5c1c3cbc4165f8faeef688
Support for the ZeroMQ messaging driver has bit-rotted over the
last few years across the OpenStack charms; drop support for ZMQ
in line with deprecation notices issued in the 17.02 charm release.
Change-Id: I66330dd29972e39e45e9bf81cb0570d5749b8312
Previously if enable-live-migration was true and migration-auth-type
was anything other than "ssh" then migration setup would be invalid.
The change puts the charm in a blocked state to make it clear that
the migration settings are not valid.
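A sketch of surfacing the problem via workload status (the message
wording is illustrative):

    from charmhelpers.core.hookenv import status_set

    # Block rather than silently producing an invalid setup.
    status_set('blocked',
               'Invalid migration config: migration-auth-type must '
               'be "ssh" when enable-live-migration is true')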
Change-Id: I796b54e9a08e8eab5c2b316a2aff0b29ee7e6bd9
Closes-Bug: #1431685
Once a ceph broker request has completed and the
nova-compute service has been restarted as a result,
ensure that this does not get retriggered when the
ceph relation fires as a result of a peer unit's data
coming onto the wire and no new information is provided
to the local unit.
Change-Id: Ie359a0ec9af7edfb9d453dcf4dbd9880af324d37
Closes-Bug: 1694963
The nova-lxd driver is growing support for ceph storage backends,
but as this is a host based integration, it relies on keyrings
being placed in /etc/ceph for cephx authentication, rather than
using some other secret storage mechanism as done for libvirt.
Tweak the ceph-access-relation-changed handling to deal with this
and ensure that the ceph-common package is installed when relations
of this type are joined.
Change-Id: I887e0be007c5606614e00c6a374416368d675c4d
As of Ocata, the ceph key used to access a specific Cinder
Ceph backend must match the name of the key used by cinder,
with an appropriate secret configured for libvirt use with
the cephx key used by the cinder-ceph charm.
Add support for the new ceph-access relation to allow
nova-compute units to communicate with multiple ceph
backends using different cephx keys and user names.
The side effect of this change is that nova-compute will
have a key for use with its own ephemeral backend ceph
access, and a key for each cinder ceph backend configured
in the deployment.
Change-Id: I638473fc46c99a8bfe301f9a0c844de9efd47a2a
Closes-Bug: 1671422
When nova-compute is deployed with multiple network interfaces
configured, it may be necessary to specify which interface to use for
the cloud-compute relation.
Do not use unit_get('private-address') which gives unpredictable
results. Instead, leverage network-get for network spaces aware address
selection.
Charm helpers sync to pull in generalized get_relation_ip() used in
other charms which checks network-get after checking for all of the edge
cases including IPv6 and config overrides. Use get_relation_ip() for
address selection.
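A sketch of the call, using the relation name as it appears in this
charm:

    from charmhelpers.contrib.network.ip import get_relation_ip

    # Space-aware address selection for the cloud-compute
    # relation; honours network-get, IPv6 and config overrides.
    addr = get_relation_ip('cloud-compute')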
This allows for specifying a network space in a bundle when using MAAS
2.x. This guarantees the charm will select the correct interface and IP
for the network space.
nova-compute:
bindings:
cloud-compute: internal-space
Closes-bug: #1670866
Change-Id: Ib43a7a52acf5a07b68dea808082da0ba6eb237c1
Sync charmhelpers and add configuration option to allow access
to ceph pools to be limited based on grouping.
Nova will require access to volumes, images and vms pool groups.
Change-Id: I1c188d983609577ab34f7aef7854954c104b58bd
Partial-Bug: 1424771
Lowering the value of vm.swappiness to 1 (minimum amount of
swapping without disabling it entirely) should reduce the
guest memory latency.
The default of 1 in the charm can be changed by the user by
explicitly setting vm.swappiness in the sysctl charm setting.
Change-Id: If9d3a9a0d15a84b86cfa7ba3620b66e7ea61d414
Closes-Bug: 1660248
Neutron subordinate charms that manage underlying PCI devices, such
as SR-IOV VFs, need to ensure that the nova-compute service is
restarted after the underlying device configuration has been
completed, so that the nova pci_devices database is correctly
populated as part of the nova-compute startup process.
Add a service_restart_handler to the neutron-plugin-relation-changed
hook to allow subordinate charms to request restarts using the
standard restart-nonce trigger mechanism.
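A hedged sketch of the trigger mechanism (the storage key and
handler name are illustrative):

    from charmhelpers.core.hookenv import relation_get
    from charmhelpers.core.host import service_restart
    from charmhelpers.core.unitdata import kv

    def service_restart_handler():
        # Restart nova-compute once per unique nonce set by the
        # subordinate on the neutron-plugin relation.
        nonce = relation_get('restart-nonce')
        db = kv()
        if nonce and db.get('restart-nonce') != nonce:
            service_restart('nova-compute')
            db.set('restart-nonce', nonce)
            db.flush()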
Change-Id: I2a4fcaa0988dae0b85904ff84ff5e492d651a043
I've added support for the 'default_availability_zone' parameter: a
new charm config option, modified 'nova.conf' templates, and exposure
of the value via the 'neutron-plugin' relation settings.
Change-Id: I85008ac0f3540a2b5c817893d63e497b63f43043
Closes-Bug: 1595937
The qemu-kvm service should not be configured as monitored in Nagios
when the NRPE relation is set, since it's a one-shot service.
Closes-Bug: #1645822
Change-Id: I20b4eeb7971bae69f29183814e8c61a977e80bf0
The method assertEquals has been deprecated since Python 2.7 (see
http://docs.python.org/2/library/unittest.html#deprecated-aliases).
In Python 3 a deprecation warning is raised when using assertEquals,
so we should use assertEqual instead.
Change-Id: I037d7f19ccfbb8a9dbf8c013f2cef30688dd7b06
Closes-Bug: #1218185
Yakkety switches:
a) the name of the libvirt daemon from libvirt-bin -> libvirtd
b) the default libvirt user from libvirtd -> libvirt
Update contexts, restarts, templates and enable yakkety test
to deal with this change.
Resync charm-helpers to support enablement of yakkety amulet test.
Change-Id: I58eb3a5da53d4a12390968c835c0ff408a42d1b5
Provide the weight option to the Ceph broker request API for requesting
the creation of a new Ceph storage pool. The weight is used to indicate
the percentage of the data that the pool is expected to consume. Each
environment may have slightly different needs based on the type of
workload so a config option labelled ceph-pool-weight is provided to
allow the operator to tune this value.
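A sketch of the broker request with the weight applied (the pool
name and percentage are illustrative):

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq = CephBrokerRq()
    # weight: expected percentage of Ceph data this pool will
    # consume, taken from the ceph-pool-weight config option.
    rq.add_op_create_pool(name='nova', replica_count=3, weight=28)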
Closes-Bug: #1492742
Change-Id: Ia9aba8c4dee7a94c36c282273a356d9c13df7f75
- Fixes to nova-network and nova-api service restarts
- Charm-helpers sync to bring in the AppArmorContext class
- Create specific service ApiAppArmorContexts
- Add service-specific templates for apparmor profiles
- Add aa-profile-mode in config.yaml
- Apply the apparmor profile as requested: disable, enforce, complain
- Add aa-profile-mode change test to amulet
- Charm-helpers sync to pull in AA Profile context changes
Change-Id: I18aff4bfe131010521ea9ff544c6bf76f888afa6
libvirt-bin installs a 192.168.122.0/24 default network and creates
MASQUERADE rules for it on boot. These rules will affect and break
instance traffic, including GRE tenant networks.
Check if the network exists and then destroy it with virsh net-destroy
which both immediately removes the MASQUERADE rules and the network so
it is not applied after reboot.
Change-Id: Ia79aea6ef889d1ef58f903f967bea37dc07fd160
Closes-Bug: #1387390
All contributors to this charm have agreed to the switch
from GPL v3 to Apache 2.0; switch to Apache-2.0 license
as agreed so we can move forward with official project status.
Change-Id: I385f684581a74ae41b507ed9e9d17ef1e9fc5819
The charm code as it stood before enabled the metadata service
correctly when the neutron-plugin relation included
'metadata-shared-secret', but not when it included 'enable-metadata'
without 'metadata-shared-secret'.
(See https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1515570
for when 'enable-metadata' is used in this way.)
This change commonizes the code that determines when nova-api-metadata
should run, and uses this both to install the package and to add the
resource map dependency from nova.conf to the nova-api-metadata service.
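A minimal sketch of the commonized check (the function name is
illustrative, not necessarily the charm's):

    def nova_metadata_requirement(relation_data):
        # nova-api-metadata is needed when the subordinate either
        # hands us a shared secret or explicitly enables metadata.
        return ('metadata-shared-secret' in relation_data or
                bool(relation_data.get('enable-metadata')))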
Change-Id: I3354051b454621bc423ec6b2853d61301e58658c