Recent test run(s) have shown memory exhaustion on the nova
cloud controller units. This exhibits itself as the controller
dropping messages from the compute nodes and logging messages
like the following to the nova-conductor log:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 268, in _perform_action_on_threads
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 342, in <lambda>
    lambda x: x.wait(),
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 61, in wait
    return self.thread.wait()
  File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 180, in wait
    return self._exit_event.wait()
  File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait
    result = hub.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 298, in switch
    return self.greenlet.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 350, in run
    self.wait(sleep_time)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 80, in wait
    presult = self.do_poll(seconds)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/epolls.py", line 31, in do_poll
    return self.poll.poll(seconds)
MemoryError
It seems very likely this issue is specific to Bionic Stein, so it
may be a little wasteful to have increased the memory allocation
for all the bundles, but I think consistency between the bundles is
more important.
Change-Id: I4c693af04b291ebaa847f32c9169680228e22867
Co-authored-by: Liam Young <liam.young@canonical.com>
On arm64/aarch64 in particular, the default value usually limits
the number of volume attachments to two. Attaching more than two
volumes then fails with "No more available PCI slots". There is
no one-size-fits-all value here, so let operators override the
default value.
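A minimal sketch of the intent, assuming a hypothetical charm option
named 'num-pcie-ports' that is rendered into nova.conf as
[libvirt]/num_pcie_ports; the actual option name and template
plumbing in the charm may differ:

    def libvirt_context(charm_config):
        # Pass an operator-supplied override through to nova.conf as
        #   [libvirt]
        #   num_pcie_ports = <value>
        # Leaving the option unset keeps the nova/libvirt default.
        ctxt = {}
        num_ports = charm_config.get('num-pcie-ports')  # hypothetical option
        if num_ports:
            ctxt['num_pcie_ports'] = int(num_ports)
        return ctxt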
Closes-Bug: #1944214
Change-Id: I9b9565873cbaeb575704b94a25d0a8556ab96292
Fix use of ephemeral-device with instances-path to ensure that
the configured block device is mounted in the desired location.
Ensure the instances-path directory actually exists.
Change-Id: I81725f602ba3086bc142d59104e4bfc80918d8cf
Closes-Bug: 1909141
If an ephemeral-device storage configuration has been provided,
ensure that the nova-compute service will not start until the
mountpoint (currently /var/lib/nova/instances) has actually
been mounted. If the mount is not present, the nova-compute
service will refuse to start, acting as a failsafe.
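A minimal sketch of the failsafe described above, assuming the
default instances path; the charm's actual mechanism may differ
(for example a systemd mount dependency):

    import os
    import sys

    INSTANCES_PATH = '/var/lib/nova/instances'

    def assert_instances_path_mounted():
        # Refuse to continue (and therefore to start nova-compute)
        # unless the configured ephemeral device is mounted at the
        # expected location.
        if not os.path.ismount(INSTANCES_PATH):
            sys.exit('{} is not mounted; refusing to start '
                     'nova-compute'.format(INSTANCES_PATH))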
Change-Id: Ic16691e119e430faec9994f6e207596629e47bb6
Closes-Bug: 1863358
The README of this important charm is in dire
need of an overhaul.
Depends-On: I1dafd2a87c4195e9d52f3f4840a5efa423c895d0
Change-Id: I29538070d9370b3e7bafd5ff13ad8a476652b643
When creating an instance with UEFI support, nova looks for various
firmware files that are typically provided by the ovmf package. The
package is installed automatically from Train onwards but is missing
on Queens. Installing it from the charm solves this for any
deployment.
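A sketch of the idea using charm-helpers' apt_install; the exact
hook and package handling in the charm may differ:

    from charmhelpers.fetch import apt_install

    def install_uefi_firmware():
        # Ensure UEFI firmware files are present even on Queens,
        # where they are not pulled in automatically.
        apt_install(['ovmf'], fatal=True)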
Change-Id: I7665a596112d673624ea5edd7438b915834918df
The NFV section was previously copied to the CDG [1].
This PR replaces it with a summary and a link to the
new location.
[1]: I220ad1a178a253ca04e1667e8c5727f8a54aa5cd
Change-Id: I66987fbea9d9fa4fbc971f621726b5eca0e53024
Be specific about what action 'security-checklist'
actually does.
The same wording will be used for the other ten charms that
support this action.
Change-Id: I51916eed45c526991685e95b667804e2a490b5a9
Remove references to the charm's README for two
actions:
* register-to-cloud
* remove-from-cloud
Not only is this non-standard but, in particular, the
referenced information will be moved to the CDG.
Drive-by: Improve the remaining definitions
Depends-On: I03493b30956eddcb77bd714360806aa53c126942
Change-Id: I80c212a223637e2b76e421085fef1b466f548c30
The lxd hypervisor is no longer supported.
I'm not sure whether the other types should be
listed, even for testing.
Improve general wording.
Related-Bug: #1931579
Change-Id: I8510382c256dbe731b810f09f25408f7fa1d7b15
Allow access to main multipath configuration file from the
nova-compute daemon.
Change-Id: Ibaa5f45b7fd72fcc936986286939e1285bcdb945
Closes-Bug: 1906727
Nova supports setting allocation ratios at the nova-compute level from
Liberty onwards. Prior to this allocation ratios were set at the
nova-scheduler level.
Newton introduced the Placement API, and Ocata introduced the ability to
have compute resources (Core/RAM/Disk) precomputed before passing
candidates to the FilterScheduler [0]. Pike removed CoreFilter,
RAMFilter and DiskFilter scheduler filters.
From Pike onwards, the valid methods for setting these allocation ratios are:
- A call to the Placement API [1].
- Config values supplied to nova-compute (xxx_allocation_ratio).
Stein introduced initial_xxx_allocation_ratio in response to the runtime
behaviour of the ResourceTracker [2].
Currently, the precedence of resource ratio values is:
xxx_allocation_ratio > Placement API call > initial_xxx_allocation_ratio
That is, a (compute) resource provider's allocation ratios default
to initial_xxx_allocation_ratio, which may be overridden at run time by a
call to the Placement API. If xxx_allocation_ratio is set it
overrides all other configuration for that provider.
When not otherwise configured, we set initial_xxx_allocation_ratio to
the values provided by ncc to maintain backwards compatibility. Where
initial_xxx_allocation_ratio is not available we set
xxx_allocation_ratio.
[0] https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html
[1] https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
[2] https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/initial-allocation-ratios.html
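An illustrative sketch of the selection logic described above (not
the charm's actual code): prefer initial_xxx_allocation_ratio on
releases that support it, otherwise fall back to
xxx_allocation_ratio:

    def allocation_ratio_option(resource, ncc_ratio, supports_initial):
        """Pick the nova.conf option used to carry the ratio from ncc.

        resource: one of 'cpu', 'ram', 'disk'
        ncc_ratio: ratio received from nova-cloud-controller
        supports_initial: True on Stein or later
        """
        if supports_initial:
            # Acts only as a default; operators can still override a
            # provider via the Placement API or xxx_allocation_ratio.
            return {'initial_{}_allocation_ratio'.format(resource): ncc_ratio}
        # Older releases: hard override, takes precedence over Placement.
        return {'{}_allocation_ratio'.format(resource): ncc_ratio}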
Change-Id: Ifa314e9e23e0ae5d16113cd91a7507e61f9de704
Closes-Bug: #1677223
Remove the section on scaling down as it has been documented
as a cloud operation in the CDG.
Apply the README template to the bottom of the README.
Depends-On: I03493b30956eddcb77bd714360806aa53c126942
Change-Id: Iab6f0bdce05ee5115b0cdbb517cba79aa87dabf0
These are the test bundles (and any associated changes) for
focal-wallaby and hirsute-wallaby support.
Libraries sync.
The hirsute-wallaby test is disabled (moved to dev) due to [1], as the
bundle may reference a reactive charm.
[1] https://github.com/juju-solutions/layer-basic/issues/194
Change-Id: I238e8a36b033594c67ffcefa325998f2eba2a659
This patchset updates all the requirements for charms.openstack,
charm-helpers, charms.ceph, zaza and zaza-openstack-tests back
to the master branch.
Change-Id: I453eafd76c005cd5f10041c08e4a943fe235d474
Commit 9f4369d9 added a feature to set the availability zone of
the nova-compute unit on the cloud-compute relation. This uses the
value of the JUJU_AVAILABILITY_ZONE environment variable, which is
not consistent with how the nova-compute service sets its availability
zone.
Use the nova_compute_utils.get_availability_zone() method instead.
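A sketch of the change in behaviour (only runnable inside the charm
tree, where nova_compute_utils is importable):

    import nova_compute_utils

    def availability_zone_for_relation():
        # Previously the value of os.environ['JUJU_AVAILABILITY_ZONE']
        # was used, which can diverge from the AZ that nova-compute
        # itself is configured with.
        return nova_compute_utils.get_availability_zone()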
Closes-Bug: #1925412
Change-Id: Ie68ecd808a60baf0d5bfe526f4355ce3c7ae5c77
The cross_az_attach property needs to be configured on the compute
nodes. The policy is set on the ncc service and is propagated to the
compute nodes on the cloud-compute relation. Update the relevant cinder
config setting based on the value provided.
Note, the default value for cross_az_attach is aligned with the nova
default of True.
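Illustrative sketch only: one way the value received on the
cloud-compute relation could feed the [cinder] section of nova.conf
(the relation key name is an assumption):

    def cinder_context(relation_data):
        # Default matches nova's own default of True when the
        # relation does not provide a value.
        cross_az = relation_data.get('cross-az-attach', True)  # assumed key
        # Rendered into nova.conf as:
        #   [cinder]
        #   cross_az_attach = <True|False>
        return {'cross_az_attach': cross_az}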
Closes-Bug: #1899084
Change-Id: I7d00b50acbfe05dfd943a3511126b507fc570aeb
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms. This sync is to
add just that key. See also [1]
Note that this sync is only for classic charms.
[1] https://github.com/juju/charm-helpers/pull/598
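The shape of the change pulled in by the sync (tuple abbreviated;
see charmhelpers/core/host_factory/ubuntu.py for the full list):

    UBUNTU_RELEASES = (
        # ... earlier releases elided ...
        'focal',
        'groovy',
        'hirsute',  # previously missing; needed for hirsute support
    )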
Change-Id: I60b208bbf5a04a9ab598b76ff0cf7f8baf216cbb
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Change-Id: I14762601bb124cfb03bd3f427fa4b1243ed2377b
The list contains only nodes registered to the same nova-cloud-controller
as the nova-compute service running on the targeted unit.
Closes-Bug: #1911013
Change-Id: I28d1a9bd18b3a87fc31ff4bca5bfe58449cdae57