With arm64/aarch64 in particular, the default value usually limits the
number of volume attachments to two, and attaching more than two
volumes fails with "No more available PCI slots". There is no
one-size-fits-all value here, so let operators override the default.
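For reference, a hedged sketch of the underlying Nova option this maps
onto, assuming the operator-facing value feeds libvirt's num_pcie_ports
(the exact charm config key is not shown here; the value is
illustrative):

    [libvirt]
    # Number of PCIe ports reserved at boot. More ports allow more
    # hot-plugged devices, such as volumes, on q35/aarch64 machine types.
    num_pcie_ports = 16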
Closes-Bug: #1944214
Change-Id: I9b9565873cbaeb575704b94a25d0a8556ab96292
When creating an instance with UEFI support, Nova looks for various
firmware files that are typically provided by the ovmf package. The
package is installed automatically from Train onwards but is missing on
Queens. Installing it explicitly solves this for any deployment.
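For illustration, the equivalent manual workaround on an affected
Queens compute node would be along these lines (package and path as
shipped by Ubuntu):

    # Install the UEFI firmware images used by libvirt/QEMU
    sudo apt-get install -y ovmf
    # The firmware images then live under /usr/share/OVMF/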
Change-Id: I7665a596112d673624ea5edd7438b915834918df
The NFV section was previously copied to the CDG [1].
This PR replaces it with a summary and a link to the
new location.
[1]: I220ad1a178a253ca04e1667e8c5727f8a54aa5cd
Change-Id: I66987fbea9d9fa4fbc971f621726b5eca0e53024
Be specific about what the 'security-checklist' action actually does.
This wording will be reused for the other ten charms that support this
action.
Change-Id: I51916eed45c526991685e95b667804e2a490b5a9
Remove references to the charm's README for two
actions:
* register-to-cloud
* remove-from-cloud
Not only is this non-standard but, in particular, the referenced
information will be moved to the CDG.
Drive-by: Improve the remaining definitions
Depends-On: I03493b30956eddcb77bd714360806aa53c126942
Change-Id: I80c212a223637e2b76e421085fef1b466f548c30
The lxd hypervisor is no longer supported.
I'm not sure whether the other types should be there, even for testing.
Improve general wording.
Related-Bug: #1931579
Change-Id: I8510382c256dbe731b810f09f25408f7fa1d7b15
Allow access to main multipath configuration file from the
nova-compute daemon.
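A sketch of the kind of rule involved, assuming this is implemented via
the charm-managed AppArmor profile for nova-compute (shown for
illustration only):

    # Allow nova-compute to read the main multipath configuration
    /etc/multipath.conf r,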
Change-Id: Ibaa5f45b7fd72fcc936986286939e1285bcdb945
Closes-Bug: #1906727
Nova supports setting allocation ratios at the nova-compute level from
Liberty onwards. Prior to this, allocation ratios were set at the
nova-scheduler level.
Newton introduced the Placement API, and Ocata introduced the ability to
have compute resources (Core/RAM/Disk) precomputed before passing
candidates to the FilterScheduler [0]. Pike removed CoreFilter,
RAMFilter and DiskFilter scheduler filters.
From Pike onwards, the valid methods for setting these allocation
ratios are:
- A call to the Placement API [1].
- Config values supplied to nova-compute (xxx_allocation_ratio).
Stein introduced initial_xxx_allocation_ratio in response to the runtime
behaviour of the ResourceTracker [2].
Currently, the precedence of allocation ratio values is:
xxx_allocation_ratio > Placement API call > initial_xxx_allocation_ratio
That is, a (compute) resource provider's allocation ratios will default
to initial_xxx_allocation_ratio, which may be overridden at run time by
a call to the Placement API. If xxx_allocation_ratio is set, it will
override all configurations for that provider.
When not otherwise configured, we set initial_xxx_allocation_ratio to
the values provided by ncc to maintain backwards compatibility. Where
initial_xxx_allocation_ratio is not available, we set
xxx_allocation_ratio instead.
[0] https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html
[1] https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
[2] https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/initial-allocation-ratios.html
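To make the precedence concrete, here is a hedged nova.conf sketch
using the CPU ratio as an example (values are illustrative):

    [DEFAULT]
    # Seeds a new resource provider's inventory; a later Placement API
    # call can override it at run time.
    initial_cpu_allocation_ratio = 16.0
    # If uncommented, this wins over both the initial value and any
    # Placement API override for this compute node.
    #cpu_allocation_ratio = 4.0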
Change-Id: Ifa314e9e23e0ae5d16113cd91a7507e61f9de704
Closes-Bug: #1677223
Remove the section on scaling down, as it is now documented as a cloud
operation in the CDG.
Apply the README template to the bottom of the README.
Depends-On: I03493b30956eddcb77bd714360806aa53c126942
Change-Id: Iab6f0bdce05ee5115b0cdbb517cba79aa87dabf0
These are the test bundles (and any associated changes) for
focal-wallaby and hirsute-wallaby support.
Libraries sync.
The hirsute-wallaby test is disabled (moved to dev) due to [1], as the
bundle may reference a reactive charm.
[1] https://github.com/juju-solutions/layer-basic/issues/194
Change-Id: I238e8a36b033594c67ffcefa325998f2eba2a659
This patchset updates all the requirements for charms.openstack,
charm-helpers, charms.ceph, zaza and zaza-openstack-tests back
to master branch.
Change-Id: I453eafd76c005cd5f10041c08e4a943fe235d474
Commit 9f4369d9 added a feature to set the availability zone of
the nova-compute unit on the cloud-compute relation. This uses the
value of the JUJU_AVAILABILITY_ZONE environment variable, which is
not consistent with how the nova-compute service sets its availability
zone.
Use the nova_compute_utils.get_availability_zone() method instead.
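A minimal sketch of the fix; the relation-setting wrapper here is
hypothetical, and only nova_compute_utils.get_availability_zone() is
taken from the change itself:

    import nova_compute_utils
    from charmhelpers.core import hookenv

    def set_relation_az():
        # Previously: az = os.environ.get('JUJU_AVAILABILITY_ZONE')
        # Use the same source of truth as the nova-compute service.
        az = nova_compute_utils.get_availability_zone()
        hookenv.relation_set(availability_zone=az)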
Closes-Bug: #1925412
Change-Id: Ie68ecd808a60baf0d5bfe526f4355ce3c7ae5c77
The cross_az_attach property needs to be configured on the compute
nodes. The policy is set on the ncc service and is propagated to the
compute nodes on the cloud-compute relation. Update the relevant cinder
config setting based on the value provided.
Note that the default value for cross_az_attach is aligned with the
nova default of True.
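The resulting compute-node setting is the standard nova.conf option,
for example:

    [cinder]
    # Allow attaching a volume in one AZ to an instance in another.
    # Matches the nova default when ncc provides no explicit value.
    cross_az_attach = True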
Closes-Bug: #1899084
Change-Id: I7d00b50acbfe05dfd943a3511126b507fc570aeb
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms. This sync is to
add just that key. See also [1]
Note that this sync is only for classic charms.
[1] https://github.com/juju/charm-helpers/pull/598
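For context, UBUNTU_RELEASES in charm-helpers is an ordered tuple of
series names; the sync adds the missing entry (abbreviated sketch, not
the full tuple):

    UBUNTU_RELEASES = (
        ...,          # earlier series elided
        'focal',
        'groovy',
        'hirsute',    # the key this sync adds
    )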
Change-Id: I60b208bbf5a04a9ab598b76ff0cf7f8baf216cbb
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Change-Id: I14762601bb124cfb03bd3f427fa4b1243ed2377b
The list contains only nodes registered to the same
nova-cloud-controller as the nova-compute service running on the
targeted unit.
Closes-Bug: #1911013
Change-Id: I28d1a9bd18b3a87fc31ff4bca5bfe58449cdae57
List of added actions:
* disable
* enable
* remove-from-cloud
* register-to-cloud
A more detailed explanation of the process has been added to the
README.md.
Closes-Bug: #1691998
Change-Id: I45d1def2ca0b1289f6fcce06c5f8949ef2a4a69e
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/470
Previously the cap was only applied to units in containers. However,
services on bare metal also require a sensible cap. Otherwise
nova-compute, for example, will run 256 workers for nova-api-metadata
out of the box, which is overkill on the following system:
32 cores * 2 threads/core * 2 sockets * 2 (default multiplier) = 256
Let's cap the number of workers at 4 by default, and let operators
override it with an explicit worker-multiplier config.
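A hedged sketch of the capping logic (names are illustrative, not the
actual charm-helpers code):

    import multiprocessing

    DEFAULT_CAP = 4  # applied when no explicit worker-multiplier is set

    def worker_count(multiplier=None):
        # e.g. 32 cores * 2 threads/core * 2 sockets = 128 logical CPUs
        cpus = multiprocessing.cpu_count()
        if multiplier is None:
            # A default multiplier of 2 would give 256 workers; cap at 4.
            return min(cpus * 2, DEFAULT_CAP)
        # An explicit worker-multiplier overrides the cap.
        return int(cpus * multiplier)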
Synced charm-helpers for
https://github.com/juju/charm-helpers/pull/553
Closes-Bug: #1843011
Change-Id: If98f12d7cf1a77fb267f1b55c44896a48a40909a
This update adds the new hirsute Ubuntu release (21.04) and removes
trusty support (14.04, which is EOL at 21.04).
Change-Id: I59840b672673aa4a8e253659300d9333c1b20a4b