Update the charmcraft.yaml file to use base and platforms, only
allowing noble support.
- Update config to default to caracal.
- Update osci.yaml to use the charmcraft 3.x/beta
- Add noble/oracular to charmhelpers
- Drop non-noble tests
Change-Id: I1b386e6b36c85621da7e89618b407fb551197a08
Since the OpenStack Train release, the 'cpu_models' config option has
superseded the 'cpu_model' config option in nova.conf.
This patch adds support for the new 'cpu_models' option, allowing a user
to provide a comma-separated list of supported, named CPU models.
This patch also includes a unit test for cpu_mode='custom'.
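For illustration, the resulting nova.conf fragment might look like the
following; the model names are hypothetical examples, not values taken
from this change:

```ini
[libvirt]
cpu_mode = custom
# Comma-separated, ordered list of named CPU models; Nova uses the
# first model in the list that is compatible with the host.
cpu_models = SandyBridge,IvyBridge,Haswell
```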
Closes-bug: #2025914
Change-Id: I30328abc07d3304f1bfb67c81360fb5229214c97
* sync charm-helpers to classic charms
* change openstack-origin/source default to zed
* align testing with zed
* add new zed bundles
* add zed bundles to tests.yaml
* add zed tests to osci.yaml and .zuul.yaml
* update build-on and run-on bases
* add bindep.txt for py310
* sync tox.ini and requirements.txt for ruamel
* use charmcraft_channel 2.0/stable
* drop reactive plugin overrides
* move interface/layer env vars to charmcraft.yaml
Change-Id: I506c53b4956024066bc769665525cb022438a0ae
- Add 22.04 to charmcraft.yaml
- Update metadata to include jammy
- Remove impish from metadata
- Update osci.yaml to include py3.10 default job
- Modify tox.ini to remove py35,py36,py37 tox target and add py310
target.
- ensure that the openstack-origin is yoga
Change-Id: I62d95763a6283d8dc7f54b65b0474d48b20608f0
Enable vTPM support in the nova-compute charm. This adds new packages to
be installed (swtpm and swtpm-tools) and updates the nova-compute.conf
and qemu.conf files to set the appropriate user/group for swtpm.
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/696
Change-Id: Idf0d19d75b9231f029fa6a7dc557d2a9ee04915b
Add an extra-repositories config option to nova-compute in order to
allow configuring additional apt repositories. This is useful when some
packages are not available in the distro or cloud archive.
Change-Id: Ie3b76ff3bc07b83e416c80fab1da2560d48df498
The upstream default timeout is 3 minutes (60 retries at a 3-second
interval). That works if an image is in raw format, leveraging Ceph's
copy-on-write, or if an image is small enough to be copied quickly.
However, some cases exceed the 3-minute deadline, such as a sufficiently
big image in Qcow2 or another format (e.g. Windows images), or a storage
backend without copy-on-write from Glance.
Let's bump the deadline to 15 minutes (300 retries at a 3-second
interval) to cover most of the cases out of the box, and let operators
tune it further by exposing those options.
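Assuming the options involved are Nova's block-device allocation
retries, the new defaults correspond to a nova.conf fragment like:

```ini
[DEFAULT]
# 300 retries at a 3-second interval = 15 minutes
block_device_allocate_retries = 300
block_device_allocate_retries_interval = 3
```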
Co-authored-by: Mark Maglana <mark.maglana@canonical.com>
Closes-Bug: 1758607
Change-Id: I6f6da8e90c6bbcd031ee183ae86d88eccd392230
Live migration post-copy was not working because, to be effective,
'live_migration_timeout_action' must be set to 'force_complete'.
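A minimal sketch of the relevant nova.conf settings, assuming post-copy
is enabled alongside the timeout action:

```ini
[libvirt]
live_migration_permit_post_copy = True
# Without force_complete, post-copy is never triggered when a
# migration exceeds its completion timeout.
live_migration_timeout_action = force_complete
```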
Closes-bug: #1950894
Change-Id: I66984a12b89cb0ac2aeebeb393a6f6c026d865da
The config is necessary to calculate the available disk space for VMs
with disk-allocation-ratio, which is already exposed as a charm option.
Closes-Bug: #1952184
Change-Id: I0ef55987517bded50f855e0dbc5e420cfbff4c1b
Especially with arm64/aarch64, the default value usually limits the
number of volume attachments to two. When more than two volumes are to
be attached, the operation fails with "No more available PCI slots".
There is no one-size-fits-all value here, so let operators override the
default value.
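As a sketch, assuming the option being exposed is libvirt's
num_pcie_ports (the value 16 is an arbitrary example, not one from this
change):

```ini
[libvirt]
# More pcie-root-ports means more hot-pluggable devices (e.g. volumes)
# can be attached to a guest using the Q35 / aarch64 machine types.
num_pcie_ports = 16
```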
Closes-Bug: #1944214
Change-Id: I9b9565873cbaeb575704b94a25d0a8556ab96292
The lxd hypervisor is no longer supported
I'm not sure whether the other types should be
there, even for testing.
Improve general wording
Related-Bug: #1931579
Change-Id: I8510382c256dbe731b810f09f25408f7fa1d7b15
Nova supports setting allocation ratios at the nova-compute level from
Liberty onwards. Prior to this allocation ratios were set at the
nova-scheduler level.
Newton introduced the Placement API, and Ocata introduced the ability to
have compute resources (Core/RAM/Disk) precomputed before passing
candidates to the FilterScheduler [0]. Pike removed CoreFilter,
RAMFilter and DiskFilter scheduler filters.
From Pike onwards the valid methods for setting these allocation ratios are:
- A call to the Placement API [1].
- Config values supplied to nova-compute (xxx_allocation_ratio).
Stein introduced initial_xxx_allocation_ratio in response to the runtime
behaviour of the ResourceTracker [2].
Currently, the precedence of resource ratio values is:
xxx_allocation_ratio > Placement API call > initial_xxx_allocation_ratio
That is, a (compute) resource provider's allocation ratios will default
to initial_xxx_allocation_ratio, which may be overridden at run time by a
call to the Placement API. If xxx_allocation_ratio is set it will
override all configurations for that provider.
When not otherwise configured, we set initial_xxx_allocation_ratio to
the values provided by ncc to maintain backwards compatibility. Where
initial_xxx_allocation_ratio is not available we set
xxx_allocation_ratio.
[0] https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/resource-providers-scheduler-db-filters.html
[1] https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
[2] https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/initial-allocation-ratios.html
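The precedence described above can be illustrated with a nova.conf
fragment for the CPU ratio (the numeric values are arbitrary examples;
the same pattern applies to the RAM and disk variants):

```ini
[DEFAULT]
# If set, wins over everything, including Placement API overrides.
cpu_allocation_ratio = 4.0
# Used as the provider's default until something (e.g. a Placement API
# call) overrides it at run time.
initial_cpu_allocation_ratio = 16.0
```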
Change-Id: Ifa314e9e23e0ae5d16113cd91a7507e61f9de704
Closes-Bug: #1677223
Previously the cap was only applied to units in containers. However,
services on bare metal also require a sensible cap; otherwise
nova-compute, for example, will have 256 workers for nova-api-metadata
out of the box, which is overkill on the following system:
32 cores * 2 threads/core * 2 sockets * 2 (default multiplier) = 256
Let's always cap the number of workers at 4, and let operators override
it with an explicit worker-multiplier config.
Synced charm-helpers for
https://github.com/juju/charm-helpers/pull/553
Closes-Bug: #1843011
Change-Id: If98f12d7cf1a77fb267f1b55c44896a48a40909a
This config option enables admin password injection at instance boot
time.
* Added a unit test to verify the config is correctly set and nova.conf
  is updated.
* Updated all of the templates that have inject-password set.
* Moved the inject_* options out of the
  {if libvirt_images_type and rbd_pool} block as they are irrelevant
  there.
Closes-Bug: #1755696
Change-Id: Ie766a14bfa6b16337aa957bf7adf2d869462f9d7
Enable support for use of Erasure Coded (EC) pools for
nova disks when RBD is used to back ephemeral storage volumes.
Add the standard set of EC based configuration options to the
charm.
Update Ceph broker request to create a replicated pool, an erasure
coding profile and an erasure coded pool (using the profile) when
pool-type == erasure-coded is specified.
Resync charm-helpers to pick up changes to the standard ceph.conf
template and associated contexts for rbd default data pool handling,
needed due to the lack of explicit support in OpenStack services.
Update context to use metadata pool name in nova configuration
when erasure-coding is enabled.
Change-Id: Ida0b9c889ddf9fcc0847a9cee01b3206239d9318
Depends-On: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
This patch adds focal-ussuri and bionic-ussuri bundles to the tests
for the charm. The linked bug is concerned with installing
nova-network, which is not available on Ussuri.
Closes-Bug: #1872770
Change-Id: Iea5a682aaebeb6f6941cf9d8f5780473f457e455
This patch removes the hardcoded notification_format value in nova.conf
in order to fill it with the configuration option provided by the user.
At the moment, if the user supplies an option other than the default
(empty), the produced nova.conf contains two [notifications] sections,
resulting in an erroneous configuration.
Change-Id: Iff44806bdd0f6078879c9872ba0531485f0ca4c7
Signed-off-by: Stamatis Katsaounis <skatsaounis@admin.grnet.gr>
The per-host config cpu-dedicated-set will provide new flexibility by
adding the ability to separate guest vCPUs which need dedicated host
CPUs from guest vCPUs which can float over a set of host CPUs (see:
cpu-shared-set).
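A sketch of the resulting nova.conf split; the CPU ranges are
hypothetical examples:

```ini
[compute]
# Host CPUs reserved for pinned (dedicated) guest vCPUs.
cpu_dedicated_set = 4-15
# Host CPUs over which unpinned guest vCPUs may float.
cpu_shared_set = 0-3
```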
Change-Id: Ia2181ad2bd1894d5c2f91be5df8396bc77555658
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
This commit adds a new extra binding (migration) and a corresponding
fallback configuration flag (libvirt-migration-network) that allows
specifying a space, or an existing CIDR-formatted network if the config
flag is preferred, to be selected as the inbound address used as the
live migration target.
For any OpenStack release >= Ocata, the live_migration_inbound_addr
variable will be set, as well as libvirt_migration_scheme (SSH by
default).
For older releases the behavior remains as before: the only remaining
option is to set up libvirt to bind on an insecure TCP connection, so
we keep the current live_migration_uri.
The reason for not relying exclusively on an extra binding is backwards
compatibility: this change needs to apply to existing clouds where
updating the bindings on a deployed application isn't possible due to
LP: #1796653.
For fresh/new deployments, the migration extra binding is defined and
used with precedence over the libvirt-migration-network variable.
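For Ocata and later, the rendered nova.conf would contain something
like the following; the address is a hypothetical example:

```ini
[libvirt]
# Address on the migration network/space of this compute host.
live_migration_inbound_addr = 10.20.0.11
# Transport used for the migration, SSH by default.
live_migration_scheme = ssh
```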
Change-Id: I2f8c0a1e822ad6a90e23cd8009e181b8f86d765a
Closes-Bug: #1680531
Signed-off-by: Jorge Niedbalski <jnr@metaklass.org>
The latest version of charmhelpers introduces the ability to set an
additional driver which enables notifications to be sent to system logs.
Depends-On: https://github.com/juju/charm-helpers/pull/323
Closes-bug: 1825016
Change-Id: Ibe84826060922671e7fc160734d5ae3974d663ed
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
When clouds have a large number of hosts, the default size of the ARP
cache is too small. The cache can overflow, which means that the
system has no way to reach some IP addresses.
Setting the threshold limits higher addresses the situation in a
reasonably safe way (the maximum impact is around 5MB of additional RAM).
Docs on ARP at http://man7.org/linux/man-pages/man7/arp.7.html,
and more discussion of the issue in the bug.
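The kernel knobs involved are the neighbour-table garbage-collection
thresholds; a sysctl fragment might look like this (the exact values
are illustrative, not necessarily the ones this change sets):

```ini
# gc_thresh1: below this many entries, the GC leaves the cache alone.
net.ipv4.neigh.default.gc_thresh1 = 128
# gc_thresh2: soft limit; entries above it are pruned after 5 seconds.
net.ipv4.neigh.default.gc_thresh2 = 28672
# gc_thresh3: hard limit; new entries beyond this are refused.
net.ipv4.neigh.default.gc_thresh3 = 32768
```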
Change-Id: Iaf8382ee0b42e1444cfea589bb05a687cd0c23fa
Closes-Bug: 1780348
This change implements a new option in config.yaml that sets the
default format for new ephemeral volumes attached to instances.
Closes-Bug: #1693943
Change-Id: Iff9254039fa6f1cef9d80c0d59de8e37b39e30f0
This change ensures that the multipath dependencies are installed
on the compute node when the use-multipath config flag is enabled.
Change-Id: I39b017398b95f5901d9bc57ffa0c59ff59f3a359
Closes-Bug: #1806830
Implement missing configuration options and context to support
configuration of:
live_migration_permit_auto_converge
live_migration_permit_post_copy
in nova.conf. The configuration parameters are only added to the
nova.conf template when live-migration is actually enabled, since they
have no purpose without it. The options are added from version Newton
onwards, as that is the Nova version in which they were first supported.
Change-Id: I1bb3cee4ac532d0867b4297c742707668566a527
Closes-Bug: #1799916
For Rocky, Nova introduced compute/cpu_shared_set, and the libvirt
driver uses that option to specify the set of pCPUs on which to place
emulator threads, so guest vCPUs can be pinned to pCPUs entirely
dedicated to them.
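A minimal nova.conf sketch; the CPU range is a hypothetical example:

```ini
[compute]
# Emulator threads (and unpinned work) run on these host CPUs, keeping
# them off the pCPUs dedicated to pinned guest vCPUs.
cpu_shared_set = 0-3
```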
Change-Id: I88b2afda67a91266f21d2c870a76646262488db2
Closes-Bug: 1806434
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
The DEFAULT/reserved_huge_pages option will be required by NFV
deployments in order to let the Nova compute service know that some
huge pages allocated on the host are used by third-party components
like DPDK PMDs.
Since Juju does not yet support list-of-strings options, this one will
be exposed using semicolons in the charm.
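For illustration, the rendered nova.conf entry follows Nova's
node/size/count syntax; the values below are hypothetical:

```ini
[DEFAULT]
# Reserve 64 2MiB huge pages on NUMA node 0 (e.g. for DPDK PMDs) so
# Nova does not hand them out to guests.
reserved_huge_pages = node:0,size:2048,count:64
```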
Closes-Bug: 1804169
Change-Id: I7faa3406a6bd27b9d924925ae93d40075eb0aff2
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
The libvirt/{tx|rx}_queue_size options will be required by NFV
deployments in order to increase network performance per vCPU and
avoid packet drops.
Note that the options are available starting with Rocky; QEMU 2.7.0 and
libvirt 2.3.0 are needed to configure the RX queue size, and QEMU 2.10.0
and libvirt 3.7.0 for the TX queue size.
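A nova.conf sketch using the maximum queue depth (valid values are
powers of two between 256 and 1024):

```ini
[libvirt]
# Larger virtio-net queues reduce packet drops under high PPS load.
rx_queue_size = 1024
tx_queue_size = 1024
```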
Closes-Bug: 1804170
Change-Id: Iceacb42aae248fb36e9eecdc992c4a982f4e32b4
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
The change adds an option to the charm to use JUJU_AVAILABILITY_ZONE
environment variable set by Juju for the hook environment based on the
underlying provider's availability zone information for a given machine.
This information is used to configure default_availability_zone for nova
and availability_zone for subordinate networking charms.
Change-Id: Idc7112e7fe7b76d15cf9c4896b702b8ffd8c0e8e
Closes-Bug: #1796068
The [pci] header is necessary since Pike [0]. This change updates the
pike template to add the [pci] header, renames
pci_passthrough_whitelist to passthrough_whitelist and adds the config
option pci-alias.
[0] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html
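The renamed option and the new alias live under the [pci] section; the
vendor/product IDs and alias name below are hypothetical examples:

```ini
[pci]
# Devices matching this filter may be passed through to guests.
passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
# Alias that flavors can reference to request such a device.
alias = {"vendor_id": "8086", "product_id": "10fb", "name": "x520"}
```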
Change-Id: I7a8c76f5989edb5b4a0b30036ce722ffb0ecb7ab