Remove the section on scaling down, as it is now documented
as a cloud operation in the Charm Deployment Guide (CDG).
Apply the README template to the bottom of the README.
Depends-On: I03493b30956eddcb77bd714360806aa53c126942
Change-Id: Iab6f0bdce05ee5115b0cdbb517cba79aa87dabf0
These are the test bundles (and any associated changes) for
focal-wallaby and hirsute-wallaby support.
Libraries sync.
The hirsute-wallaby test is disabled (moved to dev) due to [1], as the
bundle may reference a reactive charm.
[1] https://github.com/juju-solutions/layer-basic/issues/194
Change-Id: I238e8a36b033594c67ffcefa325998f2eba2a659
This patchset updates all the requirements for charms.openstack,
charm-helpers, charms.ceph, zaza and zaza-openstack-tests back
to the master branch.
Change-Id: I453eafd76c005cd5f10041c08e4a943fe235d474
Commit 9f4369d9 added a feature to set the availability zone of
the nova-compute unit on the cloud-compute relation. This uses the
value of the JUJU_AVAILABILITY_ZONE environment variable, which is
not consistent with how the nova-compute service sets its availability
zone.
Use the nova_compute_utils.get_availability_zone() method instead.
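A minimal sketch of the change in behaviour (call sites simplified;
only get_availability_zone() is taken from the charm itself):

    # Before: the AZ came from Juju's environment variable, which can
    # disagree with the AZ nova-compute itself is configured with.
    import os
    az = os.environ.get('JUJU_AVAILABILITY_ZONE')

    # After: reuse the same helper the nova-compute service uses, so
    # the cloud-compute relation and nova.conf agree on the AZ.
    import nova_compute_utils
    az = nova_compute_utils.get_availability_zone()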
Closes-Bug: #1925412
Change-Id: Ie68ecd808a60baf0d5bfe526f4355ce3c7ae5c77
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms. This sync is to
add just that key. See also [1]
Note that this sync is only for classic charms.
[1] https://github.com/juju/charm-helpers/pull/598
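For illustration, the sync simply adds the missing entry to the
release sequence (earlier releases elided here; see the upstream
file for the full set):

    # charmhelpers/core/host_factory/ubuntu.py (illustrative excerpt)
    UBUNTU_RELEASES = (
        # ... earlier releases elided ...
        'focal',
        'groovy',
        'hirsute',  # the previously missing key added by this sync
    )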
Change-Id: I60b208bbf5a04a9ab598b76ff0cf7f8baf216cbb
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Change-Id: I14762601bb124cfb03bd3f427fa4b1243ed2377b
The list contains only nodes registered to the same nova-cloud-controller
as the nova-compute service running on the targeted unit.
Closes-Bug: #1911013
Change-Id: I28d1a9bd18b3a87fc31ff4bca5bfe58449cdae57
List of added actions:
* disable
* enable
* remove-from-cloud
* register-to-cloud
A more detailed explanation of the process has been added to the README.md.
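Illustrative usage against a nova-compute unit (unit number assumed;
output elided):

    $ juju run-action nova-compute/0 disable --wait
    $ juju run-action nova-compute/0 remove-from-cloud --wait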
Closes-Bug: #1691998
Change-Id: I45d1def2ca0b1289f6fcce06c5f8949ef2a4a69e
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/470
Previously the cap was only applied to units running in containers.
However, services on bare metal also require a sensible cap; otherwise
nova-compute, for example, will have 256 workers for nova-api-metadata
out of the box, which is overkill on the following system:
32 cores * 2 threads/core * 2 sockets * 2 (default multiplier) = 256
Cap the number of workers at 4 by default, and let operators override
it with an explicit worker-multiplier config.
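A minimal sketch of the intended defaulting logic (hypothetical
helper; the real change lives in charm-helpers, see below):

    import multiprocessing

    DEFAULT_CAP = 4  # sensible default for bare metal and containers

    def worker_count(multiplier=None):
        cpus = multiprocessing.cpu_count()
        if multiplier is not None:
            # Operator set worker-multiplier explicitly: honour it uncapped.
            return max(int(cpus * multiplier), 1)
        # No explicit config: apply the cap instead of scaling with
        # every hardware thread (2 is the historical default multiplier).
        return min(2 * cpus, DEFAULT_CAP)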
Synced charm-helpers for
https://github.com/juju/charm-helpers/pull/553
Closes-Bug: #1843011
Change-Id: If98f12d7cf1a77fb267f1b55c44896a48a40909a
This update adds the new hirsute Ubuntu release (21.04) and
removes trusty support (14.04, which is EOL as of 21.04).
Change-Id: I59840b672673aa4a8e253659300d9333c1b20a4b
The nova-compute daemon requires access to a couple of additional
paths to support querying the underlying hardware in hardware
offload enabled scenarios.
Update apparmor profile to reflect these additional requirements.
Change-Id: I4283f12e4346b64f89dbc13bb64e5fb7edca2f62
Closes-Bug: #1895530
A new Juju action is added to the nova-cloud-controller charm to
sync the nova-compute units' Juju availability zones with the
availability zones from OpenStack.
This is useful in the context of a MAAS deployment, in order to map
MAAS AZs to OpenStack AZs.
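Illustrative usage (the action name here is hypothetical; check the
charm's actions list for the real one):

    $ juju run-action nova-cloud-controller/0 sync-compute-availability-zones --wait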
Change-Id: I62f68f0c0c97aeca20a8afb32095d2972abd8473
Prior to this commit the SPICE agent was hard-set to True regardless of
the nova-cloud-controller spice-agent-enabled value, preventing the use
of hw_pointer_model=usbtablet for Windows guests.
Change-Id: I6553623414acfadeb415342e8601a00ba5d80660
Includes updates to charmhelpers/charms.openstack for cert_utils
and unit-get, addressing the install hook error on Juju 2.9.
* charm-helpers sync for classic charms
* rebuild for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure master branch for charms.openstack
- ensure master branch for charm-helpers
* Include fix for local_address() and NoBindingError
Change-Id: If413a2bdd97bf5a751eba7fd74664525be39bd8c
A change landed to charm-helpers to mark the
SubordinateConfigContext as incomplete if the subordinate supplied
no data *1. But the neutron-plugin subordinate may legitimately
supply no data if no special config is needed. This restores the
previous behaviour of marking the subordinate context for the
neutron-plugin relation as complete even if no data was supplied.
*1 https://github.com/juju/charm-helpers/pull/519
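A minimal sketch of the restored behaviour (hypothetical subclass;
the actual change may differ in detail):

    from charmhelpers.contrib.openstack.context import (
        SubordinateConfigContext,
    )

    class NeutronPluginSubordinateContext(SubordinateConfigContext):
        def __call__(self):
            ctxt = super(NeutronPluginSubordinateContext, self).__call__()
            # An empty neutron-plugin payload is legitimate: no special
            # config means nothing to render, not a broken relation.
            self.complete = True
            return ctxt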
Change-Id: I34fa2d39171132e4fe7d0b7e5fd29162161a5060
Closes-Bug: #1912187
If more than a single Ceph key is set as part of the relation data,
make sure that all of them are configured.
The previous relation data format is also handled, in order to
maintain backwards compatibility.
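A minimal sketch of the idea, with hypothetical field names for the
relation payload:

    def collect_ceph_keys(rel_data):
        # 'keys' / 'key' are illustrative names, not the real relation
        # fields. Gather every key present...
        keys = dict(rel_data.get('keys', {}))
        # ...and fall back to the legacy single-key field so older
        # providers keep working (backwards compatibility).
        if not keys and rel_data.get('key'):
            keys['default'] = rel_data['key']
        return keys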
Co-authored-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
Change-Id: I24be0ed48edd5af517e1699df77ef0d96ef20aa2
This config option enables admin password injection at instance
boot time.
* Added a unit test to verify the config is correctly set and
the nova config is updated.
* Updated all of the templates that have inject-password set.
* Moved the inject_* options out of the
{if libvirt_images_type and rbd_pool} block, as they are
unrelated to it.
Closes-Bug: #1755696
Change-Id: Ie766a14bfa6b16337aa957bf7adf2d869462f9d7
This new action lists instances as virsh sees them on the node
(virsh list --all); sometimes this disagrees with what nova thinks.
Add a new zaza functional test class to avoid breaking the older
versions.
To run the action, issue the command:
$ juju run-action nova-compute/0 virsh-audit --wait
unit-nova-compute-0:
UnitId: nova-compute/0
id: "134"
results:
virsh-domains: |2+
Id Name State
-----------------------------------
1 instance-00000001 running
2 instance-00000002 running
status: completed
timing:
completed: 2020-12-08 11:05:02 +0000 UTC
enqueued: 2020-12-08 11:04:58 +0000 UTC
started: 2020-12-08 11:05:01 +0000 UTC
Closes-Bug: #1907409
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/472
Change-Id: I222a119199ada82186e2058402a31a40baf7fd7b
On multi-region deployments, Nova may talk to the wrong
neutron endpoint (from the wrong region) if the region
is unspecified.
On Rocky+ this also requires updating the os_region_name config
to region_name, as os_region_name has been deprecated; otherwise
Nova will talk to the wrong placement endpoint as well.
This fix addresses the issue where nova-compute will not
register the node to the correct nova_api/placement
database, and will also not be able to complete live migrations.
Given that the template for the [placement] section is
applied to every release, it includes both the old and
the new config options.
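An illustrative rendered excerpt of the resulting [placement]
section (region value made up):

    [placement]
    # Rocky and later read region_name; os_region_name is deprecated
    # but still rendered for the releases that only understand it.
    region_name = RegionOne
    os_region_name = RegionOne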
Change-Id: I9500ba400d55e6f1bc11f2ba05b25b4714cda578
Closes-Bug: #1903210
This updates the README for erasure-coded
Ceph pools for the case of Ceph-backed Nova
images.
The new text should be as similar as possible
for all the charms that support configuration
options for EC pools. See the review below for
the first of these charms whose README has been
updated.
https://review.opendev.org/#/c/749824/
Add basic README template sections (Actions,
Bugs, Configuration). Standardise the Network
spaces section.
Demystify the ceph-access endpoint relation.
Change-Id: I9c7426dc8a8a53f412e7222e125f9746cf2ae804
* charm-helpers sync for classic charms
* charms.ceph sync for ceph charms
* rebuild for reactive charms
* sync tox.ini files as needed
* sync requirements.txt files to sync to standard
Change-Id: I79af80dd7a0faa9175c9dca1fac669f8c187e3f5