Juju Charm - Nova Cloud Controller

Overview

The nova-cloud-controller charm deploys a suite of OpenStack Nova services:

  • nova-api
  • nova-conductor
  • nova-scheduler

Usage

Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.

cache-known-hosts

Controls whether or not the charm will use the current cache for hostname/IP resolution queries for nova-compute units. This occurs whenever information that is passed over the nova-compute:cloud-compute relation changes (e.g. a nova-compute unit is added). The default value is 'true'. See section SSH host lookup caching for details.
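
For example, the cache can be disabled at runtime so that DNS lookups are always performed (a minimal sketch; set the option back to 'true' to re-enable caching):

juju config nova-cloud-controller cache-known-hosts=false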

console-proxy-ip

Sets a client accessible proxy IP address that allows for VM console access. It should route to the nova-cloud-controller unit when the application is not under HA. When it is, the value of 'local' will point to the VIP.

Ensure that option console-access-protocol is set to a value other than 'None'.

VNC clients should be configured accordingly. When a VIP is in use, its address will need to be determined before configuring clients.

console-access-protocol

Specifies the protocol to use when accessing the console of a VM. Supported values are: 'None', 'spice', 'xvpvnc', 'novnc', and 'vnc' (for both xvpvnc and novnc). Type 'xvpvnc' is not supported with UCA release 'bionic-ussuri' or with series 'focal' or later.

Caution

: VMs are configured with a specific protocol at creation time. Console access for existing VMs will therefore break if this value is changed to something different.
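
As an illustrative sketch, both console options can be set at runtime (the proxy IP below is a placeholder; changing the protocol is subject to the caution above):

juju config nova-cloud-controller console-access-protocol=novnc
juju config nova-cloud-controller console-proxy-ip=203.0.113.10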

network-manager

Defines the network manager for the cloud. Supported values are:

  • 'FlatDHCPManager' - for nova-network (the default)
  • 'FlatManager' - for nova-network
  • 'Neutron' - for a full SDN solution

When using 'Neutron' the neutron-gateway charm should be used to provide L3 routing and DHCP services.
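
For illustration, with 'Neutron' selected (as in the Deployment example below) the gateway is typically related to the controller as well. The endpoint names here are an assumption based on common deployments and should be verified against the charms' metadata:

juju add-relation nova-cloud-controller:quantum-network-service neutron-gateway:quantum-network-service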

openstack-origin

States the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:bionic-ussuri' or 'cloud:focal-wallaby'). See Ubuntu Cloud Archive. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value of 'distro').

Deployment

These deployment instructions assume the following applications are present: keystone, rabbitmq-server, neutron-api, nova-compute, and a cloud database.

File ncc.yaml contains an example configuration:

   nova-cloud-controller:
     network-manager: Neutron
     openstack-origin: cloud:focal-wallaby

Nova cloud controller is often containerised. Here a single unit is deployed to a new container on machine '3':

juju deploy --to lxd:3 --config ncc.yaml nova-cloud-controller

Note

: The cloud's database is determined by the series: prior to focal percona-cluster is used, otherwise it is mysql-innodb-cluster. In the example deployment below mysql-innodb-cluster is used.

Join nova-cloud-controller to the cloud database:

juju deploy mysql-router ncc-mysql-router
juju add-relation ncc-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation ncc-mysql-router:shared-db nova-cloud-controller:shared-db

Four additional relations can be added:

juju add-relation nova-cloud-controller:identity-service keystone:identity-service
juju add-relation nova-cloud-controller:amqp rabbitmq-server:amqp
juju add-relation nova-cloud-controller:neutron-api neutron-api:neutron-api
juju add-relation nova-cloud-controller:cloud-compute nova-compute:cloud-compute

TLS

Enable TLS by adding a relation to an existing vault application:

juju add-relation nova-cloud-controller:certificates vault:certificates

See Managing TLS certificates in the OpenStack Charms Deployment Guide for more information on TLS.

Note

: This charm also supports TLS configuration via charm options ssl_cert, ssl_key, and ssl_ca.

Actions

This section covers Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions --schema nova-cloud-controller. If the charm is not deployed then see file actions.yaml.

  • archive-data
  • clear-unit-knownhost-cache
  • openstack-upgrade
  • pause
  • resume
  • security-checklist
  • sync-compute-availability-zones
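
For example, a unit's services can be paused and later resumed (using the same juju run-action syntax shown in the SSH host lookup caching section below):

juju run-action --wait nova-cloud-controller/0 pause
juju run-action --wait nova-cloud-controller/0 resume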

High availability

When more than one unit is deployed with the hacluster application the charm will bring up an HA active/active cluster.

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
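
A minimal sketch of the VIP approach (the subordinate application name and VIP address are placeholders; the vip option and ha endpoint names are assumptions to verify against the charm's config.yaml and metadata.yaml):

juju deploy hacluster ncc-hacluster
juju config nova-cloud-controller vip=10.0.0.100
juju add-relation nova-cloud-controller:ha ncc-hacluster:ha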

See OpenStack high availability in the OpenStack Charms Deployment Guide for details.

Spaces

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.

To use this feature, use the --bind option when deploying the charm:

juju deploy nova-cloud-controller --bind \
   "public=public-space \
    internal=internal-space \
    admin=admin-space \
    shared-db=internal-space"

Alternatively, these can also be provided as part of a Juju native bundle configuration:

    nova-cloud-controller:
      charm: cs:xenial/nova-cloud-controller
      num_units: 1
      bindings:
        public: public-space
        admin: admin-space
        internal: internal-space
        shared-db: internal-space

Note

: Spaces must be configured in the underlying provider prior to attempting to use them.

Note

: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.

Charm-managed quotas

The charm can optionally set project quotas, which affect both new and existing projects. These quotas are set with the following configuration options:

  • quota-cores
  • quota-count-usage-from-placement
  • quota-injected-files
  • quota-injected-file-size
  • quota-injected-path-size
  • quota-instances
  • quota-key-pairs
  • quota-metadata-items
  • quota-ram
  • quota-server-groups
  • quota-server-group-members

Given that OpenStack quotas can be set in a variety of ways, the order of precedence (from higher to lower) for the enforcing of quotas is:

  1. quotas set by the operator manually
  2. quotas set by the nova-cloud-controller charm
  3. default quotas of the OpenStack service

For information on OpenStack quotas see Manage quotas in the Nova documentation.
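
For example, charm-managed quotas are set like any other charm option (the values shown are purely illustrative):

juju config nova-cloud-controller quota-instances=20 quota-cores=40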

SSH host lookup caching

Caching SSH known hosts reduces 'cloud-compute' hook execution time. It does this by reducing the number of lookups performed by the nova-cloud-controller charm during SSH connection negotiations when distributing a new unit's SSH keys among existing units of the same application group. These keys are needed for VM migrations to succeed.

The cache is populated (or refreshed) when option cache-known-hosts is set to 'false', in which case DNS lookups are always performed. The cache is queried by the charm when it is set to 'true', where a lookup is only performed (adding the result to the cache) when the cache is unable to satisfy the query.

When a modification is made to DNS resolution, the clear-unit-knownhost-cache action should be used. This action refreshes the charm's cache and updates the known_hosts file on the nova-compute units. Information can be updated selectively by targeting a specific unit, an application group, or all application groups:

juju run-action --wait nova-cloud-controller/0 clear-unit-knownhost-cache target=nova-compute/2
juju run-action --wait nova-cloud-controller/0 clear-unit-knownhost-cache target=nova-compute
juju run-action --wait nova-cloud-controller/0 clear-unit-knownhost-cache

When nova-cloud-controller is under HA, the same invocation must be run on all nova-cloud-controller units.
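
For instance, with three nova-cloud-controller units (unit numbers are illustrative):

juju run-action --wait nova-cloud-controller/0 clear-unit-knownhost-cache
juju run-action --wait nova-cloud-controller/1 clear-unit-knownhost-cache
juju run-action --wait nova-cloud-controller/2 clear-unit-knownhost-cache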

Policy overrides

Policy overrides is an advanced feature that allows an operator to override the default policy of an OpenStack service. The policies that the service supports, the defaults it implements in its code, and the defaults that a charm may include should all be clearly understood before proceeding.

Caution

: It is possible to break the system (for tenants and other services) if policies are incorrectly applied to the service.

Policy statements are placed in a YAML file. This file (or files) is then (ZIP) compressed into a single file and used as an application resource. The override is then enabled via a Boolean charm option.
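
For illustration, an override file might contain a single policy rule (the rule below is a hypothetical example only; consult the Nova policy reference for valid targets):

   # override-file.yaml
   "os_compute_api:os-flavor-manage:create": "rule:admin_api"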

Here are the essential commands (filenames are arbitrary):

zip overrides.zip override-file.yaml
juju attach-resource nova-cloud-controller policyd-override=overrides.zip
juju config nova-cloud-controller use-policyd-override=true

See appendix Policy overrides in the OpenStack Charms Deployment Guide for a thorough treatment of this feature.

Documentation

The OpenStack Charms project maintains two documentation guides:

  • OpenStack Charms Deployment Guide
  • OpenStack Charm Guide

Bugs

Please report bugs on Launchpad.