Juju Charm - Nova Cloud Controller
nova-cloud-controller

Cloud controller node for OpenStack Nova. Contains nova-scheduler, nova-api, nova-network and nova-objectstore.

If console access is required then console-proxy-ip should be set to a client-accessible IP that resolves to the nova-cloud-controller unit. If running in HA mode, the public VIP is used when console-proxy-ip is set to local. Note: the console access protocol is baked into a guest when it is created; if you change it, console access for existing guests will stop working.
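
As a minimal sketch, console access could be configured after deployment along the following lines (the IP address is illustrative, and console-access-protocol is assumed to be the option that selects the console protocol):

juju config nova-cloud-controller console-access-protocol=novnc
juju config nova-cloud-controller console-proxy-ip=10.245.160.20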

HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relation to the hacluster charm is required, which provides the corosync backend HA functionality.

To use virtual IP(s), the clustered nodes must be on the same subnet, such that the VIP is a valid IP on that subnet and each node has an interface in it. The VIP then becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
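
A minimal VIP-based HA setup might look like the following (the VIP addresses and the hacluster application name are illustrative):

juju config nova-cloud-controller vip="10.0.1.100 10.0.2.100"
juju deploy hacluster ncc-hacluster
juju add-relation nova-cloud-controller ncc-hacluster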

To use DNS high availability there are several prerequisites. However, DNS HA does not require the clustered nodes to be on the same subnet.
- Currently the DNS HA feature is only available for MAAS 2.0 or greater environments. MAAS 2.0 requires Juju 2.0 or greater.
- The clustered nodes must have static or "reserved" IP addresses registered in MAAS.
- The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one of 'os-public-hostname', 'os-internal-hostname' or 'os-admin-hostname' must be set in order to use DNS HA. One or more of the above hostnames may be set.

The charm will throw an exception in the following circumstances:
- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are set
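
For example, a valid DNS HA configuration might look something like this (the hostname is illustrative and must already be registered in MAAS):

juju config nova-cloud-controller dns-ha=true os-public-hostname=ncc.example.com
juju deploy hacluster ncc-hacluster
juju add-relation nova-cloud-controller ncc-hacluster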

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.

To use this feature, use the --bind option when deploying the charm:

juju deploy nova-cloud-controller --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, these bindings can also be provided as part of a Juju native bundle configuration:

nova-cloud-controller:
  charm: cs:xenial/nova-cloud-controller
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using os-*-network configuration options will continue to function; if set, these options take precedence over any network space bindings.
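
As a sketch of that older style of configuration (assuming the os-public-network and os-internal-network options accept network CIDRs; the values are illustrative):

juju config nova-cloud-controller os-public-network=172.16.0.0/24
juju config nova-cloud-controller os-internal-network=10.0.0.0/24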

Default Quota Configuration

This charm supports default quota settings for projects. This feature is only available from OpenStack Icehouse and later releases.

The default quota settings do not overwrite post-deployment CLI quotas set by operators. Existing projects whose quotas were not modified will adopt the new defaults when a config-changed hook occurs. Newly created projects will also adopt the defaults set in the charm's config.

By default, the charm's quota configs are not set and OpenStack projects use the following defaults:
- quota-instances - 10
- quota-cores - 20
- quota-ram - 51200
- quota-metadata_items - 128
- quota-injected_files - 5
- quota-injected_file_content_bytes - 10240
- quota-injected_file_path_length - 255
- quota-key_pairs - 100
- quota-server_groups - 10 (only available after Icehouse)
- quota-server_group_members - 10 (only available after Icehouse)
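
To override one or more of these defaults, the corresponding charm options can be set after deployment, for example (the values are illustrative):

juju config nova-cloud-controller quota-instances=20 quota-cores=40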