# Overview
The nova-compute charm deploys Nova Compute, the core OpenStack service that provisions virtual instances (VMs) and baremetal servers (via Ironic). The charm works alongside other Juju-deployed OpenStack services.
# Usage
## Configuration
This section covers common and/or important configuration options. See file `config.yaml` for the full list of options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.
#### `config-flags`

A comma-separated list of key=value configuration flags. These values will be placed in the `[DEFAULT]` section of the `nova.conf` file.
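For example, a minimal sketch that tunes two overcommit settings (`cpu_allocation_ratio` and `ram_allocation_ratio` are standard `nova.conf` options chosen here for illustration; any valid `[DEFAULT]` key=value pairs can be given):

```
juju config nova-compute config-flags='cpu_allocation_ratio=4.0,ram_allocation_ratio=1.5'
```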
#### `enable-live-migration`

Allows the live migration of VMs.
#### `enable-resize`

Allows the resizing of VMs.
#### `migration-auth-type`

Selects the TCP authentication scheme to use for live migration. The only accepted value is 'ssh'.
#### `customize-failure-domain`

When MAAS is the backing cloud and this option is set to 'true', all MAAS-defined zones will become available as Nova availability zones, and option `default-availability-zone` will be overridden. See section Availability Zones.
#### `default-availability-zone`

Sets a single default Nova availability zone. It is used when a VM is created without a Nova AZ being specified. The default value is 'nova'. A non-default Nova AZ must be created manually (e.g. with `openstack aggregate create`). See section Availability Zones.
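As a sketch, a non-default AZ can be created via a host aggregate and then set as the default (the names 'zone1' and 'zone1-agg' are illustrative):

```
openstack aggregate create --zone zone1 zone1-agg
openstack aggregate add host zone1-agg <compute-host>
juju config nova-compute default-availability-zone=zone1
```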
#### `libvirt-image-backend`

Specifies what image backend to use. Possible values are 'rbd', 'qcow2', 'raw', and 'flat'. The default behaviour is for Nova to use qcow2.
#### `openstack-origin`

States the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:bionic-train' or 'cloud:focal-wallaby'). See Ubuntu Cloud Archive. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value 'distro').
#### `pool-type`

Dictates the Ceph storage pool type. See sections Ceph pool type and RBD Nova images for more information.
## Ceph pool type

Ceph storage pools can be configured to ensure data resiliency either through replication or by erasure coding. This charm supports both types via the `pool-type` configuration option, which can take on the values of 'replicated' and 'erasure-coded'. The default value is 'replicated'.

For this charm, the pool type will be associated with Nova-managed images.

> **Note**: Erasure-coded pools are supported starting with Ceph Luminous.
### Replicated pools

Replicated pools use a simple replication strategy in which each written object is copied, in full, to multiple OSDs within the cluster.

The `ceph-osd-replication-count` option sets the replica count for any object stored within the 'nova' rbd pool. Increasing this value increases data resilience at the cost of consuming more real storage in the Ceph cluster. The default value is '3'.

> **Important**: The `ceph-osd-replication-count` option must be set prior to adding the relation to the ceph-mon application. Otherwise, the pool's configuration will need to be set by interfacing with the cluster directly.
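For example, to request a replica count of '5' for the 'nova' pool (an illustrative value), set the option before relating to ceph-mon:

```
juju config nova-compute ceph-osd-replication-count=5
juju add-relation nova-compute:ceph ceph-mon:client
```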
### Erasure coded pools

Erasure coded pools use a technique that allows for the same resiliency as replicated pools, yet reduces the amount of space required. Written data is split into data chunks and error correction chunks, which are both distributed throughout the cluster.

> **Note**: Erasure coded pools require more memory and CPU cycles than replicated pools do.

When using erasure coding two pools will be created: a replicated pool (for storing RBD metadata) and an erasure coded pool (for storing the data written into the RBD). The `ceph-osd-replication-count` configuration option only applies to the metadata (replicated) pool.

Erasure coded pools can be configured via options whose names begin with the `ec-` prefix.

> **Important**: It is strongly recommended to tailor the `ec-profile-k` and `ec-profile-m` options to the needs of the given environment. These latter options have default values of '1' and '2' respectively, which result in the same space requirements as those of a replicated pool.

See Ceph Erasure Coding in the OpenStack Charms Deployment Guide for more information.
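A hedged example that selects erasure coding with a 4+2 profile (the values are illustrative; the cluster must have enough failure domains to host k+m chunks):

```
juju config nova-compute pool-type=erasure-coded ec-profile-k=4 ec-profile-m=2
```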
## Ceph BlueStore compression

This charm supports BlueStore inline compression for its associated Ceph storage pool(s). The feature is enabled by assigning a compression mode via the `bluestore-compression-mode` configuration option. The default behaviour is to disable compression.

The efficiency of compression depends heavily on what type of data is stored in the pool, and the charm provides a set of configuration options to fine tune the compression behaviour.

> **Note**: BlueStore compression is supported starting with Ceph Mimic.
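For example, to enable compression in its 'aggressive' mode (a sketch, assuming the mode names mirror Ceph's BlueStore modes of 'none', 'passive', 'aggressive', and 'force'):

```
juju config nova-compute bluestore-compression-mode=aggressive
```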
## Deployment

These deployment instructions assume that the following applications are present: glance, nova-cloud-controller, ovn-chassis, and rabbitmq-server. Storage backends used for VM disks and volumes are configured separately (see sections Ceph backed storage and Local Cinder storage).

Let file `nova-compute.yaml` contain the deployment configuration:

```
nova-compute:
  config-flags: default_ephemeral_format=ext4
  enable-live-migration: true
  enable-resize: true
  migration-auth-type: ssh
  openstack-origin: cloud:focal-wallaby
```

To deploy nova-compute to machine '5':

```
juju deploy --to 5 --config nova-compute.yaml nova-compute
juju add-relation nova-compute:image-service glance:image-service
juju add-relation nova-compute:cloud-compute nova-cloud-controller:cloud-compute
juju add-relation nova-compute:neutron-plugin ovn-chassis:nova-compute
juju add-relation nova-compute:amqp rabbitmq-server:amqp
```
### Ceph backed storage

Two concurrent Ceph backends are supported: RBD Nova images and RBD Cinder volumes. Each backend uses its own set of cephx credentials.

The steps below assume a pre-existing Ceph cluster (see the ceph-mon and ceph-osd charms).
#### RBD Nova images

RBD Nova images are enabled by setting option `libvirt-image-backend` to 'rbd' and by adding a relation to the Ceph cluster:

```
juju config nova-compute libvirt-image-backend=rbd
juju add-relation nova-compute:ceph ceph-mon:client
```

> **Warning**: Changing the value of option `libvirt-image-backend` will orphan any disks that were set up under a different setting. This will cause the restarting of associated VMs to fail.

This solution will place both root and ephemeral disks in Ceph.

> **Pro tip**: An alternative is to selectively store just root disks in Ceph by using Cinder as an intermediary. See section RBD Cinder volumes as well as Launch an instance from a volume in the Nova documentation.
#### RBD Cinder volumes

RBD Cinder volumes are enabled by adding a relation to Cinder via the cinder-ceph application. Assuming Cinder is already backed by Ceph (see the cinder-ceph charm):

```
juju add-relation nova-compute:ceph-access cinder-ceph:ceph-access
```

> **Note**: The `nova-compute:ceph-access` relation is not needed for OpenStack releases older than Ocata.
### Local Cinder storage

To use local storage, Cinder will need to be configured to use local block devices. See the cinder charm for details.
## Availability Zones

Nova AZs can be matched with MAAS zones depending on how options `default-availability-zone` and `customize-failure-domain` are configured. See Availability Zones in the OpenStack Charms Deployment Guide for in-depth coverage of how this works.
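As an illustration, on a MAAS-backed cloud the MAAS zones can be surfaced as Nova AZs with:

```
juju config nova-compute customize-failure-domain=true
```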
## SSH keys and VM migration

VM migration requires the sharing of public SSH keys (host and several select users) among the compute hosts. By design, only those hosts belonging to the same application group will get each other's keys. This means that VM migration cannot occur (without manual intervention) between hosts belonging to different groups.

> **Note**: The policy of only sharing SSH keys amongst hosts of the same application group may be struck down. This is being tracked in bug LP #1468871.
## NFV support

This charm (in conjunction with the nova-cloud-controller and neutron-api charms) supports NFV for Compute nodes that are deployed in Telco NFV environments.

For more information on NFV see the Network Functions Virtualization (NFV) page in the OpenStack Charms Deployment Guide.
## Network spaces

This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.

> **Note**: Spaces must be configured in the backing cloud prior to deployment.

In addition this charm declares two extra-bindings:

- `internal`: used to determine the network space to use for console access to instances.
- `migration`: used to determine which network space should be used for live and cold migrations between hypervisors.

Note that the nova-cloud-controller application must have bindings to the same network spaces used for both 'internal' and 'migration' extra bindings.
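As a sketch, assuming spaces named 'internal-space' and 'migration-space' already exist in the backing cloud (the names are illustrative):

```
juju deploy --config nova-compute.yaml \
   --bind "internal=internal-space migration=migration-space" \
   nova-compute
```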
## Scaling back

Scaling back the nova-compute application implies the removal of one or more compute nodes. This is documented as a cloud operation in the OpenStack Charms Deployment Guide. See Remove a Compute node.
# Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run `juju actions nova-compute`. If the charm is not deployed then see file `actions.yaml`.

- `disable`
- `enable`
- `hugepagereport`
- `instance-count`
- `list-compute-nodes`
- `node-name`
- `openstack-upgrade`
- `pause`
- `register-to-cloud`
- `remove-from-cloud`
- `resume`
- `security-checklist`
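For example, to display the number of instances hosted on a unit (Juju 2.x syntax; newer Juju releases invoke actions with `juju run` instead):

```
juju run-action --wait nova-compute/0 instance-count
```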
# Documentation

The OpenStack Charms project maintains two documentation guides:

- OpenStack Charm Guide: for project information, including development and support notes
- OpenStack Charms Deployment Guide: for charm usage information
# Bugs

Please report bugs on Launchpad.