Juju Charm - Nova Compute
Latest commit 6fbc53d28f by James Page (2017-02-14 14:06:49 +00:00):

Add support for cephx pool grouping and permissions

Sync charmhelpers and add configuration option to allow access
to ceph pools to be limited based on grouping.

Nova will require access to volumes, images and vms pool groups.

Change-Id: I1c188d983609577ab34f7aef7854954c104b58bd
Partial-Bug: 1424771

README.md

Overview

This charm provides Nova Compute, the OpenStack compute service. Its target platform is Ubuntu (preferably LTS) and OpenStack.

Usage

The following interfaces are provided:

  • cloud-compute - Used to relate to one or more of: nova-cloud-controller, glance, ceph, cinder, mysql, ceilometer-agent, rabbitmq-server, neutron

  • nrpe-external-master - Used to generate Nagios checks.
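For example, a minimal deployment relating nova-compute to a few of these services might look like the following (the exact set of applications and their names depend on your deployment; these are illustrative):

```shell
juju deploy nova-compute
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute glance
juju add-relation nova-compute rabbitmq-server
```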

Database

Nova compute only requires database access if using nova-network. If using Neutron, no direct database access is required and the shared-db relation need not be added.

Networking

This charm supports both nova-network (legacy) and Neutron networking.

Storage

This charm supports a number of different storage backends depending on your hypervisor type and storage relations.

NFV support

This charm (in conjunction with the nova-cloud-controller and neutron-api charms) supports use of nova-compute nodes configured for use in Telco NFV deployments; specifically the following configuration options (yaml excerpt):

nova-compute:
  hugepages: 60%
  vcpu-pin-set: "^0,^2"
  reserved-host-memory: 1024
  pci-passthrough-whitelist: {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}

In this example, compute nodes will be configured with 60% of available RAM for hugepage use (decreasing memory fragmentation in virtual machines and improving performance). Nova will be configured to reserve CPU cores 0 and 2 and 1024M of RAM for host usage, and to treat the supplied PCI device whitelist as the set of PCI devices consumable by virtual machines, including any mapping to underlying provider network names (used for SR-IOV VF/PF port scheduling with Nova and Neutron's SR-IOV support).

The vcpu-pin-set configuration option is a comma-separated list of physical CPU numbers that virtual CPUs can be allocated to by default. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:

vcpu-pin-set: "4-12,^8,15"
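The expansion rule above can be sketched in Python; `parse_vcpu_pin_set` is a hypothetical helper (not part of the charm) illustrating how an expression like the one above resolves to a set of physical CPU numbers:

```python
def parse_vcpu_pin_set(spec):
    """Expand e.g. "4-12,^8,15" into the set of allowed CPU numbers."""
    included, excluded = set(), set()
    for element in spec.split(","):
        element = element.strip()
        if element.startswith("^"):
            # Caret: exclude this CPU from a previously given range.
            excluded.add(int(element[1:]))
        elif "-" in element:
            # Range of CPU numbers, inclusive at both ends.
            start, end = element.split("-")
            included.update(range(int(start), int(end) + 1))
        else:
            # Single CPU number.
            included.add(int(element))
    return included - excluded

print(sorted(parse_vcpu_pin_set("4-12,^8,15")))
# → [4, 5, 6, 7, 9, 10, 11, 12, 15]
```

So CPUs 4 through 12 are allowed, CPU 8 is carved out of that range, and CPU 15 is added individually.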

The pci-passthrough-whitelist configuration must be specified as follows:

A JSON dictionary which describes a whitelisted PCI device. It should take the following format:

["vendor_id": "<id>",] ["product_id": "<id>",]
["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
"devname": "PCI Device Name",]
{"tag": "<tag_value>",}

where '[' indicates zero or one occurrences, '{' indicates zero or multiple occurrences, and '|' indicates mutually exclusive options. Note that any missing fields are automatically wildcarded. Valid examples are:

pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet"}

pci-passthrough-whitelist: {"address":"*:0a:00.*"}

pci-passthrough-whitelist: {"address":":0a:00.", "physical_network":"physnet1"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}

The following is invalid, as it specifies mutually exclusive options:

pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}

A JSON list of JSON dictionaries corresponding to the above format. For example:

pci-passthrough-whitelist: [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]
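The mutual-exclusion rule above can be expressed as a small check; `whitelist_entry_valid` is a hypothetical helper (not part of the charm) that flags entries mixing the mutually exclusive "devname" and "address" selectors:

```python
import json

def whitelist_entry_valid(entry):
    """An entry may select a device by devname OR by address, not both."""
    return not ("devname" in entry and "address" in entry)

good = json.loads('{"devname": "eth0", "physical_network": "physnet"}')
bad = json.loads('{"devname": "eth0", "physical_network": "physnet", '
                 '"address": "*:0a:00.*"}')
print(whitelist_entry_valid(good), whitelist_entry_valid(bad))
# → True False
```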

The OpenStack advanced networking documentation provides further details on whitelist configuration and how to create instances with Neutron ports wired to SR-IOV devices.