Retire master branch of nova-lxd

Drop content and replace with retirement notice.

Change-Id: I2de2eff7694d60597a6413a0a64124fbbede69bb
Author: James Page
Date: 2019-07-23 13:56:57 +01:00
Parent: 09ea20c600
Commit: 6603a7f323
89 changed files with 10 additions and 10986 deletions


@@ -1,7 +0,0 @@
[run]
branch = True
source = nova.virt.lxd
omit = nova/tests/*
[report]
ignore_errors = True

.gitignore

@@ -1,58 +0,0 @@
*.py[cod]
*.idea
# C extensions
*.so
# Packages
*.egg
*.eggs
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.stestr
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
cover


@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>


@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./nova/tests/unit/virt/lxd
top_dir=./nova/tests/unit/virt/lxd/


@@ -1,30 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This job will execute 'tox -e func_lxd' from the OSA
# repo specified in 'osa_test_repo'.
- job:
    name: openstack-ansible-nova-lxd
    parent: openstack-ansible-cross-repo-functional
    voting: false
    required-projects:
      - name: openstack/openstack-ansible-os_nova
    vars:
      tox_env: lxd
      osa_test_repo: openstack/openstack-ansible-os_nova

- project:
    templates:
      - openstack-lower-constraints-jobs
    check:
      jobs:
        - openstack-ansible-nova-lxd


@@ -1,91 +0,0 @@
Crash course in lxd setup
=========================
nova-lxd absolutely requires lxd, though its installation and configuration
are out of scope here. If you're running Ubuntu, here is the easy path
to a running lxd.
.. code-block:: bash

    sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master && sudo apt-get update
    sudo apt-get -y install lxd
    sudo usermod -aG lxd ${your_username:-stack}
    sudo service lxd start
If you're currently logged in as the user you just added to lxd, you'll
need to log out and log back in again.
Using nova-lxd with devstack
============================
nova-lxd includes a plugin for use in devstack. If you'd like to run
devstack with nova-lxd, you'll want to add the following to `local.conf`:
.. code-block:: bash

    enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd
In this case, nova-lxd will run HEAD from master. You may want to point
this at your own fork. A final argument to `enable_plugin` can be used
to specify a git revision.
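For example, the optional third argument pins the plugin to a specific git
ref; the branch name below is purely illustrative:

```bash
# local.conf fragment: the third argument to enable_plugin is a git ref
# (branch, tag, or SHA). "stable/stein" is an illustrative example only.
enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd stable/stein
```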
Configuration and installation of devstack is beyond the scope
of this document. Here's an example `local.conf` file that will
run the very minimum you'll need for devstack.

.. code-block:: bash

    [[local|localrc]]
    ADMIN_PASSWORD=password
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    SERVICE_TOKEN=$ADMIN_PASSWORD

    disable_service cinder c-sch c-api c-vol
    disable_service n-net n-novnc
    disable_service horizon
    disable_service ironic ir-api ir-cond
    enable_service q-svc q-agt q-dhcp q-l3 q-meta

    # Optional, to enable tempest configuration as part of devstack
    enable_service tempest

    enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd

    # More often than not, stack.sh explodes trying to configure IPv6 support,
    # so let's just disable it for now.
    IP_VERSION=4
Once devstack is running, you'll want to add the lxd image to glance. You can
do this (as an admin) with:
.. code-block:: bash

    wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-root.tar.xz
    glance image-create --name lxd --container-format bare --disk-format raw \
        --visibility=public < trusty-server-cloudimg-amd64-root.tar.xz
To run the tempest tests, you can use:
.. code-block:: bash

    /opt/stack/tempest/run_tempest.sh -N tempest.api.compute
Errata
======
Patches should be submitted to OpenStack Gerrit via `git-review`.
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/nova-lxd
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
https://docs.openstack.org/infra/manual/developers.html


@@ -1,4 +0,0 @@
nova-lxd Style Commandments
===============================================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc


@@ -1,42 +0,0 @@
# nova-lxd [![Build Status](https://travis-ci.org/lxc/nova-lxd.svg?branch=master)](https://travis-ci.org/lxc/nova-lxd)
An OpenStack Compute driver for managing containers using LXD.
## nova-lxd on Devstack
For development purposes, nova-lxd provides a devstack plugin. To use it, just include the
following in your devstack `local.conf`:
```
[[local|localrc]]
enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd
# You should enable the following if you use lxd 3.0.
# In addition, this setting requires zfs >= 0.7.0.
#LXD_BACKEND_DRIVER=zfs
```
Change git repositories as needed (it's probably not very useful to point to the main
nova-lxd repo). If you have a local tree you'd like to use, you can symlink your tree to
`/opt/stack/nova-lxd` and do your development from there.
The devstack default image is a CirrOS LXD image; you can still download
an Ubuntu image. Once your stack is up and you've configured authentication
against your devstack, do the following:
```
wget http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-root.tar.xz
glance image-create --name bionic-amd64 --disk-format raw --container-format bare --file bionic-server-cloudimg-amd64-root.tar.xz
```
# Support and discussions
We use the LXC mailing-lists for developer and user discussions, you can
find and subscribe to those at: https://lists.linuxcontainers.org
If you prefer live discussions, some of us also hang out in
[#lxcontainers](http://webchat.freenode.net/?channels=#lxcontainers) on irc.freenode.net.
## Bug reports
Bug reports can be filed at https://bugs.launchpad.net/nova-lxd

README.rst

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.
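The `git checkout HEAD^1` step above can be sketched end to end with a
throwaway repository standing in for the retired one (all names below are
illustrative):

```shell
set -e
# Simulate a retirement commit, then step back to the pre-retirement tree.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name dev
echo "driver code" > driver.py
git add driver.py
git commit -qm "original content"
git rm -q driver.py
echo "This project is no longer maintained." > README.rst
git add README.rst
git commit -qm "retire repository"
# HEAD^1 is the retirement commit's parent: the last commit with content.
git checkout -q 'HEAD^1'
ls   # driver.py is back; README.rst is gone
```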


@@ -1 +0,0 @@
[python: **.py]


@@ -1,25 +0,0 @@
#!/bin/bash -xe
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test function in devstack gate.
source $BASE/new/devstack/functions
INSTALLDIR=${INSTALLDIR:-/opt/stack}
source $INSTALLDIR/devstack/functions-common
LOGDIR=/opt/stack/logs
# Collect logs from the containers
sudo mkdir -p $LOGDIR/containers/
sudo cp -rp /var/log/lxd/* $LOGDIR/containers


@@ -1,28 +0,0 @@
#!/bin/bash -xe
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside pre_test_hook function in devstack gate.
# First argument ($1) expects boolean as value where:
# 'False' means share driver will not handle share servers
# 'True' means it will handle share servers.
# Import devstack function 'trueorfalse'
source $BASE/new/devstack/functions
# Note, due to Bug#1822182 we have to set this to default for the disk backend
# otherwise rescue tests will not work.
DEVSTACK_LOCAL_CONFIG+=$'\n'"LXD_BACKEND_DRIVER=default"
export DEVSTACK_LOCAL_CONFIG


@@ -1,26 +0,0 @@
{
    "namespace": "OS::Nova::LXDFlavor",
    "display_name": "LXD properties",
    "description": "You can pass several options to the LXD container hypervisor that will affect the container's capabilities.",
    "visibility": "public",
    "protected": false,
    "resource_type_associations": [
        {
            "name": "OS::Nova::Flavor"
        }
    ],
    "properties": {
        "lxd:nested_allowed": {
            "title": "Allow nested containers",
            "description": "Allow or disallow creation of nested containers. If True, you can install and run LXD inside the VM itself and provision another level of containers.",
            "type": "string",
            "default": false
        },
        "lxd:privileged_allowed": {
            "title": "Create privileged container",
            "description": "Containers created as Privileged have elevated powers on the compute host. You should not set this option on containers that you don't fully trust.",
            "type": "string",
            "default": false
        }
    }
}
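These properties are attached to flavors as extra specs; a typical
invocation with the standard OpenStack client would look like the following
(flavor name illustrative, and a running cloud with admin credentials is
assumed):

```bash
# Illustrative only: attach the LXD properties defined above to a flavor.
openstack flavor set m1.small \
  --property lxd:nested_allowed=True \
  --property lxd:privileged_allowed=True
```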


@@ -1 +0,0 @@
Run run_tempest_lxd.sh to execute the tempest.api.compute tests against nova-lxd


@@ -1,33 +0,0 @@
#!/bin/bash
# Construct a regex to use when limiting the scope of tempest
# to avoid features unsupported by nova-lxd.
# Note that several tests are disabled by the use of tempest
# feature toggles in devstack for an LXD config,
# so this regex is not entirely representative of
# what's excluded.
# When adding entries to ignored_tests, add a comment explaining
# why, since this list should not grow.
# Temporarily skip the image tests since they give false positives
# for nova-lxd.
ignored_tests="|^tempest.api.compute.images"
# Regressions
ignored_tests="$ignored_tests|.*AttachInterfacesTestJSON.test_create_list_show_delete_interfaces"
# backups are not supported
ignored_tests="$ignored_tests|.*ServerActionsTestJSON.test_create_backup"
# failed verification tests
ignored_tests="$ignored_tests|.*ServersWithSpecificFlavorTestJSON.test_verify_created_server_ephemeral_disk"
ignored_tests="$ignored_tests|.*AttachVolumeShelveTestJSON.test_attach_detach_volume"
ignored_tests="$ignored_tests|.*AttachVolumeTestJSON.test_attach_detach_volume"
regex="(?!.*\\[.*\\bslow\\b.*\\]$ignored_tests)(^tempest\\.api\\.compute)"
ostestr --serial --regex "$regex"
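Because the exclusion list is plain alternation, it can be sanity-checked
with `grep -Ev` before being handed to the test runner; the test names
below are examples only:

```shell
set -e
# Build a small alternation list the same way the script above does.
ignored="|^tempest.api.compute.images"
ignored="$ignored|.*ServerActionsTestJSON.test_create_backup"
# Strip the leading "|" and drop matching test names.
printf '%s\n' \
  "tempest.api.compute.images.test_list_images" \
  "tempest.api.compute.servers.ServerActionsTestJSON.test_create_backup" \
  "tempest.api.compute.servers.test_create_server" \
  | grep -Ev "(${ignored#|})"
# Only test_create_server survives the filter.
```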


@@ -1,29 +0,0 @@
[[local|localrc]]
# Set the HOST_IP and FLAT_INTERFACE if automatic detection is
# unreliable
#HOST_IP=
#FLAT_INTERFACE=
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
# run the services you want to use
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth,placement-api,placement-client
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-meta,q-l3
ENABLED_SERVICES+=,cinder,c-sch,c-api,c-vol
ENABLED_SERVICES+=,horizon
# disabled services
disable_service n-net
# enable nova-lxd
enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd
# You should enable the following if you use lxd 3.0.
# In addition, this setting requires zfs >= 0.7.0.
#LXD_BACKEND_DRIVER=zfs


@@ -1,2 +0,0 @@
# Plug-in overrides
VIRT_DRIVER=lxd


@@ -1,202 +0,0 @@
#!/bin/bash
# Save trace setting
MY_XTRACE=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
# Set up base directories
NOVA_DIR=${NOVA_DIR:-$DEST/nova}
NOVA_CONF_DIR=${NOVA_CONF_DIR:-/etc/nova}
NOVA_CONF=${NOVA_CONF:-$NOVA_CONF_DIR/nova.conf}
# Configure LXD storage backends
# Note Bug:1822182 - ZFS backend is broken for Rescue's so don't use it!
LXD_BACKEND_DRIVER=${LXD_BACKEND_DRIVER:-default}
LXD_DISK_IMAGE=${DATA_DIR}/lxd.img
LXD_LOOPBACK_DISK_SIZE=${LXD_LOOPBACK_DISK_SIZE:-8G}
LXD_POOL_NAME=${LXD_POOL_NAME:-default}
# nova-lxd directories
NOVA_COMPUTE_LXD_DIR=${NOVA_COMPUTE_LXD_DIR:-${DEST}/nova-lxd}
NOVA_COMPUTE_LXD_PLUGIN_DIR=$(readlink -f $(dirname ${BASH_SOURCE[0]}))
# glance directories
GLANCE_CONF_DIR=${GLANCE_CONF_DIR:-/etc/glance}
GLANCE_API_CONF=$GLANCE_CONF_DIR/glance-api.conf
function pre_install_nova-lxd() {
# Install OS packages if necessary with "install_package ...".
echo_summary "Installing LXD"
if is_ubuntu; then
if [ "$DISTRO" == "trusty" ]; then
sudo add-apt-repository -y ppa:ubuntu-lxc/lxd-stable
fi
is_package_installed lxd || install_package lxd
add_user_to_group $STACK_USER $LXD_GROUP
needs_restart=false
is_package_installed apparmor || \
install_package apparmor && needs_restart=true
is_package_installed apparmor-profiles-extra || \
install_package apparmor-profiles-extra && needs_restart=true
is_package_installed apparmor-utils || \
install_package apparmor-utils && needs_restart=true
if $needs_restart; then
restart_service lxd
fi
fi
}
function install_nova-lxd() {
# Install the service.
setup_develop $NOVA_COMPUTE_LXD_DIR
}
function configure_nova-lxd() {
# Configure the service.
iniset $NOVA_CONF DEFAULT compute_driver lxd.LXDDriver
iniset $NOVA_CONF DEFAULT force_config_drive False
iniset $NOVA_CONF lxd pool $LXD_POOL_NAME
if is_service_enabled glance; then
iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso,qcow2,root-tar"
iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
fi
# Install the rootwrap
sudo install -o root -g root -m 644 $NOVA_COMPUTE_LXD_DIR/etc/nova/rootwrap.d/*.filters $NOVA_CONF_DIR/rootwrap.d
}
function init_nova-lxd() {
# Initialize and start the service.
mkdir -p $TOP_DIR/files
# Download and install the cirros lxc image
CIRROS_IMAGE_FILE=cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz
if [ ! -f $TOP_DIR/files/$CIRROS_IMAGE_FILE ]; then
wget --progress=dot:giga \
-c http://download.cirros-cloud.net/${CIRROS_VERSION}/${CIRROS_IMAGE_FILE} \
-O $TOP_DIR/files/${CIRROS_IMAGE_FILE}
fi
openstack --os-cloud=devstack-admin \
--os-region-name="$REGION_NAME" image create "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxd" \
--public --container-format bare \
--disk-format raw < $TOP_DIR/files/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz
if is_service_enabled cinder; then
# Enable user namespace for ext4, this has only been tested on xenial+
echo Y | sudo tee /sys/module/ext4/parameters/userns_mounts
fi
}
function test_config_nova-lxd() {
# Configure tempest or other tests as required
if is_service_enabled tempest; then
TEMPEST_CONFIG=${TEMPEST_CONFIG:-$TEMPEST_DIR/etc/tempest.conf}
TEMPEST_IMAGE=`openstack image list | grep cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxd | awk {'print $2'}`
TEMPEST_IMAGE_ALT=$TEMPEST_IMAGE
iniset $TEMPEST_CONFIG image disk_formats "ami,ari,aki,vhd,raw,iso,root-tar"
iniset $TEMPEST_CONFIG compute volume_device_name sdb
# TODO(jamespage): Review and update
iniset $TEMPEST_CONFIG compute-feature-enabled shelve False
iniset $TEMPEST_CONFIG compute-feature-enabled resize False
iniset $TEMPEST_CONFIG compute-feature-enabled config_drive False
iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume False
iniset $TEMPEST_CONFIG compute-feature-enabled vnc_console False
iniset $TEMPEST_CONFIG compute image_ref $TEMPEST_IMAGE
iniset $TEMPEST_CONFIG compute image_ref_alt $TEMPEST_IMAGE_ALT
iniset $TEMPEST_CONFIG scenario img_file cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz
fi
}
function configure_lxd_block() {
echo_summary "Configure LXD storage backend."
if is_ubuntu; then
if [ "$LXD_BACKEND_DRIVER" == "default" ]; then
if [ "$LXD_POOL_NAME" == "default" ]; then
echo_summary " . Configuring '${LXD_POOL_NAME}' dir backend for bionic lxd"
sudo lxd init --auto --storage-backend dir
else
echo_summary " . LXD_POOL_NAME != default, considering lxd already initialized"
fi
elif [ "$LXD_BACKEND_DRIVER" == "zfs" ]; then
pool=`lxc profile device get default root pool 2>> /dev/null || :`
if [ "$pool" != "$LXD_POOL_NAME" ]; then
echo_summary " . Configuring ZFS backend"
truncate -s $LXD_LOOPBACK_DISK_SIZE $LXD_DISK_IMAGE
# TODO(sahid): switch to use snap
sudo apt-get install -y zfsutils-linux
lxd_dev=`sudo losetup --show -f ${LXD_DISK_IMAGE}`
sudo lxd init --auto --storage-backend zfs --storage-pool $LXD_POOL_NAME \
--storage-create-device $lxd_dev
else
echo_summary " . ZFS backend already configured"
fi
fi
fi
}
function shutdown_nova-lxd() {
# Shut the service down.
:
}
function cleanup_nova-lxd() {
# Cleanup the service.
if [ "$LXD_BACKEND_DRIVER" == "zfs" ]; then
pool=`lxc profile device get default root pool 2>> /dev/null || :`
if [ "$pool" == "$LXD_POOL_NAME" ]; then
sudo lxc profile device remove default root
sudo lxc storage delete $LXD_POOL_NAME
fi
fi
}
if is_service_enabled nova-lxd; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
# Set up system services
echo_summary "Configuring system services nova-lxd"
pre_install_nova-lxd
configure_lxd_block
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
# Perform installation of service source
echo_summary "Installing nova-lxd"
install_nova-lxd
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
# Configure after the other layer 1 and 2 services have been configured
echo_summary "Configuring nova-lxd"
configure_nova-lxd
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
# Initialize and start the nova-lxd service
echo_summary "Initializing nova-lxd"
init_nova-lxd
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
# Configure any testing configuration
echo_summary "Test configuration - nova-lxd"
test_config_nova-lxd
fi
if [[ "$1" == "unstack" ]]; then
# Shut down nova-lxd services
# no-op
shutdown_nova-lxd
fi
if [[ "$1" == "clean" ]]; then
# Remove state and transient data
# Remember clean.sh first calls unstack.sh
# no-op
cleanup_nova-lxd
fi
fi


@@ -1,6 +0,0 @@
# Add nova-lxd to enabled services
enable_service nova-lxd
# LXD install/upgrade settings
INSTALL_LXD=${INSTALL_LXD:-False}
LXD_GROUP=${LXD_GROUP:-lxd}


@@ -1,93 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This script is executed in the OpenStack CI *tempest-dsvm-lxd job.
# It's used to configure which tempest tests actually get run. You can find
# the CI job configuration here:
#
# http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml
#
# Construct a regex to use when limiting scope of tempest
# to avoid features unsupported by Nova's LXD support.
# Note that several tests are disabled by the use of tempest
# feature toggles in devstack/lib/tempest for an lxd config,
# so this regex is not entirely representative of what's excluded.
# When adding entries to the regex, add a comment explaining why
# since this list should not grow.
r="^(?!.*"
r="$r(?:.*\[.*\bslow\b.*\])"
# (zulcss) nova-lxd does not support booting ami/aki images
r="$r|(?:tempest\.scenario\.test_minimum_basic\.TestMinimumBasicScenario\.test_minimum_basic_scenario)"
# XXX: zulcss (18 Oct 2016) nova-lxd does not support booting from ebs volumes
r="$r|(?:tempest\.scenario\.test_volume_boot_pattern.*)"
r="$r|(?:tempest\.api\.compute\.servers\.test_create_server\.ServersTestBootFromVolume)"
# XXX: zulcss (18 Oct 2016) tempest test only passes when there is more than 10 lines in the
# console output, and cirros LXD consoles have only a single line of output
r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output_with_unlimited_size)"
# tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output_with_unlimited_size
# also tempest get console fails for the following two for length of output reasons
r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output)"
# tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output
r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output_server_id_in_shutoff_status)"
# tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output_server_id_in_shutoff_status
# XXX: jamespage (09 June 2017) veth pair nics not detected/configured by tempest
# https://review.openstack.org/#/c/472641/
# XXX: jamespage (09 June 2017) instance not accessible via floating IP.
r="$r|(?:tempest\.scenario\.test_network_v6\.TestGettingAddress\.test_dualnet_multi_prefix_dhcpv6_stateless)"
r="$r|(?:tempest\.scenario\.test_network_v6\.TestGettingAddress\.test_dualnet_multi_prefix_slaac)"
#tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless
#tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac
# XXX: zulcss (18 Oct 2016) Could not connect to instance
#r="$r|(?:tempest\.scenario\.test_network_advanced_server_ops\.TestNetworkAdvancedServerOps\.test_server_connectivity_suspend_resume)"
# XXX: jamespage (08 June 2017): test failures with a mismatch in the number of disks reported
r="$r|(?:tempest\.api\.compute\.admin\.test_create_server\.ServersWithSpecificFlavorTestJSON\.test_verify_created_server_ephemeral_disk)"
#tempest.api.compute.admin.test_create_server.ServersWithSpecificFlavorTestJSON.test_verify_created_server_ephemeral_disk
# XXX: jamespage (08 June 2017): nova-lxd driver does not support device tagging
r="$r|(?:tempest\.api\.compute\.servers\.test_device_tagging.*)"
#tempest.api.compute.servers.test_device_tagging.DeviceTaggingTestV2_42.test_device_tagging
# XXX: jamespage (08 June 2017): mismatching output on LXD instance use-case
#tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume
#tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_detach_volume
r="$r|(?:tempest\.api\.compute\.volumes\.test_attach_volume\.AttachVolumeTestJSON\.test_attach_detach_volume)"
r="$r|(?:tempest\.api\.compute\.volumes\.test_attach_volume\.AttachVolumeShelveTestJSON\.test_attach_detach_volume)"
#testtools.matchers._impl.MismatchError: u'NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nsda 8:0 0 1073741824 0 disk \nsdb 8:16 0 1073741824 0 disk \nvda 253:0 0 85899345920 0 disk \nvdb 253:16 0 42949672960 0 disk ' matches Contains('\nsdb ')
# XXX: jamespage (26 June 2017): disable diagnostic checks until driver implements them
# https://bugs.launchpad.net/nova-lxd/+bug/1700516
r="$r|(?:.*test_get_server_diagnostics.*)"
#test_get_server_diagnostics
# XXX: ajkavanagh (2018-07-23): disable test_show_update_rebuild_list_server as nova-lxd doesn't have the
# 'supports_trusted_certs' capability, and the test uses it.
# BUG: https://bugs.launchpad.net/nova-lxd/+bug/1783080
r="$r|(?:.*ServerShowV263Test.test_show_update_rebuild_list_server.*)"
r="$r).*$"
export DEVSTACK_GATE_TEMPEST_REGEX="$r"
# set the concurrency to 1 for devstack-gate
# See: https://bugs.launchpad.net/nova-lxd/+bug/1790943
#export TEMPEST_CONCURRENCY=1
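As a standalone sanity check (not part of the gate script, and using only two of the patterns above as an illustration), the accumulated regex can be tested against a skipped test name. Python's re module is used here because the `(?:...)` groups are Perl-style and not supported by plain `grep -E`:

```shell
# Minimal sketch: build an exclusion regex the same way the script
# above does, then confirm a known-skipped test name matches it.
r="^(?:"
r="$r(?:tempest\.api\.compute\.servers\.test_device_tagging.*)"
r="$r|(?:.*test_get_server_diagnostics.*)"
r="$r)$"
match=$(python3 -c 'import re, sys; print(bool(re.match(sys.argv[1], sys.argv[2])))' \
    "$r" "tempest.api.compute.servers.test_device_tagging.DeviceTaggingTest.test_device_tagging")
echo "$match"
```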


@ -1,76 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx_feature_classification.support_matrix',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'nova-lxd'
copyright = u'2015, Canonical Ltd'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}


@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst


@ -1,125 +0,0 @@
Nova-LXD Exclusive Machine
==========================
As LXD is a system container format, it is possible to provision "bare metal"
machines with nova-lxd without exposing the kernel and firmware to the tenant.
This is done by means of host aggregates and flavor assignment. The instance
will fill the entirety of the host, and no other instances will be assigned
to it.
This document describes the method used to achieve this exclusive machine
scheduling. It is meant to serve as an example; flavors and aggregates
may be named as desired.
Prerequisites
-------------
Exclusive machine scheduling requires two scheduler filters to be enabled in
`scheduler_default_filters` in `nova.conf`, namely
`AggregateInstanceExtraSpecsFilter` and `AggregateNumInstancesFilter`.
If juju was used to install and manage the openstack environment, the following
command will enable these filters::
juju set nova-cloud-controller scheduler-default-filters="AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter"
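Where juju is not managing the deployment, the equivalent is a direct edit of `nova.conf` on the scheduler host (a sketch; the trailing filters should mirror whatever list the deployment already uses), followed by a restart of the nova-scheduler service:

```ini
[DEFAULT]
# Prepend the two aggregate filters to the deployment's existing filter list.
scheduler_default_filters = AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```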
Host Aggregate
--------------
Each host designed to be exclusively available to a single instance must be
added to a special host aggregate.
In this example, the following is a nova host listing::
user@openstack$ nova host-list
+------------+-----------+----------+
| host_name  | service   | zone     |
+------------+-----------+----------+
| machine-9  | cert      | internal |
| machine-9  | scheduler | internal |
| machine-9  | conductor | internal |
| machine-12 | compute   | nova     |
| machine-11 | compute   | nova     |
| machine-10 | compute   | nova     |
+------------+-----------+----------+
Create the host aggregate itself. In this example, the aggregate is called
"exclusive-machines"::
user@openstack$ nova aggregate-create exclusive-machines
+----+--------------------+-------------------+-------+----------+
| Id | Name               | Availability Zone | Hosts | Metadata |
+----+--------------------+-------------------+-------+----------+
| 1  | exclusive-machines | -                 |       |          |
+----+--------------------+-------------------+-------+----------+
Two metadata properties are then set on the host aggregate itself::
user@openstack$ nova aggregate-set-metadata 1 aggregate_instance_extra_specs:exclusive=true
Metadata has been successfully updated for aggregate 1.
+----+--------------------+-------------------+-------+-------------------------------------------------+
| Id | Name               | Availability Zone | Hosts | Metadata                                        |
+----+--------------------+-------------------+-------+-------------------------------------------------+
| 1  | exclusive-machines | -                 |       | 'aggregate_instance_extra_specs:exclusive=true' |
+----+--------------------+-------------------+-------+-------------------------------------------------+
user@openstack$ nova aggregate-set-metadata 1 max_instances_per_host=1
Metadata has been successfully updated for aggregate 1.
+----+--------------------+-------------------+-------+-----------------------------------------------------------------------------+
| Id | Name               | Availability Zone | Hosts | Metadata                                                                    |
+----+--------------------+-------------------+-------+-----------------------------------------------------------------------------+
| 1  | exclusive-machines | -                 |       | 'aggregate_instance_extra_specs:exclusive=true', 'max_instances_per_host=1' |
+----+--------------------+-------------------+-------+-----------------------------------------------------------------------------+
The first aggregate metadata property is the link between the flavor (still to
be created) and the compute hosts (still to be added to the aggregate). The
second metadata property ensures that nova never tries to schedule a second
instance onto the same host (e.g. if nova is configured to overcommit
resources).
Now the hosts must be added to the aggregate. Once they are added to the
host aggregate, they will not be available for other flavors. This will be
important in resource sizing efforts. To add the hosts::
user@openstack$ nova aggregate-add-host exclusive-machines machine-10
Host juju-serverstack-machine-10 has been successfully added for aggregate 1
+----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+
| Id | Name               | Availability Zone | Hosts        | Metadata                                                                    |
+----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+
| 1  | exclusive-machines | -                 | 'machine-10' | 'aggregate_instance_extra_specs:exclusive=true', 'max_instances_per_host=1' |
+----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+
Exclusive machine flavors
-------------------------
When planning exclusive machine flavors, a small amount of resources must
still be reserved for nova-compute and LXD themselves. In general, 100MB of
RAM is a safe estimate, though specific hosts may need to be tuned more
closely to their use cases.
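For illustration, the sizing arithmetic can be sketched as a tiny shell calculation (values assumed from the machine-10 example in this document):

```shell
# Hypothetical sizing helper: reserve 100MB of the host's RAM for
# nova-compute and LXD, and give the rest to the exclusive flavor.
total_mb=4096      # host total memory (machine-10 in this example)
overhead_mb=100    # reserved for nova-compute/LXD
flavor_mb=$((total_mb - overhead_mb))
echo "$flavor_mb"
```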
In this example, `machine-10` has 4096MB of total memory, 2 CPUs, and 500GB
of disk space. The flavor created below will have 3996MB of RAM, 2 CPUs,
and 500GB of disk::
user@openstack$ nova flavor-create --is-public true e1.medium 100 3996 500 2
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 100 | e1.medium | 3996      | 500  | 0         |      | 2     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
The `e1.medium` flavor must now have some metadata set to link it with the
`exclusive-machines` host aggregate::
user@openstack$ nova flavor-key 100 set exclusive=true
Booting an exclusive instance
-----------------------------
Once the host aggregate and flavor have been created, exclusive machines
can be provisioned by using the flavor `e1.medium`::
user@openstack$ nova boot --flavor 100 --image $IMAGE exclusive
The `exclusive` instance, once provisioned, will fill the entire host
machine.


@ -1,25 +0,0 @@
.. nova-lxd documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to nova-lxd's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
usage
contributing
exclusive_machine
vif_wiring
support_matrix/support-matrix
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -1,701 +0,0 @@
# Driver definition
[driver.nova-lxd]
title=Nova-LXD
# Functions:
[operation.attach-volume]
title=Attach block volume to instance
status=optional
notes=The attach volume operation provides a means to hotplug
additional block storage to a running instance. This allows
storage capabilities to be expanded without interruption of
service. In a cloud model it would be more typical to just
spin up a new instance with large storage, so the ability to
hotplug extra storage is for those cases where the instance
is considered to be more of a pet than cattle. Therefore
this operation is not considered to be mandatory to support.
cli=nova volume-attach <server> <volume>
driver.nova-lxd=complete
[operation.attach-tagged-volume]
title=Attach tagged block device to instance
status=optional
notes=Attach a block device with a tag to an existing server instance. See
"Device tags" for more information.
cli=nova volume-attach <server> <volume> [--tag <tag>]
driver.nova-lxd=unknown
[operation.detach-volume]
title=Detach block volume from instance
status=optional
notes=See notes for attach volume operation.
cli=nova volume-detach <server> <volume>
driver.nova-lxd=missing
[operation.extend-volume]
title=Extend block volume attached to instance
status=optional
notes=The extend volume operation provides a means to extend
the size of an attached volume. This allows volume size
to be expanded without interruption of service.
In a cloud model it would be more typical to just
spin up a new instance with large storage, so the ability to
extend the size of an attached volume is for those cases
where the instance is considered to be more of a pet than cattle.
Therefore this operation is not considered to be mandatory to support.
cli=cinder extend <volume> <new_size>
driver.nova-lxd=unknown
[operation.attach-interface]
title=Attach virtual network interface to instance
status=optional
notes=The attach interface operation provides a means to hotplug
additional interfaces to a running instance. Hotplug support
varies between guest OSes and some guests require a reboot for
new interfaces to be detected. This operation allows interface
capabilities to be expanded without interruption of service.
In a cloud model it would be more typical to just spin up a
new instance with more interfaces.
cli=nova interface-attach <server>
driver.nova-lxd=complete
[operation.attach-tagged-interface]
title=Attach tagged virtual network interface to instance
status=optional
notes=Attach a virtual network interface with a tag to an existing
server instance. See "Device tags" for more information.
cli=nova interface-attach <server> [--tag <tag>]
driver.nova-lxd=unknown
[operation.detach-interface]
title=Detach virtual network interface from instance
status=optional
notes=See notes for attach-interface operation.
cli=nova interface-detach <server> <port_id>
driver.nova-lxd=complete
[operation.maintenance-mode]
title=Set the host in a maintenance mode
status=optional
notes=This operation allows a host to be placed into maintenance
mode, automatically triggering migration of any running
instances to an alternative host and preventing new
instances from being launched. This is not considered
to be a mandatory operation to support.
The driver methods to implement are "host_maintenance_mode" and
"set_host_enabled".
cli=nova host-update <host>
driver.nova-lxd=unknown
[operation.evacuate]
title=Evacuate instances from a host
status=optional
notes=A possible failure scenario in a cloud environment is the outage
of one of the compute nodes. In such a case the instances of the down
host can be evacuated to another host. It is assumed that the old host
is unlikely ever to be powered back on, otherwise the evacuation
attempt will be rejected. When the instances get moved to the new
host, their volumes get re-attached and the locally stored data is
dropped. That happens in the same way as a rebuild.
This is not considered to be a mandatory operation to support.
cli=nova evacuate <server>;nova host-evacuate <host>
driver.nova-lxd=complete
[operation.rebuild]
title=Rebuild instance
status=optional
notes=A possible use case is that additional attributes need to be set
on the instance; nova will purge all existing data from the system
and remake the VM with the given information, such as 'metadata' and
'personalities'. This is not considered to be a mandatory
operation to support.
cli=nova rebuild <server> <image>
driver.nova-lxd=complete
[operation.get-guest-info]
title=Guest instance status
status=mandatory
notes=Provides realtime information about the power state of the guest
instance. Since the power state is used by the compute manager for
tracking changes in guests, this operation is considered mandatory to
support.
cli=
driver.nova-lxd=unknown
[operation.get-host-uptime]
title=Guest host uptime
status=optional
notes=Returns the host uptime since power on; it is used to
report hypervisor status.
cli=
driver.nova-lxd=unknown
[operation.get-host-ip]
title=Guest host ip
status=optional
notes=Returns the IP of this host; it is used when doing
resize and migration.
cli=
driver.nova-lxd=unknown
[operation.live-migrate]
title=Live migrate instance across hosts
status=optional
notes=Live migration provides a way to move an instance off one
compute host, to another compute host. Administrators may use
this to evacuate instances from a host that needs to undergo
maintenance tasks, though of course this may not help if the
host is already suffering a failure. In general instances are
considered cattle rather than pets, so it is expected that an
instance is liable to be killed if host maintenance is required.
It is technically challenging for some hypervisors to provide
support for the live migration operation, particularly those
built on the container based virtualization. Therefore this
operation is not considered mandatory to support.
cli=nova live-migration <server>;nova host-evacuate-live <host>
driver.nova-lxd=complete
[operation.force-live-migration-to-complete]
title=Force live migration to complete
status=optional
notes=Live migration provides a way to move a running instance to another
compute host. But it can sometimes fail to complete if an instance has
a high rate of memory or disk page access.
This operation provides the user with an option to assist the progress
of the live migration. The mechanism used to complete the live
migration depends on the underlying virtualization subsystem
capabilities. If libvirt/qemu is used and the post-copy feature is
available and enabled then the force complete operation will cause
a switch to post-copy mode. Otherwise the instance will be suspended
until the migration is completed or aborted.
cli=nova live-migration-force-complete <server> <migration>
driver.nova-lxd=unknown
[operation.launch]
title=Launch instance
status=mandatory
notes=Importing pre-existing running virtual machines on a host is
considered out of scope of the cloud paradigm. Therefore this
operation is mandatory to support in drivers.
cli=
driver.nova-lxd=unknown
[operation.pause]
title=Stop instance CPUs (pause)
status=optional
notes=Stopping an instance's CPUs can be thought of as roughly
equivalent to suspend-to-RAM. The instance is still present
in memory, but execution has stopped. The problem, however,
is that there is no mechanism to inform the guest OS that
this takes place, so upon unpausing, its clocks will no
longer report correct time. For this reason hypervisor vendors
generally discourage use of this feature and some do not even
implement it. Therefore this operation is considered optional
to support in drivers.
cli=nova pause <server>
driver.nova-lxd=complete
[operation.reboot]
title=Reboot instance
status=optional
notes=It is reasonable for a guest OS administrator to trigger a
graceful reboot from inside the instance. A host initiated
graceful reboot requires guest co-operation and a non-graceful
reboot can be achieved by a combination of stop+start. Therefore
this operation is considered optional.
cli=nova reboot <server>
driver.nova-lxd=complete
[operation.rescue]
title=Rescue instance
status=optional
notes=The rescue operation starts an instance in a special
configuration whereby it is booted from a special root
disk image. The goal is to allow an administrator to
recover the state of a broken virtual machine. In general
the cloud model considers instances to be cattle, so if
an instance breaks the general expectation is that it be
thrown away and a new instance created. Therefore this
operation is considered optional to support in drivers.
cli=nova rescue <server>
driver.nova-lxd=complete
[operation.resize]
title=Resize instance
status=optional
notes=The resize operation allows the user to change a running
instance to match the size of a different flavor from the one
it was initially launched with. There are many different
flavor attributes that potentially need to be updated. In
general it is technically challenging for a hypervisor to
support the alteration of all relevant config settings for a
running instance. Therefore this operation is considered
optional to support in drivers.
cli=nova resize <server> <flavor>
driver.nova-lxd=missing
[operation.resume]
title=Restore instance
status=optional
notes=See notes for the suspend operation
cli=nova resume <server>
driver.nova-lxd=complete
[operation.set-admin-password]
title=Set instance admin password
status=optional
notes=Provides a mechanism to (re)set the password of the administrator
account inside the instance operating system. This requires that the
hypervisor has a way to communicate with the running guest operating
system. Given the wide range of operating systems in existence it is
unreasonable to expect this to be practical in the general case. The
configdrive and metadata service both provide a mechanism for setting
the administrator password at initial boot time. In the case where this
operation were not available, the administrator would simply have to
login to the guest and change the password in the normal manner, so
this is just a convenient optimization. Therefore this operation is
not considered mandatory for drivers to support.
cli=nova set-password <server>
driver.nova-lxd=unknown
[operation.snapshot]
title=Save snapshot of instance disk
status=optional
notes=The snapshot operation allows the current state of the
instance root disk to be saved and uploaded back into the
glance image repository. The instance can later be booted
again using this saved image. This is in effect making
the ephemeral instance root disk into a semi-persistent
storage, in so much as it is preserved even though the guest
is no longer running. In general though, the expectation is
that the root disks are ephemeral so the ability to take a
snapshot cannot be assumed. Therefore this operation is not
considered mandatory to support.
cli=nova image-create <server> <name>
driver.nova-lxd=complete
[operation.suspend]
title=Suspend instance
status=optional
notes=Suspending an instance can be thought of as roughly
equivalent to suspend-to-disk. The instance no longer
consumes any RAM or CPUs, with its live running state
having been preserved in a file on disk. It can later
be restored, at which point it should continue execution
where it left off. As with stopping instance CPUs, it suffers from the fact
that the guest OS will typically be left with a clock that
is no longer telling correct time. For container based
virtualization solutions, this operation is particularly
technically challenging to implement and is an area of
active research. This operation tends to make more sense
when thinking of instances as pets, rather than cattle,
since with cattle it would be simpler to just terminate
the instance instead of suspending. Therefore this operation
is considered optional to support.
cli=nova suspend <server>
driver.nova-lxd=complete
[operation.swap-volume]
title=Swap block volumes
status=optional
notes=The swap volume operation is a mechanism for changing a running
instance so that its attached volume(s) are backed by different
storage in the host. An alternative to this would be to simply
terminate the existing instance and spawn a new instance with the
new storage. In other words this operation is primarily targeted towards
the pet use case rather than cattle, however, it is required for volume
migration to work in the volume service. This is considered optional to
support.
cli=nova volume-update <server> <attachment> <volume>
driver.nova-lxd=missing
[operation.terminate]
title=Shutdown instance
status=mandatory
notes=The ability to terminate a virtual machine is required in
order for a cloud user to stop utilizing resources and thus
avoid indefinitely ongoing billing. Therefore this operation
is mandatory to support in drivers.
cli=nova delete <server>
driver.nova-lxd=complete
[operation.trigger-crash-dump]
title=Trigger crash dump
status=optional
notes=The trigger crash dump operation is a mechanism for triggering
a crash dump in an instance. The feature is typically implemented by
injecting an NMI (Non-maskable Interrupt) into the instance. It provides
a means to dump the production memory image as a dump file which is useful
for users. Therefore this operation is considered optional to support.
cli=nova trigger-crash-dump <server>
driver.nova-lxd=unknown
[operation.unpause]
title=Resume instance CPUs (unpause)
status=optional
notes=See notes for the "Stop instance CPUs" operation
cli=nova unpause <server>
driver.nova-lxd=unknown
[operation.guest.disk.autoconfig]
title=[Guest]Auto configure disk
status=optional
notes=Partition and resize the FS to match the size specified by
flavors.root_gb. As this is a hypervisor-specific feature,
this operation is considered optional to support.
cli=
driver.nova-lxd=complete
[operation.guest.disk.rate-limit]
title=[Guest]Instance disk I/O limits
status=optional
notes=The ability to set rate limits on virtual disks allows for
greater performance isolation between instances running on the
same host storage. It is valid to delegate scheduling of I/O
operations to the hypervisor with its default settings, instead
of doing fine grained tuning. Therefore this is not considered
to be a mandatory configuration to support.
cli=nova limits
driver.nova-lxd=unknown
[operation.guest.setup.configdrive]
title=[Guest]Config drive support
status=choice(guest.setup)
notes=The config drive provides an information channel into
the guest operating system, to enable configuration of the
administrator password, file injection, registration of
SSH keys, etc. Since cloud images typically ship with all
login methods locked, a mechanism to set the administrator
password or keys is required to get login access. Alternatives
include the metadata service and disk injection. At least one
of the guest setup mechanisms is required to be supported by
drivers, in order to enable login access.
cli=
driver.nova-lxd=complete
[operation.guest.setup.inject.file]
title=[Guest]Inject files into disk image
status=optional
notes=This allows for the end user to provide data for multiple
files to be injected into the root filesystem before an instance
is booted. This requires that the compute node understand the
format of the filesystem and any partitioning scheme it might
use on the block device. This is a non-trivial problem considering
the vast number of filesystems in existence. The problem of injecting
files to a guest OS is better solved by obtaining via the metadata
service or config drive. Therefore this operation is considered
optional to support.
cli=
driver.nova-lxd=unknown
[operation.guest.setup.inject.networking]
title=[Guest]Inject guest networking config
status=optional
notes=This allows for static networking configuration (IP
address, netmask, gateway and routes) to be injected directly
into the root filesystem before an instance is booted. This
requires that the compute node understand how networking is
configured in the guest OS which is a non-trivial problem
considering the vast number of operating system types. The
problem of configuring networking is better solved by DHCP
or by obtaining static config via
config drive. Therefore this operation is considered optional
to support.
cli=
driver.nova-lxd=unknown
[operation.console.rdp]
title=[Console]Remote desktop over RDP
status=choice(console)
notes=This allows the administrator to interact with the graphical
console of the guest OS via RDP. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Some operating
systems may prefer to emit messages via the serial console for
easier consumption. Therefore support for this operation is not
mandatory, however, a driver is required to support at least one
of the listed console access operations.
cli=nova get-rdp-console <server> <console-type>
driver.nova-lxd=missing
[operation.console.serial.log]
title=[Console]View serial console logs
status=choice(console)
notes=This allows the administrator to query the logs of data
emitted by the guest OS on its virtualized serial port. For
UNIX guests this typically includes all boot up messages and
so is useful for diagnosing problems when an instance fails
to successfully boot. Not all guest operating systems will be
able to emit boot information on a serial console, others may
only support graphical consoles. Therefore support for this
operation is not mandatory, however, a driver is required to
support at least one of the listed console access operations.
cli=nova console-log <server>
driver.nova-lxd=complete
[operation.console.serial.interactive]
title=[Console]Remote interactive serial console
status=choice(console)
notes=This allows the administrator to interact with the serial
console of the guest OS. This provides a way to see boot
up messages and login to the instance when networking configuration
has failed, thus preventing a network based login. Not all guest
operating systems will be able to emit boot information on a serial
console, others may only support graphical consoles. Therefore support
for this operation is not mandatory, however, a driver is required to
support at least one of the listed console access operations.
This feature was introduced in the Juno release with blueprint
https://blueprints.launchpad.net/nova/+spec/serial-ports
cli=nova get-serial-console <server>
driver.nova-lxd=unknown
[operation.console.spice]
title=[Console]Remote desktop over SPICE
status=choice(console)
notes=This allows the administrator to interact with the graphical
console of the guest OS via SPICE. This provides a way to see boot