Dev-ref update

This change set provides a significant update to the nova-powervm
driver's dev-ref.

Rebases the documentation to reflect the development that has been done
over the last several months. Updates the readme to reflect the
implementation work that has been completed.

Removes the fake_driver, which is no longer useful.

Change-Id: I1910db886df89f884f3c399d6aab1eebdf10b5e0
Drew Thorstensen 2015-09-30 16:45:16 -04:00
parent 2af57a1835
commit f7de66aac0
15 changed files with 382 additions and 448 deletions

.gitignore

@ -6,4 +6,7 @@ nova_powervm.egg-info/
/.idea/
.coverage
/cover/
.settings/
doc/build
AUTHORS
ChangeLog


@ -2,26 +2,21 @@
PowerVM Nova Driver
===================

The IBM PowerVM hypervisor provides virtualization on POWER hardware. PowerVM
admins can see benefits in their environments by making use of OpenStack.
This driver (along with a Neutron ML2 compatible agent and Ceilometer agent)
provides the capability for operators of PowerVM to use OpenStack natively.

Problem Description
===================

As ecosystems continue to evolve around the POWER platform, a single OpenStack
driver does not meet all of the needs for the various hypervisors. The
standard libvirt driver provides support for KVM on POWER systems. This nova
driver provides PowerVM support to OpenStack environments.

This driver meets the following requirements:

* Built within the community
@ -34,51 +29,48 @@ This new driver must meet the following:
* Allows attachment of volumes from Cinder over supported protocols

This driver makes the following use cases available for PowerVM:

* As a deployer, all of the standard lifecycle operations (start, stop,
  reboot, migrate, destroy, etc.) should be supported on a PowerVM based
  instance.

* As a deployer, I should be able to capture an instance to an image.

* VNC console access to deployed instances.

Overview of Architecture
========================

The driver enables the following:

* Provide deployments that work with the OpenStack model.

* Driver is implemented using a new version of the PowerVM REST API.

* Ephemeral disks are supported either with Virtual I/O Server (VIOS)
  hosted local disks or via Shared Storage Pools (a PowerVM cluster file
  system).

* Volume support is provided via Cinder through supported protocols for the
  hypervisor (virtual SCSI and N-Port ID Virtualization).

* Live migration support is available when using Shared Storage Pools or boot
  from volume.

* Network integration is supported via the ML2 compatible Neutron Agent. This
  is the openstack/networking-powervm project.

* Automated Functional Testing is provided to validate changes from the broader
  OpenStack community against the PowerVM driver.

* Thorough unit, syntax, and style testing is provided and enforced for the
  driver.

The intention is that this driver follows the OpenStack Nova model and will
be a candidate for promotion (via a subsequent blueprint) into the nova core
project.

Data Model Impact
@ -99,10 +91,7 @@ As such, no REST API impacts are anticipated.
Security Impact
---------------

No new security impacts are anticipated.

Notifications Impact
@ -123,12 +112,19 @@ Performance Impact
It is a goal of the driver to deploy systems with similar speed and agility
as the libvirt driver within OpenStack.

Most operations are comparable in speed. Deployment, attach/detach volumes,
lifecycle operations, etc. are quick.

The one exception is if the operator configures the system to use N-Port ID
Virtualization (NPIV) for storage. This technology provides significant speed
increases for instance disk performance, but may increase the deployment time
by several seconds.

The driver is written to support concurrent operations. It has been tested
performing 10 concurrent deploys to a given compute node.

Due to the nature of the project, performance impacts are limited to the
Compute Driver. The API processes, for instance, are not impacted.

Other Deployer Impact
@ -137,63 +133,55 @@ Other Deployer Impact
The cloud administrator will need to refer to documentation on how to
configure OpenStack for use with a PowerVM hypervisor.

A 'powervm' configuration group is used to contain all the PowerVM specific
configuration settings. Existing configuration file attributes will be
reused as much as possible (e.g. vif_plugging_timeout). This reduces the
number of PowerVM specific items that will be needed.

It is the goal of the project to require only minimal additional attributes.
The deployer may specify additional attributes to fit their configuration.

There is no impact to customers upgrading their cloud stack as this is a
genesis driver and does not have database impacts.

Developer Impact
----------------

The code for this driver is currently contained within a powervm project.
The driver is within the /nova_powervm/virt/powervm/ package and extends the
nova.virt.driver.ComputeDriver class.

The code interacts with PowerVM through the pypowervm library. This Python
binding is a wrapper to the PowerVM REST API. All hypervisor operations
interact with the PowerVM REST API via this binding. The driver is
maintained to support future revisions of the PowerVM REST API as needed.

For ephemeral disk support, either a Virtual I/O Server hosted local disk or a
Shared Storage Pool (a PowerVM clustered file system) is supported. For
volume attachments, the driver supports Cinder based attachments via
protocols supported by the hypervisor (e.g. Fibre Channel).
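
To illustrate the shape of that layering, here is a minimal sketch (the class
and attribute names are illustrative only; the real implementation lives in
nova_powervm/virt/powervm/driver.py and implements many more methods)::

    # Illustrative skeleton: a Nova compute driver subclasses ComputeDriver
    # and funnels hypervisor work through the pypowervm REST binding.
    from nova.virt import driver


    class ExamplePowerVMDriver(driver.ComputeDriver):

        def init_host(self, host):
            # The real driver establishes a pypowervm session/adapter here
            # (the object that wraps the PowerVM REST API).
            self.adapter = None

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            # Every hypervisor operation is issued through that binding.
            raise NotImplementedError()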

For networking, the networking-powervm project provides a Neutron ML2 Agent.
The agent provides the necessary configuration on the Virtual I/O Server for
networking. The PowerVM Nova driver code creates the VIF for the client VM,
but the Neutron agent creates the VIF for VLANs.

Automated functional testing is provided through a third party continuous
integration system. It monitors for incoming Nova change sets, runs a set
of functional tests (lifecycle operations) against the incoming change, and
provides a non-gating vote (+1 or -1).

Developers should not be impacted by these changes unless they wish to try the
driver.

Until a subsequent blueprint is proposed and accepted, unless otherwise noted,
the driver will be considered experimental.

Community Impact
----------------

The intent of this project is to bring another driver to OpenStack that
aligns with the ideals and vision of the community. The eventual goal is to
promote this driver into core Nova.

Alternatives
@ -214,56 +202,21 @@ Primary assignee:
Other contributors:
thorst
dwarcher
ijuwang
efried
Work Items
----------
* Create a base PowerVM driver that is non-functional, but defines the methods
that need to be implemented.
* Implement the host statistics methods (get_host_stats, get_host_ip_addr,
get_host_cpu_stats, get_host_uptime, etc.).
* Implement the spawn method.
* Implement the destroy method.
* Implement the instance information methods (list_instances, instance_exists,
poll_rebooting_instances, etc.).
* Implement the live migration methods. Note that, for ephemeral disks, this
will be specific to Shared Storage Pool environments where the Virtual I/O
Servers on the source and target systems share the same (clustered) file
system.
* Implement support for Cinder volume operations.
* Implement an option to configure an internal management NIC - used for
Resource Monitoring and Control (RMC) as part of deploy. This is a
prerequisite for migration and resize. This will be controlled as part of
the CONF file.
* Implement the network interface methods (attach_interface and
detach_interface). Delegate the Virtual I/O Server work to the
corresponding Neutron ML2 agent.
* Implement an automated functional test server that listens for incoming
commits from the community and provides a non-gating vote (+1 or -1) on the
change.
Dependencies
============

* Utilizes the PowerVM REST API specification for management. Will
  utilize future versions of this specification as they become available:
  http://ibm.co/1lThV9R

* Builds on top of the `pypowervm library`_. This is a prerequisite to
  utilizing the driver.

.. _pypowervm library: https://github.com/pypowervm

Testing
=======
@ -273,16 +226,15 @@ Tempest Tests
Since the tempest tests should be implementation agnostic, the existing
tempest tests should be able to run against the PowerVM driver without issue.

Thorough unit tests exist within the project to validate specific functions
within this implementation.

Functional Tests
----------------

A third party functional test environment will be created. It monitors
for incoming nova change sets. Once it detects a new change set, it
executes the existing lifecycle API tests and provides a non-gating vote
(+1 or -1) along with supporting information (logs) based on the result.

@ -291,9 +243,8 @@ be provided with information provided (logs) based on the result.
API Tests
---------

Existing APIs should be valid. All testing is planned within the functional
testing system and via unit tests.

Documentation Impact
@ -302,21 +253,15 @@ Documentation Impact
User Documentation
------------------

See the dev-ref for documentation on how to configure, contribute to, and use
this driver implementation.

Developer Documentation
-----------------------

The existing Nova developer documentation should typically suffice. However,
until the merge into Nova, we will maintain a subset of dev-ref documentation.

References


@ -40,6 +40,16 @@ Grab the code::
Setting up your environment
---------------------------

The purpose of this project is to provide the 'glue' between OpenStack
Compute (Nova) and PowerVM. The `pypowervm`_ project is used to control
PowerVM systems.

It is recommended that you clone down the OpenStack Nova project along with
pypowervm into your respective development environment.

Running the tox python targets for tests will automatically clone these down
via the requirements.

Additional project requirements may be found in the requirements.txt file.

.. _pypowervm: https://github.com/pypowervm/pypowervm


@ -27,6 +27,7 @@ Internals and Programming
.. toctree::
   :maxdepth: 3

   project_structure
   development_environment
   usage


@ -0,0 +1,111 @@
..
   Copyright 2015 IBM
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Source Code Structure
=====================

Since nova-powervm strives to be integrated into the upstream Nova project,
the source code structure matches a standard driver.

::

    nova_powervm/
        virt/
            powervm/
                disk/
                tasks/
                volume/
                ...
        tests/
            virt/
                powervm/
                    disk/
                    tasks/
                    volume/
                    ...

nova_powervm/virt/powervm
~~~~~~~~~~~~~~~~~~~~~~~~~

The main directory for the overall driver. Provides the driver
implementation, image support, and some high level classes to interact with
the PowerVM system (e.g. host, vios, vm, etc.).

The driver attempts to utilize `TaskFlow`_ for major actions such as spawn.
This allows the driver to create atomic elements (within the tasks) to
drive operations against the system (with revert capabilities).
.. _TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
nova_powervm/virt/powervm/disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The disk folder contains the various 'nova ephemeral' disk implementations.
These are basic images that do not involve Cinder.
Two disk implementations exist currently.
* localdisk - supports Virtual I/O Server Volume Groups. This configuration
uses any Volume Group on the system, allowing operators to make use of the
physical disks local to their system.
* Shared Storage Pool - utilizes PowerVM's distributed storage. As such this
implementation allows operators to make use of live migration capabilities.

The standard interface between these two implementations is defined in this
folder's driver.py. This ensures that the nova-powervm compute driver does
not need to know the specifics of which disk implementation it is using.
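
As a rough sketch of what such an interface looks like (the method names here
are hypothetical; the authoritative contract is the abstract class in
disk/driver.py)::

    # Hypothetical shape of the disk driver contract; the actual abstract
    # class in disk/driver.py defines the real names and signatures.
    import abc


    class DiskAdapterSketch(abc.ABC):
        """Implemented by both the localdisk and Shared Storage Pool types."""

        @abc.abstractmethod
        def create_disk_from_image(self, context, instance, image_meta):
            """Build the instance's ephemeral disk from a Glance image."""

        @abc.abstractmethod
        def delete_disks(self, context, instance):
            """Remove the instance's ephemeral disks on destroy."""
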
nova_powervm/virt/powervm/tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The task folder contains `TaskFlow`_ classes. These implementations simply
wrap around other methods, providing logical units that the compute
driver can use when building a string of actions.
For instance, spawning an instance may require several atomic tasks:
- Create VM
- Plug Networking
- Create Disk from Glance
- Attach Disk to VM
- Power On
The tasks in this directory encapsulate this. If anything fails, they have
corresponding reverts. The logic to perform these operations is contained
elsewhere; these are simple wrappers that enable embedding into Taskflow.
.. _TaskFlow: https://wiki.openstack.org/wiki/TaskFlow
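
A minimal sketch of the pattern (assuming the taskflow library; the task and
flow names here are illustrative, not nova-powervm's actual classes)::

    # Illustrative TaskFlow usage: each task pairs an execute with a revert,
    # and the engine unwinds completed tasks if a later one fails.
    from taskflow import engines, task
    from taskflow.patterns import linear_flow


    class CreateVM(task.Task):
        def execute(self):
            print('create the VM on the PowerVM system')

        def revert(self, *args, **kwargs):
            print('tear down the partially created VM')


    class PowerOn(task.Task):
        def execute(self):
            print('power on the VM')


    flow = linear_flow.Flow('spawn').add(CreateVM(), PowerOn())
    engines.run(flow)  # runs tasks in order; reverts them on failure
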
nova_powervm/virt/powervm/volume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The volume folder contains the Cinder volume connectors. A volume connector
is the code that connects a Cinder volume (which is visible to the host) to
the Virtual Machine.
The PowerVM Compute Driver has an interface for the volume connectors defined
in this folder's `driver.py`.
The PowerVM Compute Driver provides two implementations for Fibre Channel
attached disks.
* Virtual SCSI (vSCSI): The disk is presented to a Virtual I/O Server and
the data is passed through to the VM through a virtualized SCSI
connection.
* N-Port ID Virtualization (NPIV): The disk is presented directly to the
VM. The VM will have virtual Fibre Channel connections to the disk, and
the Virtual I/O Server will not have the disk visible to it.
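
A condensed sketch of how these two strategies slot behind one interface
(illustrative names; the real contract is the abstract class in
volume/driver.py)::

    # Illustrative only: one interface, two Fibre Channel attach strategies.
    # Real class names and signatures live in volume/driver.py.
    import abc


    class VolumeAdapterSketch(abc.ABC):
        @abc.abstractmethod
        def connect_volume(self, instance, connection_info):
            """Make a Cinder volume visible to the VM."""


    class VscsiSketch(VolumeAdapterSketch):
        def connect_volume(self, instance, connection_info):
            print('map the disk to a VIOS, then virtual SCSI to the VM')


    class NpivSketch(VolumeAdapterSketch):
        def connect_volume(self, instance, connection_info):
            print('zone virtual FC ports (WWPNs) directly to the VM')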

View File

@ -55,11 +55,10 @@ To run only pep8::
tox -e pep8
Since pep8 includes running pylint on all files, it can take quite some time to run.
To restrict the pylint check to only the files altered by the latest patch changes::
tox -e pep8 HEAD~1
To run only the unit tests::
tox -e py27,py34


@ -17,6 +17,143 @@
Usage
=====

To make use of the PowerVM drivers, a PowerVM system set up with `NovaLink`_ is
required. The nova-powervm driver should be installed on the management VM.

.. _NovaLink: http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS215-262&appname=USN

The NovaLink architecture is such that the compute driver runs directly on the
PowerVM system. No external management element (e.g. Hardware Management
Console or PowerVC) is needed. Management of the virtualization is driven
through a thin virtual machine running on the PowerVM system.

Configuration of the PowerVM system and NovaLink is required ahead of time. If
the operator is using volumes or Shared Storage Pools, they must also be
configured ahead of time.

Configuration File Options
--------------------------
The standard nova configuration options are supported. Additionally, a
[powervm] section is used to provide additional customization to the driver.

By default, no additional inputs are needed. The base configuration allows a
Nova driver to deploy ephemeral disks to a local volume group (only
one can be on the system in the default config). Connecting Fibre Channel
hosted disks via Cinder will use Virtual SCSI connections through the
Virtual I/O Servers.

Operators may change the disk driver (nova based disks - NOT Cinder) via the
disk_driver property.
All of these values are under the [powervm] section. The tables are broken
out into logical sections.
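
For illustration, a minimal sketch of how such options are read (assuming
oslo.config, which Nova uses; the option names mirror the tables below and
the sample values are arbitrary)::

    # Sketch: [powervm] options consumed through oslo.config. Only two of
    # the options from the tables below are shown; defaults match the tables.
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('disk_driver', default='localdisk'),
         cfg.FloatOpt('proc_units_factor', default=0.1)],
        group='powervm')

    # A nova.conf carrying these values would contain, e.g.:
    #
    #   [powervm]
    #   disk_driver = ssp
    #   proc_units_factor = 0.2
    CONF(args=[])  # no config file passed in this sketch; defaults apply
    print(CONF.powervm.disk_driver)        # -> localdisk
    print(CONF.powervm.proc_units_factor)  # -> 0.1
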
VM Processor Options
~~~~~~~~~~~~~~~~~~~~

+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description                                                |
+======================================+============================================================+
| proc_units_factor = 0.1              | (FloatOpt) Factor used to calculate the processor units    |
|                                      | per vcpu. Valid values are: 0.05 - 1.0                     |
+--------------------------------------+------------------------------------------------------------+
| uncapped_proc_weight = 64            | (IntOpt) The processor weight to assign to newly created   |
|                                      | VMs. Value should be between 1 and 255. Represents the     |
|                                      | relative share of the uncapped processor cycles the        |
|                                      | Virtual Machine will receive when unused processor cycles  |
|                                      | are available.                                             |
+--------------------------------------+------------------------------------------------------------+
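
For example, the entitlement math implied by proc_units_factor (a sketch; any
rounding or minimums applied by the actual driver are not shown)::

    # proc_units_factor converts vCPU count into PowerVM processor units
    # (entitlement). With the default 0.1, a 4-vCPU VM gets 0.4 units.
    def entitled_proc_units(vcpus, proc_units_factor=0.1):
        if not 0.05 <= proc_units_factor <= 1.0:
            raise ValueError('valid values are 0.05 - 1.0, per the table')
        return vcpus * proc_units_factor

    print(entitled_proc_units(4))       # -> 0.4
    print(entitled_proc_units(2, 0.5))  # -> 1.0
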
Disk Options
~~~~~~~~~~~~

+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description                                                |
+======================================+============================================================+
| disk_driver = localdisk              | (StrOpt) The disk driver to use for PowerVM disks. Valid   |
|                                      | options are: localdisk, ssp                                |
|                                      |                                                            |
|                                      | If localdisk is specified and only one non-rootvg Volume   |
|                                      | Group exists on one of the Virtual I/O Servers, then no    |
|                                      | further config is needed. If multiple volume groups exist, |
|                                      | then further specification can be done via the             |
|                                      | volume_group_* options.                                    |
|                                      |                                                            |
|                                      | Live migration is not supported with a localdisk config.   |
|                                      |                                                            |
|                                      | If ssp is specified, then a Shared Storage Pool will be    |
|                                      | used. If only one SSP exists on the system, no further     |
|                                      | configuration is needed. If multiple SSPs exist, then the  |
|                                      | cluster_name property must be specified. Live migration    |
|                                      | can be done within an SSP cluster.                         |
+--------------------------------------+------------------------------------------------------------+
| cluster_name = None                  | (StrOpt) Cluster hosting the Shared Storage Pool to use    |
|                                      | for storage operations. If none specified, the host is     |
|                                      | queried; if a single Cluster is found, it is used. Not     |
|                                      | used unless disk_driver option is set to ssp.              |
+--------------------------------------+------------------------------------------------------------+
| volume_group_name = None             | (StrOpt) Volume Group to use for block device operations.  |
|                                      | Must not be rootvg. If disk_driver is localdisk, and more  |
|                                      | than one non-rootvg volume group exists across the         |
|                                      | Virtual I/O Servers, then this attribute must be specified.|
+--------------------------------------+------------------------------------------------------------+
| volume_group_vios_name = None        | (StrOpt) (Optional) The name of the Virtual I/O Server     |
|                                      | hosting the volume group. If this is not specified, the    |
|                                      | system will query through the Virtual I/O Servers looking  |
|                                      | for one that matches the volume_group_name. This is only   |
|                                      | needed if the system has multiple Virtual I/O Servers      |
|                                      | with a non-rootvg volume group whose name is duplicated.   |
|                                      |                                                            |
|                                      | Typically paired with the volume_group_name attribute.     |
+--------------------------------------+------------------------------------------------------------+

Volume Options
~~~~~~~~~~~~~~

+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description                                                |
+======================================+============================================================+
| fc_attach_strategy = vscsi           | (StrOpt) The Fibre Channel Volume Strategy defines how FC  |
|                                      | Cinder volumes should be attached to the Virtual Machine.  |
|                                      | The options are: npiv or vscsi.                            |
+--------------------------------------+------------------------------------------------------------+
| ports_per_fabric = 1                 | (IntOpt) (NPIV only) The number of physical ports that     |
|                                      | should be connected directly to the Virtual Machine, per   |
|                                      | fabric.                                                    |
|                                      |                                                            |
|                                      | Example: 2 fabrics and ports_per_fabric set to 2 will      |
|                                      | result in 4 NPIV ports being created, two per fabric. If   |
|                                      | multiple Virtual I/O Servers are available, will attempt   |
|                                      | to span ports across I/O Servers.                          |
+--------------------------------------+------------------------------------------------------------+
| fabrics = A                          | (StrOpt) (NPIV only) Unique identifier for each physical   |
|                                      | FC fabric that is available. This is a comma separated     |
|                                      | list. If there are two fabrics for multi-pathing, then     |
|                                      | this could be set to A,B.                                  |
|                                      |                                                            |
|                                      | The fabric identifiers are used for the                    |
|                                      | 'fabric_<identifier>_port_wwpns' key.                      |
+--------------------------------------+------------------------------------------------------------+
| fabric_<name>_port_wwpns             | (StrOpt) (NPIV only) A comma delimited list of all the     |
|                                      | physical FC port WWPNs that support the specified fabric.  |
|                                      | Is tied to the NPIV 'fabrics' key.                         |
+--------------------------------------+------------------------------------------------------------+
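
To illustrate how the fabrics key and the per-fabric WWPN keys fit together
(a sketch; the WWPN values below are made up)::

    # Sketch: resolving 'fabric_<identifier>_port_wwpns' keys from the comma
    # separated 'fabrics' list. All values are illustrative.
    conf = {
        'fabrics': 'A,B',
        'fabric_A_port_wwpns': '21000024FF649104,21000024FF649105',
        'fabric_B_port_wwpns': '21000024FF649106',
    }

    for fabric in conf['fabrics'].split(','):
        wwpns = conf['fabric_%s_port_wwpns' % fabric].split(',')
        print('fabric %s -> physical ports %s' % (fabric, wwpns))
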
Config Drive Options
~~~~~~~~~~~~~~~~~~~~

+--------------------------------------+------------------------------------------------------------+
| Configuration option = Default Value | Description                                                |
+======================================+============================================================+
| vopt_media_volume_group = rootvg     | (StrOpt) The volume group on the system that should be     |
|                                      | used to store the config drive metadata that will be       |
|                                      | attached to the VMs.                                       |
+--------------------------------------+------------------------------------------------------------+
| vopt_media_rep_size = 1              | (IntOpt) The size of the media repository (in GB) for the  |
|                                      | metadata for config drive. Only used if the media          |
|                                      | repository needs to be created.                            |
+--------------------------------------+------------------------------------------------------------+
| image_meta_local_path = /tmp/cfgdrv/ | (StrOpt) The location where the config drive ISO files     |
|                                      | should be built.                                           |
+--------------------------------------+------------------------------------------------------------+


@ -17,12 +17,28 @@
Welcome to nova-powervm's documentation!
========================================

This project provides a Nova-compatible compute driver for `PowerVM`_ systems.

The project aims to integrate into OpenStack's Nova project. Initial
development is occurring in a separate project until it has matured and met the
Nova core team's requirements. As such, all development practices should
mirror those of the Nova project.

Documentation on Nova can be found at the `Nova Devref`_.

.. _`PowerVM`: http://www-03.ibm.com/systems/power/software/virtualization/
.. _`Nova Devref`: http://docs.openstack.org/developer/nova/devref

Nova-PowerVM Overview
=====================

Contents:

.. toctree::
   :maxdepth: 1

   readme

Nova-PowerVM Policies
=====================


@ -29,7 +29,6 @@ Policies
   bugs
   contributing
   code-reviews

Indices and tables
------------------


@ -1 +0,0 @@
.. include:: ../../../README.rst

doc/source/readme.rst

@ -0,0 +1 @@
.. include:: ../../README.rst


@ -19,8 +19,8 @@ from oslo_config import cfg
pvm_opts = [
    cfg.FloatOpt('proc_units_factor',
                 default=0.1,
                 help='Factor used to calculate the processor units per vcpu. '
                      'Valid values are: 0.05 - 1.0'),
    cfg.IntOpt('uncapped_proc_weight',
               default=64,
               help='The processor weight to assign to newly created VMs. '
@ -30,12 +30,13 @@ pvm_opts = [
    cfg.StrOpt('vopt_media_volume_group',
               default='rootvg',
               help='The volume group on the system that should be used '
                    'to store the config drive metadata that will be attached '
                    'to VMs.'),
    cfg.IntOpt('vopt_media_rep_size',
               default=1,
               help='The size of the media repository (in GB) for the '
                    'metadata for config drive. Only used if the media '
                    'repository needs to be created.'),
    cfg.StrOpt('image_meta_local_path',
               default='/tmp/cfgdrv/',
               help='The location where the config drive ISO files should be '
@ -67,7 +68,7 @@ npiv_opts = [
                    'result in 4 NPIV ports being created, two per fabric. '
                    'If multiple Virtual I/O Servers are available, will '
                    'attempt to span ports across I/O Servers.'),
    cfg.StrOpt('fabrics', default='A',
               help='Unique identifier for each physical FC fabric that is '
                    'available. This is a comma separated list. If there '
                    'are two fabrics for multi-pathing, then this could be '


@ -36,15 +36,19 @@ localdisk_opts = [
    cfg.StrOpt('volume_group_name',
               default='',
               help='Volume Group to use for block device operations. Must '
                    'not be rootvg. If disk_driver is localdisk, and more '
                    'than one non-rootvg volume group exists across the '
                    'Virtual I/O Servers, then this attribute must be '
                    'specified.'),
    cfg.StrOpt('volume_group_vios_name',
               default='',
               help='(Optional) The name of the Virtual I/O Server hosting '
                    'the volume group. If this is not specified, the system '
                    'will query through the Virtual I/O Servers looking for '
                    'one that matches the volume_group_name. This is '
                    'only needed if the system has multiple Virtual I/O '
                    'Servers with a non-rootvg volume group whose name is '
                    'duplicated.')
]


@ -38,7 +38,8 @@ ssp_opts = [
               default='',
               help='Cluster hosting the Shared Storage Pool to use for '
                    'storage operations. If none specified, the host is '
                    'queried; if a single Cluster is found, it is used. '
                    'Not used unless disk_driver option is set to ssp.')
]
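
# Sketch (not part of this module): option lists such as ssp_opts above are
# assumed to be surfaced under the [powervm] group used in nova.conf; the
# registration below illustrates that pattern and is not a verbatim excerpt
# from the driver.
#
#     from oslo_config import cfg
#
#     CONF = cfg.CONF
#     CONF.register_opts(ssp_opts, group='powervm')
#     # ...after which values read as, e.g., CONF.powervm.cluster_name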


@ -1,293 +0,0 @@
# Copyright 2014, 2015 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.virt import driver
from nova.virt import fake
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class FakePowerVMDriver(driver.ComputeDriver):
"""Fake implementation of the PowerVM Driver.
Can be useful for simulating environments. Will be enhanced to add RPC
callbacks and timeouts that are similar to the actual driver.
"""
def __init__(self, virtapi):
super(FakePowerVMDriver, self).__init__(virtapi)
# Use the fake driver for scaffolding for now
fake.set_nodes(['fake-PowerVM'])
self._fake = fake.FakeDriver(virtapi)
def init_host(self, host):
"""Initialize anything that is necessary for the driver to function,
including catching up with currently running VMs on the given host.
"""
pass
def get_info(self, instance):
"""Get the current status of an instance, by name (not ID!)
Returns a dict containing:
:state: the running state, one of the power_state codes
:max_mem: (int) the maximum memory in KBytes allowed
:mem: (int) the memory in KBytes used by the domain
:num_cpu: (int) the number of virtual CPUs for the domain
:cpu_time: (int) the CPU time used in nanoseconds
"""
info = self._fake.get_info(instance)
return info
def list_instances(self):
"""Return the names of all the instances known to the virtualization
layer, as a list.
"""
return self._fake.list_instances()
def spawn(self, context, instance, image_meta, injected_files,
admin_password, network_info=None, block_device_info=None,
flavor=None):
"""Create a new instance/VM/domain on the virtualization platform.
Once this successfully completes, the instance should be
running (power_state.RUNNING).
If this fails, any partial instance should be completely
cleaned up, and the virtualization platform should be in the state
that it was before this call began.
:param context: security context
:param instance: Instance object as returned by DB layer.
This function should use the data there to guide
the creation of the new instance.
:param image_meta: image object returned by nova.image.glance that
defines the image from which to boot this instance
:param injected_files: User files to inject into instance.
:param admin_password: Administrator password to set in instance.
:param network_info:
:py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info`
:param block_device_info: Information about block devices to be
attached to the instance.
:param flavor: The flavor for the instance to be spawned.
"""
return self._fake.spawn(context, instance, image_meta, injected_files,
admin_password, network_info,
block_device_info)
def destroy(self, context, instance, network_info, block_device_info=None,
destroy_disks=True):
"""Destroy (shutdown and delete) the specified instance.
If the instance is not found (for example if networking failed), this
function should still succeed. It's probably a good idea to log a
warning in that case.
:param context: security context
:param instance: Instance object as returned by DB layer.
:param network_info:
:py:meth:`~nova.network.manager.NetworkManager.get_instance_nw_info`
:param block_device_info: Information about block devices that should
be detached from the instance.
:param destroy_disks: Indicates if disks should be destroyed
"""
return self._fake.destroy(instance, network_info, block_device_info,
destroy_disks)
def attach_volume(self, connection_info, instance, mountpoint):
"""Attach the disk to the instance at mountpoint using info."""
return self._fake.attach_volume(connection_info, instance, mountpoint)
def detach_volume(self, connection_info, instance, mountpoint):
"""Detach the disk attached to the instance."""
return self._fake.detach_volume(connection_info, instance, mountpoint)
def snapshot(self, context, instance, image_id, update_task_state):
"""Snapshots the specified instance.
:param context: security context
:param instance: Instance object as returned by DB layer.
:param image_id: Reference to a pre-created image that will
hold the snapshot.
"""
return self._fake.snapshot(context, instance, image_id,
update_task_state)
def power_off(self, instance, timeout=0, retry_interval=0):
"""Power off the specified instance.
:param instance: nova.objects.instance.Instance
:param timeout: time to wait for GuestOS to shutdown
:param retry_interval: How often to signal guest while
waiting for it to shutdown
"""
raise NotImplementedError()
def power_on(self, context, instance, network_info,
block_device_info=None):
"""Power on the specified instance.
:param instance: nova.objects.instance.Instance
"""
raise NotImplementedError()
def get_available_resource(self, nodename):
"""Retrieve resource information.
This method is called when nova-compute launches, and
as part of a periodic task
:param nodename:
node which the caller want to get resources from
a driver that manages only one node can safely ignore this
:return: Dictionary describing resources
"""
data = self._fake.get_available_resource(nodename)
return data
def get_host_uptime(self, host):
"""Returns the result of calling "uptime" on the target host."""
raise NotImplementedError()
def plug_vifs(self, instance, network_info):
"""Plug VIFs into networks."""
pass
def unplug_vifs(self, instance, network_info):
"""Unplug VIFs from networks."""
pass
def get_host_stats(self, refresh=False):
"""Return currently known host stats."""
data = self._fake.get_host_stats(refresh)
return data
def get_available_nodes(self):
"""Returns nodenames of all nodes managed by the compute service.
This method is for multi compute-nodes support. If a driver supports
multi compute-nodes, this method returns a list of nodenames managed
by the service. Otherwise, this method should return
[hypervisor_hostname].
"""
return self._fake.get_available_nodes()
def legacy_nwinfo(self):
"""Indicate if the driver requires the legacy network_info format.
"""
return False
def check_can_live_migrate_destination(self, ctxt, instance_ref,
src_compute_info, dst_compute_info,
block_migration=False,
disk_over_commit=False):
"""Validate the destination host is capable of live partition
migration.
:param ctxt: security context
:param instance_ref: instance to be migrated
:param src_compute_info: source host information
:param dst_compute_info: destination host information
:param block_migration: if true, prepare for block migration
:param disk_over_commit: if true, allow disk over commit
:return: dictionary containing destination data
"""
dest_check_data = \
self._fake.check_can_live_migrate_destination(
ctxt, instance_ref, src_compute_info, dst_compute_info,
block_migration=False, disk_over_commit=False)
return dest_check_data
def check_can_live_migrate_source(self, ctxt, instance_ref,
dest_check_data):
"""Validate the source host is capable of live partition
migration.
:param context: security context
:param instance_ref: instance to be migrated
:param dest_check_data: results from check_can_live_migrate_destination
:return: dictionary containing source and destination data for
migration
"""
migrate_data = \
self._fake.check_can_live_migrate_source(ctxt,
instance_ref,
dest_check_data)
return migrate_data
def pre_live_migration(self, context, instance,
block_device_info, network_info,
migrate_data=None):
"""Perfoms any required prerequisites on the destination
host prior to live partition migration.
:param context: security context
:param instance: instance to be migrated
:param block_device_info: instance block device information
:param network_info: instance network information
:param migrate_data: implementation specific data dictionary
"""
self._fake.pre_live_migration(context, instance,
block_device_info,
network_info,
migrate_data)
def live_migration(self, ctxt, instance_ref, dest,
post_method, recover_method,
block_migration=False, migrate_data=None):
"""Live migrates a partition from one host to another.
:param ctxt: security context
:param instance_ref: instance to be migrated.
:param dest: destination host
:param post_method: post operation method.
nova.compute.manager.post_live_migration.
:param recover_method: recovery method when any exception occurs.
nova.compute.manager.recover_live_migration.
:param block_migration: if true, migrate VM disk.
:param migrate_data: implementation specific data dictionary.
"""
self._fake.live_migration(ctxt, instance_ref, dest,
post_method, recover_method,
migrate_data, block_migration=False)
def post_live_migration_at_destination(self, ctxt, instance_ref,
network_info,
block_migration=False,
block_device_info=None):
"""Performs post operations on the destination host
following a successful live migration.
:param ctxt: security context
:param instance_ref: migrated instance
:param network_info: dictionary of network info for instance
:param block_migration: boolean for block migration
"""
self._fake.post_live_migration_at_destination(
ctxt, instance_ref, network_info,
block_migration=False, block_device_info=None)