Remove the PowerVM driver
The PowerVM driver was deprecated in November 2021 as part of change
Icdef0a03c3c6f56b08ec9685c6958d6917bc88cb. As noted there, all indications
suggest that this driver is no longer maintained and may be abandonware.
It's been some time and there's still no activity here, so it's time to
abandon this for real.

This isn't as tied into the codebase as the old XenAPI driver was, so removal
is mostly a case of deleting large swathes of code. Lovely.

Change-Id: Ibf4f36136f2c65adad64f75d665c00cf2de4b400
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
commit deae814611
parent c36782a96a
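For anyone auditing this change: the deleted driver was enabled entirely through ``nova.conf``, as the removed hypervisor-powervm document below shows. A minimal sketch of the configuration that stops working with this commit (values such as ``openstackvg`` are the documentation's examples, not requirements):

    [DEFAULT]
    compute_driver = powervm.PowerVMDriver

    [powervm]
    disk_driver = localdisk
    volume_group_name = openstackvg
    proc_units_factor = 0.1

Deployments still carrying these options, or flavors using ``powervm:*`` extra specs, need to move to another supported virt driver before picking up this change.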
@@ -10,7 +10,7 @@ OpenStack Nova

 OpenStack Nova provides a cloud computing fabric controller, supporting a wide
 variety of compute technologies, including: libvirt (KVM, Xen, LXC and more),
-Hyper-V, VMware, OpenStack Ironic and PowerVM.
+Hyper-V, VMware and OpenStack Ironic.

 Use the following resources to learn more.

@@ -84,8 +84,6 @@ availability zones. Nova supports the following hypervisors:

 - `Linux Containers (LXC) <https://linuxcontainers.org>`__

-- `PowerVM <https://www.ibm.com/us-en/marketplace/ibm-powervm>`__
-
 - `Quick Emulator (QEMU) <https://wiki.qemu.org/Manual>`__

 - `Virtuozzo <https://www.virtuozzo.com/products/vz7.html>`__
@@ -39,9 +39,8 @@ compute host and image.

 .. rubric:: Compute host requirements

-The following virt drivers support the config drive: libvirt,
-Hyper-V, VMware, and (since 17.0.0 Queens) PowerVM. The Bare Metal service also
-supports the config drive.
+The following virt drivers support the config drive: libvirt, Hyper-V and
+VMware. The Bare Metal service also supports the config drive.

 - To use config drives with libvirt or VMware, you must first
   install the :command:`genisoimage` package on each compute host. Use the
@@ -56,8 +55,8 @@ supports the config drive.
   :oslo.config:option:`hyperv.qemu_img_cmd` config option to the full path to an
   :command:`qemu-img` command installation.

-- To use config drives with PowerVM or the Bare Metal service, you do not need
-  to prepare anything.
+- To use config drives with the Bare Metal service, you do not need to prepare
+  anything.

 .. rubric:: Image requirements

@@ -1,75 +0,0 @@
-=======
-PowerVM
-=======
-
-Introduction
-------------
-
-OpenStack Compute supports the PowerVM hypervisor through `NovaLink`_. In the
-NovaLink architecture, a thin NovaLink virtual machine running on the Power
-system manages virtualization for that system. The ``nova-compute`` service
-can be installed on the NovaLink virtual machine and configured to use the
-PowerVM compute driver. No external management element (e.g. Hardware
-Management Console) is needed.
-
-.. _NovaLink: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8eig/p8eig_kickoff.htm
-
-
-Configuration
--------------
-
-In order to function properly, the ``nova-compute`` service must be executed
-by a member of the ``pvm_admin`` group. Use the ``usermod`` command to add the
-user. For example, to add the ``stacker`` user to the ``pvm_admin`` group, execute:
-
-.. code-block:: console
-
-   # usermod -a -G pvm_admin stacker
-
-The user must re-login for the change to take effect.
-
-To enable the PowerVM compute driver, configure
-:oslo.config:option:`DEFAULT.compute_driver` = ``powervm.PowerVMDriver``. For
-example:
-
-.. code-block:: ini
-
-   [DEFAULT]
-   compute_driver = powervm.PowerVMDriver
-
-The PowerVM driver supports two types of storage for ephemeral disks:
-``localdisk`` or ``ssp``. If ``localdisk`` is selected, you must specify which
-volume group should be used. E.g.:
-
-.. code-block:: ini
-
-   [powervm]
-   disk_driver = localdisk
-   volume_group_name = openstackvg
-
-.. note::
-
-   Using the ``rootvg`` volume group is strongly discouraged since ``rootvg``
-   is used by the management partition and filling this will cause failures.
-
-The PowerVM driver also supports configuring the default amount of physical
-processor compute power (known as "proc units") which will be given to each
-vCPU. This value will be used if the requested flavor does not specify the
-``powervm:proc_units`` extra-spec. A factor value of 1.0 means a whole physical
-processor, whereas 0.05 means 1/20th of a physical processor. E.g.:
-
-.. code-block:: ini
-
-   [powervm]
-   proc_units_factor = 0.1
-
-
-Volume Support
---------------
-
-Volume support is provided for the PowerVM virt driver via Cinder. Currently,
-the only supported volume protocol is `vSCSI`__ Fibre Channel. Attach, detach,
-and extend are the operations supported by the PowerVM vSCSI FC volume adapter.
-:term:`Boot From Volume` is not yet supported.
-
-.. __: https://www.ibm.com/support/knowledgecenter/en/POWER8/p8hat/p8hat_virtualscsi.htm
@@ -11,7 +11,6 @@ Hypervisors
    hypervisor-vmware
    hypervisor-hyper-v
    hypervisor-virtuozzo
-   hypervisor-powervm
    hypervisor-zvm
    hypervisor-ironic

@@ -44,9 +43,6 @@ The following hypervisors are supported:
 * `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual
   Machines supported. The supported formats include ploop and qcow2 images.

-* `PowerVM`_ - Server virtualization with IBM PowerVM for AIX, IBM i, and Linux
-  workloads on the Power Systems platform.
-
 * `zVM`_ - Server virtualization on z Systems and IBM LinuxONE, it can run Linux,
   z/OS and more.

@@ -68,8 +64,6 @@ virt drivers:

 * :oslo.config:option:`compute_driver` = ``hyperv.HyperVDriver``

-* :oslo.config:option:`compute_driver` = ``powervm.PowerVMDriver``
-
 * :oslo.config:option:`compute_driver` = ``zvm.ZVMDriver``

 * :oslo.config:option:`compute_driver` = ``fake.FakeDriver``
@@ -83,6 +77,5 @@ virt drivers:
 .. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor.html
 .. _Hyper-V: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview
 .. _Virtuozzo: https://www.virtuozzo.com/products/vz7.html
-.. _PowerVM: https://www.ibm.com/us-en/marketplace/ibm-powervm
 .. _zVM: https://www.ibm.com/it-infrastructure/z/zvm
 .. _Ironic: https://docs.openstack.org/ironic/latest/
@@ -406,7 +406,7 @@ Some of attributes that can be used as useful key and their values contains:
 * ``free_ram_mb`` (compared with a number, values like ``>= 4096``)
 * ``free_disk_mb`` (compared with a number, values like ``>= 10240``)
 * ``host`` (compared with a string, values like ``<in> compute``, ``s== compute_01``)
-* ``hypervisor_type`` (compared with a string, values like ``s== QEMU``, ``s== powervm``)
+* ``hypervisor_type`` (compared with a string, values like ``s== QEMU``, ``s== ironic``)
 * ``hypervisor_version`` (compared with a number, values like ``>= 1005003``, ``== 2000000``)
 * ``num_instances`` (compared with a number, values like ``<= 10``)
 * ``num_io_ops`` (compared with a number, values like ``<= 5``)
@@ -183,16 +183,6 @@ They are only supported by the HyperV virt driver.

 .. extra-specs:: os

-``powervm``
-~~~~~~~~~~~
-
-The following extra specs are used to configure various attributes of
-instances when using the PowerVM virt driver.
-
-They are only supported by the PowerVM virt driver.
-
-.. extra-specs:: powervm
-
 ``vmware``
 ~~~~~~~~~~

@@ -34,10 +34,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_z/VM_CI
 title=Ironic CI
 link=

-[target.powervm]
-title=IBM PowerVM CI
-link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_PowerVM_CI
-
 #
 # Lists all features
 #
@@ -70,7 +66,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=complete
 hyperv=complete
 ironic=unknown
-powervm=complete
 zvm=complete

 [operation.snapshot-server]
@@ -90,7 +85,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=unknown
 hyperv=unknown
 ironic=unknown
-powervm=complete
 zvm=complete

 [operation.power-ops]
@@ -109,7 +103,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=complete
 hyperv=complete
 ironic=unknown
-powervm=complete
 zvm=complete

 [operation.rebuild-server]
@@ -128,7 +121,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=complete
 hyperv=complete
 ironic=unknown
-powervm=missing
 zvm=missing

 [operation.resize-server]
@@ -147,7 +139,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=complete
 hyperv=complete
 ironic=unknown
-powervm=missing
 zvm=missing

 [operation.server-volume-ops]
@@ -165,9 +156,6 @@ libvirt-virtuozzo-vm=complete
 vmware=complete
 hyperv=complete
 ironic=missing
-powervm=complete
-driver-notes-powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 zvm=missing

 [operation.server-bdm]
@@ -189,7 +177,6 @@ vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete:n
 ironic=missing
-powervm=missing
 zvm=missing

 [operation.server-neutron]
@@ -211,7 +198,6 @@ driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
 ironic=missing
-powervm=complete
 zvm=partial
 driver-notes-zvm=This is not tested in a CI system, but it is implemented.

@@ -232,7 +218,6 @@ vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete
 ironic=missing
-powervm=missing
 zvm=complete

 [operation.server-suspend]
@@ -252,7 +237,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=complete
 hyperv=complete
 ironic=missing
-powervm=missing
 zvm=missing

 [operation.server-consoleoutput]
@@ -272,7 +256,6 @@ driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
 ironic=missing
-powervm=complete
 zvm=complete

 [operation.server-rescue]
@@ -293,7 +276,6 @@ vmware=complete
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
 ironic=missing
-powervm=missing
 zvm=missing

 [operation.server-configdrive]
@@ -314,7 +296,6 @@ vmware=complete
 hyperv=complete
 ironic=partial
 driver-notes-ironic=This is not tested in a CI system, but it is implemented.
-powervm=complete
 zvm=complete

 [operation.server-changepassword]
@@ -334,7 +315,6 @@ vmware=missing
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
 ironic=missing
-powervm=missing
 zvm=missing

 [operation.server-shelve]
@@ -354,5 +334,4 @@ libvirt-virtuozzo-vm=complete
 vmware=missing
 hyperv=complete
 ironic=missing
-powervm=complete
 zvm=missing
@@ -26,10 +26,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI
 title=Ironic
 link=http://docs.openstack.org/infra/manual/developers.html#project-gating

-[target.powervm]
-title=PowerVM CI
-link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_PowerVM_CI
-

 [operation.gpu-passthrough]
 title=GPU Passthrough
@@ -51,7 +47,6 @@ driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is i
 vmware=missing
 hyperv=missing
 ironic=unknown
-powervm=missing


 [operation.virtual-gpu]
@@ -67,4 +62,3 @@ libvirt-virtuozzo-vm=unknown
 vmware=missing
 hyperv=missing
 ironic=missing
-powervm=missing
@@ -104,9 +104,6 @@ title=Hyper-V
 [driver.ironic]
 title=Ironic

-[driver.powervm]
-title=PowerVM
-
 [driver.zvm]
 title=zVM

@@ -133,9 +130,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
-driver-notes.powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 driver.zvm=missing

 [operation.attach-tagged-volume]
@@ -155,7 +149,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.detach-volume]
@@ -174,9 +167,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
-driver-notes.powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 driver.zvm=missing

 [operation.extend-volume]
@@ -202,9 +192,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=unknown
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
-driver-notes.powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 driver.zvm=missing

 [operation.attach-interface]
@@ -232,7 +219,6 @@ driver-notes.hyperv=Works without issue if instance is off. When
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=missing

 [operation.attach-tagged-interface]
@@ -252,7 +238,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.detach-interface]
@@ -274,7 +259,6 @@ driver-notes.hyperv=Works without issue if instance is off. When
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=missing

 [operation.maintenance-mode]
@@ -299,7 +283,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.evacuate]
@@ -325,7 +308,6 @@ driver.hyperv=unknown
 driver.ironic=unknown
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=unknown

 [operation.rebuild]
@@ -348,7 +330,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=unknown

 [operation.get-guest-info]
@@ -370,7 +351,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.get-host-uptime]
@@ -390,7 +370,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.get-host-ip]
@@ -410,7 +389,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.live-migrate]
@@ -439,7 +417,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=missing

 [operation.force-live-migration-to-complete]
@@ -471,7 +448,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.abort-in-progress-live-migration]
@@ -500,7 +476,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=unknown
 driver.libvirt-vz-ct=unknown
-driver.powervm=missing
 driver.zvm=missing

 [operation.launch]
@@ -521,7 +496,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.pause]
@@ -548,7 +522,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=complete

 [operation.reboot]
@@ -571,7 +544,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.rescue]
@@ -597,7 +569,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=missing

 [operation.resize]
@@ -625,7 +596,6 @@ driver.libvirt-vz-vm=complete
 driver-notes.vz-vm=Resizing Virtuozzo instances implies guest filesystem resize also
 driver.libvirt-vz-ct=complete
 driver-notes.vz-ct=Resizing Virtuozzo instances implies guest filesystem resize also
-driver.powervm=missing
 driver.zvm=missing

 [operation.resume]
@@ -644,7 +614,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=missing

 [operation.set-admin-password]
@@ -677,7 +646,6 @@ driver.libvirt-vz-vm=complete
 driver-notes.libvirt-vz-vm=Requires libvirt>=2.0.0
 driver.libvirt-vz-ct=complete
 driver-notes.libvirt-vz-ct=Requires libvirt>=2.0.0
-driver.powervm=missing
 driver.zvm=missing

 [operation.snapshot]
@@ -705,11 +673,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver-notes.powervm=When using the localdisk disk driver, snapshot is only
-supported if I/O is being hosted by the management partition. If hosting I/O
-on traditional VIOS, we are limited by the fact that a VSCSI device can't be
-mapped to two partitions (the VIOS and the management) at once.
 driver.zvm=complete

 [operation.suspend]
@@ -742,7 +705,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=missing

 [operation.swap-volume]
@@ -768,7 +730,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.terminate]
@@ -794,7 +755,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.trigger-crash-dump]
@@ -817,7 +777,6 @@ driver.hyperv=missing
 driver.ironic=complete
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.unpause]
@@ -836,7 +795,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=complete

 [guest.disk.autoconfig]
@@ -857,7 +815,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=complete

 [guest.disk.rate-limit]
@@ -881,7 +838,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [guest.setup.configdrive]
@@ -909,7 +865,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
 driver.zvm=complete

 [guest.setup.inject.file]
@@ -936,7 +891,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [guest.setup.inject.networking]
@@ -967,7 +921,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [console.rdp]
@@ -993,7 +946,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [console.serial.log]
@@ -1020,7 +972,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=complete

 [console.serial.interactive]
@@ -1048,7 +999,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [console.spice]
@@ -1074,7 +1024,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [console.vnc]
@@ -1100,7 +1049,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=missing

 [storage.block]
@@ -1128,9 +1076,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=partial
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
-driver-notes.powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 driver.zvm=missing

 [storage.block.backend.fibrechannel]
@@ -1152,9 +1097,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=complete
-driver-notes.powervm=This is not tested for every CI run. Add a
-"powervm:volume-check" comment to trigger a CI job running volume tests.
 driver.zvm=missing

 [storage.block.backend.iscsi]
@@ -1179,7 +1121,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [storage.block.backend.iscsi.auth.chap]
@@ -1201,7 +1142,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [storage.image]
@@ -1225,7 +1165,6 @@ driver.hyperv=complete
 driver.ironic=complete
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=complete
 driver.zvm=complete

 [operation.uefi-boot]
@@ -1247,7 +1186,6 @@ driver.ironic=partial
 driver-notes.ironic=depends on hardware support
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.device-tags]
@@ -1277,7 +1215,6 @@ driver.hyperv=complete
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=unknown
-driver.powervm=missing
 driver.zvm=missing

 [operation.quiesce]
@@ -1298,7 +1235,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.unquiesce]
@@ -1317,7 +1253,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.multiattach-volume]
@@ -1341,7 +1276,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.encrypted-volume]
@@ -1371,7 +1305,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=unknown
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.trusted-certs]
@@ -1394,7 +1327,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=missing
 driver.zvm=missing

 [operation.file-backed-memory]
@@ -1417,7 +1349,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.report-cpu-traits]
@@ -1438,7 +1369,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.port-with-resource-request]
@@ -1461,7 +1391,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.boot-encrypted-vm]
@@ -1487,7 +1416,6 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing

 [operation.cache-images]
@@ -1510,10 +1438,6 @@ driver.hyperv=partial
 driver.ironic=missing
 driver.libvirt-vz-vm=complete
 driver.libvirt-vz-ct=complete
-driver.powervm=partial
-driver-notes.powervm=The PowerVM driver does image caching natively when using
-the SSP disk driver. It does not use the config options in the [image_cache]
-group.
 driver.zvm=missing

 [operation.boot-emulated-tpm]
@@ -1537,5 +1461,4 @@ driver.hyperv=missing
 driver.ironic=missing
 driver.libvirt-vz-vm=missing
 driver.libvirt-vz-ct=missing
-driver.powervm=missing
 driver.zvm=missing
@@ -1,271 +0,0 @@
-# Copyright 2020 Red Hat, Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Validators for ``powervm`` namespaced extra specs.
-
-These were all taken from the IBM documentation.
-
-https://www.ibm.com/support/knowledgecenter/SSXK2N_1.4.4/com.ibm.powervc.standard.help.doc/powervc_pg_flavorsextraspecs_hmc.html
-"""
-
-from nova.api.validation.extra_specs import base
-
-
-# TODO(stephenfin): A lot of these seem to overlap with existing 'hw:' extra
-# specs and could be deprecated in favour of those.
-EXTRA_SPEC_VALIDATORS = [
-    base.ExtraSpecValidator(
-        name='powervm:min_mem',
-        description=(
-            'Minimum memory (MB). If you do not specify the value, the value '
-            'is defaulted to the value for ``memory_mb``.'
-        ),
-        value={
-            'type': int,
-            'min': 256,
-            'description': 'Integer >=256 divisible by LMB size of the target',
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:max_mem',
-        description=(
-            'Maximum memory (MB). If you do not specify the value, the value '
-            'is defaulted to the value for ``memory_mb``.'
-        ),
-        value={
-            'type': int,
-            'min': 256,
-            'description': 'Integer >=256 divisible by LMB size of the target',
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:min_vcpu',
-        description=(
-            'Minimum virtual processors. Minimum resource that is required '
-            'for LPAR to boot is 1. The maximum value can be equal to the '
-            'value, which is set to vCPUs. If you specify the value of the '
-            'attribute, you must also specify value of powervm:max_vcpu. '
-            'Defaults to value set for vCPUs.'
-        ),
-        value={
-            'type': int,
-            'min': 1,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:max_vcpu',
-        description=(
-            'Minimum virtual processors. Minimum resource that is required '
-            'for LPAR to boot is 1. The maximum value can be equal to the '
-            'value, which is set to vCPUs. If you specify the value of the '
-            'attribute, you must also specify value of powervm:max_vcpu. '
-            'Defaults to value set for vCPUs.'
-        ),
-        value={
-            'type': int,
-            'min': 1,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:proc_units',
-        description=(
-            'The wanted ``proc_units``. The value for the attribute cannot be '
-            'less than 1/10 of the value that is specified for Virtual '
-            'CPUs (vCPUs) for hosts with firmware level 7.5 or earlier and '
-            '1/20 of the value that is specified for vCPUs for hosts with '
-            'firmware level 7.6 or later. If the value is not specified '
-            'during deployment, it is defaulted to vCPUs * 0.5.'
-        ),
-        value={
-            'type': str,
-            'pattern': r'\d+\.\d+',
-            'description': (
-                'Float (divisible by 0.1 for hosts with firmware level 7.5 or '
-                'earlier and 0.05 for hosts with firmware level 7.6 or later)'
-            ),
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:min_proc_units',
-        description=(
-            'Minimum ``proc_units``. The minimum value for the attribute is '
-            '0.1 for hosts with firmware level 7.5 or earlier and 0.05 for '
-            'hosts with firmware level 7.6 or later. The maximum value must '
-            'be equal to the maximum value of ``powervm:proc_units``. If you '
-            'specify the attribute, you must also specify '
-            '``powervm:proc_units``, ``powervm:max_proc_units``, '
-            '``powervm:min_vcpu``, `powervm:max_vcpu``, and '
-            '``powervm:dedicated_proc``. Set the ``powervm:dedicated_proc`` '
-            'to false.'
-            '\n'
-            'The value for the attribute cannot be less than 1/10 of the '
-            'value that is specified for powervm:min_vcpu for hosts with '
-            'firmware level 7.5 or earlier and 1/20 of the value that is '
-            'specified for ``powervm:min_vcpu`` for hosts with firmware '
-            'level 7.6 or later. If you do not specify the value of the '
-            'attribute during deployment, it is defaulted to equal the value '
-            'of ``powervm:proc_units``.'
-        ),
-        value={
-            'type': str,
-            'pattern': r'\d+\.\d+',
-            'description': (
-                'Float (divisible by 0.1 for hosts with firmware level 7.5 or '
-                'earlier and 0.05 for hosts with firmware level 7.6 or later)'
-            ),
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:max_proc_units',
-        description=(
-            'Maximum ``proc_units``. The minimum value can be equal to `` '
-            '``powervm:proc_units``. The maximum value for the attribute '
-            'cannot be more than the value of the host for maximum allowed '
-            'processors per partition. If you specify this attribute, you '
-            'must also specify ``powervm:proc_units``, '
-            '``powervm:min_proc_units``, ``powervm:min_vcpu``, '
-            '``powervm:max_vcpu``, and ``powervm:dedicated_proc``. Set the '
-            '``powervm:dedicated_proc`` to false.'
-            '\n'
-            'The value for the attribute cannot be less than 1/10 of the '
-            'value that is specified for powervm:max_vcpu for hosts with '
-            'firmware level 7.5 or earlier and 1/20 of the value that is '
-            'specified for ``powervm:max_vcpu`` for hosts with firmware '
-            'level 7.6 or later. If you do not specify the value of the '
-            'attribute during deployment, the value is defaulted to equal the '
-            'value of ``powervm:proc_units``.'
-        ),
-        value={
-            'type': str,
-            'pattern': r'\d+\.\d+',
-            'description': (
-                'Float (divisible by 0.1 for hosts with firmware level 7.5 or '
-                'earlier and 0.05 for hosts with firmware level 7.6 or later)'
-            ),
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:dedicated_proc',
-        description=(
-            'Use dedicated processors. The attribute defaults to false.'
-        ),
-        value={
-            'type': bool,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:shared_weight',
-        description=(
-            'Shared processor weight. When ``powervm:dedicated_proc`` is set '
-            'to true and ``powervm:uncapped`` is also set to true, the value '
-            'of the attribute defaults to 128.'
-        ),
-        value={
-            'type': int,
-            'min': 0,
-            'max': 255,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:availability_priority',
-        description=(
-            'Availability priority. The attribute priority of the server if '
-            'there is a processor failure and there are not enough resources '
-            'for all servers. VIOS and i5 need to remain high priority '
-            'default of 191. The value of the attribute defaults to 128.'
-        ),
-        value={
-            'type': int,
-            'min': 0,
-            'max': 255,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:uncapped',
-        description=(
-            'LPAR can use unused processor cycles that are beyond or exceed '
-            'the wanted setting of the attribute. This attribute is '
-            'supported only when ``powervm:dedicated_proc`` is set to false. '
-            'When ``powervm:dedicated_proc`` is set to false, '
-            '``powervm:uncapped`` defaults to true.'
-        ),
-        value={
-            'type': bool,
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:dedicated_sharing_mode',
-        description=(
-            'Sharing mode for dedicated processors. The attribute is '
-            'supported only when ``powervm:dedicated_proc`` is set to true.'
-        ),
-        value={
-            'type': str,
-            'enum': (
-                'share_idle_procs',
-                'keep_idle_procs',
-                'share_idle_procs_active',
-                'share_idle_procs_always',
-            )
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:processor_compatibility',
-        description=(
-            'A processor compatibility mode is a value that is assigned to a '
-            'logical partition by the hypervisor that specifies the processor '
-            'environment in which the logical partition can successfully '
-            'operate.'
-        ),
-        value={
-            'type': str,
-            'enum': (
-                'default',
-                'POWER6',
-                'POWER6+',
-                'POWER6_Enhanced',
-                'POWER6+_Enhanced',
-                'POWER7',
-                'POWER8'
-            ),
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:shared_proc_pool_name',
-        description=(
-            'Specifies the shared processor pool to be targeted during '
-            'deployment of a virtual machine.'
-        ),
-        value={
-            'type': str,
-            'description': 'String with upper limit of 14 characters',
-        },
-    ),
-    base.ExtraSpecValidator(
-        name='powervm:srr_capability',
-        description=(
-            'If the value of simplified remote restart capability is set to '
-            'true for the LPAR, you can remote restart the LPAR to supported '
-            'CEC or host when the source CEC or host is down. The attribute '
-            'defaults to false.'
-        ),
-        value={
-            'type': bool,
-        },
-    ),
-]
-
-
-def register():
-    return EXTRA_SPEC_VALIDATORS
@@ -49,7 +49,6 @@ from nova.conf import novnc
 from nova.conf import paths
 from nova.conf import pci
 from nova.conf import placement
-from nova.conf import powervm
 from nova.conf import quota
 from nova.conf import rdp
 from nova.conf import rpc
@@ -99,7 +98,6 @@ novnc.register_opts(CONF)
 paths.register_opts(CONF)
 pci.register_opts(CONF)
 placement.register_opts(CONF)
-powervm.register_opts(CONF)
 quota.register_opts(CONF)
 rdp.register_opts(CONF)
 rpc.register_opts(CONF)
@@ -41,7 +41,6 @@ Possible values:
 * ``ironic.IronicDriver``
 * ``vmwareapi.VMwareVCDriver``
 * ``hyperv.HyperVDriver``
-* ``powervm.PowerVMDriver``
 * ``zvm.ZVMDriver``
 """),
     cfg.BoolOpt('allow_resize_to_same_host',
@@ -1,66 +0,0 @@
-# Copyright 2018 IBM Corporation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_config import cfg
-
-
-powervm_group = cfg.OptGroup(
-    name="powervm",
-    title="PowerVM Options",
-    help="""
-PowerVM options allow cloud administrators to configure how OpenStack will work
-with the PowerVM hypervisor.
-""")
-
-powervm_opts = [
-    cfg.FloatOpt(
-        'proc_units_factor',
-        default=0.1,
-        min=0.05,
-        max=1,
-        help="""
-Factor used to calculate the amount of physical processor compute power given
-to each vCPU. E.g. A value of 1.0 means a whole physical processor, whereas
-0.05 means 1/20th of a physical processor.
-"""),
-    cfg.StrOpt('disk_driver',
-               choices=['localdisk', 'ssp'], ignore_case=True,
-               default='localdisk',
-               help="""
-The disk driver to use for PowerVM disks. PowerVM provides support for
-localdisk and PowerVM Shared Storage Pool disk drivers.
-
-Related options:
-
-* volume_group_name - required when using localdisk
-
-"""),
-    cfg.StrOpt('volume_group_name',
-               default='',
-               help="""
-Volume Group to use for block device operations. If disk_driver is localdisk,
-then this attribute must be specified. It is strongly recommended NOT to use
-rootvg since that is used by the management partition and filling it will cause
-failures.
-"""),
-]
-
-
-def register_opts(conf):
-    conf.register_group(powervm_group)
-    conf.register_opts(powervm_opts, group=powervm_group)
-
-
-def list_opts():
-    return {powervm_group: powervm_opts}
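For reference, deployments consumed these now-removed options from the
``[powervm]`` section of ``nova.conf``. A hypothetical example (the volume
group name here is made up; only ``localdisk`` required one):

    [powervm]
    proc_units_factor = 0.1
    disk_driver = localdisk
    volume_group_name = nova_vg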
@@ -2140,11 +2140,6 @@ class InvalidPCINUMAAffinity(Invalid):
     msg_fmt = _("Invalid PCI NUMA affinity configured: %(policy)s")


-class PowerVMAPIFailed(NovaException):
-    msg_fmt = _("PowerVM API failed to complete for instance=%(inst_name)s. "
-                "%(reason)s")
-
-
 class TraitRetrievalFailed(NovaException):
     msg_fmt = _("Failed to retrieve traits from the placement API: %(error)s")

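The removed exception followed Nova's usual ``msg_fmt`` convention, where
named placeholders are filled from constructor keyword arguments. A
standalone sketch of that pattern, using a hypothetical exception class:

    class FakeAPIFailed(Exception):
        msg_fmt = "API failed to complete for instance=%(inst_name)s. %(reason)s"

        def __init__(self, **kwargs):
            # Interpolate the named placeholders from the kwargs.
            super().__init__(self.msg_fmt % kwargs)

    # FakeAPIFailed(inst_name='vm1', reason='timed out')
    # -> "API failed to complete for instance=vm1. timed out"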
@@ -340,42 +340,6 @@ class HyperVLiveMigrateData(LiveMigrateData):
             del primitive['is_shared_instance_path']


-@obj_base.NovaObjectRegistry.register
-class PowerVMLiveMigrateData(LiveMigrateData):
-    # Version 1.0: Initial version
-    # Version 1.1: Added the Virtual Ethernet Adapter VLAN mappings.
-    # Version 1.2: Added old_vol_attachment_ids
-    # Version 1.3: Added wait_for_vif_plugged
-    # Version 1.4: Inherited vifs from LiveMigrateData
-    VERSION = '1.4'
-
-    fields = {
-        'host_mig_data': fields.DictOfNullableStringsField(),
-        'dest_ip': fields.StringField(),
-        'dest_user_id': fields.StringField(),
-        'dest_sys_name': fields.StringField(),
-        'public_key': fields.StringField(),
-        'dest_proc_compat': fields.StringField(),
-        'vol_data': fields.DictOfNullableStringsField(),
-        'vea_vlan_mappings': fields.DictOfNullableStringsField(),
-    }
-
-    def obj_make_compatible(self, primitive, target_version):
-        super(PowerVMLiveMigrateData, self).obj_make_compatible(
-            primitive, target_version)
-        target_version = versionutils.convert_version_to_tuple(target_version)
-        if target_version < (1, 4) and 'vifs' in primitive:
-            del primitive['vifs']
-        if target_version < (1, 3) and 'wait_for_vif_plugged' in primitive:
-            del primitive['wait_for_vif_plugged']
-        if target_version < (1, 2):
-            if 'old_vol_attachment_ids' in primitive:
-                del primitive['old_vol_attachment_ids']
-        if target_version < (1, 1):
-            if 'vea_vlan_mappings' in primitive:
-                del primitive['vea_vlan_mappings']
-
-
 @obj_base.NovaObjectRegistry.register
 class VMwareLiveMigrateData(LiveMigrateData):
     VERSION = '1.0'
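``obj_make_compatible`` above implements the standard versioned-object
downgrade pattern: when serializing for an older peer, every field added
after the target version is stripped from the primitive. A self-contained
sketch of the same logic, with a hypothetical field-to-version map:

    def make_compatible(primitive, target_version, added_in):
        """Drop fields an older consumer would not understand."""
        for field, version in added_in.items():
            if target_version < version and field in primitive:
                del primitive[field]
        return primitive

    primitive = {'public_key': 'a_key', 'vea_vlan_mappings': {'five': '6'}}
    make_compatible(primitive, (1, 0), {'vea_vlan_mappings': (1, 1)})
    # -> {'public_key': 'a_key'}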
nova/tests/fixtures/nova.py (vendored)
@@ -29,7 +29,6 @@ import warnings
 import eventlet
 import fixtures
 import futurist
-import mock as mock_the_lib
 from openstack import service_description
 from oslo_concurrency import lockutils
 from oslo_config import cfg
@@ -1609,13 +1608,10 @@ class GenericPoisonFixture(fixtures.Fixture):
                 for component in components[1:]:
                     current = getattr(current, component)

-                # TODO(stephenfin): Remove mock_the_lib check once pypowervm is
-                # no longer using mock and we no longer have mock in
-                # requirements
-                if not isinstance(
-                    getattr(current, attribute),
-                    (mock.Mock, mock_the_lib.Mock),
-                ):
+                # NOTE(stephenfin): There are a couple of mock libraries in use
+                # (including mocked versions of mock from oslotest) so we can't
+                # use isinstance checks here
+                if 'mock' not in str(type(getattr(current, attribute))):
                     self.useFixture(fixtures.MonkeyPatch(
                         meth, poison_configure(meth, why)))
             except ImportError:
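The replacement check is duck-typed on purpose: rather than testing
``isinstance`` against every mock library in play, it matches the string
'mock' against the type's repr, which covers them all. In isolation:

    from unittest import mock

    m = mock.Mock()
    print(str(type(m)))          # "<class 'unittest.mock.Mock'>"
    print('mock' in str(type(m)))  # True, for unittest.mock and mock alike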
@@ -28,7 +28,7 @@ class TestValidators(test.NoDBTestCase):
     """
     namespaces = {
         'accel', 'aggregate_instance_extra_specs', 'capabilities', 'hw',
-        'hw_rng', 'hw_video', 'os', 'pci_passthrough', 'powervm', 'quota',
+        'hw_rng', 'hw_video', 'os', 'pci_passthrough', 'quota',
        'resources(?P<group>([a-zA-Z0-9_-]{1,64})?)',
         'trait(?P<group>([a-zA-Z0-9_-]{1,64})?)', 'vmware',
     }
@@ -101,9 +101,7 @@ class TestValidators(test.NoDBTestCase):
         valid_specs = (
             ('hw:numa_nodes', '1'),
             ('os:monitors', '1'),
-            ('powervm:shared_weight', '1'),
             ('os:monitors', '8'),
-            ('powervm:shared_weight', '255'),
         )
         for key, value in valid_specs:
             validators.validate(key, value)
@@ -113,9 +111,7 @@ class TestValidators(test.NoDBTestCase):
             ('hw:serial_port_count', '!'),  # NaN
             ('hw:numa_nodes', '0'),  # has min
             ('os:monitors', '0'),  # has min
-            ('powervm:shared_weight', '-1'),  # has min
             ('os:monitors', '9'),  # has max
-            ('powervm:shared_weight', '256'),  # has max
         )
         for key, value in invalid_specs:
             with testtools.ExpectedException(exception.ValidationError):
@@ -168,7 +168,7 @@ class BaseTestCase(test.TestCase):
             'uuid': uuids.fake_compute_node,
             'vcpus_used': 0,
             'deleted': 0,
-            'hypervisor_type': 'powervm',
+            'hypervisor_type': 'libvirt',
             'created_at': '2013-04-01T00:27:06.000000',
             'local_gb_used': 0,
             'updated_at': '2013-04-03T00:35:41.000000',
@@ -178,7 +178,7 @@ class BaseTestCase(test.TestCase):
             'current_workload': 0,
             'vcpus': 16,
             'mapped': 1,
-            'cpu_info': 'ppc64,powervm,3940',
+            'cpu_info': 'ppc64,libvirt,3940',
             'running_vms': 0,
             'free_disk_gb': 259,
             'service_id': 7,
@@ -219,67 +219,6 @@ class TestRemoteHyperVLiveMigrateData(test_objects._RemoteTest,
     pass


-class _TestPowerVMLiveMigrateData(object):
-    @staticmethod
-    def _mk_obj():
-        return migrate_data.PowerVMLiveMigrateData(
-            host_mig_data=dict(one=2),
-            dest_ip='1.2.3.4',
-            dest_user_id='a_user',
-            dest_sys_name='a_sys',
-            public_key='a_key',
-            dest_proc_compat='POWER7',
-            vol_data=dict(three=4),
-            vea_vlan_mappings=dict(five=6),
-            old_vol_attachment_ids=dict(seven=8),
-            wait_for_vif_plugged=True)
-
-    @staticmethod
-    def _mk_leg():
-        return {
-            'host_mig_data': {'one': '2'},
-            'dest_ip': '1.2.3.4',
-            'dest_user_id': 'a_user',
-            'dest_sys_name': 'a_sys',
-            'public_key': 'a_key',
-            'dest_proc_compat': 'POWER7',
-            'vol_data': {'three': '4'},
-            'vea_vlan_mappings': {'five': '6'},
-            'old_vol_attachment_ids': {'seven': '8'},
-            'wait_for_vif_plugged': True
-        }
-
-    def test_migrate_data(self):
-        obj = self._mk_obj()
-        self.assertEqual('a_key', obj.public_key)
-        obj.public_key = 'key2'
-        self.assertEqual('key2', obj.public_key)
-
-    def test_obj_make_compatible(self):
-        obj = self._mk_obj()
-
-        data = lambda x: x['nova_object.data']
-
-        primitive = data(obj.obj_to_primitive())
-        self.assertIn('vea_vlan_mappings', primitive)
-        primitive = data(obj.obj_to_primitive(target_version='1.0'))
-        self.assertNotIn('vea_vlan_mappings', primitive)
-        primitive = data(obj.obj_to_primitive(target_version='1.1'))
-        self.assertNotIn('old_vol_attachment_ids', primitive)
-        primitive = data(obj.obj_to_primitive(target_version='1.2'))
-        self.assertNotIn('wait_for_vif_plugged', primitive)
-
-
-class TestPowerVMLiveMigrateData(test_objects._LocalTest,
-                                 _TestPowerVMLiveMigrateData):
-    pass
-
-
-class TestRemotePowerVMLiveMigrateData(test_objects._RemoteTest,
-                                       _TestPowerVMLiveMigrateData):
-    pass
-
-
 class TestVIFMigrateData(test.NoDBTestCase):

     def test_get_dest_vif_source_vif_not_set(self):
@@ -1117,7 +1117,6 @@ object_data = {
     'PciDeviceList': '1.3-52ff14355491c8c580bdc0ba34c26210',
     'PciDevicePool': '1.1-3f5ddc3ff7bfa14da7f6c7e9904cc000',
     'PciDevicePoolList': '1.1-15ecf022a68ddbb8c2a6739cfc9f8f5e',
-    'PowerVMLiveMigrateData': '1.4-a745f4eda16b45e1bc5686a0c498f27e',
     'Quotas': '1.3-3b2b91371f60e788035778fc5f87797d',
     'QuotasNoOp': '1.3-d1593cf969c81846bc8192255ea95cce',
     'RequestGroup': '1.3-0458d350a8ec9d0673f9be5640a990ce',
@@ -1,65 +0,0 @@
-# Copyright 2014, 2017 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import unittest
-
-from oslo_utils.fixture import uuidsentinel
-
-from nova.compute import power_state
-from nova.compute import vm_states
-from nova import objects
-
-try:
-    import powervm  # noqa: F401
-except ImportError:
-    raise unittest.SkipTest(
-        "The 'pypowervm' dependency is not installed."
-    )
-
-
-TEST_FLAVOR = objects.flavor.Flavor(
-    memory_mb=2048,
-    swap=0,
-    vcpu_weight=None,
-    root_gb=10, id=2,
-    name=u'm1.small',
-    ephemeral_gb=0,
-    rxtx_factor=1.0,
-    flavorid=uuidsentinel.flav_id,
-    vcpus=1)
-
-TEST_INSTANCE = objects.Instance(
-    id=1,
-    uuid=uuidsentinel.inst_id,
-    display_name='Fake Instance',
-    root_gb=10,
-    ephemeral_gb=0,
-    instance_type_id=TEST_FLAVOR.id,
-    system_metadata={'image_os_distro': 'rhel'},
-    host='host1',
-    flavor=TEST_FLAVOR,
-    task_state=None,
-    vm_state=vm_states.STOPPED,
-    power_state=power_state.SHUTDOWN,
-)
-
-IMAGE1 = {
-    'id': uuidsentinel.img_id,
-    'name': 'image1',
-    'size': 300,
-    'container_format': 'bare',
-    'disk_format': 'raw',
-    'checksum': 'b518a8ba2b152b5607aceb5703fac072',
-}
-TEST_IMAGE1 = objects.image_meta.ImageMeta.from_dict(IMAGE1)
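Worth noting in the deleted package ``__init__`` above: raising
``unittest.SkipTest`` at import time skips every test module under the
package when the optional dependency is missing. The same guard as a
standalone sketch, with a hypothetical dependency name:

    import unittest

    try:
        import hypothetical_dep  # noqa: F401
    except ImportError:
        # Inside a test package, the runner reports the whole package
        # as skipped rather than errored.
        raise unittest.SkipTest("'hypothetical_dep' is not installed.")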
@@ -1,52 +0,0 @@
-# Copyright 2018 IBM Corp.
-#
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from nova.virt.powervm.disk import driver as disk_dvr
-
-
-class FakeDiskAdapter(disk_dvr.DiskAdapter):
-    """A fake subclass of DiskAdapter.
-
-    This is done so that the abstract methods/properties can be stubbed and the
-    class can be instantiated for testing.
-    """
-
-    def _vios_uuids(self):
-        pass
-
-    def _disk_match_func(self, disk_type, instance):
-        pass
-
-    def disconnect_disk_from_mgmt(self, vios_uuid, disk_name):
-        pass
-
-    def capacity(self):
-        pass
-
-    def capacity_used(self):
-        pass
-
-    def detach_disk(self, instance):
-        pass
-
-    def delete_disks(self, storage_elems):
-        pass
-
-    def create_disk_from_image(self, context, instance, image_meta):
-        pass
-
-    def attach_disk(self, instance, disk_info, stg_ftsk):
-        pass
@@ -1,60 +0,0 @@
-# Copyright 2015, 2018 IBM Corp.
-#
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from unittest import mock
-
-import fixtures
-from pypowervm import const as pvm_const
-
-from nova import test
-from nova.tests.unit.virt.powervm.disk import fake_adapter
-
-
-class TestDiskAdapter(test.NoDBTestCase):
-    """Unit Tests for the generic storage driver."""
-
-    def setUp(self):
-        super(TestDiskAdapter, self).setUp()
-
-        # Return the mgmt uuid
-        self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
-            'nova.virt.powervm.mgmt.mgmt_uuid')).mock
-        self.mgmt_uuid.return_value = 'mp_uuid'
-
-        # The values (adapter and host uuid) are not used in the base.
-        # Default them to None. We use the fake adapter here because we can't
-        # instantiate DiskAdapter which is an abstract base class.
-        self.st_adpt = fake_adapter.FakeDiskAdapter(None, None)
-
-    @mock.patch("pypowervm.util.sanitize_file_name_for_api")
-    def test_get_disk_name(self, mock_san):
-        inst = mock.Mock()
-        inst.configure_mock(name='a_name_that_is_longer_than_eight',
-                            uuid='01234567-abcd-abcd-abcd-123412341234')
-
-        # Long
-        self.assertEqual(mock_san.return_value,
-                         self.st_adpt._get_disk_name('type', inst))
-        mock_san.assert_called_with(inst.name, prefix='type_',
-                                    max_len=pvm_const.MaxLen.FILENAME_DEFAULT)
-
-        mock_san.reset_mock()
-
-        # Short
-        self.assertEqual(mock_san.return_value,
-                         self.st_adpt._get_disk_name('type', inst, short=True))
-        mock_san.assert_called_with('a_name_t_0123', prefix='t_',
-                                    max_len=pvm_const.MaxLen.VDISK_NAME)
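``fixtures.MockPatch``, used throughout these deleted tests, patches a
target for the lifetime of the test and exposes the mock via its ``.mock``
attribute. A minimal sketch of the same idiom outside Nova:

    import fixtures
    import testtools

    class ExampleTest(testtools.TestCase):
        def test_patch(self):
            # Patched for this test only; cleaned up automatically.
            mocked = self.useFixture(
                fixtures.MockPatch('os.getpid')).mock
            mocked.return_value = 42

            import os
            self.assertEqual(42, os.getpid())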
@@ -1,313 +0,0 @@
-# Copyright 2015, 2018 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from unittest import mock
-
-import fixtures
-
-from nova import exception
-from nova import test
-from oslo_utils.fixture import uuidsentinel as uuids
-from pypowervm import const as pvm_const
-from pypowervm.tasks import storage as tsk_stg
-from pypowervm.utils import transaction as pvm_tx
-from pypowervm.wrappers import storage as pvm_stg
-from pypowervm.wrappers import virtual_io_server as pvm_vios
-
-from nova.virt.powervm.disk import driver as disk_dvr
-from nova.virt.powervm.disk import localdisk
-
-
-class TestLocalDisk(test.NoDBTestCase):
-    """Unit Tests for the LocalDisk storage driver."""
-
-    def setUp(self):
-        super(TestLocalDisk, self).setUp()
-        self.adpt = mock.Mock()
-
-        # The mock VIOS needs to have scsi_mappings as a list. Internals are
-        # set by individual test cases as needed.
-        smaps = [mock.Mock()]
-        self.vio_wrap = mock.create_autospec(
-            pvm_vios.VIOS, instance=True, scsi_mappings=smaps,
-            uuid='vios-uuid')
-
-        # Return the mgmt uuid.
-        self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
-            'nova.virt.powervm.mgmt.mgmt_uuid', autospec=True)).mock
-        self.mgmt_uuid.return_value = 'mgmt_uuid'
-
-        self.pvm_uuid = self.useFixture(fixtures.MockPatch(
-            'nova.virt.powervm.vm.get_pvm_uuid')).mock
-        self.pvm_uuid.return_value = 'pvm_uuid'
-
-        # Set up for the mocks for the disk adapter.
-        self.mock_find_vg = self.useFixture(fixtures.MockPatch(
-            'pypowervm.tasks.storage.find_vg', autospec=True)).mock
-        self.vg_uuid = uuids.vg_uuid
-        self.vg = mock.Mock(spec=pvm_stg.VG, uuid=self.vg_uuid)
-        self.mock_find_vg.return_value = (self.vio_wrap, self.vg)
-
-        self.flags(volume_group_name='fakevg', group='powervm')
-
-        # Mock the feed tasks.
-        self.mock_afs = self.useFixture(fixtures.MockPatch(
-            'pypowervm.utils.transaction.FeedTask.add_functor_subtask',
-            autospec=True)).mock
-        self.mock_wtsk = mock.create_autospec(
-            pvm_tx.WrapperTask, instance=True)
-        self.mock_wtsk.configure_mock(wrapper=self.vio_wrap)
-        self.mock_ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True)
-        self.mock_ftsk.configure_mock(
-            wrapper_tasks={'vios-uuid': self.mock_wtsk})
-
-        # Create the adapter.
-        self.ld_adpt = localdisk.LocalStorage(self.adpt, 'host_uuid')
-
-    def test_init(self):
-        # Localdisk adapter already initialized in setUp()
-        # From super __init__()
-        self.assertEqual(self.adpt, self.ld_adpt._adapter)
-        self.assertEqual('host_uuid', self.ld_adpt._host_uuid)
-        self.assertEqual('mgmt_uuid', self.ld_adpt.mp_uuid)
-
-        # From LocalStorage __init__()
-        self.assertEqual('fakevg', self.ld_adpt.vg_name)
-        self.mock_find_vg.assert_called_once_with(self.adpt, 'fakevg')
-        self.assertEqual('vios-uuid', self.ld_adpt._vios_uuid)
-        self.assertEqual(self.vg_uuid, self.ld_adpt.vg_uuid)
-        self.assertFalse(self.ld_adpt.capabilities['shared_storage'])
-        self.assertFalse(self.ld_adpt.capabilities['has_imagecache'])
-        self.assertFalse(self.ld_adpt.capabilities['snapshot'])
-
-        # Assert snapshot capability is true if hosting I/O on mgmt partition.
-        self.mgmt_uuid.return_value = 'vios-uuid'
-        self.ld_adpt = localdisk.LocalStorage(self.adpt, 'host_uuid')
-        self.assertTrue(self.ld_adpt.capabilities['snapshot'])
-
-        # Assert volume_group_name is required.
-        self.flags(volume_group_name=None, group='powervm')
-        self.assertRaises(exception.OptRequiredIfOtherOptValue,
-                          localdisk.LocalStorage, self.adpt, 'host_uuid')
-
-    def test_vios_uuids(self):
-        self.assertEqual(['vios-uuid'], self.ld_adpt._vios_uuids)
-
-    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
-    @mock.patch('nova.virt.powervm.disk.driver.DiskAdapter._get_disk_name')
-    def test_disk_match_func(self, mock_disk_name, mock_gen_match):
-        mock_disk_name.return_value = 'disk_name'
-        func = self.ld_adpt._disk_match_func('disk_type', 'instance')
-        mock_disk_name.assert_called_once_with(
-            'disk_type', 'instance', short=True)
-        mock_gen_match.assert_called_once_with(
-            pvm_stg.VDisk, names=['disk_name'])
-        self.assertEqual(mock_gen_match.return_value, func)
-
-    @mock.patch('nova.virt.powervm.disk.localdisk.LocalStorage._get_vg_wrap')
-    def test_capacity(self, mock_vg):
-        """Tests the capacity methods."""
-        mock_vg.return_value = mock.Mock(
-            capacity='5120', available_size='2048')
-        self.assertEqual(5120.0, self.ld_adpt.capacity)
-        self.assertEqual(3072.0, self.ld_adpt.capacity_used)
-
-    @mock.patch('pypowervm.tasks.storage.rm_vg_storage', autospec=True)
-    @mock.patch('nova.virt.powervm.disk.localdisk.LocalStorage._get_vg_wrap')
-    def test_delete_disks(self, mock_vg, mock_rm_vg):
-        self.ld_adpt.delete_disks('storage_elems')
-        mock_vg.assert_called_once_with()
-        mock_rm_vg.assert_called_once_with(
-            mock_vg.return_value, vdisks='storage_elems')
-
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    @mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
-    def test_detach_disk(self, mock_match_fn, mock_rm_maps, mock_vios):
-        mock_match_fn.return_value = 'match_func'
-        mock_vios.return_value = self.vio_wrap
-        mock_map1 = mock.Mock(backing_storage='back_stor1')
-        mock_map2 = mock.Mock(backing_storage='back_stor2')
-        mock_rm_maps.return_value = [mock_map1, mock_map2]
-
-        back_stores = self.ld_adpt.detach_disk('instance')
-
-        self.assertEqual(['back_stor1', 'back_stor2'], back_stores)
-        mock_match_fn.assert_called_once_with(pvm_stg.VDisk)
-        mock_vios.assert_called_once_with(
-            self.ld_adpt._adapter, uuid='vios-uuid',
-            xag=[pvm_const.XAG.VIO_SMAP])
-        mock_rm_maps.assert_called_with(self.vio_wrap, 'pvm_uuid',
-                                        match_func=mock_match_fn.return_value)
-        mock_vios.return_value.update.assert_called_once()
-
-    @mock.patch('pypowervm.tasks.scsi_mapper.remove_vdisk_mapping',
-                autospec=True)
-    def test_disconnect_disk_from_mgmt(self, mock_rm_vdisk_map):
-        self.ld_adpt.disconnect_disk_from_mgmt('vios-uuid', 'disk_name')
-        mock_rm_vdisk_map.assert_called_with(
-            self.ld_adpt._adapter, 'vios-uuid', 'mgmt_uuid',
-            disk_names=['disk_name'])
-
-    @mock.patch('nova.virt.powervm.disk.localdisk.LocalStorage._upload_image')
-    def test_create_disk_from_image(self, mock_upload_image):
-        mock_image_meta = mock.Mock()
-        mock_image_meta.size = 30
-        mock_upload_image.return_value = 'mock_img'
-
-        self.ld_adpt.create_disk_from_image(
-            'context', 'instance', mock_image_meta)
-
-        mock_upload_image.assert_called_once_with(
-            'context', 'instance', mock_image_meta)
-
-    @mock.patch('nova.image.glance.API.download')
-    @mock.patch('nova.virt.powervm.disk.driver.IterableToFileAdapter')
-    @mock.patch('pypowervm.tasks.storage.upload_new_vdisk')
-    @mock.patch('nova.virt.powervm.disk.driver.DiskAdapter._get_disk_name')
-    def test_upload_image(self, mock_name, mock_upload, mock_iter, mock_dl):
-        mock_meta = mock.Mock(id='1', size=1073741824, disk_format='raw')
-        mock_upload.return_value = ['mock_img']
-
-        mock_img = self.ld_adpt._upload_image('context', 'inst', mock_meta)
-
-        self.assertEqual('mock_img', mock_img)
-        mock_name.assert_called_once_with(
-            disk_dvr.DiskType.BOOT, 'inst', short=True)
-        mock_dl.assert_called_once_with('context', '1')
-        mock_iter.assert_called_once_with(mock_dl.return_value)
-        mock_upload.assert_called_once_with(
-            self.adpt, 'vios-uuid', self.vg_uuid, mock_iter.return_value,
-            mock_name.return_value, 1073741824, d_size=1073741824,
-            upload_type=tsk_stg.UploadType.IO_STREAM, file_format='raw')
-
-    @mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
-                autospec=True)
-    def test_attach_disk(self, mock_bldmap, mock_addmap):
-        def test_afs(add_func):
-            # Verify the internal add_func
-            self.assertEqual(mock_addmap.return_value, add_func(self.vio_wrap))
-            mock_bldmap.assert_called_once_with(
-                self.ld_adpt._host_uuid, self.vio_wrap, 'pvm_uuid',
-                'disk_info')
-            mock_addmap.assert_called_once_with(
-                self.vio_wrap, mock_bldmap.return_value)
-
-        self.mock_wtsk.add_functor_subtask.side_effect = test_afs
-        self.ld_adpt.attach_disk('instance', 'disk_info', self.mock_ftsk)
-        self.pvm_uuid.assert_called_once_with('instance')
-        self.assertEqual(1, self.mock_wtsk.add_functor_subtask.call_count)
-
-    @mock.patch('pypowervm.wrappers.storage.VG.get')
-    def test_get_vg_wrap(self, mock_vg):
-        vg_wrap = self.ld_adpt._get_vg_wrap()
-        self.assertEqual(mock_vg.return_value, vg_wrap)
-        mock_vg.assert_called_once_with(
-            self.adpt, uuid=self.vg_uuid, parent_type=pvm_vios.VIOS,
-            parent_uuid='vios-uuid')
-
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
-    @mock.patch('nova.virt.powervm.disk.localdisk.LocalStorage.'
-                '_disk_match_func')
-    def test_get_bootdisk_path(self, mock_match_fn, mock_findmaps,
-                               mock_vios):
-        mock_vios.return_value = self.vio_wrap
-
-        # No maps found
-        mock_findmaps.return_value = None
-        devname = self.ld_adpt.get_bootdisk_path('inst', 'vios_uuid')
-        self.pvm_uuid.assert_called_once_with('inst')
-        mock_match_fn.assert_called_once_with(disk_dvr.DiskType.BOOT, 'inst')
-        mock_vios.assert_called_once_with(
-            self.adpt, uuid='vios_uuid', xag=[pvm_const.XAG.VIO_SMAP])
-        mock_findmaps.assert_called_once_with(
-            self.vio_wrap.scsi_mappings,
-            client_lpar_id='pvm_uuid',
-            match_func=mock_match_fn.return_value)
-        self.assertIsNone(devname)
-
-        # Good map
-        mock_lu = mock.Mock()
-        mock_lu.server_adapter.backing_dev_name = 'devname'
-        mock_findmaps.return_value = [mock_lu]
-        devname = self.ld_adpt.get_bootdisk_path('inst', 'vios_uuid')
-        self.assertEqual('devname', devname)
-
-    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps')
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    @mock.patch('pypowervm.wrappers.storage.VG.get', new=mock.Mock())
-    def test_get_bootdisk_iter(self, mock_vios, mock_find_maps, mock_lw):
-        inst, lpar_wrap, vios1 = self._bld_mocks_for_instance_disk()
-        mock_lw.return_value = lpar_wrap
-
-        # Good path
-        mock_vios.return_value = vios1
-        for vdisk, vios in self.ld_adpt._get_bootdisk_iter(inst):
-            self.assertEqual(vios1.scsi_mappings[0].backing_storage, vdisk)
-            self.assertEqual(vios1.uuid, vios.uuid)
-        mock_vios.assert_called_once_with(
-            self.adpt, uuid='vios-uuid', xag=[pvm_const.XAG.VIO_SMAP])
-
-        # Not found, no storage of that name.
-        mock_vios.reset_mock()
-        mock_find_maps.return_value = []
-        for vdisk, vios in self.ld_adpt._get_bootdisk_iter(inst):
-            self.fail('Should not have found any storage elements.')
-        mock_vios.assert_called_once_with(
-            self.adpt, uuid='vios-uuid', xag=[pvm_const.XAG.VIO_SMAP])
-
-    @mock.patch('nova.virt.powervm.disk.driver.DiskAdapter._get_bootdisk_iter',
-                autospec=True)
-    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.add_vscsi_mapping', autospec=True)
-    def test_connect_instance_disk_to_mgmt(self, mock_add, mock_lw, mock_iter):
-        inst, lpar_wrap, vios1 = self._bld_mocks_for_instance_disk()
-        mock_lw.return_value = lpar_wrap
-
-        # Good path
-        mock_iter.return_value = [(vios1.scsi_mappings[0].backing_storage,
-                                   vios1)]
-        vdisk, vios = self.ld_adpt.connect_instance_disk_to_mgmt(inst)
-        self.assertEqual(vios1.scsi_mappings[0].backing_storage, vdisk)
-        self.assertIs(vios1, vios)
-        self.assertEqual(1, mock_add.call_count)
-        mock_add.assert_called_with('host_uuid', vios, 'mgmt_uuid', vdisk)
-
-        # add_vscsi_mapping raises. Show-stopper since only one VIOS.
-        mock_add.reset_mock()
-        mock_add.side_effect = Exception
-        self.assertRaises(exception.InstanceDiskMappingFailed,
-                          self.ld_adpt.connect_instance_disk_to_mgmt, inst)
-        self.assertEqual(1, mock_add.call_count)
-
-        # Not found
-        mock_add.reset_mock()
-        mock_iter.return_value = []
-        self.assertRaises(exception.InstanceDiskMappingFailed,
-                          self.ld_adpt.connect_instance_disk_to_mgmt, inst)
-        self.assertFalse(mock_add.called)
-
-    def _bld_mocks_for_instance_disk(self):
-        inst = mock.Mock()
-        inst.name = 'Name Of Instance'
-        inst.uuid = uuids.inst_uuid
-        lpar_wrap = mock.Mock()
-        lpar_wrap.id = 2
-        vios1 = self.vio_wrap
-        back_stor_name = 'b_Name_Of__' + inst.uuid[:4]
-        vios1.scsi_mappings[0].backing_storage.name = back_stor_name
-        return inst, lpar_wrap, vios1
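A recurring trick in the deleted tests above (e.g. ``test_attach_disk``):
the deferred functor handed to ``add_functor_subtask`` is captured and
verified by installing a ``side_effect`` on the mock. A reduced sketch of
the same technique, with stand-in names:

    from unittest import mock

    wtsk = mock.Mock()

    def check_functor(func):
        # Runs the moment production code registers the subtask, so the
        # test can invoke the deferred function and assert on its result.
        assert func('fake-vios') == 'mapped'

    wtsk.add_functor_subtask.side_effect = check_functor

    # Stand-in for the production call under test:
    wtsk.add_functor_subtask(lambda vios: 'mapped')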
@@ -1,425 +0,0 @@
-# Copyright 2015, 2018 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from unittest import mock
-
-import fixtures
-from oslo_utils import uuidutils
-from pypowervm import const as pvm_const
-from pypowervm import exceptions as pvm_exc
-from pypowervm.tasks import storage as tsk_stg
-from pypowervm.utils import transaction as pvm_tx
-from pypowervm.wrappers import cluster as pvm_clust
-from pypowervm.wrappers import storage as pvm_stg
-from pypowervm.wrappers import virtual_io_server as pvm_vios
-
-from nova import exception
-from nova import test
-from nova.tests.unit.virt import powervm
-from nova.virt.powervm.disk import ssp as ssp_dvr
-from nova.virt.powervm import vm
-
-FAKE_INST_UUID = uuidutils.generate_uuid(dashed=True)
-FAKE_INST_UUID_PVM = vm.get_pvm_uuid(mock.Mock(uuid=FAKE_INST_UUID))
-
-
-class TestSSPDiskAdapter(test.NoDBTestCase):
-    """Unit Tests for the LocalDisk storage driver."""
-
-    def setUp(self):
-        super(TestSSPDiskAdapter, self).setUp()
-
-        self.inst = powervm.TEST_INSTANCE
-
-        self.apt = mock.Mock()
-        self.host_uuid = 'host_uuid'
-
-        self.ssp_wrap = mock.create_autospec(pvm_stg.SSP, instance=True)
-
-        # SSP.refresh() returns itself
-        self.ssp_wrap.refresh.return_value = self.ssp_wrap
-        self.node1 = mock.create_autospec(pvm_clust.Node, instance=True)
-        self.node2 = mock.create_autospec(pvm_clust.Node, instance=True)
-        self.clust_wrap = mock.create_autospec(
-            pvm_clust.Cluster, instance=True)
-        self.clust_wrap.nodes = [self.node1, self.node2]
-        self.clust_wrap.refresh.return_value = self.clust_wrap
-        self.tier_wrap = mock.create_autospec(pvm_stg.Tier, instance=True)
-        # Tier.refresh() returns itself
-        self.tier_wrap.refresh.return_value = self.tier_wrap
-        self.vio_wrap = mock.create_autospec(pvm_vios.VIOS, instance=True)
-
-        # For _cluster
-        self.mock_clust = self.useFixture(fixtures.MockPatch(
-            'pypowervm.wrappers.cluster.Cluster', autospec=True)).mock
-        self.mock_clust.get.return_value = [self.clust_wrap]
-
-        # For _ssp
-        self.mock_ssp_gbhref = self.useFixture(fixtures.MockPatch(
-            'pypowervm.wrappers.storage.SSP.get_by_href')).mock
-        self.mock_ssp_gbhref.return_value = self.ssp_wrap
-
-        # For _tier
-        self.mock_get_tier = self.useFixture(fixtures.MockPatch(
-            'pypowervm.tasks.storage.default_tier_for_ssp',
-            autospec=True)).mock
-        self.mock_get_tier.return_value = self.tier_wrap
-
-        # A FeedTask
-        self.mock_wtsk = mock.create_autospec(
-            pvm_tx.WrapperTask, instance=True)
-        self.mock_wtsk.configure_mock(wrapper=self.vio_wrap)
-        self.mock_ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True)
-        self.mock_afs = self.mock_ftsk.add_functor_subtask
-        self.mock_ftsk.configure_mock(
-            wrapper_tasks={self.vio_wrap.uuid: self.mock_wtsk})
-
-        self.pvm_uuid = self.useFixture(fixtures.MockPatch(
-            'nova.virt.powervm.vm.get_pvm_uuid')).mock
-
-        # Return the mgmt uuid
-        self.mgmt_uuid = self.useFixture(fixtures.MockPatch(
-            'nova.virt.powervm.mgmt.mgmt_uuid')).mock
-        self.mgmt_uuid.return_value = 'mp_uuid'
-
-        # The SSP disk adapter
-        self.ssp_drv = ssp_dvr.SSPDiskAdapter(self.apt, self.host_uuid)
-
-    def test_init(self):
-        self.assertEqual(self.apt, self.ssp_drv._adapter)
-        self.assertEqual(self.host_uuid, self.ssp_drv._host_uuid)
-        self.mock_clust.get.assert_called_once_with(self.apt)
-        self.assertEqual(self.mock_clust.get.return_value,
-                         [self.ssp_drv._clust])
-        self.mock_ssp_gbhref.assert_called_once_with(
-            self.apt, self.clust_wrap.ssp_uri)
-        self.assertEqual(self.mock_ssp_gbhref.return_value, self.ssp_drv._ssp)
-        self.mock_get_tier.assert_called_once_with(self.ssp_wrap)
-        self.assertEqual(self.mock_get_tier.return_value, self.ssp_drv._tier)
-
-    def test_init_error(self):
-        # Do these in reverse order to verify we trap all of 'em
-        for raiser in (self.mock_get_tier, self.mock_ssp_gbhref,
-                       self.mock_clust.get):
-            raiser.side_effect = pvm_exc.TimeoutError("timed out")
-            self.assertRaises(exception.NotFound,
-                              ssp_dvr.SSPDiskAdapter, self.apt, self.host_uuid)
-            raiser.side_effect = ValueError
-            self.assertRaises(ValueError,
-                              ssp_dvr.SSPDiskAdapter, self.apt, self.host_uuid)
-
-    def test_capabilities(self):
-        self.assertTrue(self.ssp_drv.capabilities.get('shared_storage'))
-        self.assertFalse(self.ssp_drv.capabilities.get('has_imagecache'))
-        self.assertTrue(self.ssp_drv.capabilities.get('snapshot'))
-
-    @mock.patch('pypowervm.util.get_req_path_uuid', autospec=True)
-    def test_vios_uuids(self, mock_rpu):
-        mock_rpu.return_value = self.host_uuid
-        vios_uuids = self.ssp_drv._vios_uuids
-        self.assertEqual({self.node1.vios_uuid, self.node2.vios_uuid},
-                         set(vios_uuids))
-        mock_rpu.assert_has_calls(
-            [mock.call(node.vios_uri, preserve_case=True, root=True)
-             for node in [self.node1, self.node2]])
-
-        mock_rpu.reset_mock()
-
-        # Test VIOSes on other nodes, which won't have uuid or url
-        node1 = mock.Mock(vios_uuid=None, vios_uri='uri1')
-        node2 = mock.Mock(vios_uuid='2', vios_uri=None)
-        # This mock is good and should be returned
-        node3 = mock.Mock(vios_uuid='3', vios_uri='uri3')
-        self.clust_wrap.nodes = [node1, node2, node3]
-        self.assertEqual(['3'], self.ssp_drv._vios_uuids)
-        # get_req_path_uuid was only called on the good one
-        mock_rpu.assert_called_once_with('uri3', preserve_case=True, root=True)
-
-    def test_capacity(self):
-        self.tier_wrap.capacity = 10
-        self.assertAlmostEqual(10.0, self.ssp_drv.capacity)
-        self.tier_wrap.refresh.assert_called_once_with()
-
-    def test_capacity_used(self):
-        self.ssp_wrap.capacity = 4.56
-        self.ssp_wrap.free_space = 1.23
-        self.assertAlmostEqual((4.56 - 1.23), self.ssp_drv.capacity_used)
-        self.ssp_wrap.refresh.assert_called_once_with()
-
-    @mock.patch('pypowervm.tasks.cluster_ssp.get_or_upload_image_lu',
-                autospec=True)
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids',
-                new_callable=mock.PropertyMock)
-    @mock.patch('pypowervm.util.sanitize_file_name_for_api', autospec=True)
-    @mock.patch('pypowervm.tasks.storage.crt_lu', autospec=True)
-    @mock.patch('nova.image.glance.API.download')
-    @mock.patch('nova.virt.powervm.disk.driver.IterableToFileAdapter',
-                autospec=True)
-    def test_create_disk_from_image(self, mock_it2f, mock_dl, mock_crt_lu,
-                                    mock_san, mock_vuuid, mock_goru):
-        img = powervm.TEST_IMAGE1
-
-        mock_crt_lu.return_value = self.ssp_drv._ssp, 'boot_lu'
-        mock_san.return_value = 'disk_name'
-        mock_vuuid.return_value = ['vuuid']
-
-        self.assertEqual('boot_lu', self.ssp_drv.create_disk_from_image(
-            'context', self.inst, img))
-        mock_dl.assert_called_once_with('context', img.id)
-        mock_san.assert_has_calls([
-            mock.call(img.name, prefix='image_', suffix='_' + img.checksum),
-            mock.call(self.inst.name, prefix='boot_')])
-        mock_it2f.assert_called_once_with(mock_dl.return_value)
-        mock_goru.assert_called_once_with(
-            self.ssp_drv._tier, 'disk_name', 'vuuid',
-            mock_it2f.return_value, img.size,
-            upload_type=tsk_stg.UploadType.IO_STREAM)
-        mock_crt_lu.assert_called_once_with(
-            self.mock_get_tier.return_value, mock_san.return_value,
-            self.inst.flavor.root_gb, typ=pvm_stg.LUType.DISK,
-            clone=mock_goru.return_value)
-
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids',
-                new_callable=mock.PropertyMock)
-    @mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
-                autospec=True)
-    @mock.patch('pypowervm.wrappers.storage.LU', autospec=True)
-    def test_connect_disk(self, mock_lu, mock_bldmap, mock_addmap,
-                          mock_vio_uuids):
-        disk_info = mock.Mock()
-        disk_info.configure_mock(name='dname', udid='dudid')
-        mock_vio_uuids.return_value = [self.vio_wrap.uuid]
-
-        def test_afs(add_func):
-            # Verify the internal add_func
-            self.assertEqual(mock_addmap.return_value, add_func(self.vio_wrap))
-            mock_bldmap.assert_called_once_with(
-                self.host_uuid, self.vio_wrap, self.pvm_uuid.return_value,
-                mock_lu.bld_ref.return_value)
-            mock_addmap.assert_called_once_with(
-                self.vio_wrap, mock_bldmap.return_value)
-        self.mock_wtsk.add_functor_subtask.side_effect = test_afs
-
-        self.ssp_drv.attach_disk(self.inst, disk_info, self.mock_ftsk)
-        mock_lu.bld_ref.assert_called_once_with(self.apt, 'dname', 'dudid')
-        self.pvm_uuid.assert_called_once_with(self.inst)
-        self.assertEqual(1, self.mock_wtsk.add_functor_subtask.call_count)
-
-    @mock.patch('pypowervm.tasks.storage.rm_tier_storage', autospec=True)
-    def test_delete_disks(self, mock_rm_tstor):
-        self.ssp_drv.delete_disks(['disk1', 'disk2'])
-        mock_rm_tstor.assert_called_once_with(['disk1', 'disk2'],
-                                              tier=self.ssp_drv._tier)
-
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids',
-                new_callable=mock.PropertyMock)
-    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True)
-    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
-    @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task',
-                autospec=True)
-    def test_disconnect_disk(self, mock_bld_ftsk, mock_gmf, mock_rmmaps,
-                             mock_findmaps, mock_vio_uuids):
-        mock_vio_uuids.return_value = [self.vio_wrap.uuid]
-        mock_bld_ftsk.return_value = self.mock_ftsk
-        lu1, lu2 = [mock.create_autospec(pvm_stg.LU, instance=True)] * 2
-        # Two mappings have the same LU, to verify set behavior
-        mock_findmaps.return_value = [
-            mock.Mock(spec=pvm_vios.VSCSIMapping, backing_storage=lu)
-            for lu in (lu1, lu2, lu1)]
-
-        def test_afs(rm_func):
-            # verify the internal rm_func
-            self.assertEqual(mock_rmmaps.return_value, rm_func(self.vio_wrap))
-            mock_rmmaps.assert_called_once_with(
-                self.vio_wrap, self.pvm_uuid.return_value,
-                match_func=mock_gmf.return_value)
-        self.mock_wtsk.add_functor_subtask.side_effect = test_afs
-
-        self.assertEqual(
-            {lu1, lu2}, set(self.ssp_drv.detach_disk(self.inst)))
-        mock_bld_ftsk.assert_called_once_with(
-            self.apt, name='ssp', xag=[pvm_const.XAG.VIO_SMAP])
-        self.pvm_uuid.assert_called_once_with(self.inst)
-        mock_gmf.assert_called_once_with(pvm_stg.LU)
-        self.assertEqual(1, self.mock_wtsk.add_functor_subtask.call_count)
-        mock_findmaps.assert_called_once_with(
-            self.vio_wrap.scsi_mappings,
-            client_lpar_id=self.pvm_uuid.return_value,
-            match_func=mock_gmf.return_value)
-        self.mock_ftsk.execute.assert_called_once_with()
-
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._disk_match_func')
-    def test_get_bootdisk_path(self, mock_match_fn, mock_findmaps,
-                               mock_vios):
-        mock_vios.return_value = self.vio_wrap
-
-        # No maps found
-        mock_findmaps.return_value = None
-        devname = self.ssp_drv.get_bootdisk_path('inst', 'vios_uuid')
-        mock_vios.assert_called_once_with(
-            self.apt, uuid='vios_uuid', xag=[pvm_const.XAG.VIO_SMAP])
-        mock_findmaps.assert_called_once_with(
-            self.vio_wrap.scsi_mappings,
-            client_lpar_id=self.pvm_uuid.return_value,
-            match_func=mock_match_fn.return_value)
-        self.assertIsNone(devname)
-
-        # Good map
-        mock_lu = mock.Mock()
-        mock_lu.server_adapter.backing_dev_name = 'devname'
-        mock_findmaps.return_value = [mock_lu]
-        devname = self.ssp_drv.get_bootdisk_path('inst', 'vios_uuid')
-        self.assertEqual('devname', devname)
-
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter.'
-                '_vios_uuids', new_callable=mock.PropertyMock)
-    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper', autospec=True)
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    @mock.patch('pypowervm.tasks.scsi_mapper.add_vscsi_mapping', autospec=True)
-    def test_connect_instance_disk_to_mgmt(self, mock_add, mock_vio_get,
-                                           mock_lw, mock_vio_uuids):
-        inst, lpar_wrap, vio1, vio2, vio3 = self._bld_mocks_for_instance_disk()
-        mock_lw.return_value = lpar_wrap
-        mock_vio_uuids.return_value = [1, 2]
-
-        # Test with two VIOSes, both of which contain the mapping
-        mock_vio_get.side_effect = [vio1, vio2]
-        lu, vios = self.ssp_drv.connect_instance_disk_to_mgmt(inst)
-        self.assertEqual('lu_udid', lu.udid)
-        # Should hit on the first VIOS
-        self.assertIs(vio1, vios)
-        mock_add.assert_called_once_with(self.host_uuid, vio1, 'mp_uuid', lu)
-
-        # Now the first VIOS doesn't have the mapping, but the second does
-        mock_add.reset_mock()
-        mock_vio_get.side_effect = [vio3, vio2]
-        lu, vios = self.ssp_drv.connect_instance_disk_to_mgmt(inst)
-        self.assertEqual('lu_udid', lu.udid)
-        # Should hit on the second VIOS
-        self.assertIs(vio2, vios)
-        self.assertEqual(1, mock_add.call_count)
-        mock_add.assert_called_once_with(self.host_uuid, vio2, 'mp_uuid', lu)
-
-        # No hits
-        mock_add.reset_mock()
-        mock_vio_get.side_effect = [vio3, vio3]
-        self.assertRaises(exception.InstanceDiskMappingFailed,
-                          self.ssp_drv.connect_instance_disk_to_mgmt, inst)
-        self.assertEqual(0, mock_add.call_count)
-
-        # First add_vscsi_mapping call raises
-        mock_vio_get.side_effect = [vio1, vio2]
-        mock_add.side_effect = [Exception("mapping failed"), None]
-        # Should hit on the second VIOS
-        self.assertIs(vio2, vios)
-
-    @mock.patch('pypowervm.tasks.scsi_mapper.remove_lu_mapping', autospec=True)
-    def test_disconnect_disk_from_mgmt(self, mock_rm_lu_map):
-        self.ssp_drv.disconnect_disk_from_mgmt('vios_uuid', 'disk_name')
-        mock_rm_lu_map.assert_called_with(self.apt, 'vios_uuid',
-                                          'mp_uuid', disk_names=['disk_name'])
-
-    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._get_disk_name')
-    def test_disk_match_func(self, mock_disk_name, mock_gen_match):
-        mock_disk_name.return_value = 'disk_name'
-        self.ssp_drv._disk_match_func('disk_type', 'instance')
-        mock_disk_name.assert_called_once_with('disk_type', 'instance')
-        mock_gen_match.assert_called_with(pvm_stg.LU, names=['disk_name'])
-
-    @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter.'
-                '_vios_uuids', new_callable=mock.PropertyMock)
-    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper', autospec=True)
-    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS.get')
-    def test_get_bootdisk_iter(self, mock_vio_get, mock_lw, mock_vio_uuids):
-        inst, lpar_wrap, vio1, vio2, vio3 = self._bld_mocks_for_instance_disk()
-        mock_lw.return_value = lpar_wrap
-        mock_vio_uuids.return_value = [1, 2]
-
-        # Test with two VIOSes, both of which contain the mapping. Force the
-        # method to get the lpar_wrap.
-        mock_vio_get.side_effect = [vio1, vio2]
-        idi = self.ssp_drv._get_bootdisk_iter(inst)
-        lu, vios = next(idi)
-        self.assertEqual('lu_udid', lu.udid)
-        self.assertEqual('vios1', vios.name)
-        mock_vio_get.assert_called_once_with(self.apt, uuid=1,
-                                             xag=[pvm_const.XAG.VIO_SMAP])
-        lu, vios = next(idi)
-        self.assertEqual('lu_udid', lu.udid)
-        self.assertEqual('vios2', vios.name)
-        mock_vio_get.assert_called_with(self.apt, uuid=2,
-                                        xag=[pvm_const.XAG.VIO_SMAP])
-        self.assertRaises(StopIteration, next, idi)
-        self.assertEqual(2, mock_vio_get.call_count)
-        mock_lw.assert_called_once_with(self.apt, inst)
-
-        # Same, but prove that breaking out of the loop early avoids the second
-        # get call. Supply lpar_wrap from here on, and prove no calls to
-        # get_instance_wrapper
-        mock_vio_get.reset_mock()
-        mock_lw.reset_mock()
-        mock_vio_get.side_effect = [vio1, vio2]
-        for lu, vios in self.ssp_drv._get_bootdisk_iter(inst):
-            self.assertEqual('lu_udid', lu.udid)
-            self.assertEqual('vios1', vios.name)
-            break
-        mock_vio_get.assert_called_once_with(self.apt, uuid=1,
-                                             xag=[pvm_const.XAG.VIO_SMAP])
-
-        # Now the first VIOS doesn't have the mapping, but the second does
-        mock_vio_get.reset_mock()
-        mock_vio_get.side_effect = [vio3, vio2]
-        idi = self.ssp_drv._get_bootdisk_iter(inst)
-        lu, vios = next(idi)
-        self.assertEqual('lu_udid', lu.udid)
-        self.assertEqual('vios2', vios.name)
-        mock_vio_get.assert_has_calls(
-            [mock.call(self.apt, uuid=uuid, xag=[pvm_const.XAG.VIO_SMAP])
-             for uuid in (1, 2)])
-        self.assertRaises(StopIteration, next, idi)
-        self.assertEqual(2, mock_vio_get.call_count)
-
-        # No hits
-        mock_vio_get.reset_mock()
-        mock_vio_get.side_effect = [vio3, vio3]
-        self.assertEqual([], list(self.ssp_drv._get_bootdisk_iter(inst)))
-        self.assertEqual(2, mock_vio_get.call_count)
-
-    def _bld_mocks_for_instance_disk(self):
-        inst = mock.Mock()
-        inst.name = 'my-instance-name'
-        lpar_wrap = mock.Mock()
-        lpar_wrap.id = 4
-        lu_wrap = mock.Mock(spec=pvm_stg.LU)
-        lu_wrap.configure_mock(name='boot_my_instance_name', udid='lu_udid')
-        smap = mock.Mock(backing_storage=lu_wrap,
-                         server_adapter=mock.Mock(lpar_id=4))
-        # Build mock VIOS Wrappers as the returns from VIOS.wrap.
-        # vios1 and vios2 will both have the mapping for client ID 4 and LU
-        # named boot_my_instance_name.
-        smaps = [mock.Mock(), mock.Mock(), mock.Mock(), smap]
-        vios1 = mock.Mock(spec=pvm_vios.VIOS)
-        vios1.configure_mock(name='vios1', uuid='uuid1', scsi_mappings=smaps)
-        vios2 = mock.Mock(spec=pvm_vios.VIOS)
-        vios2.configure_mock(name='vios2', uuid='uuid2', scsi_mappings=smaps)
-        # vios3 will not have the mapping
-        vios3 = mock.Mock(spec=pvm_vios.VIOS)
-        vios3.configure_mock(name='vios3', uuid='uuid3',
-                             scsi_mappings=[mock.Mock(), mock.Mock()])
-        return inst, lpar_wrap, vios1, vios2, vios3
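Another pattern worth noting from these tests: assigning a list to
``side_effect`` makes a mock return successive values on successive calls,
which is how the per-VIOS ``VIOS.get`` results are scripted above. In
isolation:

    from unittest import mock

    getter = mock.Mock(side_effect=['first', 'second'])
    assert getter() == 'first'    # each call consumes the next item
    assert getter() == 'second'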
@@ -1,68 +0,0 @@
-# Copyright 2015, 2018 IBM Corp.
-#
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from unittest import mock
-
-from nova import test
-
-from nova.virt.powervm.tasks import image as tsk_img
-
-
-class TestImage(test.TestCase):
-    def test_update_task_state(self):
-        def func(task_state, expected_state='delirious'):
-            self.assertEqual('task_state', task_state)
-            self.assertEqual('delirious', expected_state)
-        tf = tsk_img.UpdateTaskState(func, 'task_state')
-        self.assertEqual('update_task_state_task_state', tf.name)
-        tf.execute()
-
-        def func2(task_state, expected_state=None):
-            self.assertEqual('task_state', task_state)
-            self.assertEqual('expected_state', expected_state)
-        tf = tsk_img.UpdateTaskState(func2, 'task_state',
-                                     expected_state='expected_state')
-        tf.execute()
-
-        # Validate args on taskflow.task.Task instantiation
-        with mock.patch('taskflow.task.Task.__init__') as tf:
-            tsk_img.UpdateTaskState(func, 'task_state')
-        tf.assert_called_once_with(
-            name='update_task_state_task_state')
-
-    @mock.patch('nova.virt.powervm.image.stream_blockdev_to_glance',
-                autospec=True)
-    @mock.patch('nova.virt.powervm.image.generate_snapshot_metadata',
-                autospec=True)
-    def test_stream_to_glance(self, mock_metadata, mock_stream):
-        mock_metadata.return_value = 'metadata'
-        mock_inst = mock.Mock()
-        mock_inst.name = 'instance_name'
-        tf = tsk_img.StreamToGlance('context', 'image_api', 'image_id',
-                                    mock_inst)
-        self.assertEqual('stream_to_glance', tf.name)
-        tf.execute('disk_path')
-        mock_metadata.assert_called_with('context', 'image_api', 'image_id',
-                                         mock_inst)
-        mock_stream.assert_called_with('context', 'image_api', 'image_id',
-                                       'metadata', 'disk_path')
-
-        # Validate args on taskflow.task.Task instantiation
-        with mock.patch('taskflow.task.Task.__init__') as tf:
-            tsk_img.StreamToGlance(
-                'context', 'image_api', 'image_id', mock_inst)
-        tf.assert_called_once_with(
-            name='stream_to_glance', requires='disk_path')
@ -1,323 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
from unittest import mock

import eventlet
from pypowervm.wrappers import network as pvm_net

from nova import exception
from nova import test
from nova.tests.unit.virt import powervm
from nova.virt.powervm.tasks import network as tf_net


def cna(mac):
    """Builds a mock Client Network Adapter for unit tests."""
    return mock.MagicMock(mac=mac, vswitch_uri='fake_href')


class TestNetwork(test.NoDBTestCase):
    def setUp(self):
        super(TestNetwork, self).setUp()
        self.flags(host='host1')
        self.apt = mock.Mock()

        self.mock_lpar_wrap = mock.MagicMock()
        self.mock_lpar_wrap.can_modify_io.return_value = True, None

    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
    @mock.patch('nova.virt.powervm.vif.unplug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_unplug_vifs(self, mock_vm_get, mock_unplug, mock_get_wrap):
        """Tests that a delete of the vif can be done."""
        inst = powervm.TEST_INSTANCE

        # Mock up the CNA responses.
        cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')]
        mock_vm_get.return_value = cnas

        # Mock up the network info. This also validates that they will be
        # sanitized to upper case.
        net_info = [
            {'address': 'aa:bb:cc:dd:ee:ff'}, {'address': 'aa:bb:cc:dd:ee:22'},
            {'address': 'aa:bb:cc:dd:ee:33'}
        ]

        # Mock out the instance wrapper
        mock_get_wrap.return_value = self.mock_lpar_wrap

        # Mock out the vif driver
        def validate_unplug(adapter, instance, vif, cna_w_list=None):
            self.assertEqual(adapter, self.apt)
            self.assertEqual(instance, inst)
            self.assertIn(vif, net_info)
            self.assertEqual(cna_w_list, cnas)

        mock_unplug.side_effect = validate_unplug

        # Run method
        p_vifs = tf_net.UnplugVifs(self.apt, inst, net_info)
        p_vifs.execute()

        # Make sure the unplug was invoked, so that we know that the validation
        # code was called
        self.assertEqual(3, mock_unplug.call_count)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_net.UnplugVifs(self.apt, inst, net_info)
        tf.assert_called_once_with(name='unplug_vifs')

    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
    def test_unplug_vifs_invalid_state(self, mock_get_wrap):
        """Tests that the delete raises an exception if bad VM state."""
        inst = powervm.TEST_INSTANCE

        # Mock out the instance wrapper
        mock_get_wrap.return_value = self.mock_lpar_wrap

        # Mock that the state is incorrect
        self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad'

        # Run method
        p_vifs = tf_net.UnplugVifs(self.apt, inst, mock.Mock())
        self.assertRaises(exception.VirtualInterfaceUnplugException,
                          p_vifs.execute)

    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_vifs_rmc(self, mock_cna_get, mock_plug):
        """Tests that a crt vif can be done with secure RMC."""
        inst = powervm.TEST_INSTANCE

        # Mock up the CNA response. One should already exist, the other
        # should not.
        pre_cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')]
        mock_cna_get.return_value = copy.deepcopy(pre_cnas)

        # Mock up the network info. This also validates that they will be
        # sanitized to upper case.
        net_info = [
            {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
            {'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'},
        ]

        # First run the CNA update, then the CNA create.
        mock_new_cna = mock.Mock(spec=pvm_net.CNA)
        mock_plug.side_effect = ['upd_cna', mock_new_cna]

        # Run method
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)

        all_cnas = p_vifs.execute(self.mock_lpar_wrap)

        # new vif should be created twice.
        mock_plug.assert_any_call(self.apt, inst, net_info[0], new_vif=False)
        mock_plug.assert_any_call(self.apt, inst, net_info[1], new_vif=True)

        # The Task provides the list of original CNAs plus only CNAs that were
        # created.
        self.assertEqual(pre_cnas + [mock_new_cna], all_cnas)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        tf.assert_called_once_with(
            name='plug_vifs', provides='vm_cnas', requires=['lpar_wrap'])

    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_vifs_rmc_no_create(self, mock_vm_get, mock_plug):
        """Verifies if no creates are needed, none are done."""
        inst = powervm.TEST_INSTANCE

        # Mock up the CNA response. Both should already exist.
        mock_vm_get.return_value = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')]

        # Mock up the network info. This also validates that they will be
        # sanitized to upper case. This also validates that we don't call
        # get_vnics if no nets have vnic_type 'direct'.
        net_info = [
            {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
            {'address': 'aa:bb:cc:dd:ee:11', 'vnic_type': 'normal'}
        ]

        # Run method
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        p_vifs.execute(self.mock_lpar_wrap)

        # The create should have been called with new_vif as False.
        mock_plug.assert_any_call(self.apt, inst, net_info[0], new_vif=False)
        mock_plug.assert_any_call(self.apt, inst, net_info[1], new_vif=False)

    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_vifs_invalid_state(self, mock_vm_get, mock_plug):
        """Tests that a crt_vif fails when the LPAR state is bad."""
        inst = powervm.TEST_INSTANCE

        # Mock up the CNA response. Only doing one for simplicity
        mock_vm_get.return_value = []
        net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]

        # Mock that the state is incorrect
        self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad'

        # Run method
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        self.assertRaises(exception.VirtualInterfaceCreateException,
                          p_vifs.execute, self.mock_lpar_wrap)

        # The create should not have been invoked
        self.assertEqual(0, mock_plug.call_count)

    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_vifs_timeout(self, mock_vm_get, mock_plug):
        """Tests that crt vif failure via loss of neutron callback."""
        inst = powervm.TEST_INSTANCE

        # Mock up the CNA response. Only doing one for simplicity
        mock_vm_get.return_value = [cna('AABBCCDDEE11')]

        # Mock up the network info.
        net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}]

        # Ensure that an exception is raised by a timeout.
        mock_plug.side_effect = eventlet.timeout.Timeout()

        # Run method
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        self.assertRaises(exception.VirtualInterfaceCreateException,
                          p_vifs.execute, self.mock_lpar_wrap)

        # The create should have only been called once.
        self.assertEqual(1, mock_plug.call_count)

    @mock.patch('nova.virt.powervm.vif.unplug')
    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_vifs_revert(self, mock_vm_get, mock_plug, mock_unplug):
        """Tests that the revert flow works properly."""
        inst = powervm.TEST_INSTANCE

        # Fake CNA list. The one pre-existing VIF should *not* get reverted.
        cna_list = [cna('AABBCCDDEEFF'), cna('FFEEDDCCBBAA')]
        mock_vm_get.return_value = cna_list

        # Mock up the network info. Three roll backs.
        net_info = [
            {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'},
            {'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'},
            {'address': 'aa:bb:cc:dd:ee:33', 'vnic_type': 'normal'}
        ]

        # Make sure we test raising an exception
        mock_unplug.side_effect = [exception.NovaException(), None]

        # Run method
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        p_vifs.execute(self.mock_lpar_wrap)
        p_vifs.revert(self.mock_lpar_wrap, mock.Mock(), mock.Mock())

        # The unplug should be called twice. The exception shouldn't stop the
        # second call.
        self.assertEqual(2, mock_unplug.call_count)

        # Make sure each call is invoked correctly. The first plug was not a
        # new vif, so it should not be reverted.
        c2 = mock.call(self.apt, inst, net_info[1], cna_w_list=cna_list)
        c3 = mock.call(self.apt, inst, net_info[2], cna_w_list=cna_list)
        mock_unplug.assert_has_calls([c2, c3])

    @mock.patch('pypowervm.tasks.cna.crt_cna')
    @mock.patch('pypowervm.wrappers.network.VSwitch.search')
    @mock.patch('nova.virt.powervm.vif.plug')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_plug_mgmt_vif(self, mock_vm_get, mock_plug, mock_vs_search,
                           mock_crt_cna):
        """Tests that a mgmt vif can be created."""
        inst = powervm.TEST_INSTANCE

        # Mock up the rmc vswitch
        vswitch_w = mock.MagicMock()
        vswitch_w.href = 'fake_mgmt_uri'
        mock_vs_search.return_value = [vswitch_w]

        # Run method such that it triggers a fresh CNA search
        p_vifs = tf_net.PlugMgmtVif(self.apt, inst)
        p_vifs.execute(None)

        # With the default get_cnas mock (which returns a Mock()), we think we
        # found an existing management CNA.
        mock_crt_cna.assert_not_called()
        mock_vm_get.assert_called_once_with(
            self.apt, inst, vswitch_uri='fake_mgmt_uri')

        # Now mock get_cnas to return no hits
        mock_vm_get.reset_mock()
        mock_vm_get.return_value = []
        p_vifs.execute(None)

        # Get was called; and since it didn't have the mgmt CNA, so was plug.
        self.assertEqual(1, mock_crt_cna.call_count)
        mock_vm_get.assert_called_once_with(
            self.apt, inst, vswitch_uri='fake_mgmt_uri')

        # Now pass CNAs, but not the mgmt vif, "from PlugVifs"
        cnas = [mock.Mock(vswitch_uri='uri1'), mock.Mock(vswitch_uri='uri2')]
        mock_crt_cna.reset_mock()
        mock_vm_get.reset_mock()
        p_vifs.execute(cnas)

        # Get wasn't called, since the CNAs were passed "from PlugVifs"; but
        # since the mgmt vif wasn't included, plug was called.
        mock_vm_get.assert_not_called()
        mock_crt_cna.assert_called()

        # Finally, pass CNAs including the mgmt.
        cnas.append(mock.Mock(vswitch_uri='fake_mgmt_uri'))
        mock_crt_cna.reset_mock()
        p_vifs.execute(cnas)

        # Neither get nor plug was called.
        mock_vm_get.assert_not_called()
        mock_crt_cna.assert_not_called()

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_net.PlugMgmtVif(self.apt, inst)
        tf.assert_called_once_with(
            name='plug_mgmt_vif', provides='mgmt_cna', requires=['vm_cnas'])

    def test_get_vif_events(self):
        # Set up common mocks.
        inst = powervm.TEST_INSTANCE
        net_info = [mock.MagicMock(), mock.MagicMock()]
        net_info[0]['id'] = 'a'
        net_info[0].get.return_value = False
        net_info[1]['id'] = 'b'
        net_info[1].get.return_value = True

        # Set up the runner.
        p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info)
        p_vifs.crt_network_infos = net_info
        resp = p_vifs._get_vif_events()

        # Only one should be returned since only one was active.
        self.assertEqual(1, len(resp))
@ -1,355 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import fixtures
from pypowervm import exceptions as pvm_exc

from nova import exception
from nova import test
from nova.virt.powervm.tasks import storage as tf_stg


class TestStorage(test.NoDBTestCase):

    def setUp(self):
        super(TestStorage, self).setUp()

        self.adapter = mock.Mock()
        self.disk_dvr = mock.MagicMock()
        self.mock_cfg_drv = self.useFixture(fixtures.MockPatch(
            'nova.virt.powervm.media.ConfigDrivePowerVM')).mock
        self.mock_mb = self.mock_cfg_drv.return_value
        self.instance = mock.MagicMock()
        self.context = 'context'

    def test_create_and_connect_cfg_drive(self):
        # With a specified FeedTask
        task = tf_stg.CreateAndConnectCfgDrive(
            self.adapter, self.instance, 'injected_files',
            'network_info', 'stg_ftsk', admin_pass='admin_pass')
        task.execute('mgmt_cna')
        self.mock_cfg_drv.assert_called_once_with(self.adapter)
        self.mock_mb.create_cfg_drv_vopt.assert_called_once_with(
            self.instance, 'injected_files', 'network_info', 'stg_ftsk',
            admin_pass='admin_pass', mgmt_cna='mgmt_cna')

        # Normal revert
        task.revert('mgmt_cna', 'result', 'flow_failures')
        self.mock_mb.dlt_vopt.assert_called_once_with(self.instance,
                                                      'stg_ftsk')

        self.mock_mb.reset_mock()

        # Revert when dlt_vopt fails
        self.mock_mb.dlt_vopt.side_effect = pvm_exc.Error('fake-exc')
        task.revert('mgmt_cna', 'result', 'flow_failures')
        self.mock_mb.dlt_vopt.assert_called_once()

        self.mock_mb.reset_mock()

        # Revert when media builder not created
        task.mb = None
        task.revert('mgmt_cna', 'result', 'flow_failures')
        self.mock_mb.assert_not_called()

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.CreateAndConnectCfgDrive(
                self.adapter, self.instance, 'injected_files',
                'network_info', 'stg_ftsk', admin_pass='admin_pass')
        tf.assert_called_once_with(name='cfg_drive', requires=['mgmt_cna'])

    def test_delete_vopt(self):
        # Test with no FeedTask
        task = tf_stg.DeleteVOpt(self.adapter, self.instance)
        task.execute()
        self.mock_cfg_drv.assert_called_once_with(self.adapter)
        self.mock_mb.dlt_vopt.assert_called_once_with(
            self.instance, stg_ftsk=None)

        self.mock_cfg_drv.reset_mock()
        self.mock_mb.reset_mock()

        # With a specified FeedTask
        task = tf_stg.DeleteVOpt(self.adapter, self.instance, stg_ftsk='ftsk')
        task.execute()
        self.mock_cfg_drv.assert_called_once_with(self.adapter)
        self.mock_mb.dlt_vopt.assert_called_once_with(
            self.instance, stg_ftsk='ftsk')

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.DeleteVOpt(self.adapter, self.instance)
        tf.assert_called_once_with(name='vopt_delete')

    def test_delete_disk(self):
        stor_adpt_mappings = mock.Mock()

        task = tf_stg.DeleteDisk(self.disk_dvr)
        task.execute(stor_adpt_mappings)
        self.disk_dvr.delete_disks.assert_called_once_with(stor_adpt_mappings)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.DeleteDisk(self.disk_dvr)
        tf.assert_called_once_with(
            name='delete_disk', requires=['stor_adpt_mappings'])

    def test_detach_disk(self):
        task = tf_stg.DetachDisk(self.disk_dvr, self.instance)
        task.execute()
        self.disk_dvr.detach_disk.assert_called_once_with(self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.DetachDisk(self.disk_dvr, self.instance)
        tf.assert_called_once_with(
            name='detach_disk', provides='stor_adpt_mappings')

    def test_attach_disk(self):
        stg_ftsk = mock.Mock()
        disk_dev_info = mock.Mock()

        task = tf_stg.AttachDisk(self.disk_dvr, self.instance, stg_ftsk)
        task.execute(disk_dev_info)
        self.disk_dvr.attach_disk.assert_called_once_with(
            self.instance, disk_dev_info, stg_ftsk)

        task.revert(disk_dev_info, 'result', 'flow failures')
        self.disk_dvr.detach_disk.assert_called_once_with(self.instance)

        self.disk_dvr.detach_disk.reset_mock()

        # Revert failures are not raised
        self.disk_dvr.detach_disk.side_effect = pvm_exc.TimeoutError(
            "timed out")
        task.revert(disk_dev_info, 'result', 'flow failures')
        self.disk_dvr.detach_disk.assert_called_once_with(self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.AttachDisk(self.disk_dvr, self.instance, stg_ftsk)
        tf.assert_called_once_with(
            name='attach_disk', requires=['disk_dev_info'])

    def test_create_disk_for_img(self):
        image_meta = mock.Mock()

        task = tf_stg.CreateDiskForImg(
            self.disk_dvr, self.context, self.instance, image_meta)
        task.execute()
        self.disk_dvr.create_disk_from_image.assert_called_once_with(
            self.context, self.instance, image_meta)

        task.revert('result', 'flow failures')
        self.disk_dvr.delete_disks.assert_called_once_with(['result'])

        self.disk_dvr.delete_disks.reset_mock()

        # Delete not called if no result
        task.revert(None, None)
        self.disk_dvr.delete_disks.assert_not_called()

        # Delete exception doesn't raise
        self.disk_dvr.delete_disks.side_effect = pvm_exc.TimeoutError(
            "timed out")
        task.revert('result', 'flow failures')
        self.disk_dvr.delete_disks.assert_called_once_with(['result'])

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.CreateDiskForImg(
                self.disk_dvr, self.context, self.instance, image_meta)
        tf.assert_called_once_with(
            name='create_disk_from_img', provides='disk_dev_info')

    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
    @mock.patch('nova.virt.powervm.mgmt.discover_vscsi_disk', autospec=True)
    @mock.patch('nova.virt.powervm.mgmt.remove_block_dev', autospec=True)
    def test_instance_disk_to_mgmt(self, mock_rm, mock_discover, mock_find):
        mock_discover.return_value = '/dev/disk'
        mock_instance = mock.Mock()
        mock_instance.name = 'instance_name'
        mock_stg = mock.Mock()
        mock_stg.name = 'stg_name'
        mock_vwrap = mock.Mock()
        mock_vwrap.name = 'vios_name'
        mock_vwrap.uuid = 'vios_uuid'
        mock_vwrap.scsi_mappings = ['mapping1']

        disk_dvr = mock.MagicMock()
        disk_dvr.mp_uuid = 'mp_uuid'
        disk_dvr.connect_instance_disk_to_mgmt.return_value = (mock_stg,
                                                               mock_vwrap)

        def reset_mocks():
            mock_find.reset_mock()
            mock_discover.reset_mock()
            mock_rm.reset_mock()
            disk_dvr.reset_mock()

        # Good path - find_maps returns one result
        mock_find.return_value = ['one_mapping']
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        self.assertEqual('instance_disk_to_mgmt', tf.name)
        self.assertEqual((mock_stg, mock_vwrap, '/dev/disk'), tf.execute())
        disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
            mock_instance)
        mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
                                     stg_elem=mock_stg)
        mock_discover.assert_called_with('one_mapping')
        tf.revert('result', 'failures')
        disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
                                                              'stg_name')
        mock_rm.assert_called_with('/dev/disk')

        # Good path - find_maps returns >1 result
        reset_mocks()
        mock_find.return_value = ['first_mapping', 'second_mapping']
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        self.assertEqual((mock_stg, mock_vwrap, '/dev/disk'), tf.execute())
        disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
            mock_instance)
        mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
                                     stg_elem=mock_stg)
        mock_discover.assert_called_with('first_mapping')
        tf.revert('result', 'failures')
        disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
                                                              'stg_name')
        mock_rm.assert_called_with('/dev/disk')

        # Management Partition is VIOS and NovaLink hosted storage
        reset_mocks()
        disk_dvr._vios_uuids = ['mp_uuid']
        dev_name = '/dev/vg/fake_name'
        disk_dvr.get_bootdisk_path.return_value = dev_name
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        self.assertEqual((None, None, dev_name), tf.execute())

        # Management Partition is VIOS and not NovaLink hosted storage
        reset_mocks()
        disk_dvr._vios_uuids = ['mp_uuid']
        disk_dvr.get_bootdisk_path.return_value = None
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        tf.execute()
        disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
            mock_instance)

        # Bad path - find_maps returns no results
        reset_mocks()
        mock_find.return_value = []
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        self.assertRaises(exception.NewMgmtMappingNotFoundException,
                          tf.execute)
        disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
            mock_instance)
        # find_maps was still called
        mock_find.assert_called_with(['mapping1'], client_lpar_id='mp_uuid',
                                     stg_elem=mock_stg)
        # discover_vscsi_disk didn't get called
        self.assertEqual(0, mock_discover.call_count)
        tf.revert('result', 'failures')
        # disconnect_disk_from_mgmt got called
        disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
                                                              'stg_name')
        # ...but remove_block_dev did not.
        self.assertEqual(0, mock_rm.call_count)

        # Bad path - connect raises
        reset_mocks()
        disk_dvr.connect_instance_disk_to_mgmt.side_effect = (
            exception.InstanceDiskMappingFailed(instance_name='inst_name'))
        tf = tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        self.assertRaises(exception.InstanceDiskMappingFailed, tf.execute)
        disk_dvr.connect_instance_disk_to_mgmt.assert_called_with(
            mock_instance)
        self.assertEqual(0, mock_find.call_count)
        self.assertEqual(0, mock_discover.call_count)
        # revert shouldn't call disconnect or remove
        tf.revert('result', 'failures')
        self.assertEqual(0, disk_dvr.disconnect_disk_from_mgmt.call_count)
        self.assertEqual(0, mock_rm.call_count)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.InstanceDiskToMgmt(disk_dvr, mock_instance)
        tf.assert_called_once_with(
            name='instance_disk_to_mgmt',
            provides=['stg_elem', 'vios_wrap', 'disk_path'])

    @mock.patch('nova.virt.powervm.mgmt.remove_block_dev', autospec=True)
    def test_remove_instance_disk_from_mgmt(self, mock_rm):
        disk_dvr = mock.MagicMock()
        mock_instance = mock.Mock()
        mock_instance.name = 'instance_name'
        mock_stg = mock.Mock()
        mock_stg.name = 'stg_name'
        mock_vwrap = mock.Mock()
        mock_vwrap.name = 'vios_name'
        mock_vwrap.uuid = 'vios_uuid'

        tf = tf_stg.RemoveInstanceDiskFromMgmt(disk_dvr, mock_instance)
        self.assertEqual('remove_inst_disk_from_mgmt', tf.name)

        # Boot disk not mapped to mgmt partition
        tf.execute(None, mock_vwrap, '/dev/disk')
        self.assertEqual(disk_dvr.disconnect_disk_from_mgmt.call_count, 0)
        self.assertEqual(mock_rm.call_count, 0)

        # Boot disk mapped to mgmt partition
        tf.execute(mock_stg, mock_vwrap, '/dev/disk')
        disk_dvr.disconnect_disk_from_mgmt.assert_called_with('vios_uuid',
                                                              'stg_name')
        mock_rm.assert_called_with('/dev/disk')

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.RemoveInstanceDiskFromMgmt(disk_dvr, mock_instance)
        tf.assert_called_once_with(
            name='remove_inst_disk_from_mgmt',
            requires=['stg_elem', 'vios_wrap', 'disk_path'])

    def test_attach_volume(self):
        vol_dvr = mock.Mock(connection_info={'data': {'volume_id': '1'}})

        task = tf_stg.AttachVolume(vol_dvr)
        task.execute()
        vol_dvr.attach_volume.assert_called_once_with()

        task.revert('result', 'flow failures')
        vol_dvr.reset_stg_ftsk.assert_called_once_with()
        vol_dvr.detach_volume.assert_called_once_with()

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.AttachVolume(vol_dvr)
        tf.assert_called_once_with(name='attach_vol_1')

    def test_detach_volume(self):
        vol_dvr = mock.Mock(connection_info={'data': {'volume_id': '1'}})

        task = tf_stg.DetachVolume(vol_dvr)
        task.execute()
        vol_dvr.detach_volume.assert_called_once_with()

        task.revert('result', 'flow failures')
        vol_dvr.reset_stg_ftsk.assert_called_once_with()
        vol_dvr.detach_volume.assert_called_once_with()

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_stg.DetachVolume(vol_dvr)
        tf.assert_called_once_with(name='detach_vol_1')
@ -1,134 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from taskflow import engines as tf_eng
from taskflow.patterns import linear_flow as tf_lf
from taskflow import task as tf_tsk
from unittest import mock

from nova import exception
from nova import test
from nova.virt.powervm.tasks import vm as tf_vm


class TestVMTasks(test.NoDBTestCase):
    def setUp(self):
        super(TestVMTasks, self).setUp()
        self.apt = mock.Mock()
        self.instance = mock.Mock()

    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper', autospec=True)
    def test_get(self, mock_get_wrap):
        get = tf_vm.Get(self.apt, self.instance)
        get.execute()
        mock_get_wrap.assert_called_once_with(self.apt, self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.Get(self.apt, self.instance)
        tf.assert_called_once_with(name='get_vm', provides='lpar_wrap')

    @mock.patch('pypowervm.tasks.storage.add_lpar_storage_scrub_tasks',
                autospec=True)
    @mock.patch('nova.virt.powervm.vm.create_lpar')
    def test_create(self, mock_vm_crt, mock_stg):
        lpar_entry = mock.Mock()

        # Test create with normal (non-recreate) ftsk
        crt = tf_vm.Create(self.apt, 'host_wrapper', self.instance, 'ftsk')
        mock_vm_crt.return_value = lpar_entry
        crt.execute()

        mock_vm_crt.assert_called_once_with(self.apt, 'host_wrapper',
                                            self.instance)

        mock_stg.assert_called_once_with(
            [lpar_entry.id], 'ftsk', lpars_exist=True)
        mock_stg.assert_called_once_with([mock_vm_crt.return_value.id], 'ftsk',
                                         lpars_exist=True)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.Create(self.apt, 'host_wrapper', self.instance, 'ftsk')
        tf.assert_called_once_with(name='crt_vm', provides='lpar_wrap')

    @mock.patch('nova.virt.powervm.vm.power_on')
    def test_power_on(self, mock_pwron):
        pwron = tf_vm.PowerOn(self.apt, self.instance)
        pwron.execute()
        mock_pwron.assert_called_once_with(self.apt, self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.PowerOn(self.apt, self.instance)
        tf.assert_called_once_with(name='pwr_vm')

    @mock.patch('nova.virt.powervm.vm.power_on')
    @mock.patch('nova.virt.powervm.vm.power_off')
    def test_power_on_revert(self, mock_pwroff, mock_pwron):
        flow = tf_lf.Flow('revert_power_on')
        pwron = tf_vm.PowerOn(self.apt, self.instance)
        flow.add(pwron)

        # Dummy Task that fails, triggering flow revert
        def failure(*a, **k):
            raise ValueError()
        flow.add(tf_tsk.FunctorTask(failure))

        # When PowerOn.execute doesn't fail, revert calls power_off
        self.assertRaises(ValueError, tf_eng.run, flow)
        mock_pwron.assert_called_once_with(self.apt, self.instance)
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=True)

        mock_pwron.reset_mock()
        mock_pwroff.reset_mock()

        # When PowerOn.execute fails, revert doesn't call power_off
        mock_pwron.side_effect = exception.NovaException()
        self.assertRaises(exception.NovaException, tf_eng.run, flow)
        mock_pwron.assert_called_once_with(self.apt, self.instance)
        mock_pwroff.assert_not_called()

    @mock.patch('nova.virt.powervm.vm.power_off')
    def test_power_off(self, mock_pwroff):
        # Default force_immediate
        pwroff = tf_vm.PowerOff(self.apt, self.instance)
        pwroff.execute()
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=False)

        mock_pwroff.reset_mock()

        # Explicit force_immediate
        pwroff = tf_vm.PowerOff(self.apt, self.instance, force_immediate=True)
        pwroff.execute()
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=True)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.PowerOff(self.apt, self.instance)
        tf.assert_called_once_with(name='pwr_off_vm')

    @mock.patch('nova.virt.powervm.vm.delete_lpar')
    def test_delete(self, mock_dlt):
        delete = tf_vm.Delete(self.apt, self.instance)
        delete.execute()
        mock_dlt.assert_called_once_with(self.apt, self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.Delete(self.apt, self.instance)
        tf.assert_called_once_with(name='dlt_vm')
@ -1,649 +0,0 @@
# Copyright 2016, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
from unittest import mock

import fixtures
from oslo_serialization import jsonutils
from oslo_utils.fixture import uuidsentinel as uuids
from pypowervm import const as pvm_const
from pypowervm import exceptions as pvm_exc
from pypowervm.helpers import log_helper as pvm_hlp_log
from pypowervm.helpers import vios_busy as pvm_hlp_vbusy
from pypowervm.utils import transaction as pvm_tx
from pypowervm.wrappers import virtual_io_server as pvm_vios

from nova import block_device as nova_block_device
from nova.compute import provider_tree
from nova import conf as cfg
from nova import exception
from nova.objects import block_device as bdmobj
from nova import test
from nova.tests.unit.virt import powervm
from nova.virt import block_device as nova_virt_bdm
from nova.virt import driver as nova_driver
from nova.virt.driver import ComputeDriver
from nova.virt import hardware
from nova.virt.powervm.disk import ssp
from nova.virt.powervm import driver

CONF = cfg.CONF


class TestPowerVMDriver(test.NoDBTestCase):

    def setUp(self):
        super(TestPowerVMDriver, self).setUp()
        self.drv = driver.PowerVMDriver('virtapi')
        self.adp = self.useFixture(fixtures.MockPatch(
            'pypowervm.adapter.Adapter', autospec=True)).mock
        self.drv.adapter = self.adp
        self.sess = self.useFixture(fixtures.MockPatch(
            'pypowervm.adapter.Session', autospec=True)).mock

        self.pwron = self.useFixture(fixtures.MockPatch(
            'nova.virt.powervm.vm.power_on')).mock
        self.pwroff = self.useFixture(fixtures.MockPatch(
            'nova.virt.powervm.vm.power_off')).mock

        # Create an instance to test with
        self.inst = powervm.TEST_INSTANCE

    def test_driver_capabilities(self):
        """Test the driver capabilities."""
        # check that the driver reports all capabilities
        self.assertEqual(set(ComputeDriver.capabilities),
                         set(self.drv.capabilities))
        # check the values for each capability
        self.assertFalse(self.drv.capabilities['has_imagecache'])
        self.assertFalse(self.drv.capabilities['supports_evacuate'])
        self.assertFalse(
            self.drv.capabilities['supports_migrate_to_same_host'])
        self.assertTrue(self.drv.capabilities['supports_attach_interface'])
        self.assertFalse(self.drv.capabilities['supports_device_tagging'])
        self.assertFalse(
            self.drv.capabilities['supports_tagged_attach_interface'])
        self.assertFalse(
            self.drv.capabilities['supports_tagged_attach_volume'])
        self.assertTrue(self.drv.capabilities['supports_extend_volume'])
        self.assertFalse(self.drv.capabilities['supports_multiattach'])

    @mock.patch('nova.image.glance.API')
    @mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True)
    @mock.patch('oslo_utils.importutils.import_object_ns', autospec=True)
    @mock.patch('pypowervm.wrappers.managed_system.System', autospec=True)
    @mock.patch('pypowervm.tasks.partition.validate_vios_ready', autospec=True)
    def test_init_host(self, mock_vvr, mock_sys, mock_import, mock_scrub,
                       mock_img):
        mock_hostw = mock.Mock(uuid='uuid')
        mock_sys.get.return_value = [mock_hostw]
        self.drv.init_host('host')
        self.sess.assert_called_once_with(conn_tries=60)
        self.adp.assert_called_once_with(
            self.sess.return_value, helpers=[
                pvm_hlp_log.log_helper, pvm_hlp_vbusy.vios_busy_retry_helper])
        mock_vvr.assert_called_once_with(self.drv.adapter)
        mock_sys.get.assert_called_once_with(self.drv.adapter)
        self.assertEqual(mock_hostw, self.drv.host_wrapper)
        mock_scrub.assert_called_once_with(self.drv.adapter)
        mock_scrub.return_value.execute.assert_called_once_with()
        mock_import.assert_called_once_with(
            'nova.virt.powervm.disk', 'localdisk.LocalStorage',
            self.drv.adapter, 'uuid')
        self.assertEqual(mock_import.return_value, self.drv.disk_dvr)
        mock_img.assert_called_once_with()
        self.assertEqual(mock_img.return_value, self.drv.image_api)

    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('nova.virt.powervm.vm.get_vm_qp')
    @mock.patch('nova.virt.powervm.vm._translate_vm_state')
    def test_get_info(self, mock_tx_state, mock_qp, mock_uuid):
        mock_tx_state.return_value = 'fake-state'
        self.assertEqual(hardware.InstanceInfo('fake-state'),
                         self.drv.get_info('inst'))
        mock_uuid.assert_called_once_with('inst')
        mock_qp.assert_called_once_with(
            self.drv.adapter, mock_uuid.return_value, 'PartitionState')
        mock_tx_state.assert_called_once_with(mock_qp.return_value)

    @mock.patch('nova.virt.powervm.vm.get_lpar_names')
    def test_list_instances(self, mock_names):
        mock_names.return_value = ['one', 'two', 'three']
        self.assertEqual(['one', 'two', 'three'], self.drv.list_instances())
        mock_names.assert_called_once_with(self.adp)

    def test_get_available_nodes(self):
        self.flags(host='hostname')
        self.assertEqual(['hostname'], self.drv.get_available_nodes('node'))

    @mock.patch('pypowervm.wrappers.managed_system.System', autospec=True)
    @mock.patch('nova.virt.powervm.host.build_host_resource_from_ms')
    def test_get_available_resource(self, mock_bhrfm, mock_sys):
        mock_sys.get.return_value = ['sys']
        mock_bhrfm.return_value = {'foo': 'bar'}
        self.drv.disk_dvr = mock.create_autospec(ssp.SSPDiskAdapter,
                                                 instance=True)
        self.assertEqual(
            {'foo': 'bar', 'local_gb': self.drv.disk_dvr.capacity,
             'local_gb_used': self.drv.disk_dvr.capacity_used},
            self.drv.get_available_resource('node'))
        mock_sys.get.assert_called_once_with(self.adp)
        mock_bhrfm.assert_called_once_with('sys')
        self.assertEqual('sys', self.drv.host_wrapper)

    @contextlib.contextmanager
    def _update_provider_tree(self, allocations=None):
        """Host resource dict gets converted properly to provider tree inv."""

        with mock.patch('nova.virt.powervm.host.'
                        'build_host_resource_from_ms') as mock_bhrfm:
            mock_bhrfm.return_value = {
                'vcpus': 8,
                'memory_mb': 2048,
            }
            self.drv.host_wrapper = 'host_wrapper'
            # Validate that this gets converted to int with floor
            self.drv.disk_dvr = mock.Mock(capacity=2091.8)
            exp_inv = {
                'VCPU': {
                    'total': 8,
                    'max_unit': 8,
                    'allocation_ratio': 16.0,
                    'reserved': 0,
                },
                'MEMORY_MB': {
                    'total': 2048,
                    'max_unit': 2048,
                    'allocation_ratio': 1.5,
                    'reserved': 512,
                },
                'DISK_GB': {
                    'total': 2091,
                    'max_unit': 2091,
                    'allocation_ratio': 1.0,
                    'reserved': 0,
                },
            }
            ptree = provider_tree.ProviderTree()
            ptree.new_root('compute_host', uuids.cn)
            # Let the caller muck with these
            yield ptree, exp_inv
            self.drv.update_provider_tree(ptree, 'compute_host',
                                          allocations=allocations)
            self.assertEqual(exp_inv, ptree.data('compute_host').inventory)
            mock_bhrfm.assert_called_once_with('host_wrapper')

    def test_update_provider_tree(self):
        # Basic: no inventory already on the provider, no extra providers, no
        # aggregates or traits.
        with self._update_provider_tree():
            pass

    def test_update_provider_tree_ignore_allocations(self):
        with self._update_provider_tree(allocations="This is ignored"):
            pass

    def test_update_provider_tree_conf_overrides(self):
        # Non-default CONF values for allocation ratios and reserved.
        self.flags(cpu_allocation_ratio=12.3,
                   reserved_host_cpus=4,
                   ram_allocation_ratio=4.5,
                   reserved_host_memory_mb=32,
                   disk_allocation_ratio=6.7,
                   # This gets int(ceil)'d
                   reserved_host_disk_mb=5432.1)
        with self._update_provider_tree() as (_, exp_inv):
            exp_inv['VCPU']['allocation_ratio'] = 12.3
            exp_inv['VCPU']['reserved'] = 4
            exp_inv['MEMORY_MB']['allocation_ratio'] = 4.5
            exp_inv['MEMORY_MB']['reserved'] = 32
            exp_inv['DISK_GB']['allocation_ratio'] = 6.7
            exp_inv['DISK_GB']['reserved'] = 6

    def test_update_provider_tree_complex_ptree(self):
        # Overrides inventory already on the provider; leaves other providers
        # and aggregates/traits alone.
        with self._update_provider_tree() as (ptree, exp_inv):
            ptree.update_inventory('compute_host', {
                # these should get blown away
                'VCPU': {
                    'total': 16,
                    'max_unit': 2,
                    'allocation_ratio': 1.0,
                    'reserved': 10,
                },
                'CUSTOM_BOGUS': {
                    'total': 1234,
                }
            })
            ptree.update_aggregates('compute_host',
                                    [uuids.ss_agg, uuids.other_agg])
            ptree.update_traits('compute_host', ['CUSTOM_FOO', 'CUSTOM_BAR'])
            ptree.new_root('ssp', uuids.ssp)
            ptree.update_inventory('ssp', {'sentinel': 'inventory',
                                           'for': 'ssp'})
            ptree.update_aggregates('ssp', [uuids.ss_agg])
            ptree.new_child('sriov', 'compute_host', uuid=uuids.sriov)
            # Since CONF.cpu_allocation_ratio is not set and this is not
            # the initial upt call (so CONF.initial_cpu_allocation_ratio would
            # be used), the existing allocation ratio value from the tree is
            # used.
            exp_inv['VCPU']['allocation_ratio'] = 1.0

        # Make sure the compute's agg and traits were left alone
        cndata = ptree.data('compute_host')
        self.assertEqual(set([uuids.ss_agg, uuids.other_agg]),
                         cndata.aggregates)
        self.assertEqual(set(['CUSTOM_FOO', 'CUSTOM_BAR']), cndata.traits)
        # And the other providers were left alone
        self.assertEqual(set([uuids.cn, uuids.ssp, uuids.sriov]),
                         set(ptree.get_provider_uuids()))
        # ...including the ssp's aggregates
        self.assertEqual(set([uuids.ss_agg]), ptree.data('ssp').aggregates)

    @mock.patch('nova.virt.powervm.tasks.storage.AttachVolume.execute')
    @mock.patch('nova.virt.powervm.tasks.network.PlugMgmtVif.execute')
    @mock.patch('nova.virt.powervm.tasks.network.PlugVifs.execute')
    @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM')
    @mock.patch('nova.virt.configdrive.required_by')
    @mock.patch('nova.virt.powervm.vm.create_lpar')
    @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task',
                autospec=True)
    @mock.patch('pypowervm.tasks.storage.add_lpar_storage_scrub_tasks',
                autospec=True)
    def test_spawn_ops(self, mock_scrub, mock_bldftsk, mock_crt_lpar,
                       mock_cdrb, mock_cfg_drv, mock_plug_vifs,
                       mock_plug_mgmt_vif, mock_attach_vol):
        """Validates the 'typical' spawn flow of the spawn of an instance."""
        mock_cdrb.return_value = True
        self.drv.host_wrapper = mock.Mock()
        self.drv.disk_dvr = mock.create_autospec(ssp.SSPDiskAdapter,
                                                 instance=True)
        mock_ftsk = pvm_tx.FeedTask('fake', [mock.Mock(spec=pvm_vios.VIOS)])
        mock_bldftsk.return_value = mock_ftsk
        block_device_info = self._fake_bdms()
        self.drv.spawn('context', self.inst, 'img_meta', 'files', 'password',
                       'allocs', network_info='netinfo',
                       block_device_info=block_device_info)
        mock_crt_lpar.assert_called_once_with(
            self.adp, self.drv.host_wrapper, self.inst)
        mock_bldftsk.assert_called_once_with(
            self.adp, xag={pvm_const.XAG.VIO_SMAP, pvm_const.XAG.VIO_FMAP})
        self.assertTrue(mock_plug_vifs.called)
        self.assertTrue(mock_plug_mgmt_vif.called)
        mock_scrub.assert_called_once_with(
            [mock_crt_lpar.return_value.id], mock_ftsk, lpars_exist=True)
        self.drv.disk_dvr.create_disk_from_image.assert_called_once_with(
            'context', self.inst, 'img_meta')
        self.drv.disk_dvr.attach_disk.assert_called_once_with(
            self.inst, self.drv.disk_dvr.create_disk_from_image.return_value,
            mock_ftsk)
        self.assertEqual(2, mock_attach_vol.call_count)
        mock_cfg_drv.assert_called_once_with(self.adp)
        mock_cfg_drv.return_value.create_cfg_drv_vopt.assert_called_once_with(
            self.inst, 'files', 'netinfo', mock_ftsk, admin_pass='password',
            mgmt_cna=mock.ANY)
        self.pwron.assert_called_once_with(self.adp, self.inst)

        mock_cfg_drv.reset_mock()
        mock_attach_vol.reset_mock()

        # No config drive, no bdms
        mock_cdrb.return_value = False
        self.drv.spawn('context', self.inst, 'img_meta', 'files', 'password',
                       'allocs')
        mock_cfg_drv.assert_not_called()
        mock_attach_vol.assert_not_called()

    @mock.patch('nova.virt.powervm.tasks.storage.DetachVolume.execute')
    @mock.patch('nova.virt.powervm.tasks.network.UnplugVifs.execute')
    @mock.patch('nova.virt.powervm.vm.delete_lpar')
    @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM')
    @mock.patch('nova.virt.configdrive.required_by')
    @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task',
                autospec=True)
    def test_destroy(self, mock_bldftsk, mock_cdrb, mock_cfgdrv,
                     mock_dlt_lpar, mock_unplug, mock_detach_vol):
        """Validates PowerVM destroy."""
        self.drv.host_wrapper = mock.Mock()
        self.drv.disk_dvr = mock.create_autospec(ssp.SSPDiskAdapter,
                                                 instance=True)

        mock_ftsk = pvm_tx.FeedTask('fake', [mock.Mock(spec=pvm_vios.VIOS)])
        mock_bldftsk.return_value = mock_ftsk
        block_device_info = self._fake_bdms()

        # Good path, with config drive, destroy disks
        mock_cdrb.return_value = True
        self.drv.destroy('context', self.inst, [],
                         block_device_info=block_device_info)
        self.pwroff.assert_called_once_with(
            self.adp, self.inst, force_immediate=True)
        mock_bldftsk.assert_called_once_with(
            self.adp, xag=[pvm_const.XAG.VIO_SMAP])
        mock_unplug.assert_called_once()
        mock_cdrb.assert_called_once_with(self.inst)
        mock_cfgdrv.assert_called_once_with(self.adp)
        mock_cfgdrv.return_value.dlt_vopt.assert_called_once_with(
            self.inst, stg_ftsk=mock_bldftsk.return_value)
        self.assertEqual(2, mock_detach_vol.call_count)
        self.drv.disk_dvr.detach_disk.assert_called_once_with(
            self.inst)
        self.drv.disk_dvr.delete_disks.assert_called_once_with(
            self.drv.disk_dvr.detach_disk.return_value)
        mock_dlt_lpar.assert_called_once_with(self.adp, self.inst)

        self.pwroff.reset_mock()
        mock_bldftsk.reset_mock()
        mock_unplug.reset_mock()
        mock_cdrb.reset_mock()
        mock_cfgdrv.reset_mock()
        self.drv.disk_dvr.detach_disk.reset_mock()
        self.drv.disk_dvr.delete_disks.reset_mock()
        mock_detach_vol.reset_mock()
        mock_dlt_lpar.reset_mock()

        # No config drive, preserve disks, no block device info
        mock_cdrb.return_value = False
        self.drv.destroy('context', self.inst, [], block_device_info={},
                         destroy_disks=False)
        mock_cfgdrv.return_value.dlt_vopt.assert_not_called()
        mock_detach_vol.assert_not_called()
        self.drv.disk_dvr.delete_disks.assert_not_called()

        # Non-forced power_off, since preserving disks
        self.pwroff.assert_called_once_with(
            self.adp, self.inst, force_immediate=False)
        mock_bldftsk.assert_called_once_with(
            self.adp, xag=[pvm_const.XAG.VIO_SMAP])
        mock_unplug.assert_called_once()
        mock_cdrb.assert_called_once_with(self.inst)
        mock_cfgdrv.assert_not_called()
        mock_cfgdrv.return_value.dlt_vopt.assert_not_called()
        self.drv.disk_dvr.detach_disk.assert_called_once_with(
            self.inst)
        self.drv.disk_dvr.delete_disks.assert_not_called()
        mock_dlt_lpar.assert_called_once_with(self.adp, self.inst)

        self.pwroff.reset_mock()
        mock_bldftsk.reset_mock()
        mock_unplug.reset_mock()
        mock_cdrb.reset_mock()
        mock_cfgdrv.reset_mock()
        self.drv.disk_dvr.detach_disk.reset_mock()
        self.drv.disk_dvr.delete_disks.reset_mock()
        mock_dlt_lpar.reset_mock()

        # InstanceNotFound exception, non-forced
        self.pwroff.side_effect = exception.InstanceNotFound(
            instance_id='something')
        self.drv.destroy('context', self.inst, [], block_device_info={},
                         destroy_disks=False)
        self.pwroff.assert_called_once_with(
            self.adp, self.inst, force_immediate=False)
        self.drv.disk_dvr.detach_disk.assert_not_called()
        mock_unplug.assert_not_called()
        self.drv.disk_dvr.delete_disks.assert_not_called()
        mock_dlt_lpar.assert_not_called()

        self.pwroff.reset_mock()
        self.pwroff.side_effect = None
        mock_unplug.reset_mock()

        # Convertible (PowerVM) exception
        mock_dlt_lpar.side_effect = pvm_exc.TimeoutError("Timed out")
        self.assertRaises(exception.InstanceTerminationFailure,
                          self.drv.destroy, 'context', self.inst, [],
                          block_device_info={})

        # Everything got called
        self.pwroff.assert_called_once_with(
            self.adp, self.inst, force_immediate=True)
        mock_unplug.assert_called_once()
        self.drv.disk_dvr.detach_disk.assert_called_once_with(self.inst)
        self.drv.disk_dvr.delete_disks.assert_called_once_with(
            self.drv.disk_dvr.detach_disk.return_value)
        mock_dlt_lpar.assert_called_once_with(self.adp, self.inst)

        # Other random exception raises directly
        mock_dlt_lpar.side_effect = ValueError()
        self.assertRaises(ValueError,
                          self.drv.destroy, 'context', self.inst, [],
                          block_device_info={})

    @mock.patch('nova.virt.powervm.tasks.image.UpdateTaskState.'
                'execute', autospec=True)
    @mock.patch('nova.virt.powervm.tasks.storage.InstanceDiskToMgmt.'
                'execute', autospec=True)
    @mock.patch('nova.virt.powervm.tasks.image.StreamToGlance.execute')
    @mock.patch('nova.virt.powervm.tasks.storage.RemoveInstanceDiskFromMgmt.'
|
|
||||||
'execute')
|
|
||||||
def test_snapshot(self, mock_rm, mock_stream, mock_conn, mock_update):
|
|
||||||
self.drv.disk_dvr = mock.Mock()
|
|
||||||
self.drv.image_api = mock.Mock()
|
|
||||||
mock_conn.return_value = 'stg_elem', 'vios_wrap', 'disk_path'
|
|
||||||
self.drv.snapshot('context', self.inst, 'image_id',
|
|
||||||
'update_task_state')
|
|
||||||
self.assertEqual(2, mock_update.call_count)
|
|
||||||
self.assertEqual(1, mock_conn.call_count)
|
|
||||||
mock_stream.assert_called_once_with(disk_path='disk_path')
|
|
||||||
mock_rm.assert_called_once_with(
|
|
||||||
stg_elem='stg_elem', vios_wrap='vios_wrap', disk_path='disk_path')
|
|
||||||
|
|
||||||
self.drv.disk_dvr.capabilities = {'snapshot': False}
|
|
||||||
self.assertRaises(exception.NotSupportedWithOption, self.drv.snapshot,
|
|
||||||
'context', self.inst, 'image_id', 'update_task_state')
|
|
||||||
|
|
||||||
def test_power_on(self):
|
|
||||||
self.drv.power_on('context', self.inst, 'network_info')
|
|
||||||
self.pwron.assert_called_once_with(self.adp, self.inst)
|
|
||||||
|
|
||||||
def test_power_off(self):
|
|
||||||
self.drv.power_off(self.inst)
|
|
||||||
self.pwroff.assert_called_once_with(
|
|
||||||
self.adp, self.inst, force_immediate=True, timeout=None)
|
|
||||||
|
|
||||||
def test_power_off_timeout(self):
|
|
||||||
# Long timeout (retry interval means nothing on powervm)
|
|
||||||
self.drv.power_off(self.inst, timeout=500, retry_interval=10)
|
|
||||||
self.pwroff.assert_called_once_with(
|
|
||||||
self.adp, self.inst, force_immediate=False, timeout=500)
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.vm.reboot')
|
|
||||||
def test_reboot_soft(self, mock_reboot):
|
|
||||||
inst = mock.Mock()
|
|
||||||
self.drv.reboot('context', inst, 'network_info', 'SOFT')
|
|
||||||
mock_reboot.assert_called_once_with(self.adp, inst, False)
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.vm.reboot')
|
|
||||||
def test_reboot_hard(self, mock_reboot):
|
|
||||||
inst = mock.Mock()
|
|
||||||
self.drv.reboot('context', inst, 'network_info', 'HARD')
|
|
||||||
mock_reboot.assert_called_once_with(self.adp, inst, True)
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.driver.PowerVMDriver.plug_vifs')
|
|
||||||
def test_attach_interface(self, mock_plug_vifs):
|
|
||||||
self.drv.attach_interface('context', 'inst', 'image_meta', 'vif')
|
|
||||||
mock_plug_vifs.assert_called_once_with('inst', ['vif'])
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.driver.PowerVMDriver.unplug_vifs')
|
|
||||||
def test_detach_interface(self, mock_unplug_vifs):
|
|
||||||
self.drv.detach_interface('context', 'inst', 'vif')
|
|
||||||
mock_unplug_vifs.assert_called_once_with('inst', ['vif'])
|
|
||||||
|
|
||||||
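    # plug_vifs/unplug_vifs below are thin TaskFlow wrappers: build a linear
    # flow, add the tasks, and hand the flow to the shared runner. A minimal
    # sketch of the pattern under test (aliases illustrative, not the
    # driver's actual import names):
    #
    #     from taskflow.patterns import linear_flow as lf
    #     flow = lf.Flow('plug_vifs')
    #     flow.add(tasks_vm.Get(adapter, instance))
    #     flow.add(tasks_net.PlugVifs(virtapi, adapter, instance, net_info))
    #     tasks_base.run(flow, instance=instance)
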
    @mock.patch('nova.virt.powervm.tasks.vm.Get', autospec=True)
    @mock.patch('nova.virt.powervm.tasks.base.run', autospec=True)
    @mock.patch('nova.virt.powervm.tasks.network.PlugVifs', autospec=True)
    @mock.patch('taskflow.patterns.linear_flow.Flow', autospec=True)
    def test_plug_vifs(self, mock_tf, mock_plug_vifs, mock_tf_run, mock_get):
        # Successful plug
        mock_inst = mock.Mock()
        self.drv.plug_vifs(mock_inst, 'net_info')
        mock_get.assert_called_once_with(self.adp, mock_inst)
        mock_plug_vifs.assert_called_once_with(
            self.drv.virtapi, self.adp, mock_inst, 'net_info')
        add_calls = [mock.call(mock_get.return_value),
                     mock.call(mock_plug_vifs.return_value)]
        mock_tf.return_value.add.assert_has_calls(add_calls)
        mock_tf_run.assert_called_once_with(
            mock_tf.return_value, instance=mock_inst)

        # InstanceNotFound and generic exception both raise
        mock_tf_run.side_effect = exception.InstanceNotFound('id')
        exc = self.assertRaises(exception.VirtualInterfacePlugException,
                                self.drv.plug_vifs, mock_inst, 'net_info')
        self.assertIn('instance', str(exc))
        mock_tf_run.side_effect = Exception
        exc = self.assertRaises(exception.VirtualInterfacePlugException,
                                self.drv.plug_vifs, mock_inst, 'net_info')
        self.assertIn('unexpected', str(exc))

    @mock.patch('nova.virt.powervm.tasks.base.run', autospec=True)
    @mock.patch('nova.virt.powervm.tasks.network.UnplugVifs', autospec=True)
    @mock.patch('taskflow.patterns.linear_flow.Flow', autospec=True)
    def test_unplug_vifs(self, mock_tf, mock_unplug_vifs, mock_tf_run):
        # Successful unplug
        mock_inst = mock.Mock()
        self.drv.unplug_vifs(mock_inst, 'net_info')
        mock_unplug_vifs.assert_called_once_with(self.adp, mock_inst,
                                                 'net_info')
        mock_tf.return_value.add.assert_called_once_with(
            mock_unplug_vifs.return_value)
        mock_tf_run.assert_called_once_with(mock_tf.return_value,
                                            instance=mock_inst)

        # InstanceNotFound should pass
        mock_tf_run.side_effect = exception.InstanceNotFound(instance_id='1')
        self.drv.unplug_vifs(mock_inst, 'net_info')

        # Raise InterfaceDetachFailed otherwise
        mock_tf_run.side_effect = Exception
        self.assertRaises(exception.InterfaceDetachFailed,
                          self.drv.unplug_vifs, mock_inst, 'net_info')

    @mock.patch('pypowervm.tasks.vterm.open_remotable_vnc_vterm',
                autospec=True)
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid',
                new=mock.Mock(return_value='uuid'))
    def test_get_vnc_console(self, mock_vterm):
        # Success
        mock_vterm.return_value = '10'
        resp = self.drv.get_vnc_console(mock.ANY, self.inst)
        self.assertEqual('127.0.0.1', resp.host)
        self.assertEqual('10', resp.port)
        self.assertEqual('uuid', resp.internal_access_path)
        mock_vterm.assert_called_once_with(
            mock.ANY, 'uuid', mock.ANY, vnc_path='uuid')

        # VNC failure - exception is raised directly
        mock_vterm.side_effect = pvm_exc.VNCBasedTerminalFailedToOpen(err='xx')
        self.assertRaises(pvm_exc.VNCBasedTerminalFailedToOpen,
                          self.drv.get_vnc_console, mock.ANY, self.inst)

        # 404
        mock_vterm.side_effect = pvm_exc.HttpError(mock.Mock(status=404))
        self.assertRaises(exception.InstanceNotFound, self.drv.get_vnc_console,
                          mock.ANY, self.inst)

    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter')
    def test_attach_volume(self, mock_vscsi_adpt):
        """Validates the basic PowerVM attach volume."""
        # BDMs
        mock_bdm = self._fake_bdms()['block_device_mapping'][0]

        with mock.patch.object(self.inst, 'save') as mock_save:
            # Invoke the method.
            self.drv.attach_volume('context', mock_bdm.get('connection_info'),
                                   self.inst, mock.sentinel.stg_ftsk)

        # Verify the connect volume was invoked
        mock_vscsi_adpt.return_value.attach_volume.assert_called_once_with()
        mock_save.assert_called_once_with()

    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter')
    def test_detach_volume(self, mock_vscsi_adpt):
        """Validates the basic PowerVM detach volume."""
        # BDMs
        mock_bdm = self._fake_bdms()['block_device_mapping'][0]

        # Invoke the method, good path test.
        self.drv.detach_volume('context', mock_bdm.get('connection_info'),
                               self.inst, mock.sentinel.stg_ftsk)
        # Verify the disconnect volume was invoked
        mock_vscsi_adpt.return_value.detach_volume.assert_called_once_with()

    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter')
    def test_extend_volume(self, mock_vscsi_adpt):
        mock_bdm = self._fake_bdms()['block_device_mapping'][0]
        self.drv.extend_volume(
            'context', mock_bdm.get('connection_info'), self.inst, 0)
        mock_vscsi_adpt.return_value.extend_volume.assert_called_once_with()

    def test_vol_drv_iter(self):
        block_device_info = self._fake_bdms()
        bdms = nova_driver.block_device_info_get_mapping(block_device_info)
        vol_adpt = mock.Mock()

        def _get_results(bdms):
            # Patch so we get the same mock back each time.
            with mock.patch('nova.virt.powervm.volume.fcvscsi.'
                            'FCVscsiVolumeAdapter', return_value=vol_adpt):
                return [
                    (bdm, vol_drv) for bdm, vol_drv in self.drv._vol_drv_iter(
                        'context', self.inst, bdms)]

        results = _get_results(bdms)
        self.assertEqual(
            'fake_vol1',
            results[0][0]['connection_info']['data']['volume_id'])
        self.assertEqual(vol_adpt, results[0][1])
        self.assertEqual(
            'fake_vol2',
            results[1][0]['connection_info']['data']['volume_id'])
        self.assertEqual(vol_adpt, results[1][1])

        # Test with empty bdms
        self.assertEqual([], _get_results([]))

    @staticmethod
    def _fake_bdms():
        def _fake_bdm(volume_id, target_lun):
            connection_info = {'driver_volume_type': 'fibre_channel',
                               'data': {'volume_id': volume_id,
                                        'target_lun': target_lun,
                                        'initiator_target_map':
                                            {'21000024F5': ['50050768']}}}
            mapping_dict = {'source_type': 'volume', 'volume_id': volume_id,
                            'destination_type': 'volume',
                            'connection_info':
                                jsonutils.dumps(connection_info),
                            }
            bdm_dict = nova_block_device.BlockDeviceDict(mapping_dict)
            bdm_obj = bdmobj.BlockDeviceMapping(**bdm_dict)

            return nova_virt_bdm.DriverVolumeBlockDevice(bdm_obj)

        bdm_list = [_fake_bdm('fake_vol1', 0), _fake_bdm('fake_vol2', 1)]
        block_device_info = {'block_device_mapping': bdm_list}

        return block_device_info

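    # The connector dict below is what Nova hands to the volume service
    # during attach: FC WWPNs plus the host name, with multipath disabled
    # and no iSCSI initiator (this driver attaches over FC vSCSI).
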
    @mock.patch('nova.virt.powervm.volume.fcvscsi.wwpns', autospec=True)
    def test_get_volume_connector(self, mock_wwpns):
        vol_connector = self.drv.get_volume_connector(mock.Mock())
        self.assertEqual(mock_wwpns.return_value, vol_connector['wwpns'])
        self.assertFalse(vol_connector['multipath'])
        self.assertEqual(vol_connector['host'], CONF.host)
        self.assertIsNone(vol_connector['initiator'])

@@ -1,62 +0,0 @@
# Copyright 2016 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#

from pypowervm.wrappers import managed_system as pvm_ms
from unittest import mock

from nova import test
from nova.virt.powervm import host as pvm_host


class TestPowerVMHost(test.NoDBTestCase):
    def test_host_resources(self):
        # Create objects to test with
        ms_wrapper = mock.create_autospec(pvm_ms.System, spec_set=True)
        asio = mock.create_autospec(pvm_ms.ASIOConfig, spec_set=True)
        ms_wrapper.configure_mock(
            proc_units_configurable=500,
            proc_units_avail=500,
            memory_configurable=5242880,
            memory_free=5242752,
            memory_region_size='big',
            asio_config=asio)
        self.flags(host='the_hostname')

        # Run the actual test
        stats = pvm_host.build_host_resource_from_ms(ms_wrapper)
        self.assertIsNotNone(stats)

        # Check for the presence of fields
        fields = (('vcpus', 500), ('vcpus_used', 0),
                  ('memory_mb', 5242880), ('memory_mb_used', 128),
                  'hypervisor_type', 'hypervisor_version',
                  ('hypervisor_hostname', 'the_hostname'), 'cpu_info',
                  'supported_instances', 'stats')
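        # (name, expected) tuples are checked for exact values; bare names
        # only for presence. memory_mb_used is derived: 5242880 configurable
        # minus 5242752 free leaves 128.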
        for fld in fields:
            if isinstance(fld, tuple):
                value = stats.get(fld[0], None)
                self.assertEqual(value, fld[1])
            else:
                value = stats.get(fld, None)
                self.assertIsNotNone(value)
        # Check for individual stats
        hstats = (('proc_units', '500.00'), ('proc_units_used', '0.00'))
        for stat in hstats:
            if isinstance(stat, tuple):
                value = stats['stats'].get(stat[0], None)
                self.assertEqual(value, stat[1])
            else:
                value = stats['stats'].get(stat, None)
                self.assertIsNotNone(value)

@@ -1,55 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

from nova import test
from nova.virt.powervm import image


class TestImage(test.TestCase):

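    # The streaming path under test: temporarily chown the raw block device
    # to the nova user, open it 'rb', and pass the open handle straight to
    # the Glance update call so the image bytes are streamed, not buffered.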
    @mock.patch('nova.utils.temporary_chown', autospec=True)
    @mock.patch('nova.image.glance.API', autospec=True)
    def test_stream_blockdev_to_glance(self, mock_api, mock_chown):
        mock_open = mock.mock_open()
        with mock.patch('builtins.open', new=mock_open):
            image.stream_blockdev_to_glance('context', mock_api, 'image_id',
                                            'metadata', '/dev/disk')
        mock_chown.assert_called_with('/dev/disk')
        mock_open.assert_called_with('/dev/disk', 'rb')
        mock_api.update.assert_called_with('context', 'image_id', 'metadata',
                                           mock_open.return_value)

    @mock.patch('nova.image.glance.API', autospec=True)
    def test_generate_snapshot_metadata(self, mock_api):
        mock_api.get.return_value = {'name': 'image_name'}
        mock_instance = mock.Mock()
        mock_instance.project_id = 'project_id'
        ret = image.generate_snapshot_metadata('context', mock_api, 'image_id',
                                               mock_instance)
        mock_api.get.assert_called_with('context', 'image_id')
        self.assertEqual({
            'name': 'image_name',
            'status': 'active',
            'disk_format': 'raw',
            'container_format': 'bare',
            'properties': {
                'image_location': 'snapshot',
                'image_state': 'available',
                'owner_id': 'project_id',
            }
        }, ret)

@@ -1,204 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import fixtures
from oslo_utils.fixture import uuidsentinel
from pypowervm import const as pvm_const
from pypowervm.tasks import scsi_mapper as tsk_map
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.utils import transaction as pvm_tx
from pypowervm.wrappers import network as pvm_net
from pypowervm.wrappers import storage as pvm_stg
from pypowervm.wrappers import virtual_io_server as pvm_vios

from nova import test
from nova.virt.powervm import media as m


class TestConfigDrivePowerVM(test.NoDBTestCase):
    """Unit Tests for the ConfigDrivePowerVM class."""

    def setUp(self):
        super(TestConfigDrivePowerVM, self).setUp()

        self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt

        self.validate_vopt = self.useFixture(fixtures.MockPatch(
            'pypowervm.tasks.vopt.validate_vopt_repo_exists',
            autospec=True)).mock
        self.validate_vopt.return_value = 'vios_uuid', 'vg_uuid'

    @mock.patch('nova.api.metadata.base.InstanceMetadata')
    @mock.patch('nova.virt.configdrive.ConfigDriveBuilder.make_drive')
    def test_crt_cfg_dr_iso(self, mock_mkdrv, mock_meta):
        """Validates that the image creation method works."""
        cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
        self.assertTrue(self.validate_vopt.called)
        mock_instance = mock.MagicMock()
        mock_instance.uuid = uuidsentinel.inst_id
        mock_files = mock.MagicMock()
        mock_net = mock.MagicMock()
        iso_path = '/tmp/cfgdrv.iso'
        cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net,
                                          iso_path)
        self.assertEqual(mock_mkdrv.call_count, 1)

        # Test retry iso create
        mock_mkdrv.reset_mock()
        mock_mkdrv.side_effect = [OSError, mock_mkdrv]
        cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net,
                                          iso_path)
        self.assertEqual(mock_mkdrv.call_count, 2)

    @mock.patch('tempfile.NamedTemporaryFile')
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping')
    @mock.patch('pypowervm.tasks.scsi_mapper.add_map')
    @mock.patch('os.path.getsize')
    @mock.patch('pypowervm.tasks.storage.upload_vopt')
    @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM.'
                '_create_cfg_dr_iso')
    def test_create_cfg_drv_vopt(self, mock_ccdi, mock_upl, mock_getsize,
                                 mock_addmap, mock_bldmap, mock_vm_id,
                                 mock_ntf):
        cfg_dr = m.ConfigDrivePowerVM(self.apt)
        mock_instance = mock.MagicMock()
        mock_instance.uuid = uuidsentinel.inst_id
        mock_upl.return_value = 'vopt', 'f_uuid'
        fh = mock_ntf.return_value.__enter__.return_value
        fh.name = 'iso_path'
        wtsk = mock.create_autospec(pvm_tx.WrapperTask, instance=True)
        ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True)
        ftsk.configure_mock(wrapper_tasks={'vios_uuid': wtsk})

        def test_afs(add_func):
            # Validate the internal add_func
            vio = mock.create_autospec(pvm_vios.VIOS)
            self.assertEqual(mock_addmap.return_value, add_func(vio))
            mock_vm_id.assert_called_once_with(mock_instance)
            mock_bldmap.assert_called_once_with(
                None, vio, mock_vm_id.return_value, 'vopt')
            mock_addmap.assert_called_once_with(vio, mock_bldmap.return_value)
        wtsk.add_functor_subtask.side_effect = test_afs

        # calculate expected file name
        expected_file_name = 'cfg_' + mock_instance.uuid.replace('-', '')
        allowed_len = pvm_const.MaxLen.VOPT_NAME - 4  # '.iso' is 4 chars
        expected_file_name = expected_file_name[:allowed_len] + '.iso'
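        # e.g. a uuid of 'aaaa-bbbb' (hypothetical value) would yield
        # 'cfg_aaaabbbb.iso', clipped so the name plus the '.iso' suffix
        # fits pypowervm's VOPT_NAME limit.
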
        cfg_dr.create_cfg_drv_vopt(
            mock_instance, 'files', 'netinfo', ftsk, admin_pass='pass')

        mock_ntf.assert_called_once_with(mode='rb')
        mock_ccdi.assert_called_once_with(mock_instance, 'files', 'netinfo',
                                          'iso_path', admin_pass='pass')
        mock_getsize.assert_called_once_with('iso_path')
        mock_upl.assert_called_once_with(self.apt, 'vios_uuid', fh,
                                         expected_file_name,
                                         mock_getsize.return_value)
        wtsk.add_functor_subtask.assert_called_once()

    def test_sanitize_network_info(self):
        network_info = [{'type': 'lbr'}, {'type': 'pvm_sea'},
                        {'type': 'ovs'}]

        cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)

        resp = cfg_dr_builder._sanitize_network_info(network_info)
        expected_ret = [{'type': 'vif'}, {'type': 'vif'},
                        {'type': 'ovs'}]
        self.assertEqual(resp, expected_ret)

    @mock.patch('pypowervm.wrappers.storage.VG', autospec=True)
    @mock.patch('pypowervm.tasks.storage.rm_vg_storage', autospec=True)
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS', autospec=True)
    @mock.patch('taskflow.task.FunctorTask', autospec=True)
    def test_dlt_vopt(self, mock_functask, mock_vios, mock_find_maps, mock_gmf,
                      mock_uuid, mock_rmstg, mock_vg):
        cfg_dr = m.ConfigDrivePowerVM(self.apt)
        wtsk = mock.create_autospec(pvm_tx.WrapperTask, instance=True)
        ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True)
        ftsk.configure_mock(wrapper_tasks={'vios_uuid': wtsk})

        # Test with no media to remove
        mock_find_maps.return_value = []
        cfg_dr.dlt_vopt('inst', ftsk)
        mock_uuid.assert_called_once_with('inst')
        mock_gmf.assert_called_once_with(pvm_stg.VOptMedia)
        wtsk.add_functor_subtask.assert_called_once_with(
            tsk_map.remove_maps, mock_uuid.return_value,
            match_func=mock_gmf.return_value)
        ftsk.get_wrapper.assert_called_once_with('vios_uuid')
        mock_find_maps.assert_called_once_with(
            ftsk.get_wrapper.return_value.scsi_mappings,
            client_lpar_id=mock_uuid.return_value,
            match_func=mock_gmf.return_value)
        mock_functask.assert_not_called()

        # Test with media to remove
        mock_find_maps.return_value = [mock.Mock(backing_storage=media)
                                       for media in ['m1', 'm2']]

        def test_functor_task(rm_vopt):
            # Validate internal rm_vopt function
            rm_vopt()
            mock_vg.get.assert_called_once_with(
                self.apt, uuid='vg_uuid', parent_type=pvm_vios.VIOS,
                parent_uuid='vios_uuid')
            mock_rmstg.assert_called_once_with(
                mock_vg.get.return_value, vopts=['m1', 'm2'])
            return 'functor_task'
        mock_functask.side_effect = test_functor_task

        cfg_dr.dlt_vopt('inst', ftsk)
        mock_functask.assert_called_once()
        ftsk.add_post_execute.assert_called_once_with('functor_task')

    def test_mgmt_cna_to_vif(self):
        mock_cna = mock.Mock(spec=pvm_net.CNA, mac="FAD4433ED120")

        # Run
        cfg_dr_builder = m.ConfigDrivePowerVM(self.apt)
        vif = cfg_dr_builder._mgmt_cna_to_vif(mock_cna)

        # Validate
        self.assertEqual(vif.get('address'), "fa:d4:43:3e:d1:20")
        self.assertEqual(vif.get('id'), 'mgmt_vif')
        self.assertIsNotNone(vif.get('network'))
        self.assertEqual(1, len(vif.get('network').get('subnets')))
        subnet = vif.get('network').get('subnets')[0]
        self.assertEqual(6, subnet.get('version'))
        self.assertEqual('fe80::/64', subnet.get('cidr'))
        ip = subnet.get('ips')[0]
        self.assertEqual('fe80::f8d4:43ff:fe3e:d120', ip.get('address'))

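    # _mac_to_link_local applies the standard modified-EUI-64 rule: flip the
    # universal/local bit (0x02) in the first octet, splice 'ff:fe' into the
    # middle of the MAC, and prefix fe80::. Worked example from the first
    # assertion: fa ^ 02 = f8, so fa:d4:43:3e:d1:20 becomes
    # fe80::f8d4:43ff:fe3e:d120.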
    def test_mac_to_link_local(self):
        mac = 'fa:d4:43:3e:d1:20'
        self.assertEqual('fe80::f8d4:43ff:fe3e:d120',
                         m.ConfigDrivePowerVM._mac_to_link_local(mac))

        mac = '00:00:00:00:00:00'
        self.assertEqual('fe80::0200:00ff:fe00:0000',
                         m.ConfigDrivePowerVM._mac_to_link_local(mac))

        mac = 'ff:ff:ff:ff:ff:ff'
        self.assertEqual('fe80::fdff:ffff:feff:ffff',
                         m.ConfigDrivePowerVM._mac_to_link_local(mac))

@@ -1,193 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import retrying
from unittest import mock

from nova import exception
from nova import test
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.tests.test_utils import pvmhttp

from nova.virt.powervm import mgmt

LPAR_HTTPRESP_FILE = "lpar.txt"


class TestMgmt(test.TestCase):
    def setUp(self):
        super(TestMgmt, self).setUp()
        self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt

        lpar_http = pvmhttp.load_pvm_resp(LPAR_HTTPRESP_FILE, adapter=self.apt)
        self.assertIsNotNone(
            lpar_http, "Could not load %s " % LPAR_HTTPRESP_FILE)

        self.resp = lpar_http.response

    @mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True)
    def test_mgmt_uuid(self, mock_get_partition):
        mock_get_partition.return_value = mock.Mock(uuid='mock_mgmt')
        adpt = mock.Mock()

        # First run should call the partition only once
        self.assertEqual('mock_mgmt', mgmt.mgmt_uuid(adpt))
        mock_get_partition.assert_called_once_with(adpt)

        # But a subsequent call should effectively no-op
        mock_get_partition.reset_mock()
        self.assertEqual('mock_mgmt', mgmt.mgmt_uuid(adpt))
        self.assertEqual(mock_get_partition.call_count, 0)

    @mock.patch('glob.glob', autospec=True)
    @mock.patch('nova.privsep.path.writefile', autospec=True)
    @mock.patch('os.path.realpath', autospec=True)
    def test_discover_vscsi_disk(self, mock_realpath, mock_writefile,
                                 mock_glob):
        scanpath = '/sys/bus/vio/devices/30000005/host*/scsi_host/host*/scan'
        udid = ('275b5d5f88fa5611e48be9000098be9400'
                '13fb2aa55a2d7b8d150cb1b7b6bc04d6')
        devlink = ('/dev/disk/by-id/scsi-SIBM_3303_NVDISK' + udid)
        mapping = mock.Mock()
        mapping.client_adapter.lpar_slot_num = 5
        mapping.backing_storage.udid = udid
        # Realistically, first glob would return e.g. .../host0/.../host0/...
        # but it doesn't matter for test purposes.
        mock_glob.side_effect = [[scanpath], [devlink]]
        mgmt.discover_vscsi_disk(mapping)
        mock_glob.assert_has_calls(
            [mock.call(scanpath), mock.call('/dev/disk/by-id/*' + udid[-32:])])
        mock_writefile.assert_called_once_with(scanpath, 'a', '- - -')
        mock_realpath.assert_called_with(devlink)

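    # Discovery pokes the vSCSI adapter's SCSI host to rescan by writing
    # '- - -' to its sysfs scan file (the 30000005 in the path encodes LPAR
    # slot 5), then matches the new device node in /dev/disk/by-id on the
    # last 32 characters of the backing storage UDID.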
    @mock.patch('retrying.retry', autospec=True)
    @mock.patch('glob.glob', autospec=True)
    @mock.patch('nova.privsep.path.writefile', autospec=True)
    def test_discover_vscsi_disk_not_one_result(self, mock_writefile,
                                                mock_glob, mock_retry):
        """Zero or more than one disk is found by discover_vscsi_disk."""
        def validate_retry(kwargs):
            self.assertIn('retry_on_result', kwargs)
            self.assertEqual(250, kwargs['wait_fixed'])
            self.assertEqual(300000, kwargs['stop_max_delay'])

        def raiser(unused):
            raise retrying.RetryError(mock.Mock(attempt_number=123))

        def retry_passthrough(**kwargs):
            validate_retry(kwargs)

            def wrapped(_poll_for_dev):
                return _poll_for_dev
            return wrapped

        def retry_timeout(**kwargs):
            validate_retry(kwargs)

            def wrapped(_poll_for_dev):
                return raiser
            return wrapped

        udid = ('275b5d5f88fa5611e48be9000098be9400'
                '13fb2aa55a2d7b8d150cb1b7b6bc04d6')
        mapping = mock.Mock()
        mapping.client_adapter.lpar_slot_num = 5
        mapping.backing_storage.udid = udid
        # No disks found
        mock_retry.side_effect = retry_timeout
        mock_glob.side_effect = lambda path: []
        self.assertRaises(exception.NoDiskDiscoveryException,
                          mgmt.discover_vscsi_disk, mapping)
        # Multiple disks found
        mock_retry.side_effect = retry_passthrough
        mock_glob.side_effect = [['path'], ['/dev/sde', '/dev/sdf']]
        self.assertRaises(exception.UniqueDiskDiscoveryException,
                          mgmt.discover_vscsi_disk, mapping)

    @mock.patch('time.sleep', autospec=True)
    @mock.patch('os.path.realpath', autospec=True)
    @mock.patch('os.stat', autospec=True)
    @mock.patch('nova.privsep.path.writefile', autospec=True)
    def test_remove_block_dev(self, mock_writefile, mock_stat, mock_realpath,
                              mock_sleep):
        link = '/dev/link/foo'
        realpath = '/dev/sde'
        delpath = '/sys/block/sde/device/delete'
        mock_realpath.return_value = realpath

        # Good path
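        # stat side effects, in call order: the device node exists, the sysfs
        # delete file exists, then the node is gone (OSError) once the kernel
        # drops it; hence no polling sleep is expected.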
        mock_stat.side_effect = (None, None, OSError())
        mgmt.remove_block_dev(link)
        mock_realpath.assert_called_with(link)
        mock_stat.assert_has_calls([mock.call(realpath), mock.call(delpath),
                                    mock.call(realpath)])
        mock_writefile.assert_called_once_with(delpath, 'a', '1')
        self.assertEqual(0, mock_sleep.call_count)

        # Device param not found
        mock_writefile.reset_mock()
        mock_stat.reset_mock()
        mock_stat.side_effect = (OSError(), None, None)
        self.assertRaises(exception.InvalidDevicePath, mgmt.remove_block_dev,
                          link)
        # stat was called once; exec was not called
        self.assertEqual(1, mock_stat.call_count)
        self.assertEqual(0, mock_writefile.call_count)

        # Delete special file not found
        mock_writefile.reset_mock()
        mock_stat.reset_mock()
        mock_stat.side_effect = (None, OSError(), None)
        self.assertRaises(exception.InvalidDevicePath, mgmt.remove_block_dev,
                          link)
        # stat was called twice; exec was not called
        self.assertEqual(2, mock_stat.call_count)
        self.assertEqual(0, mock_writefile.call_count)

    @mock.patch('retrying.retry')
    @mock.patch('os.path.realpath')
    @mock.patch('os.stat')
    @mock.patch('nova.privsep.path.writefile')
    def test_remove_block_dev_timeout(self, mock_dacw, mock_stat,
                                      mock_realpath, mock_retry):

        def validate_retry(kwargs):
            self.assertIn('retry_on_result', kwargs)
            self.assertEqual(250, kwargs['wait_fixed'])
            self.assertEqual(10000, kwargs['stop_max_delay'])

        def raiser(unused):
            raise retrying.RetryError(mock.Mock(attempt_number=123))

        def retry_timeout(**kwargs):
            validate_retry(kwargs)

            def wrapped(_poll_for_del):
                return raiser
            return wrapped

        # Deletion was attempted, but device is still there
        link = '/dev/link/foo'
        delpath = '/sys/block/sde/device/delete'
        realpath = '/dev/sde'
        mock_realpath.return_value = realpath
        mock_stat.side_effect = lambda path: 1
        mock_retry.side_effect = retry_timeout

        self.assertRaises(
            exception.DeviceDeletionException, mgmt.remove_block_dev, link)
        mock_realpath.assert_called_once_with(link)
        mock_dacw.assert_called_with(delpath, 'a', '1')

@@ -1,327 +0,0 @@
# Copyright 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from pypowervm import exceptions as pvm_ex
from pypowervm.wrappers import network as pvm_net
from unittest import mock

from nova import exception
from nova.network import model
from nova import test
from nova.virt.powervm import vif


def cna(mac):
    """Builds a mock Client Network Adapter for unit tests."""
    return mock.Mock(spec=pvm_net.CNA, mac=mac, vswitch_uri='fake_href')

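# Note the MAC convention mismatch the SEA tests that use this helper depend
# on: pypowervm reports MACs undelimited and upper case ('AABBCCDDEEFF'),
# while Nova VIFs use colon-delimited lower case ('aa:bb:cc:dd:ee:ff'); the
# driver is expected to normalize both forms before comparing.
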
class TestVifFunctions(test.NoDBTestCase):

    def setUp(self):
        super(TestVifFunctions, self).setUp()

        self.adpt = mock.Mock()

    @mock.patch('nova.virt.powervm.vif.PvmOvsVifDriver')
    def test_build_vif_driver(self, mock_driver):
        # Valid vif type
        driver = vif._build_vif_driver(self.adpt, 'instance', {'type': 'ovs'})
        self.assertEqual(mock_driver.return_value, driver)

        mock_driver.reset_mock()

        # Fail if no vif type
        self.assertRaises(exception.VirtualInterfacePlugException,
                          vif._build_vif_driver, self.adpt, 'instance',
                          {'type': None})
        mock_driver.assert_not_called()

        # Fail if invalid vif type
        self.assertRaises(exception.VirtualInterfacePlugException,
                          vif._build_vif_driver, self.adpt, 'instance',
                          {'type': 'bad_type'})
        mock_driver.assert_not_called()

    @mock.patch('oslo_serialization.jsonutils.dumps')
    @mock.patch('pypowervm.wrappers.event.Event')
    def test_push_vif_event(self, mock_event, mock_dumps):
        mock_vif = mock.Mock(mac='MAC', href='HREF')
        vif._push_vif_event(self.adpt, 'action', mock_vif, mock.Mock(),
                            'pvm_sea')
        mock_dumps.assert_called_once_with(
            {'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC',
             'type': 'pvm_sea'})
        mock_event.bld.assert_called_once_with(self.adpt, 'HREF',
                                               mock_dumps.return_value)
        mock_event.bld.return_value.create.assert_called_once_with()

        mock_dumps.reset_mock()
        mock_event.bld.reset_mock()
        mock_event.bld.return_value.create.reset_mock()

        # Exception reraises
        mock_event.bld.return_value.create.side_effect = IndexError
        self.assertRaises(IndexError, vif._push_vif_event, self.adpt, 'action',
                          mock_vif, mock.Mock(), 'pvm_sea')
        mock_dumps.assert_called_once_with(
            {'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC',
             'type': 'pvm_sea'})
        mock_event.bld.assert_called_once_with(self.adpt, 'HREF',
                                               mock_dumps.return_value)
        mock_event.bld.return_value.create.assert_called_once_with()

    @mock.patch('nova.virt.powervm.vif._push_vif_event')
    @mock.patch('nova.virt.powervm.vif._build_vif_driver')
    def test_plug(self, mock_bld_drv, mock_event):
        """Test the top-level plug method."""
        mock_vif = {'address': 'MAC', 'type': 'pvm_sea'}

        # 1) With new_vif=True (default)
        vnet = vif.plug(self.adpt, 'instance', mock_vif)

        mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif)
        mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif,
                                                               new_vif=True)
        self.assertEqual(mock_bld_drv.return_value.plug.return_value, vnet)
        mock_event.assert_called_once_with(self.adpt, 'plug', vnet, mock.ANY,
                                           'pvm_sea')

        # Clean up
        mock_bld_drv.reset_mock()
        mock_bld_drv.return_value.plug.reset_mock()
        mock_event.reset_mock()

        # 2) Plug returns None (which it should IRL whenever new_vif=False).
        mock_bld_drv.return_value.plug.return_value = None
        vnet = vif.plug(self.adpt, 'instance', mock_vif, new_vif=False)

        mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif)
        mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif,
                                                               new_vif=False)
        self.assertIsNone(vnet)
        mock_event.assert_not_called()

    @mock.patch('nova.virt.powervm.vif._build_vif_driver')
    def test_plug_raises(self, mock_vif_drv):
        """HttpError is converted to VirtualInterfacePlugException."""
        vif_drv = mock.Mock(plug=mock.Mock(side_effect=pvm_ex.HttpError(
            resp=mock.Mock())))
        mock_vif_drv.return_value = vif_drv
        mock_vif = {'address': 'vifaddr'}
        self.assertRaises(exception.VirtualInterfacePlugException,
                          vif.plug, 'adap', 'inst', mock_vif,
                          new_vif='new_vif')
        mock_vif_drv.assert_called_once_with('adap', 'inst', mock_vif)
        vif_drv.plug.assert_called_once_with(mock_vif, new_vif='new_vif')

    @mock.patch('nova.virt.powervm.vif._push_vif_event')
    @mock.patch('nova.virt.powervm.vif._build_vif_driver')
    def test_unplug(self, mock_bld_drv, mock_event):
        """Test the top-level unplug method."""
        mock_vif = {'address': 'MAC', 'type': 'pvm_sea'}

        # 1) With default cna_w_list
        mock_bld_drv.return_value.unplug.return_value = 'vnet_w'
        vif.unplug(self.adpt, 'instance', mock_vif)
        mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif)
        mock_bld_drv.return_value.unplug.assert_called_once_with(
            mock_vif, cna_w_list=None)
        mock_event.assert_called_once_with(self.adpt, 'unplug', 'vnet_w',
                                           mock.ANY, 'pvm_sea')
        # Clean up
        mock_bld_drv.reset_mock()
        mock_bld_drv.return_value.unplug.reset_mock()
        mock_event.reset_mock()

        # 2) With specified cna_w_list
        mock_bld_drv.return_value.unplug.return_value = None
        vif.unplug(self.adpt, 'instance', mock_vif, cna_w_list='cnalist')
        mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif)
        mock_bld_drv.return_value.unplug.assert_called_once_with(
            mock_vif, cna_w_list='cnalist')
        mock_event.assert_not_called()

    @mock.patch('nova.virt.powervm.vif._build_vif_driver')
    def test_unplug_raises(self, mock_vif_drv):
        """HttpError is converted to VirtualInterfaceUnplugException."""
        vif_drv = mock.Mock(unplug=mock.Mock(side_effect=pvm_ex.HttpError(
            resp=mock.Mock())))
        mock_vif_drv.return_value = vif_drv
        mock_vif = {'address': 'vifaddr'}
        self.assertRaises(exception.VirtualInterfaceUnplugException,
                          vif.unplug, 'adap', 'inst', mock_vif,
                          cna_w_list='cna_w_list')
        mock_vif_drv.assert_called_once_with('adap', 'inst', mock_vif)
        vif_drv.unplug.assert_called_once_with(
            mock_vif, cna_w_list='cna_w_list')


class TestVifOvsDriver(test.NoDBTestCase):

    def setUp(self):
        super(TestVifOvsDriver, self).setUp()

        self.adpt = mock.Mock()
        self.inst = mock.MagicMock(uuid='inst_uuid')
        self.drv = vif.PvmOvsVifDriver(self.adpt, self.inst)

    @mock.patch('pypowervm.tasks.cna.crt_p2p_cna', autospec=True)
    @mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True)
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    def test_plug(self, mock_pvm_uuid, mock_mgmt_lpar, mock_p2p_cna):
        # Mock the data
        mock_pvm_uuid.return_value = 'lpar_uuid'
        mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid')

        cna_w, trunk_wraps = mock.MagicMock(), [mock.MagicMock()]
        mock_p2p_cna.return_value = cna_w, trunk_wraps

        # Run the plug
        network_model = model.Model({'bridge': 'br0', 'meta': {'mtu': 1450}})
        mock_vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id',
                             network=network_model, devname='device')
        self.drv.plug(mock_vif)

        # Validate the calls
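        # The external-ids string ties the OVS port back to Neutron:
        # iface-id is the Neutron port UUID, and attached-mac/vm-uuid let
        # the agent match the port to this instance.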
        ovs_ext_ids = ('iface-id=vif_id,iface-status=active,'
                       'attached-mac=aa:bb:cc:dd:ee:ff,vm-uuid=inst_uuid')
        mock_p2p_cna.assert_called_once_with(
            self.adpt, None, 'lpar_uuid', ['mgmt_uuid'],
            'NovaLinkVEABridge', configured_mtu=1450, crt_vswitch=True,
            mac_addr='aa:bb:cc:dd:ee:ff', dev_name='device', ovs_bridge='br0',
            ovs_ext_ids=ovs_ext_ids)

    @mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True)
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    @mock.patch('pypowervm.tasks.cna.find_trunks', autospec=True)
    def test_plug_existing_vif(self, mock_find_trunks, mock_get_cnas,
                               mock_pvm_uuid, mock_mgmt_lpar):
        # Mock the data
        t1, t2 = mock.MagicMock(), mock.MagicMock()
        mock_find_trunks.return_value = [t1, t2]

        mock_cna = mock.Mock(mac='aa:bb:cc:dd:ee:ff')
        mock_get_cnas.return_value = [mock_cna]

        mock_pvm_uuid.return_value = 'lpar_uuid'

        mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid')

        self.inst = mock.MagicMock(uuid='c2e7ff9f-b9b6-46fa-8716-93bbb795b8b4')
        self.drv = vif.PvmOvsVifDriver(self.adpt, self.inst)

        # Run the plug
        network_model = model.Model({'bridge': 'br0', 'meta': {'mtu': 1500}})
        mock_vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id',
                             network=network_model, devname='devname')
        resp = self.drv.plug(mock_vif, new_vif=False)

        self.assertIsNone(resp)

        # Validate if trunk.update got invoked for all trunks of CNA of vif
        self.assertTrue(t1.update.called)
        self.assertTrue(t2.update.called)

    @mock.patch('pypowervm.tasks.cna.find_trunks')
    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_unplug(self, mock_get_cnas, mock_find_trunks):
        # Set up the mocks
        mock_cna = mock.Mock(mac='aa:bb:cc:dd:ee:ff')
        mock_get_cnas.return_value = [mock_cna]

        t1, t2 = mock.MagicMock(), mock.MagicMock()
        mock_find_trunks.return_value = [t1, t2]

        # Call the unplug
        mock_vif = {'address': 'aa:bb:cc:dd:ee:ff',
                    'network': {'bridge': 'br-int'}}
        self.drv.unplug(mock_vif)

        # The trunks and the cna should have been deleted
        self.assertTrue(t1.delete.called)
        self.assertTrue(t2.delete.called)
        self.assertTrue(mock_cna.delete.called)


class TestVifSeaDriver(test.NoDBTestCase):

    def setUp(self):
        super(TestVifSeaDriver, self).setUp()

        self.adpt = mock.Mock()
        self.inst = mock.Mock()
        self.drv = vif.PvmSeaVifDriver(self.adpt, self.inst)

    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('pypowervm.tasks.cna.crt_cna')
    def test_plug_from_neutron(self, mock_crt_cna, mock_pvm_uuid):
        """Tests that a VIF can be created. Mocks a Neutron network."""

        # Set up the mocks. Look like Neutron
        fake_vif = {'details': {'vlan': 5}, 'network': {'meta': {}},
                    'address': 'aabbccddeeff'}

        def validate_crt(adpt, host_uuid, lpar_uuid, vlan, mac_addr=None):
            self.assertIsNone(host_uuid)
            self.assertEqual(5, vlan)
            self.assertEqual('aabbccddeeff', mac_addr)
            return pvm_net.CNA.bld(self.adpt, 5, 'host_uuid',
                                   mac_addr=mac_addr)
        mock_crt_cna.side_effect = validate_crt

        # Invoke
        resp = self.drv.plug(fake_vif)

        # Validate (along with validate method above)
        self.assertEqual(1, mock_crt_cna.call_count)
        self.assertIsNotNone(resp)
        self.assertIsInstance(resp, pvm_net.CNA)

    def test_plug_existing_vif(self):
        """Tests that a VIF need not be created."""

        # Set up the mocks
        fake_vif = {'network': {'meta': {'vlan': 5}},
                    'address': 'aabbccddeeff'}

        # Invoke
        resp = self.drv.plug(fake_vif, new_vif=False)

        self.assertIsNone(resp)

    @mock.patch('nova.virt.powervm.vm.get_cnas')
    def test_unplug_vifs(self, mock_vm_get):
        """Tests that a delete of the vif can be done."""
        # Mock up the CNA response. Two should already exist, the other
        # should not.
        cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')]
        mock_vm_get.return_value = cnas

        # Run method. The AABBCCDDEE11 won't be unplugged (wasn't invoked
        # below) and the last unplug will also just no-op because it's not on
        # the VM.
        self.drv.unplug({'address': 'aa:bb:cc:dd:ee:ff'})
        self.drv.unplug({'address': 'aa:bb:cc:dd:ee:22'})
        self.drv.unplug({'address': 'aa:bb:cc:dd:ee:33'})

        # The delete should have only been called once for each applicable
        # VIF. The second CNA didn't have a matching MAC, so it should be
        # skipped.
        self.assertEqual(1, cnas[0].delete.call_count)
        self.assertEqual(0, cnas[1].delete.call_count)
        self.assertEqual(1, cnas[2].delete.call_count)

@@ -1,564 +0,0 @@
# Copyright 2014, 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from unittest import mock

import fixtures
from pypowervm import exceptions as pvm_exc
from pypowervm.helpers import log_helper as pvm_log
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.utils import lpar_builder as lpar_bld
from pypowervm.utils import uuid as pvm_uuid
from pypowervm.wrappers import base_partition as pvm_bp
from pypowervm.wrappers import logical_partition as pvm_lpar

from nova.compute import power_state
from nova import exception
from nova import test
from nova.tests.unit.virt import powervm
from nova.virt.powervm import vm


class TestVMBuilder(test.NoDBTestCase):

    def setUp(self):
        super(TestVMBuilder, self).setUp()

        self.adpt = mock.MagicMock()
        self.host_w = mock.MagicMock()
        self.lpar_b = vm.VMBuilder(self.host_w, self.adpt)

        self.san_lpar_name = self.useFixture(fixtures.MockPatch(
            'pypowervm.util.sanitize_partition_name_for_api',
            autospec=True)).mock

        self.inst = powervm.TEST_INSTANCE

    @mock.patch('pypowervm.utils.lpar_builder.DefaultStandardize',
                autospec=True)
    @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
    @mock.patch('pypowervm.utils.lpar_builder.LPARBuilder', autospec=True)
    def test_vm_builder(self, mock_lpar_bldr, mock_uuid2pvm, mock_def_stdz):
        inst = mock.Mock()
        inst.configure_mock(
            name='lpar_name', uuid='lpar_uuid',
            flavor=mock.Mock(memory_mb='mem', vcpus='vcpus', extra_specs={}))
        vmb = vm.VMBuilder('host', 'adap')
        mock_def_stdz.assert_called_once_with('host', proc_units_factor=0.1)
        self.assertEqual(mock_lpar_bldr.return_value,
                         vmb.lpar_builder(inst))
        self.san_lpar_name.assert_called_once_with('lpar_name')
        mock_uuid2pvm.assert_called_once_with(inst)
        mock_lpar_bldr.assert_called_once_with(
            'adap', {'name': self.san_lpar_name.return_value,
                     'uuid': mock_uuid2pvm.return_value,
                     'memory': 'mem',
                     'vcpu': 'vcpus',
                     'srr_capability': True}, mock_def_stdz.return_value)

        # Assert non-default proc_units_factor.
        mock_def_stdz.reset_mock()
        self.flags(proc_units_factor=0.2, group='powervm')
        vmb = vm.VMBuilder('host', 'adap')
        mock_def_stdz.assert_called_once_with('host', proc_units_factor=0.2)

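    # _format_flavor maps 'powervm:' flavor extra_specs onto LPAR builder
    # attributes, e.g. 'share_idle_procs_active' becomes the builder's
    # 'sre idle procs active' sharing mode. Each case below overlays one
    # group of specs on the common lpar_attrs baseline.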
    def test_format_flavor(self):
        """Perform tests against _format_flavor."""
        # convert instance uuid to pypowervm uuid
        # LP 1561128, simplified remote restart is enabled by default
        lpar_attrs = {'memory': 2048,
                      'name': self.san_lpar_name.return_value,
                      'uuid': pvm_uuid.convert_uuid_to_pvm(
                          self.inst.uuid).upper(),
                      'vcpu': 1, 'srr_capability': True}

        # Test dedicated procs
        self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true'}
        test_attrs = dict(lpar_attrs, dedicated_proc='true')

        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test dedicated procs, min/max vcpu and sharing mode
        self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true',
                                        'powervm:dedicated_sharing_mode':
                                            'share_idle_procs_active',
                                        'powervm:min_vcpu': '1',
                                        'powervm:max_vcpu': '3'}
        test_attrs = dict(lpar_attrs,
                          dedicated_proc='true',
                          sharing_mode='sre idle procs active',
                          min_vcpu='1', max_vcpu='3')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test shared proc sharing mode
        self.inst.flavor.extra_specs = {'powervm:uncapped': 'true'}
        test_attrs = dict(lpar_attrs, sharing_mode='uncapped')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test availability priority
        self.inst.flavor.extra_specs = {'powervm:availability_priority': '150'}
        test_attrs = dict(lpar_attrs, avail_priority='150')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test processor compatibility
        self.inst.flavor.extra_specs = {
            'powervm:processor_compatibility': 'POWER8'}
        test_attrs = dict(lpar_attrs, processor_compatibility='POWER8')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test min, max proc units
        self.inst.flavor.extra_specs = {'powervm:min_proc_units': '0.5',
                                        'powervm:max_proc_units': '2.0'}
        test_attrs = dict(lpar_attrs, min_proc_units='0.5',
                          max_proc_units='2.0')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test min, max mem
        self.inst.flavor.extra_specs = {'powervm:min_mem': '1024',
                                        'powervm:max_mem': '4096'}
        test_attrs = dict(lpar_attrs, min_mem='1024', max_mem='4096')
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)
        self.san_lpar_name.assert_called_with(self.inst.name)
        self.san_lpar_name.reset_mock()

        # Test remote restart set to false
        self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false'}
        test_attrs = dict(lpar_attrs, srr_capability=False)
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)

        # Unhandled powervm: key is ignored
        self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false',
                                        'powervm:something_new': 'foo'}
        test_attrs = dict(lpar_attrs, srr_capability=False)
        self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs)

        # If we recognize a key, but don't handle it, we raise
|
|
||||||
with mock.patch.object(self.lpar_b, '_is_pvm_valid_key',
|
|
||||||
return_value=True):
|
|
||||||
self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false',
|
|
||||||
'powervm:something_new': 'foo'}
|
|
||||||
self.assertRaises(KeyError, self.lpar_b._format_flavor, self.inst)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.wrappers.shared_proc_pool.SharedProcPool.search')
|
|
||||||
def test_spp_pool_id(self, mock_search):
|
|
||||||
# The default pool is always zero. Validate the path.
|
|
||||||
self.assertEqual(0, self.lpar_b._spp_pool_id('DefaultPool'))
|
|
||||||
self.assertEqual(0, self.lpar_b._spp_pool_id(None))
|
|
||||||
|
|
||||||
# Further invocations require calls to the adapter. Build a minimal
|
|
||||||
# mocked SPP wrapper
|
|
||||||
spp = mock.MagicMock()
|
|
||||||
spp.id = 1
|
|
||||||
|
|
||||||
# Three invocations. First has too many elems. Second has none.
|
|
||||||
# Third is just right. :-)
|
|
||||||
mock_search.side_effect = [[spp, spp], [], [spp]]
|
|
||||||
|
|
||||||
self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id,
|
|
||||||
'fake_name')
|
|
||||||
self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id,
|
|
||||||
'fake_name')
|
|
||||||
|
|
||||||
self.assertEqual(1, self.lpar_b._spp_pool_id('fake_name'))
|
|
||||||
|
|
||||||
|
|
||||||
class TestVM(test.NoDBTestCase):
|
|
||||||
def setUp(self):
|
|
||||||
super(TestVM, self).setUp()
|
|
||||||
|
|
||||||
self.apt = self.useFixture(pvm_fx.AdapterFx(
|
|
||||||
traits=pvm_fx.LocalPVMTraits)).adpt
|
|
||||||
self.apt.helpers = [pvm_log.log_helper]
|
|
||||||
|
|
||||||
self.san_lpar_name = self.useFixture(fixtures.MockPatch(
|
|
||||||
'pypowervm.util.sanitize_partition_name_for_api')).mock
|
|
||||||
self.san_lpar_name.side_effect = lambda name: name
|
|
||||||
mock_entries = [mock.Mock(), mock.Mock()]
|
|
||||||
self.resp = mock.MagicMock()
|
|
||||||
self.resp.feed = mock.MagicMock(entries=mock_entries)
|
|
||||||
|
|
||||||
self.get_pvm_uuid = self.useFixture(fixtures.MockPatch(
|
|
||||||
'nova.virt.powervm.vm.get_pvm_uuid')).mock
|
|
||||||
|
|
||||||
self.inst = powervm.TEST_INSTANCE
|
|
||||||
|
|
||||||
def test_translate_vm_state(self):
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('running'))
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('migrating running'))
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('starting'))
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('open firmware'))
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('shutting down'))
|
|
||||||
self.assertEqual(power_state.RUNNING,
|
|
||||||
vm._translate_vm_state('suspending'))
|
|
||||||
|
|
||||||
self.assertEqual(power_state.SHUTDOWN,
|
|
||||||
vm._translate_vm_state('migrating not active'))
|
|
||||||
self.assertEqual(power_state.SHUTDOWN,
|
|
||||||
vm._translate_vm_state('not activated'))
|
|
||||||
|
|
||||||
self.assertEqual(power_state.NOSTATE,
|
|
||||||
vm._translate_vm_state('unknown'))
|
|
||||||
self.assertEqual(power_state.NOSTATE,
|
|
||||||
vm._translate_vm_state('hardware discovery'))
|
|
||||||
self.assertEqual(power_state.NOSTATE,
|
|
||||||
vm._translate_vm_state('not available'))
|
|
||||||
|
|
||||||
self.assertEqual(power_state.SUSPENDED,
|
|
||||||
vm._translate_vm_state('resuming'))
|
|
||||||
self.assertEqual(power_state.SUSPENDED,
|
|
||||||
vm._translate_vm_state('suspended'))
|
|
||||||
|
|
||||||
self.assertEqual(power_state.CRASHED,
|
|
||||||
vm._translate_vm_state('error'))
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.wrappers.logical_partition.LPAR', autospec=True)
|
|
||||||
def test_get_lpar_names(self, mock_lpar):
|
|
||||||
inst1 = mock.Mock()
|
|
||||||
inst1.configure_mock(name='inst1')
|
|
||||||
inst2 = mock.Mock()
|
|
||||||
inst2.configure_mock(name='inst2')
|
|
||||||
mock_lpar.search.return_value = [inst1, inst2]
|
|
||||||
self.assertEqual({'inst1', 'inst2'}, set(vm.get_lpar_names('adap')))
|
|
||||||
mock_lpar.search.assert_called_once_with(
|
|
||||||
'adap', is_mgmt_partition=False)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.vterm.close_vterm', autospec=True)
|
|
||||||
def test_dlt_lpar(self, mock_vterm):
|
|
||||||
"""Performs a delete LPAR test."""
|
|
||||||
vm.delete_lpar(self.apt, 'inst')
|
|
||||||
self.get_pvm_uuid.assert_called_once_with('inst')
|
|
||||||
self.apt.delete.assert_called_once_with(
|
|
||||||
pvm_lpar.LPAR.schema_type, root_id=self.get_pvm_uuid.return_value)
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
|
|
||||||
# Test Failure Path
|
|
||||||
# build a mock response body with the expected HSCL msg
|
|
||||||
resp = mock.Mock()
|
|
||||||
resp.body = 'error msg: HSCL151B more text'
|
|
||||||
self.apt.delete.side_effect = pvm_exc.Error(
|
|
||||||
'Mock Error Message', response=resp)
|
|
||||||
|
|
||||||
# Reset counters
|
|
||||||
self.apt.reset_mock()
|
|
||||||
mock_vterm.reset_mock()
|
|
||||||
|
|
||||||
self.assertRaises(pvm_exc.Error, vm.delete_lpar, self.apt, 'inst')
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
self.assertEqual(1, self.apt.delete.call_count)
|
|
||||||
|
|
||||||
self.apt.reset_mock()
|
|
||||||
mock_vterm.reset_mock()
|
|
||||||
|
|
||||||
# Test HttpError 404
|
|
||||||
resp.status = 404
|
|
||||||
self.apt.delete.side_effect = pvm_exc.HttpError(resp=resp)
|
|
||||||
vm.delete_lpar(self.apt, 'inst')
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
self.assertEqual(1, self.apt.delete.call_count)
|
|
||||||
|
|
||||||
self.apt.reset_mock()
|
|
||||||
mock_vterm.reset_mock()
|
|
||||||
|
|
||||||
# Test Other HttpError
|
|
||||||
resp.status = 111
|
|
||||||
self.apt.delete.side_effect = pvm_exc.HttpError(resp=resp)
|
|
||||||
self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst')
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
self.assertEqual(1, self.apt.delete.call_count)
|
|
||||||
|
|
||||||
self.apt.reset_mock()
|
|
||||||
mock_vterm.reset_mock()
|
|
||||||
|
|
||||||
# Test HttpError 404 closing vterm
|
|
||||||
resp.status = 404
|
|
||||||
mock_vterm.side_effect = pvm_exc.HttpError(resp=resp)
|
|
||||||
vm.delete_lpar(self.apt, 'inst')
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
self.assertEqual(0, self.apt.delete.call_count)
|
|
||||||
|
|
||||||
self.apt.reset_mock()
|
|
||||||
mock_vterm.reset_mock()
|
|
||||||
|
|
||||||
# Test Other HttpError closing vterm
|
|
||||||
resp.status = 111
|
|
||||||
mock_vterm.side_effect = pvm_exc.HttpError(resp=resp)
|
|
||||||
self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst')
|
|
||||||
self.assertEqual(1, mock_vterm.call_count)
|
|
||||||
self.assertEqual(0, self.apt.delete.call_count)
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.vm.VMBuilder', autospec=True)
|
|
||||||
@mock.patch('pypowervm.utils.validation.LPARWrapperValidator',
|
|
||||||
autospec=True)
|
|
||||||
def test_crt_lpar(self, mock_vld, mock_vmbldr):
|
|
||||||
self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true'}
|
|
||||||
mock_bldr = mock.Mock(spec=lpar_bld.LPARBuilder)
|
|
||||||
mock_vmbldr.return_value.lpar_builder.return_value = mock_bldr
|
|
||||||
mock_pend_lpar = mock.create_autospec(pvm_lpar.LPAR, instance=True)
|
|
||||||
mock_bldr.build.return_value = mock_pend_lpar
|
|
||||||
|
|
||||||
vm.create_lpar(self.apt, 'host', self.inst)
|
|
||||||
mock_vmbldr.assert_called_once_with('host', self.apt)
|
|
||||||
mock_vmbldr.return_value.lpar_builder.assert_called_once_with(
|
|
||||||
self.inst)
|
|
||||||
mock_bldr.build.assert_called_once_with()
|
|
||||||
mock_vld.assert_called_once_with(mock_pend_lpar, 'host')
|
|
||||||
mock_vld.return_value.validate_all.assert_called_once_with()
|
|
||||||
mock_pend_lpar.create.assert_called_once_with(parent='host')
|
|
||||||
|
|
||||||
# Test to verify the LPAR Creation with invalid name specification
|
|
||||||
mock_vmbldr.side_effect = lpar_bld.LPARBuilderException("Invalid Name")
|
|
||||||
self.assertRaises(exception.BuildAbortException,
|
|
||||||
vm.create_lpar, self.apt, 'host', self.inst)
|
|
||||||
|
|
||||||
# HttpError
|
|
||||||
mock_vmbldr.side_effect = pvm_exc.HttpError(mock.Mock())
|
|
||||||
self.assertRaises(exception.PowerVMAPIFailed,
|
|
||||||
vm.create_lpar, self.apt, 'host', self.inst)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.wrappers.logical_partition.LPAR', autospec=True)
|
|
||||||
def test_get_instance_wrapper(self, mock_lpar):
|
|
||||||
resp = mock.Mock(status=404)
|
|
||||||
mock_lpar.get.side_effect = pvm_exc.Error('message', response=resp)
|
|
||||||
# vm.get_instance_wrapper(self.apt, instance, 'lpar_uuid')
|
|
||||||
self.assertRaises(exception.InstanceNotFound, vm.get_instance_wrapper,
|
|
||||||
self.apt, self.inst)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_on', autospec=True)
|
|
||||||
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
|
|
||||||
def test_power_on(self, mock_wrap, mock_lock, mock_power_on):
|
|
||||||
entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
|
|
||||||
mock_wrap.return_value = entry
|
|
||||||
|
|
||||||
vm.power_on(None, self.inst)
|
|
||||||
mock_power_on.assert_called_once_with(entry, None)
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
|
|
||||||
mock_power_on.reset_mock()
|
|
||||||
mock_lock.reset_mock()
|
|
||||||
|
|
||||||
stop_states = [
|
|
||||||
pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING,
|
|
||||||
pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN,
|
|
||||||
pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING,
|
|
||||||
pvm_bp.LPARState.SUSPENDING]
|
|
||||||
|
|
||||||
for stop_state in stop_states:
|
|
||||||
entry.state = stop_state
|
|
||||||
vm.power_on(None, self.inst)
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
mock_lock.reset_mock()
|
|
||||||
self.assertEqual(0, mock_power_on.call_count)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_on', autospec=True)
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
|
|
||||||
def test_power_on_negative(self, mock_wrp, mock_power_on):
|
|
||||||
mock_wrp.return_value = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
|
|
||||||
|
|
||||||
# Convertible (PowerVM) exception
|
|
||||||
mock_power_on.side_effect = pvm_exc.VMPowerOnFailure(
|
|
||||||
reason='Something bad', lpar_nm='TheLPAR')
|
|
||||||
self.assertRaises(exception.InstancePowerOnFailure,
|
|
||||||
vm.power_on, None, self.inst)
|
|
||||||
|
|
||||||
# Non-pvm error raises directly
|
|
||||||
mock_power_on.side_effect = ValueError()
|
|
||||||
self.assertRaises(ValueError, vm.power_on, None, self.inst)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.power.PowerOp', autospec=True)
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
|
|
||||||
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
|
|
||||||
def test_power_off(self, mock_wrap, mock_lock, mock_power_off, mock_pop):
|
|
||||||
entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
|
|
||||||
mock_wrap.return_value = entry
|
|
||||||
|
|
||||||
vm.power_off(None, self.inst)
|
|
||||||
self.assertEqual(0, mock_power_off.call_count)
|
|
||||||
self.assertEqual(0, mock_pop.stop.call_count)
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
|
|
||||||
stop_states = [
|
|
||||||
pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING,
|
|
||||||
pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN,
|
|
||||||
pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING,
|
|
||||||
pvm_bp.LPARState.SUSPENDING]
|
|
||||||
for stop_state in stop_states:
|
|
||||||
entry.state = stop_state
|
|
||||||
mock_power_off.reset_mock()
|
|
||||||
mock_pop.stop.reset_mock()
|
|
||||||
mock_lock.reset_mock()
|
|
||||||
vm.power_off(None, self.inst)
|
|
||||||
mock_power_off.assert_called_once_with(entry)
|
|
||||||
self.assertEqual(0, mock_pop.stop.call_count)
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
mock_power_off.reset_mock()
|
|
||||||
mock_lock.reset_mock()
|
|
||||||
vm.power_off(None, self.inst, force_immediate=True, timeout=5)
|
|
||||||
self.assertEqual(0, mock_power_off.call_count)
|
|
||||||
mock_pop.stop.assert_called_once_with(
|
|
||||||
entry, opts=mock.ANY, timeout=5)
|
|
||||||
self.assertEqual('PowerOff(immediate=true, operation=shutdown)',
|
|
||||||
str(mock_pop.stop.call_args[1]['opts']))
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
|
|
||||||
def test_power_off_negative(self, mock_wrap, mock_power_off):
|
|
||||||
"""Negative tests."""
|
|
||||||
mock_wrap.return_value = mock.Mock(state=pvm_bp.LPARState.RUNNING)
|
|
||||||
|
|
||||||
# Raise the expected pypowervm exception
|
|
||||||
mock_power_off.side_effect = pvm_exc.VMPowerOffFailure(
|
|
||||||
reason='Something bad.', lpar_nm='TheLPAR')
|
|
||||||
# We should get a valid Nova exception that the compute manager expects
|
|
||||||
self.assertRaises(exception.InstancePowerOffFailure,
|
|
||||||
vm.power_off, None, self.inst)
|
|
||||||
|
|
||||||
# Non-pvm error raises directly
|
|
||||||
mock_power_off.side_effect = ValueError()
|
|
||||||
self.assertRaises(ValueError, vm.power_off, None, self.inst)
|
|
||||||
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_on', autospec=True)
|
|
||||||
@mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True)
|
|
||||||
@mock.patch('pypowervm.tasks.power.PowerOp', autospec=True)
|
|
||||||
@mock.patch('oslo_concurrency.lockutils.lock', autospec=True)
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
|
|
||||||
def test_reboot(self, mock_wrap, mock_lock, mock_pop, mock_pwroff,
|
|
||||||
mock_pwron):
|
|
||||||
entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED)
|
|
||||||
mock_wrap.return_value = entry
|
|
||||||
|
|
||||||
# No power_off
|
|
||||||
vm.reboot('adap', self.inst, False)
|
|
||||||
mock_lock.assert_called_once_with('power_%s' % self.inst.uuid)
|
|
||||||
mock_wrap.assert_called_once_with('adap', self.inst)
|
|
||||||
mock_pwron.assert_called_once_with(entry, None)
|
|
||||||
self.assertEqual(0, mock_pwroff.call_count)
|
|
||||||
self.assertEqual(0, mock_pop.stop.call_count)
|
|
||||||
|
|
||||||
mock_pwron.reset_mock()
|
|
||||||
|
|
||||||
# power_off (no power_on) hard
|
|
||||||
entry.state = pvm_bp.LPARState.RUNNING
|
|
||||||
vm.reboot('adap', self.inst, True)
|
|
||||||
self.assertEqual(0, mock_pwron.call_count)
|
|
||||||
self.assertEqual(0, mock_pwroff.call_count)
|
|
||||||
mock_pop.stop.assert_called_once_with(entry, opts=mock.ANY)
|
|
||||||
self.assertEqual(
|
|
||||||
'PowerOff(immediate=true, operation=shutdown, restart=true)',
|
|
||||||
str(mock_pop.stop.call_args[1]['opts']))
|
|
||||||
|
|
||||||
mock_pop.reset_mock()
|
|
||||||
|
|
||||||
# power_off (no power_on) soft
|
|
||||||
entry.state = pvm_bp.LPARState.RUNNING
|
|
||||||
vm.reboot('adap', self.inst, False)
|
|
||||||
self.assertEqual(0, mock_pwron.call_count)
|
|
||||||
mock_pwroff.assert_called_once_with(entry, restart=True)
|
|
||||||
self.assertEqual(0, mock_pop.stop.call_count)
|
|
||||||
|
|
||||||
mock_pwroff.reset_mock()
|
|
||||||
|
|
||||||
# PowerVM error is converted
|
|
||||||
mock_pop.stop.side_effect = pvm_exc.TimeoutError("Timed out")
|
|
||||||
self.assertRaises(exception.InstanceRebootFailure,
|
|
||||||
vm.reboot, 'adap', self.inst, True)
|
|
||||||
|
|
||||||
# Non-PowerVM error is raised directly
|
|
||||||
mock_pwroff.side_effect = ValueError
|
|
||||||
self.assertRaises(ValueError, vm.reboot, 'adap', self.inst, False)
|
|
||||||
|
|
||||||
@mock.patch('oslo_serialization.jsonutils.loads')
|
|
||||||
def test_get_vm_qp(self, mock_loads):
|
|
||||||
self.apt.helpers = ['helper1', pvm_log.log_helper, 'helper3']
|
|
||||||
|
|
||||||
# Defaults
|
|
||||||
self.assertEqual(mock_loads.return_value,
|
|
||||||
vm.get_vm_qp(self.apt, 'lpar_uuid'))
|
|
||||||
self.apt.read.assert_called_once_with(
|
|
||||||
'LogicalPartition', root_id='lpar_uuid', suffix_type='quick',
|
|
||||||
suffix_parm=None)
|
|
||||||
mock_loads.assert_called_once_with(self.apt.read.return_value.body)
|
|
||||||
|
|
||||||
self.apt.read.reset_mock()
|
|
||||||
mock_loads.reset_mock()
|
|
||||||
|
|
||||||
# Specific qprop, no logging errors
|
|
||||||
self.assertEqual(mock_loads.return_value,
|
|
||||||
vm.get_vm_qp(self.apt, 'lpar_uuid', qprop='Prop',
|
|
||||||
log_errors=False))
|
|
||||||
self.apt.read.assert_called_once_with(
|
|
||||||
'LogicalPartition', root_id='lpar_uuid', suffix_type='quick',
|
|
||||||
suffix_parm='Prop', helpers=['helper1', 'helper3'])
|
|
||||||
|
|
||||||
resp = mock.MagicMock()
|
|
||||||
resp.status = 404
|
|
||||||
self.apt.read.side_effect = pvm_exc.HttpError(resp)
|
|
||||||
self.assertRaises(exception.InstanceNotFound, vm.get_vm_qp, self.apt,
|
|
||||||
'lpar_uuid', log_errors=False)
|
|
||||||
|
|
||||||
self.apt.read.side_effect = pvm_exc.Error("message", response=None)
|
|
||||||
self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt,
|
|
||||||
'lpar_uuid', log_errors=False)
|
|
||||||
|
|
||||||
resp.status = 500
|
|
||||||
self.apt.read.side_effect = pvm_exc.Error("message", response=resp)
|
|
||||||
self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt,
|
|
||||||
'lpar_uuid', log_errors=False)
|
|
||||||
|
|
||||||
@mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
|
|
||||||
@mock.patch('pypowervm.wrappers.network.CNA.search')
|
|
||||||
@mock.patch('pypowervm.wrappers.network.CNA.get')
|
|
||||||
def test_get_cnas(self, mock_get, mock_search, mock_uuid):
|
|
||||||
# No kwargs: get
|
|
||||||
self.assertEqual(mock_get.return_value, vm.get_cnas(self.apt, 'inst'))
|
|
||||||
mock_uuid.assert_called_once_with('inst')
|
|
||||||
mock_get.assert_called_once_with(self.apt, parent_type=pvm_lpar.LPAR,
|
|
||||||
parent_uuid=mock_uuid.return_value)
|
|
||||||
mock_search.assert_not_called()
|
|
||||||
# With kwargs: search
|
|
||||||
mock_get.reset_mock()
|
|
||||||
mock_uuid.reset_mock()
|
|
||||||
self.assertEqual(mock_search.return_value, vm.get_cnas(
|
|
||||||
self.apt, 'inst', one=2, three=4))
|
|
||||||
mock_uuid.assert_called_once_with('inst')
|
|
||||||
mock_search.assert_called_once_with(
|
|
||||||
self.apt, parent_type=pvm_lpar.LPAR,
|
|
||||||
parent_uuid=mock_uuid.return_value, one=2, three=4)
|
|
||||||
mock_get.assert_not_called()
|
|
||||||
|
|
||||||
def test_norm_mac(self):
|
|
||||||
EXPECTED = "12:34:56:78:90:ab"
|
|
||||||
self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:ab"))
|
|
||||||
self.assertEqual(EXPECTED, vm.norm_mac("1234567890ab"))
|
|
||||||
self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:AB"))
|
|
||||||
self.assertEqual(EXPECTED, vm.norm_mac("1234567890AB"))
|
|
@@ -1,456 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from pypowervm import const as pvm_const
from pypowervm.tasks import hdisk
from pypowervm.tests import test_fixtures as pvm_fx
from pypowervm.utils import transaction as pvm_tx
from pypowervm.wrappers import storage as pvm_stor
from pypowervm.wrappers import virtual_io_server as pvm_vios
from unittest import mock

from nova import conf as cfg
from nova import exception as exc
from nova import test
from nova.virt.powervm.volume import fcvscsi

CONF = cfg.CONF

I_WWPN_1 = '21000024FF649104'
I_WWPN_2 = '21000024FF649105'


class TestVSCSIAdapter(test.NoDBTestCase):

    def setUp(self):
        super(TestVSCSIAdapter, self).setUp()

        self.adpt = self.useFixture(pvm_fx.AdapterFx()).adpt
        self.wtsk = mock.create_autospec(pvm_tx.WrapperTask, instance=True)
        self.ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True)
        self.ftsk.configure_mock(wrapper_tasks={'vios_uuid': self.wtsk})

        @mock.patch('nova.virt.powervm.vm.get_pvm_uuid')
        def init_vol_adpt(mock_pvm_uuid):
            con_info = {
                'serial': 'id',
                'data': {
                    'initiator_target_map': {
                        I_WWPN_1: ['t1'],
                        I_WWPN_2: ['t2', 't3']
                    },
                    'target_lun': '1',
                    'volume_id': 'a_volume_identifier',
                },
            }
            mock_inst = mock.MagicMock()
            mock_pvm_uuid.return_value = '1234'

            return fcvscsi.FCVscsiVolumeAdapter(
                self.adpt, mock_inst, con_info, stg_ftsk=self.ftsk)
        self.vol_drv = init_vol_adpt()

    @mock.patch('pypowervm.utils.transaction.FeedTask', autospec=True)
    @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS', autospec=True)
    def test_reset_stg_ftsk(self, mock_vios, mock_ftsk):
        self.vol_drv.reset_stg_ftsk('stg_ftsk')
        self.assertEqual('stg_ftsk', self.vol_drv.stg_ftsk)

        mock_vios.getter.return_value = 'getter'
        mock_ftsk.return_value = 'local_feed_task'
        self.vol_drv.reset_stg_ftsk()
        self.assertEqual('local_feed_task', self.vol_drv.stg_ftsk)
        mock_vios.getter.assert_called_once_with(
            self.adpt, xag=[pvm_const.XAG.VIO_SMAP])
        mock_ftsk.assert_called_once_with('local_feed_task', 'getter')

    @mock.patch('pypowervm.tasks.partition.get_physical_wwpns', autospec=True)
    def test_wwpns(self, mock_vio_wwpns):
        mock_vio_wwpns.return_value = ['aa', 'bb']
        wwpns = fcvscsi.wwpns(self.adpt)
        self.assertListEqual(['aa', 'bb'], wwpns)
        mock_vio_wwpns.assert_called_once_with(self.adpt, force_refresh=False)

    def test_set_udid(self):
        # Mock connection info
        self.vol_drv.connection_info['data'][fcvscsi.UDID_KEY] = None

        # Set the UDID
        self.vol_drv._set_udid('udid')

        # Verify
        self.assertEqual('udid',
                         self.vol_drv.connection_info['data'][fcvscsi.UDID_KEY])

    def test_get_udid(self):
        # Set the value to retrieve
        self.vol_drv.connection_info['data'][fcvscsi.UDID_KEY] = 'udid'
        retrieved_udid = self.vol_drv._get_udid()
        # Check key found
        self.assertEqual('udid', retrieved_udid)

        # Check key not found
        self.vol_drv.connection_info['data'].pop(fcvscsi.UDID_KEY)
        retrieved_udid = self.vol_drv._get_udid()
        # Check key not found
        self.assertIsNone(retrieved_udid)

    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
    @mock.patch('pypowervm.utils.transaction.FeedTask', autospec=True)
    def test_attach_volume(self, mock_feed_task, mock_get_wrap):
        mock_lpar_wrap = mock.MagicMock()
        mock_lpar_wrap.can_modify_io.return_value = True, None
        mock_get_wrap.return_value = mock_lpar_wrap
        mock_attach_ftsk = mock_feed_task.return_value

        # Pass if all vioses modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': True},
                                          'vios2': {'vio_modified': True}}}
        mock_attach_ftsk.execute.return_value = mock_ret
        self.vol_drv.attach_volume()
        mock_feed_task.assert_called_once()
        mock_attach_ftsk.add_functor_subtask.assert_called_once_with(
            self.vol_drv._attach_volume_to_vio, provides='vio_modified',
            flag_update=False)
        mock_attach_ftsk.execute.assert_called_once()
        self.ftsk.execute.assert_called_once()

        mock_feed_task.reset_mock()
        mock_attach_ftsk.reset_mock()
        self.ftsk.reset_mock()

        # Pass if 1 vios modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': True},
                                          'vios2': {'vio_modified': False}}}
        mock_attach_ftsk.execute.return_value = mock_ret
        self.vol_drv.attach_volume()
        mock_feed_task.assert_called_once()
        mock_attach_ftsk.add_functor_subtask.assert_called_once_with(
            self.vol_drv._attach_volume_to_vio, provides='vio_modified',
            flag_update=False)
        mock_attach_ftsk.execute.assert_called_once()
        self.ftsk.execute.assert_called_once()

        # Raise if no vios modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': False},
                                          'vios2': {'vio_modified': False}}}
        mock_attach_ftsk.execute.return_value = mock_ret
        self.assertRaises(exc.VolumeAttachFailed, self.vol_drv.attach_volume)

        # Raise if vm in invalid state
        mock_lpar_wrap.can_modify_io.return_value = False, None
        self.assertRaises(exc.VolumeAttachFailed, self.vol_drv.attach_volume)

    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_set_udid')
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_add_append_mapping')
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_discover_volume_on_vios')
    @mock.patch('pypowervm.tasks.hdisk.good_discovery', autospec=True)
    def test_attach_volume_to_vio(self, mock_good_disc, mock_disc_vol,
                                  mock_add_map, mock_set_udid):
        # Setup mocks
        mock_vios = mock.MagicMock()
        mock_vios.uuid = 'uuid'
        mock_disc_vol.return_value = 'status', 'devname', 'udid'

        # Bad discovery
        mock_good_disc.return_value = False
        ret = self.vol_drv._attach_volume_to_vio(mock_vios)
        self.assertFalse(ret)
        mock_disc_vol.assert_called_once_with(mock_vios)
        mock_good_disc.assert_called_once_with('status', 'devname')

        # Good discovery
        mock_good_disc.return_value = True
        ret = self.vol_drv._attach_volume_to_vio(mock_vios)
        self.assertTrue(ret)
        mock_add_map.assert_called_once_with(
            'uuid', 'devname', tag='a_volume_identifier')
        mock_set_udid.assert_called_once_with('udid')

    def test_extend_volume(self):
        # Ensure the method is implemented
        self.vol_drv.extend_volume()

    @mock.patch('nova.virt.powervm.volume.fcvscsi.LOG')
    @mock.patch('pypowervm.tasks.hdisk.good_discovery', autospec=True)
    @mock.patch('pypowervm.tasks.hdisk.discover_hdisk', autospec=True)
    @mock.patch('pypowervm.tasks.hdisk.build_itls', autospec=True)
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_get_hdisk_itls')
    def test_discover_volume_on_vios(self, mock_get_itls, mock_build_itls,
                                     mock_disc_hdisk, mock_good_disc,
                                     mock_log):
        mock_vios = mock.MagicMock()
        mock_vios.uuid = 'uuid'
        mock_get_itls.return_value = 'v_wwpns', 't_wwpns', 'lun'
        mock_build_itls.return_value = 'itls'
        mock_disc_hdisk.return_value = 'status', 'devname', 'udid'

        # Good discovery
        mock_good_disc.return_value = True
        status, devname, udid = self.vol_drv._discover_volume_on_vios(
            mock_vios)
        self.assertEqual(mock_disc_hdisk.return_value[0], status)
        self.assertEqual(mock_disc_hdisk.return_value[1], devname)
        self.assertEqual(mock_disc_hdisk.return_value[2], udid)
        mock_get_itls.assert_called_once_with(mock_vios)
        mock_build_itls.assert_called_once_with('v_wwpns', 't_wwpns', 'lun')
        mock_disc_hdisk.assert_called_once_with(self.adpt, 'uuid', 'itls')
        mock_good_disc.assert_called_once_with('status', 'devname')
        mock_log.info.assert_called_once()
        mock_log.warning.assert_not_called()

        mock_log.reset_mock()

        # Bad discovery, not device in use status
        mock_good_disc.return_value = False
        self.vol_drv._discover_volume_on_vios(mock_vios)
        mock_log.warning.assert_not_called()
        mock_log.info.assert_not_called()

        # Bad discovery, device in use status
        mock_disc_hdisk.return_value = (hdisk.LUAStatus.DEVICE_IN_USE, 'dev',
                                        'udid')
        self.vol_drv._discover_volume_on_vios(mock_vios)
        mock_log.warning.assert_called_once()

    def test_get_hdisk_itls(self):
        """Validates the _get_hdisk_itls method."""

        mock_vios = mock.MagicMock()
        mock_vios.get_active_pfc_wwpns.return_value = [I_WWPN_1]

        i_wwpn, t_wwpns, lun = self.vol_drv._get_hdisk_itls(mock_vios)
        self.assertListEqual([I_WWPN_1], i_wwpn)
        self.assertListEqual(['t1'], t_wwpns)
        self.assertEqual('1', lun)

        mock_vios.get_active_pfc_wwpns.return_value = [I_WWPN_2]
        i_wwpn, t_wwpns, lun = self.vol_drv._get_hdisk_itls(mock_vios)
        self.assertListEqual([I_WWPN_2], i_wwpn)
        self.assertListEqual(['t2', 't3'], t_wwpns)

        mock_vios.get_active_pfc_wwpns.return_value = ['12345']
        i_wwpn, t_wwpns, lun = self.vol_drv._get_hdisk_itls(mock_vios)
        self.assertListEqual([], i_wwpn)

    @mock.patch('pypowervm.wrappers.storage.PV', autospec=True)
    @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping',
                autospec=True)
    @mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True)
    def test_add_append_mapping(self, mock_add_map, mock_bld_map, mock_pv):
        def test_afs(add_func):
            mock_vios = mock.create_autospec(pvm_vios.VIOS)
            self.assertEqual(mock_add_map.return_value, add_func(mock_vios))
            mock_pv.bld.assert_called_once_with(self.adpt, 'devname', tag=None)
            mock_bld_map.assert_called_once_with(
                None, mock_vios, self.vol_drv.vm_uuid,
                mock_pv.bld.return_value)
            mock_add_map.assert_called_once_with(
                mock_vios, mock_bld_map.return_value)

        self.wtsk.add_functor_subtask.side_effect = test_afs
        self.vol_drv._add_append_mapping('vios_uuid', 'devname')
        self.wtsk.add_functor_subtask.assert_called_once()

    @mock.patch('nova.virt.powervm.volume.fcvscsi.LOG.warning')
    @mock.patch('nova.virt.powervm.vm.get_instance_wrapper')
    @mock.patch('pypowervm.utils.transaction.FeedTask', autospec=True)
    def test_detach_volume(self, mock_feed_task, mock_get_wrap, mock_log):
        mock_lpar_wrap = mock.MagicMock()
        mock_lpar_wrap.can_modify_io.return_value = True, None
        mock_get_wrap.return_value = mock_lpar_wrap
        mock_detach_ftsk = mock_feed_task.return_value

        # Multiple vioses modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': True},
                                          'vios2': {'vio_modified': True}}}
        mock_detach_ftsk.execute.return_value = mock_ret
        self.vol_drv.detach_volume()
        mock_feed_task.assert_called_once()
        mock_detach_ftsk.add_functor_subtask.assert_called_once_with(
            self.vol_drv._detach_vol_for_vio, provides='vio_modified',
            flag_update=False)
        mock_detach_ftsk.execute.assert_called_once_with()
        self.ftsk.execute.assert_called_once_with()
        mock_log.assert_not_called()

        # 1 vios modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': True},
                                          'vios2': {'vio_modified': False}}}
        mock_detach_ftsk.execute.return_value = mock_ret
        self.vol_drv.detach_volume()
        mock_log.assert_not_called()

        # No vioses modified
        mock_ret = {'wrapper_task_rets': {'vios1': {'vio_modified': False},
                                          'vios2': {'vio_modified': False}}}
        mock_detach_ftsk.execute.return_value = mock_ret
        self.vol_drv.detach_volume()
        mock_log.assert_called_once()

        # Raise if exception during execute
        mock_detach_ftsk.execute.side_effect = Exception()
        self.assertRaises(exc.VolumeDetachFailed, self.vol_drv.detach_volume)

        # Raise if vm in invalid state
        mock_lpar_wrap.can_modify_io.return_value = False, None
        self.assertRaises(exc.VolumeDetachFailed, self.vol_drv.detach_volume)

    @mock.patch('pypowervm.tasks.hdisk.good_discovery', autospec=True)
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_discover_volume_on_vios')
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_add_remove_mapping')
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_add_remove_hdisk')
    @mock.patch('nova.virt.powervm.vm.get_vm_qp')
    def test_detach_vol_for_vio(self, mock_get_qp, mock_rm_hdisk, mock_rm_map,
                                mock_disc_vol, mock_good_disc):
        # Good detach, bdm data is found
        self.vol_drv._set_udid('udid')
        mock_vios = mock.MagicMock()
        mock_vios.uuid = 'vios_uuid'
        mock_vios.hdisk_from_uuid.return_value = 'devname'
        mock_get_qp.return_value = 'part_id'
        ret = self.vol_drv._detach_vol_for_vio(mock_vios)
        self.assertTrue(ret)
        mock_vios.hdisk_from_uuid.assert_called_once_with('udid')
        mock_rm_map.assert_called_once_with('part_id', 'vios_uuid', 'devname')
        mock_rm_hdisk.assert_called_once_with(mock_vios, 'devname')

        mock_vios.reset_mock()
        mock_rm_map.reset_mock()
        mock_rm_hdisk.reset_mock()

        # Good detach, no udid
        self.vol_drv._set_udid(None)
        mock_disc_vol.return_value = 'status', 'devname', 'udid'
        mock_good_disc.return_value = True
        ret = self.vol_drv._detach_vol_for_vio(mock_vios)
        self.assertTrue(ret)
        mock_vios.hdisk_from_uuid.assert_not_called()
        mock_disc_vol.assert_called_once_with(mock_vios)
        mock_good_disc.assert_called_once_with('status', 'devname')
        mock_rm_map.assert_called_once_with('part_id', 'vios_uuid', 'devname')
        mock_rm_hdisk.assert_called_once_with(mock_vios, 'devname')

        mock_vios.reset_mock()
        mock_disc_vol.reset_mock()
        mock_good_disc.reset_mock()
        mock_rm_map.reset_mock()
        mock_rm_hdisk.reset_mock()

        # Good detach, no device name
        self.vol_drv._set_udid('udid')
        mock_vios.hdisk_from_uuid.return_value = None
        ret = self.vol_drv._detach_vol_for_vio(mock_vios)
        self.assertTrue(ret)
        mock_vios.hdisk_from_uuid.assert_called_once_with('udid')
        mock_disc_vol.assert_called_once_with(mock_vios)
        mock_good_disc.assert_called_once_with('status', 'devname')
        mock_rm_map.assert_called_once_with('part_id', 'vios_uuid', 'devname')
        mock_rm_hdisk.assert_called_once_with(mock_vios, 'devname')

        mock_rm_map.reset_mock()
        mock_rm_hdisk.reset_mock()

        # Bad detach, invalid state
        mock_good_disc.return_value = False
        ret = self.vol_drv._detach_vol_for_vio(mock_vios)
        self.assertFalse(ret)
        mock_rm_map.assert_not_called()
        mock_rm_hdisk.assert_not_called()

        # Bad detach, exception discovering volume on vios
        mock_disc_vol.side_effect = Exception()
        ret = self.vol_drv._detach_vol_for_vio(mock_vios)
        self.assertFalse(ret)
        mock_rm_map.assert_not_called()
        mock_rm_hdisk.assert_not_called()

    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
    @mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True)
    def test_add_remove_mapping(self, mock_rm_maps, mock_gen_match):
        def test_afs(rm_func):
            mock_vios = mock.create_autospec(pvm_vios.VIOS)
            self.assertEqual(mock_rm_maps.return_value, rm_func(mock_vios))
            mock_gen_match.assert_called_once_with(
                pvm_stor.PV, names=['devname'])
            mock_rm_maps.assert_called_once_with(
                mock_vios, 'vm_uuid', mock_gen_match.return_value)

        self.wtsk.add_functor_subtask.side_effect = test_afs
        self.vol_drv._add_remove_mapping('vm_uuid', 'vios_uuid', 'devname')
        self.wtsk.add_functor_subtask.assert_called_once()

    @mock.patch('pypowervm.tasks.hdisk.remove_hdisk', autospec=True)
    @mock.patch('taskflow.task.FunctorTask', autospec=True)
    @mock.patch('nova.virt.powervm.volume.fcvscsi.FCVscsiVolumeAdapter.'
                '_check_host_mappings')
    def test_add_remove_hdisk(self, mock_check_maps, mock_functask,
                              mock_rm_hdisk):
        mock_vios = mock.MagicMock()
        mock_vios.uuid = 'uuid'
        mock_check_maps.return_value = True
        self.vol_drv._add_remove_hdisk(mock_vios, 'devname')
        mock_functask.assert_not_called()
        self.ftsk.add_post_execute.assert_not_called()
        mock_check_maps.assert_called_once_with(mock_vios, 'devname')
        self.assertEqual(0, mock_rm_hdisk.call_count)

        def test_functor_task(rm_hdisk, name=None):
            rm_hdisk()
            return 'functor_task'

        mock_check_maps.return_value = False
        mock_functask.side_effect = test_functor_task
        self.vol_drv._add_remove_hdisk(mock_vios, 'devname')
        mock_functask.assert_called_once()
        self.ftsk.add_post_execute.assert_called_once_with('functor_task')
        mock_rm_hdisk.assert_called_once_with(self.adpt, CONF.host,
                                              'devname', 'uuid')

    @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True)
    @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True)
    def test_check_host_mappings(self, mock_find_maps, mock_gen_match):
        mock_vios = mock.MagicMock()
        mock_vios.uuid = 'uuid2'
        mock_v1 = mock.MagicMock(scsi_mappings='scsi_maps_1', uuid='uuid1')
        mock_v2 = mock.MagicMock(scsi_mappings='scsi_maps_2', uuid='uuid2')
        mock_feed = [mock_v1, mock_v2]
        self.ftsk.feed = mock_feed

        # Multiple mappings found
        mock_find_maps.return_value = ['map1', 'map2']
        ret = self.vol_drv._check_host_mappings(mock_vios, 'devname')
        self.assertTrue(ret)
        mock_gen_match.assert_called_once_with(pvm_stor.PV, names=['devname'])
        mock_find_maps.assert_called_once_with('scsi_maps_2', None,
                                               mock_gen_match.return_value)

        # One mapping found
        mock_find_maps.return_value = ['map1']
        ret = self.vol_drv._check_host_mappings(mock_vios, 'devname')
        self.assertFalse(ret)

        # No mappings found
        mock_find_maps.return_value = []
        ret = self.vol_drv._check_host_mappings(mock_vios, 'devname')
        self.assertFalse(ret)
@@ -1,17 +0,0 @@
# Copyright 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.virt.powervm import driver

PowerVMDriver = driver.PowerVMDriver
@@ -1,268 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc

import oslo_log.log as logging
import pypowervm.const as pvm_const
import pypowervm.tasks.scsi_mapper as tsk_map
import pypowervm.util as pvm_u
import pypowervm.wrappers.virtual_io_server as pvm_vios

from nova import exception
from nova.virt.powervm import mgmt
from nova.virt.powervm import vm

LOG = logging.getLogger(__name__)


class DiskType(object):
    BOOT = 'boot'
    IMAGE = 'image'


class IterableToFileAdapter(object):
    """A degenerate file-like so that an iterable can be read like a file.

    The Glance client returns an iterable, but PowerVM requires a file. This
    is the adapter between the two.
    """

    def __init__(self, iterable):
        self.iterator = iterable.__iter__()
        self.remaining_data = ''

    def read(self, size):
        chunk = self.remaining_data
        try:
            while not chunk:
                chunk = next(self.iterator)
        except StopIteration:
            return ''
        return_value = chunk[0:size]
        self.remaining_data = chunk[size:]
        return return_value


class DiskAdapter(metaclass=abc.ABCMeta):

    capabilities = {
        'shared_storage': False,
        'has_imagecache': False,
        'snapshot': False,
    }

    def __init__(self, adapter, host_uuid):
        """Initialize the DiskAdapter.

        :param adapter: The pypowervm adapter.
        :param host_uuid: The UUID of the PowerVM host.
        """
        self._adapter = adapter
        self._host_uuid = host_uuid
        self.mp_uuid = mgmt.mgmt_uuid(self._adapter)

    @abc.abstractproperty
    def _vios_uuids(self):
        """List the UUIDs of the Virtual I/O Servers hosting the storage."""
        raise NotImplementedError()

    @abc.abstractmethod
    def _disk_match_func(self, disk_type, instance):
        """Return a matching function to locate the disk for an instance.

        :param disk_type: One of the DiskType enum values.
        :param instance: The instance whose disk is to be found.
        :return: Callable suitable for the match_func parameter of the
                 pypowervm.tasks.scsi_mapper.find_maps method, with the
                 following specification:
            def match_func(storage_elem)
                param storage_elem: A backing storage element wrapper (VOpt,
                                    VDisk, PV, or LU) to be analyzed.
                return: True if the storage_elem's mapping should be included;
                        False otherwise.
        """
        raise NotImplementedError()

    def get_bootdisk_path(self, instance, vios_uuid):
        """Find the local path for the instance's boot disk.

        :param instance: nova.objects.instance.Instance object owning the
                         requested disk.
        :param vios_uuid: PowerVM UUID of the VIOS to search for mappings.
        :return: Local path for instance's boot disk.
        """
        vm_uuid = vm.get_pvm_uuid(instance)
        match_func = self._disk_match_func(DiskType.BOOT, instance)
        vios_wrap = pvm_vios.VIOS.get(self._adapter, uuid=vios_uuid,
                                      xag=[pvm_const.XAG.VIO_SMAP])
        maps = tsk_map.find_maps(vios_wrap.scsi_mappings,
                                 client_lpar_id=vm_uuid, match_func=match_func)
        if maps:
            return maps[0].server_adapter.backing_dev_name
        return None

    def _get_bootdisk_iter(self, instance):
        """Return an iterator of (storage_elem, VIOS) tuples for the instance.

        This method returns an iterator of (storage_elem, VIOS) tuples, where
        storage_element is a pypowervm storage element wrapper associated with
        the instance boot disk and VIOS is the wrapper of the Virtual I/O
        Server owning that storage element.

        :param instance: nova.objects.instance.Instance object owning the
                         requested disk.
        :return: Iterator of tuples of (storage_elem, VIOS).
        """
        lpar_wrap = vm.get_instance_wrapper(self._adapter, instance)
        match_func = self._disk_match_func(DiskType.BOOT, instance)
        for vios_uuid in self._vios_uuids:
            vios_wrap = pvm_vios.VIOS.get(
                self._adapter, uuid=vios_uuid, xag=[pvm_const.XAG.VIO_SMAP])
            for scsi_map in tsk_map.find_maps(
                    vios_wrap.scsi_mappings, client_lpar_id=lpar_wrap.id,
                    match_func=match_func):
                yield scsi_map.backing_storage, vios_wrap

    def connect_instance_disk_to_mgmt(self, instance):
        """Connect an instance's boot disk to the management partition.

        :param instance: The instance whose boot disk is to be mapped.
        :return stg_elem: The storage element (LU, VDisk, etc.) that was mapped
        :return vios: The EntryWrapper of the VIOS from which the mapping was
                      made.
        :raise InstanceDiskMappingFailed: If the mapping could not be done.
        """
        for stg_elem, vios in self._get_bootdisk_iter(instance):
            msg_args = {'disk_name': stg_elem.name, 'vios_name': vios.name}

            # Create a new mapping. NOTE: If there's an existing mapping on
            # the other VIOS but not this one, we'll create a second mapping
            # here. It would take an extreme sequence of events to get to that
            # point, and the second mapping would be harmless anyway. The
            # alternative would be always checking all VIOSes for existing
            # mappings, which increases the response time of the common case by
            # an entire GET of VIOS+VIO_SMAP.
            LOG.debug("Mapping boot disk %(disk_name)s to the management "
                      "partition from Virtual I/O Server %(vios_name)s.",
                      msg_args, instance=instance)
            try:
                tsk_map.add_vscsi_mapping(self._host_uuid, vios, self.mp_uuid,
                                          stg_elem)
                # If that worked, we're done. add_vscsi_mapping logged.
                return stg_elem, vios
            except Exception:
                LOG.exception("Failed to map boot disk %(disk_name)s to the "
                              "management partition from Virtual I/O Server "
                              "%(vios_name)s.", msg_args, instance=instance)
                # Try the next hit, if available.
        # We either didn't find the boot dev, or failed all attempts to map it.
        raise exception.InstanceDiskMappingFailed(instance_name=instance.name)

    @abc.abstractmethod
    def disconnect_disk_from_mgmt(self, vios_uuid, disk_name):
        """Disconnect a disk from the management partition.

        :param vios_uuid: The UUID of the Virtual I/O Server serving the
                          mapping.
        :param disk_name: The name of the disk to unmap.
        """
        raise NotImplementedError()

    @abc.abstractproperty
    def capacity(self):
        """Capacity of the storage in gigabytes.

        Default is to make the capacity arbitrarily large.
        """
        raise NotImplementedError()

    @abc.abstractproperty
    def capacity_used(self):
        """Capacity of the storage in gigabytes that is used.

        Default is to say none of it is used.
        """
        raise NotImplementedError()

    @staticmethod
    def _get_disk_name(disk_type, instance, short=False):
        """Generate a name for a virtual disk associated with an instance.

        :param disk_type: One of the DiskType enum values.
        :param instance: The instance for which the disk is to be created.
        :param short: If True, the generated name will be limited to 15
                      characters (the limit for virtual disk). If False, it
                      will be limited by the API (79 characters currently).
        :return: The sanitized file name for the disk.
        """
        prefix = '%s_' % (disk_type[0] if short else disk_type)
        base = ('%s_%s' % (instance.name[:8], instance.uuid[:4]) if short
                else instance.name)
        return pvm_u.sanitize_file_name_for_api(
            base, prefix=prefix, max_len=pvm_const.MaxLen.VDISK_NAME if short
            else pvm_const.MaxLen.FILENAME_DEFAULT)

    @abc.abstractmethod
    def detach_disk(self, instance):
        """Detaches the storage adapters from the image disk.

        :param instance: instance to detach the image for.
        :return: A list of all the backing storage elements that were
                 detached from the I/O Server and VM.
        """
        raise NotImplementedError()

    @abc.abstractmethod
    def delete_disks(self, storage_elems):
        """Removes the disks specified by the mappings.

        :param storage_elems: A list of the storage elements that are to be
                              deleted. Derived from the return value from
                              detach_disk.
        """
        raise NotImplementedError()

    @abc.abstractmethod
    def create_disk_from_image(self, context, instance, image_meta):
        """Creates a disk and copies the specified image to it.

        Cleans up created disk if an error occurs.
        :param context: nova context used to retrieve image from glance
        :param instance: instance to create the disk for.
        :param image_meta: nova.objects.ImageMeta object with the metadata of
                           the image of the instance.
        :return: The backing pypowervm storage object that was created.
        """
        raise NotImplementedError()

    @abc.abstractmethod
    def attach_disk(self, instance, disk_info, stg_ftsk):
        """Attaches the disk image to the Virtual Machine.

        :param instance: nova instance to attach the disk to.
        :param disk_info: The pypowervm storage element returned from
                          create_disk_from_image. Ex. VOptMedia, VDisk, LU,
                          or PV.
        :param stg_ftsk: (Optional) The pypowervm transaction FeedTask for the
                         I/O Operations. If provided, the Virtual I/O Server
                         mapping updates will be added to the FeedTask. This
                         defers the updates to some later point in time. If
                         the FeedTask is not provided, the updates will be run
                         immediately when this method is executed.
        """
        raise NotImplementedError()
@@ -1,211 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import oslo_log.log as logging
from pypowervm import const as pvm_const
from pypowervm.tasks import scsi_mapper as tsk_map
from pypowervm.tasks import storage as tsk_stg
from pypowervm.wrappers import storage as pvm_stg
from pypowervm.wrappers import virtual_io_server as pvm_vios

from nova import conf
from nova import exception
from nova.image import glance
from nova.virt.powervm.disk import driver as disk_dvr
from nova.virt.powervm import vm

LOG = logging.getLogger(__name__)
CONF = conf.CONF
IMAGE_API = glance.API()


class LocalStorage(disk_dvr.DiskAdapter):

    def __init__(self, adapter, host_uuid):
        super(LocalStorage, self).__init__(adapter, host_uuid)

        self.capabilities = {
            'shared_storage': False,
            'has_imagecache': False,
            # NOTE(efried): 'snapshot' capability set dynamically below.
        }

        # Query to get the Volume Group UUID
        if not CONF.powervm.volume_group_name:
            raise exception.OptRequiredIfOtherOptValue(
                if_opt='disk_driver', if_value='localdisk',
                then_opt='volume_group_name')
        self.vg_name = CONF.powervm.volume_group_name
        vios_w, vg_w = tsk_stg.find_vg(adapter, self.vg_name)
        self._vios_uuid = vios_w.uuid
        self.vg_uuid = vg_w.uuid
        # Set the 'snapshot' capability dynamically. If we're hosting I/O on
        # the management partition, we can snapshot. If we're hosting I/O on
        # traditional VIOS, we are limited by the fact that a VSCSI device
        # can't be mapped to two partitions (the VIOS and the management) at
        # once.
        self.capabilities['snapshot'] = self.mp_uuid == self._vios_uuid
        LOG.info("Local Storage driver initialized: volume group: '%s'",
                 self.vg_name)
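
        # Illustrative configuration selecting this adapter (values are
        # examples only):
        #   [powervm]
        #   disk_driver = localdisk
        #   volume_group_name = rootvg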

    @property
    def _vios_uuids(self):
        """List the UUIDs of the Virtual I/O Servers hosting the storage.

        For localdisk, there's only one.
        """
        return [self._vios_uuid]

    @staticmethod
    def _disk_match_func(disk_type, instance):
        """Return a matching function to locate the disk for an instance.

        :param disk_type: One of the DiskType enum values.
        :param instance: The instance whose disk is to be found.
        :return: Callable suitable for the match_func parameter of the
                 pypowervm.tasks.scsi_mapper.find_maps method.
        """
        disk_name = LocalStorage._get_disk_name(
            disk_type, instance, short=True)
        return tsk_map.gen_match_func(pvm_stg.VDisk, names=[disk_name])

    @property
    def capacity(self):
        """Capacity of the storage in gigabytes."""
        vg_wrap = self._get_vg_wrap()
        return float(vg_wrap.capacity)

    @property
    def capacity_used(self):
        """Capacity of the storage in gigabytes that is used."""
        vg_wrap = self._get_vg_wrap()
        # Subtract available from capacity
        return float(vg_wrap.capacity) - float(vg_wrap.available_size)

    def delete_disks(self, storage_elems):
        """Removes the specified disks.

        :param storage_elems: A list of the storage elements that are to be
                              deleted. Derived from the return value from
                              detach_disk.
        """
        # All of localdisk is done against the volume group. So reload
        # that (to get new etag) and then update against it.
        tsk_stg.rm_vg_storage(self._get_vg_wrap(), vdisks=storage_elems)

    def detach_disk(self, instance):
        """Detaches the storage adapters from the image disk.

        :param instance: Instance to disconnect the image for.
        :return: A list of all the backing storage elements that were
                 disconnected from the I/O Server and VM.
        """
        lpar_uuid = vm.get_pvm_uuid(instance)

        # Build the match function
        match_func = tsk_map.gen_match_func(pvm_stg.VDisk)

        vios_w = pvm_vios.VIOS.get(
            self._adapter, uuid=self._vios_uuid, xag=[pvm_const.XAG.VIO_SMAP])

        # Remove the mappings.
        mappings = tsk_map.remove_maps(
            vios_w, lpar_uuid, match_func=match_func)

        # Update the VIOS with the removed mappings.
        vios_w.update()

        return [x.backing_storage for x in mappings]

    def disconnect_disk_from_mgmt(self, vios_uuid, disk_name):
        """Disconnect a disk from the management partition.

        :param vios_uuid: The UUID of the Virtual I/O Server serving the
                          mapping.
        :param disk_name: The name of the disk to unmap.
        """
        tsk_map.remove_vdisk_mapping(self._adapter, vios_uuid, self.mp_uuid,
                                     disk_names=[disk_name])
        LOG.info("Unmapped boot disk %(disk_name)s from the management "
                 "partition from Virtual I/O Server %(vios_name)s.",
                 {'disk_name': disk_name, 'mp_uuid': self.mp_uuid,
                  'vios_name': vios_uuid})

    def create_disk_from_image(self, context, instance, image_meta):
        """Creates a disk and copies the specified image to it.

        Cleans up the created disk if an error occurs.

        :param context: nova context used to retrieve image from glance
        :param instance: instance to create the disk for.
        :param image_meta: The metadata of the image of the instance.
        :return: The backing pypowervm storage object that was created.
        """
        LOG.info('Create disk.', instance=instance)

        return self._upload_image(context, instance, image_meta)

    # TODO(esberglu): Copy vdisk when implementing image cache.

    def _upload_image(self, context, instance, image_meta):
        """Upload a new image.

        :param context: Nova context used to retrieve image from glance.
        :param image_meta: The metadata of the image of the instance.
        :return: The virtual disk containing the image.
        """

        img_name = self._get_disk_name(disk_dvr.DiskType.BOOT, instance,
                                       short=True)

        # TODO(esberglu) Add check for cached image when adding imagecache.

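        # upload_new_vdisk returns a tuple; element [0] is the newly created
        # virtual disk wrapper, which is all the caller needs here.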
        return tsk_stg.upload_new_vdisk(
            self._adapter, self._vios_uuid, self.vg_uuid,
            disk_dvr.IterableToFileAdapter(
                IMAGE_API.download(context, image_meta.id)), img_name,
            image_meta.size, d_size=image_meta.size,
            upload_type=tsk_stg.UploadType.IO_STREAM,
            file_format=image_meta.disk_format)[0]

    def attach_disk(self, instance, disk_info, stg_ftsk):
        """Attaches the disk image to the Virtual Machine.

        :param instance: nova instance to connect the disk to.
        :param disk_info: The pypowervm storage element returned from
                          create_disk_from_image. Ex. VOptMedia, VDisk, LU,
                          or PV.
        :param stg_ftsk: The pypowervm transaction FeedTask for the I/O
                         Operations. The Virtual I/O Server mapping updates
                         will be added to the FeedTask. This defers the
                         updates to some later point in time.
        """
        lpar_uuid = vm.get_pvm_uuid(instance)

        def add_func(vios_w):
            LOG.info("Adding logical volume disk connection to VIOS "
                     "%(vios)s.", {'vios': vios_w.name}, instance=instance)
            mapping = tsk_map.build_vscsi_mapping(
                self._host_uuid, vios_w, lpar_uuid, disk_info)
            return tsk_map.add_map(vios_w, mapping)

        stg_ftsk.wrapper_tasks[self._vios_uuid].add_functor_subtask(add_func)

    def _get_vg_wrap(self):
        return pvm_stg.VG.get(self._adapter, uuid=self.vg_uuid,
                              parent_type=pvm_vios.VIOS,
                              parent_uuid=self._vios_uuid)
@@ -1,258 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import random

import oslo_log.log as logging
from pypowervm import const as pvm_const
from pypowervm import exceptions as pvm_exc
from pypowervm.tasks import cluster_ssp as tsk_cs
from pypowervm.tasks import partition as tsk_par
from pypowervm.tasks import scsi_mapper as tsk_map
from pypowervm.tasks import storage as tsk_stg
import pypowervm.util as pvm_u
import pypowervm.wrappers.cluster as pvm_clust
import pypowervm.wrappers.storage as pvm_stg

from nova import exception
from nova.image import glance
from nova.virt.powervm.disk import driver as disk_drv
from nova.virt.powervm import vm

LOG = logging.getLogger(__name__)

IMAGE_API = glance.API()


class SSPDiskAdapter(disk_drv.DiskAdapter):
    """Provides a disk adapter for Shared Storage Pools.

    Shared Storage Pools are a clustered file system technology that can link
    together Virtual I/O Servers.

    This adapter provides the connection for nova ephemeral storage (not
    Cinder) to connect to virtual machines.
    """

    capabilities = {
        'shared_storage': True,
        # NOTE(efried): Whereas the SSP disk driver definitely does image
        # caching, it's not through the nova.virt.imagecache.ImageCacheManager
        # API. Setting `has_imagecache` to True here would have the side
        # effect of having a periodic task try to call this class's
        # manage_image_cache method (not implemented here; and a no-op in the
        # superclass) which would be harmless, but unnecessary.
        'has_imagecache': False,
        'snapshot': True,
    }

    def __init__(self, adapter, host_uuid):
        """Initialize the SSPDiskAdapter.

        :param adapter: pypowervm.adapter.Adapter for the PowerVM REST API.
        :param host_uuid: PowerVM UUID of the managed system.
        """
        super(SSPDiskAdapter, self).__init__(adapter, host_uuid)

        try:
            self._clust = pvm_clust.Cluster.get(self._adapter)[0]
            self._ssp = pvm_stg.SSP.get_by_href(
                self._adapter, self._clust.ssp_uri)
            self._tier = tsk_stg.default_tier_for_ssp(self._ssp)
        except pvm_exc.Error:
            LOG.exception("A unique PowerVM Cluster and Shared Storage Pool "
                          "is required in the default Tier.")
            raise exception.NotFound()

        LOG.info(
            "SSP Storage driver initialized. Cluster '%(clust_name)s'; "
            "SSP '%(ssp_name)s'; Tier '%(tier_name)s'",
            {'clust_name': self._clust.name, 'ssp_name': self._ssp.name,
             'tier_name': self._tier.name})
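
        # Illustrative configuration selecting this adapter:
        #   [powervm]
        #   disk_driver = ssp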

    @property
    def capacity(self):
        """Capacity of the storage in gigabytes."""
        # Retrieving the Tier is faster (because we don't have to refresh
        # the LUs).
        return float(self._tier.refresh().capacity)

    @property
    def capacity_used(self):
        """Capacity of the storage in gigabytes that is used."""
        self._ssp = self._ssp.refresh()
        return float(self._ssp.capacity) - float(self._ssp.free_space)

    def detach_disk(self, instance):
        """Detaches the storage adapters from the disk.

        :param instance: instance from which to detach the image.
        :return: A list of all the backing storage elements that were detached
                 from the I/O Server and VM.
        """
        stg_ftsk = tsk_par.build_active_vio_feed_task(
            self._adapter, name='ssp', xag=[pvm_const.XAG.VIO_SMAP])

        lpar_uuid = vm.get_pvm_uuid(instance)
        match_func = tsk_map.gen_match_func(pvm_stg.LU)

        def rm_func(vwrap):
            LOG.info("Removing SSP disk connection to VIOS %s.",
                     vwrap.name, instance=instance)
            return tsk_map.remove_maps(vwrap, lpar_uuid,
                                       match_func=match_func)

        # Remove the mapping from *each* VIOS on the LPAR's host.
        # The LPAR's host has to be self._host_uuid, else the PowerVM API will
        # fail.
        #
        # Note - this may not be all the VIOSes on the system...just the ones
        # in the SSP cluster.
        #
        # The mappings will normally be the same on all VIOSes, unless a VIOS
        # was down when a disk was added. So for the return value, we need to
        # collect the union of all relevant mappings from all VIOSes.
        lu_set = set()
        for vios_uuid in self._vios_uuids:
            # Add the remove for the VIO
            stg_ftsk.wrapper_tasks[vios_uuid].add_functor_subtask(rm_func)

            # Find the active LUs so that a delete op knows what to remove.
            vios_w = stg_ftsk.wrapper_tasks[vios_uuid].wrapper
            mappings = tsk_map.find_maps(vios_w.scsi_mappings,
                                         client_lpar_id=lpar_uuid,
                                         match_func=match_func)
            if mappings:
                lu_set.update([x.backing_storage for x in mappings])

        stg_ftsk.execute()

        return list(lu_set)

    def delete_disks(self, storage_elems):
        """Removes the disks specified by the mappings.

        :param storage_elems: A list of the storage elements (LU
                              ElementWrappers) that are to be deleted. Derived
                              from the return value from detach_disk.
        """
        tsk_stg.rm_tier_storage(storage_elems, tier=self._tier)

    def create_disk_from_image(self, context, instance, image_meta):
        """Creates a boot disk and links the specified image to it.

        If the specified image has not already been uploaded, an Image LU is
        created for it. A Disk LU is then created for the instance and linked
        to the Image LU.

        :param context: nova context used to retrieve image from glance
        :param instance: instance to create the disk for.
        :param nova.objects.ImageMeta image_meta:
            The metadata of the image of the instance.
        :return: The backing pypowervm LU storage object that was created.
        """
        LOG.info('SSP: Create boot disk from image %s.', image_meta.id,
                 instance=instance)

        image_lu = tsk_cs.get_or_upload_image_lu(
            self._tier, pvm_u.sanitize_file_name_for_api(
                image_meta.name, prefix=disk_drv.DiskType.IMAGE + '_',
                suffix='_' + image_meta.checksum),
            random.choice(self._vios_uuids), disk_drv.IterableToFileAdapter(
                IMAGE_API.download(context, image_meta.id)), image_meta.size,
            upload_type=tsk_stg.UploadType.IO_STREAM)

        boot_lu_name = pvm_u.sanitize_file_name_for_api(
            instance.name, prefix=disk_drv.DiskType.BOOT + '_')
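        # Illustrative names: for an image called 'fedora' and an instance
        # called 'inst0001', the image LU comes out something like
        # 'image_fedora_<checksum>' and the boot LU like 'boot_inst0001'.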

        LOG.info('SSP: Disk name is %s', boot_lu_name, instance=instance)

        return tsk_stg.crt_lu(
            self._tier, boot_lu_name, instance.flavor.root_gb,
            typ=pvm_stg.LUType.DISK, clone=image_lu)[1]

    def attach_disk(self, instance, disk_info, stg_ftsk):
        """Connects the disk image to the Virtual Machine.

        :param instance: nova instance to which to attach the disk.
        :param disk_info: The pypowervm storage element returned from
                          create_disk_from_image. Ex. VOptMedia, VDisk, LU,
                          or PV.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        """
        # Create the LU structure
        lu = pvm_stg.LU.bld_ref(self._adapter, disk_info.name, disk_info.udid)
        lpar_uuid = vm.get_pvm_uuid(instance)

        # This is the delayed apply mapping
        def add_func(vios_w):
            LOG.info("Attaching SSP disk from VIOS %s.",
                     vios_w.name, instance=instance)
            mapping = tsk_map.build_vscsi_mapping(
                self._host_uuid, vios_w, lpar_uuid, lu)
            return tsk_map.add_map(vios_w, mapping)

        # Add the mapping to *each* VIOS on the LPAR's host.
        # The LPAR's host has to be self._host_uuid, else the PowerVM API will
        # fail.
        #
        # Note: this may not be all the VIOSes on the system - just the ones
        # in the SSP cluster.
        for vios_uuid in self._vios_uuids:
            stg_ftsk.wrapper_tasks[vios_uuid].add_functor_subtask(add_func)

    @property
    def _vios_uuids(self):
        """List the UUIDs of our cluster's VIOSes on this host.

        (If a VIOS is not on this host, we can't interact with it, even if its
        URI and therefore its UUID happen to be available in the pypowervm
        wrapper.)

        :return: A list of VIOS UUID strings.
        """
        ret = []
        for n in self._clust.nodes:
            # Skip any nodes for which we don't have the VIOS uuid or uri
            if not (n.vios_uuid and n.vios_uri):
                continue
            if self._host_uuid == pvm_u.get_req_path_uuid(
                    n.vios_uri, preserve_case=True, root=True):
                ret.append(n.vios_uuid)
        return ret

    def disconnect_disk_from_mgmt(self, vios_uuid, disk_name):
        """Disconnect a disk from the management partition.

        :param vios_uuid: The UUID of the Virtual I/O Server serving the
                          mapping.
        :param disk_name: The name of the disk to unmap.
        """
        tsk_map.remove_lu_mapping(self._adapter, vios_uuid, self.mp_uuid,
                                  disk_names=[disk_name])
        LOG.info("Unmapped boot disk %(disk_name)s from the management "
                 "partition from Virtual I/O Server %(vios_uuid)s.",
                 {'disk_name': disk_name, 'mp_uuid': self.mp_uuid,
                  'vios_uuid': vios_uuid})

    @staticmethod
    def _disk_match_func(disk_type, instance):
        """Return a matching function to locate the disk for an instance.

        :param disk_type: One of the DiskType enum values.
        :param instance: The instance whose disk is to be found.
        :return: Callable suitable for the match_func parameter of the
                 pypowervm.tasks.scsi_mapper.find_maps method.
        """
        disk_name = SSPDiskAdapter._get_disk_name(disk_type, instance)
        return tsk_map.gen_match_func(pvm_stg.LU, names=[disk_name])
@@ -1,709 +0,0 @@
# Copyright 2014, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Connection to PowerVM hypervisor through NovaLink."""

import os_resource_classes as orc
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import importutils
from pypowervm import adapter as pvm_apt
from pypowervm import const as pvm_const
from pypowervm import exceptions as pvm_exc
from pypowervm.helpers import log_helper as log_hlp
from pypowervm.helpers import vios_busy as vio_hlp
from pypowervm.tasks import partition as pvm_par
from pypowervm.tasks import storage as pvm_stor
from pypowervm.tasks import vterm as pvm_vterm
from pypowervm.wrappers import managed_system as pvm_ms
from taskflow.patterns import linear_flow as tf_lf

from nova.compute import task_states
from nova import conf as cfg
from nova.console import type as console_type
from nova import exception as exc
from nova.i18n import _
from nova.image import glance
from nova.virt import configdrive
from nova.virt import driver
from nova.virt.powervm import host as pvm_host
from nova.virt.powervm.tasks import base as tf_base
from nova.virt.powervm.tasks import image as tf_img
from nova.virt.powervm.tasks import network as tf_net
from nova.virt.powervm.tasks import storage as tf_stg
from nova.virt.powervm.tasks import vm as tf_vm
from nova.virt.powervm import vm
from nova.virt.powervm import volume
from nova.virt.powervm.volume import fcvscsi

LOG = logging.getLogger(__name__)
CONF = cfg.CONF

DISK_ADPT_NS = 'nova.virt.powervm.disk'
DISK_ADPT_MAPPINGS = {
    'localdisk': 'localdisk.LocalStorage',
    'ssp': 'ssp.SSPDiskAdapter'
}
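# For example, setting [powervm]/disk_driver = ssp (matched
# case-insensitively) resolves to nova.virt.powervm.disk.ssp.SSPDiskAdapter
# via import_object_ns in init_host below.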


class PowerVMDriver(driver.ComputeDriver):
    """PowerVM NovaLink Implementation of Compute Driver.

    https://wiki.openstack.org/wiki/PowerVM
    """

    def __init__(self, virtapi):
        # NOTE(edmondsw) some of these will be dynamic in future, so putting
        # capabilities on the instance rather than on the class.
        self.capabilities = {
            'has_imagecache': False,
            'supports_bfv_rescue': False,
            'supports_evacuate': False,
            'supports_migrate_to_same_host': False,
            'supports_attach_interface': True,
            'supports_device_tagging': False,
            'supports_tagged_attach_interface': False,
            'supports_tagged_attach_volume': False,
            'supports_extend_volume': True,
            'supports_multiattach': False,
            'supports_trusted_certs': False,
            'supports_pcpus': False,
            'supports_accelerators': False,
            'supports_vtpm': False,
            'supports_secure_boot': False,
            'supports_socket_pci_numa_affinity': False,
            'supports_remote_managed_ports': False,

            # Supported image types
            "supports_image_type_aki": False,
            "supports_image_type_ami": False,
            "supports_image_type_ari": False,
            "supports_image_type_iso": False,
            "supports_image_type_qcow2": False,
            "supports_image_type_raw": True,
            "supports_image_type_vdi": False,
            "supports_image_type_vhd": False,
            "supports_image_type_vhdx": False,
            "supports_image_type_vmdk": False,
            "supports_image_type_ploop": False,
        }
        super(PowerVMDriver, self).__init__(virtapi)

    def init_host(self, host):
        """Initialize anything that is necessary for the driver to function.

        Includes catching up with currently running VMs on the given host.
        """
        LOG.warning(
            'The powervm virt driver is deprecated and may be removed in a '
            'future release. The driver is not tested by the OpenStack '
            'project nor does it have clear maintainers and thus its '
            'quality cannot be ensured. If you are using the driver in '
            'production, please let us know on the openstack-discuss '
            'mailing list or on IRC.'
        )

        # Build the adapter. May need to attempt the connection multiple times
        # in case the PowerVM management API service is starting.
        # TODO(efried): Implement async compute service enable/disable like
        # I73a34eb6e0ca32d03e54d12a5e066b2ed4f19a61
        self.adapter = pvm_apt.Adapter(
            pvm_apt.Session(conn_tries=60),
            helpers=[log_hlp.log_helper, vio_hlp.vios_busy_retry_helper])
        # Make sure the Virtual I/O Server(s) are available.
        pvm_par.validate_vios_ready(self.adapter)
        self.host_wrapper = pvm_ms.System.get(self.adapter)[0]

        # Do a scrub of the I/O plane to make sure the system is in good shape
        LOG.info("Clearing stale I/O connections on driver init.")
        pvm_stor.ComprehensiveScrub(self.adapter).execute()

        # Initialize the disk adapter
        self.disk_dvr = importutils.import_object_ns(
            DISK_ADPT_NS, DISK_ADPT_MAPPINGS[CONF.powervm.disk_driver.lower()],
            self.adapter, self.host_wrapper.uuid)
        self.image_api = glance.API()

        LOG.info("The PowerVM compute driver has been initialized.")

    @staticmethod
    def _log_operation(op, instance):
        """Log entry point of driver operations."""
        LOG.info('Operation: %(op)s. Virtual machine display name: '
                 '%(display_name)s, name: %(name)s',
                 {'op': op, 'display_name': instance.display_name,
                  'name': instance.name}, instance=instance)

    def get_info(self, instance, use_cache=True):
        """Get the current status of an instance.

        :param instance: nova.objects.instance.Instance object
        :param use_cache: unused in this driver
        :returns: An InstanceInfo object.
        """
        return vm.get_vm_info(self.adapter, instance)

    def list_instances(self):
        """Return the names of all the instances known to the virt host.

        :return: VM Names as a list.
        """
        return vm.get_lpar_names(self.adapter)

    def get_available_nodes(self, refresh=False):
        """Returns nodenames of all nodes managed by the compute service.

        This method is for multi compute-nodes support. If a driver supports
        multi compute-nodes, this method returns a list of nodenames managed
        by the service. Otherwise, this method should return
        [hypervisor_hostname].
        """

        return [CONF.host]

    def get_available_resource(self, nodename):
        """Retrieve resource information.

        This method is called when nova-compute launches, and as part of a
        periodic task.

        :param nodename: Node from which the caller wants to get resources.
                         A driver that manages only one node can safely ignore
                         this.
        :return: Dictionary describing resources.
        """
        # Do this here so it refreshes each time this method is called.
        self.host_wrapper = pvm_ms.System.get(self.adapter)[0]
        return self._get_available_resource()

    def _get_available_resource(self):
        # Get host information
        data = pvm_host.build_host_resource_from_ms(self.host_wrapper)

        # Add the disk information
        data["local_gb"] = self.disk_dvr.capacity
        data["local_gb_used"] = self.disk_dvr.capacity_used

        return data

    def update_provider_tree(self, provider_tree, nodename, allocations=None):
        """Update a ProviderTree with current provider and inventory data.

        :param nova.compute.provider_tree.ProviderTree provider_tree:
            A nova.compute.provider_tree.ProviderTree object representing all
            the providers in the tree associated with the compute node, and any
            sharing providers (those with the ``MISC_SHARES_VIA_AGGREGATE``
            trait) associated via aggregate with any of those providers (but
            not *their* tree- or aggregate-associated providers), as currently
            known by placement.
        :param nodename:
            String name of the compute node (i.e.
            ComputeNode.hypervisor_hostname) for which the caller is requesting
            updated provider information.
        :param allocations: Currently ignored by this driver.
        """
        # Get (legacy) resource information. Same as get_available_resource,
        # but we don't need to refresh self.host_wrapper as it was *just*
        # refreshed by get_available_resource in the resource tracker's
        # update_available_resource flow.
        data = self._get_available_resource()

        # NOTE(yikun): If the inv record does not exist, the allocation_ratio
        # will use the CONF.xxx_allocation_ratio value if xxx_allocation_ratio
        # is set, and fallback to use the initial_xxx_allocation_ratio
        # otherwise.
        inv = provider_tree.data(nodename).inventory
        ratios = self._get_allocation_ratios(inv)
        # TODO(efried): Fix these to reflect something like reality
        cpu_reserved = CONF.reserved_host_cpus
        mem_reserved = CONF.reserved_host_memory_mb
        disk_reserved = self._get_reserved_host_disk_gb_from_config()

        inventory = {
            orc.VCPU: {
                'total': data['vcpus'],
                'max_unit': data['vcpus'],
                'allocation_ratio': ratios[orc.VCPU],
                'reserved': cpu_reserved,
            },
            orc.MEMORY_MB: {
                'total': data['memory_mb'],
                'max_unit': data['memory_mb'],
                'allocation_ratio': ratios[orc.MEMORY_MB],
                'reserved': mem_reserved,
            },
            orc.DISK_GB: {
                # TODO(efried): Proper DISK_GB sharing when SSP driver in play
                'total': int(data['local_gb']),
                'max_unit': int(data['local_gb']),
                'allocation_ratio': ratios[orc.DISK_GB],
                'reserved': disk_reserved,
            },
        }
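        # Illustrative values: a host reporting 16 vcpus and 65536 MB of
        # memory yields VCPU total/max_unit 16 and MEMORY_MB total/max_unit
        # 65536 here.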
        provider_tree.update_inventory(nodename, inventory)

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, allocations, network_info=None,
              block_device_info=None, power_on=True, accel_info=None):
        """Create a new instance/VM/domain on the virtualization platform.

        Once this successfully completes, the instance should be
        running (power_state.RUNNING).

        If this fails, any partial instance should be completely
        cleaned up, and the virtualization platform should be in the state
        that it was before this call began.

        :param context: security context
        :param instance: nova.objects.instance.Instance
                         This function should use the data there to guide
                         the creation of the new instance.
        :param nova.objects.ImageMeta image_meta:
            The metadata of the image of the instance.
        :param injected_files: User files to inject into instance.
        :param admin_password: Administrator password to set in instance.
        :param allocations: Information about resources allocated to the
                            instance via placement, of the form returned by
                            SchedulerReportClient.get_allocations_for_consumer.
        :param network_info: instance network information
        :param block_device_info: Information about block devices to be
                                  attached to the instance.
        :param power_on: True if the instance should be powered on, False
                         otherwise
        """
        self._log_operation('spawn', instance)
        # Define the flow
        flow_spawn = tf_lf.Flow("spawn")

        # This FeedTask accumulates VIOS storage connection operations to be
        # run in parallel. Include both SCSI and fibre channel mappings for
        # the scrubber.
        stg_ftsk = pvm_par.build_active_vio_feed_task(
            self.adapter, xag={pvm_const.XAG.VIO_SMAP, pvm_const.XAG.VIO_FMAP})

        flow_spawn.add(tf_vm.Create(
            self.adapter, self.host_wrapper, instance, stg_ftsk))

        # Create a flow for the IO
        flow_spawn.add(tf_net.PlugVifs(
            self.virtapi, self.adapter, instance, network_info))
        flow_spawn.add(tf_net.PlugMgmtVif(
            self.adapter, instance))

        # Create the boot image.
        flow_spawn.add(tf_stg.CreateDiskForImg(
            self.disk_dvr, context, instance, image_meta))
        # Connects up the disk to the LPAR
        flow_spawn.add(tf_stg.AttachDisk(
            self.disk_dvr, instance, stg_ftsk=stg_ftsk))

        # Extract the block devices.
        bdms = driver.block_device_info_get_mapping(block_device_info)

        # Determine if there are volumes to connect. If so, add a connection
        # for each type.
        for bdm, vol_drv in self._vol_drv_iter(context, instance, bdms,
                                               stg_ftsk=stg_ftsk):
            # Connect the volume. This will update the connection_info.
            flow_spawn.add(tf_stg.AttachVolume(vol_drv))

        # If the config drive is needed, add those steps. Should be done
        # after all the other I/O.
        if configdrive.required_by(instance):
            flow_spawn.add(tf_stg.CreateAndConnectCfgDrive(
                self.adapter, instance, injected_files, network_info,
                stg_ftsk, admin_pass=admin_password))

        # Add the transaction manager flow at the end of the 'I/O
        # connection' tasks. This will run all the connections in parallel.
        flow_spawn.add(stg_ftsk)

        # Last step is to power on the system.
        flow_spawn.add(tf_vm.PowerOn(self.adapter, instance))

        # Run the flow.
        tf_base.run(flow_spawn, instance=instance)

    def destroy(self, context, instance, network_info, block_device_info=None,
                destroy_disks=True):
        """Destroy the specified instance from the Hypervisor.

        If the instance is not found (for example if networking failed), this
        function should still succeed. It's probably a good idea to log a
        warning in that case.

        :param context: security context
        :param instance: Instance object as returned by DB layer.
        :param network_info: instance network information
        :param block_device_info: Information about block devices that should
                                  be detached from the instance.
        :param destroy_disks: Indicates if disks should be destroyed
        """
        # TODO(thorst, efried) Add resize checks for destroy

        self._log_operation('destroy', instance)

        def _setup_flow_and_run():
            # Define the flow
            flow = tf_lf.Flow("destroy")

            # Power Off the LPAR. If its disks are about to be deleted, issue
            # a hard shutdown.
            flow.add(tf_vm.PowerOff(self.adapter, instance,
                                    force_immediate=destroy_disks))

            # The FeedTask accumulates storage disconnection tasks to be run
            # in parallel.
            stg_ftsk = pvm_par.build_active_vio_feed_task(
                self.adapter, xag=[pvm_const.XAG.VIO_SMAP])

            # Call the unplug VIFs task. While CNAs get removed from the LPAR
            # directly on the destroy, this clears up the I/O Host side.
            flow.add(tf_net.UnplugVifs(self.adapter, instance, network_info))

            # Add the disconnect/deletion of the vOpt to the transaction
            # manager.
            if configdrive.required_by(instance):
                flow.add(tf_stg.DeleteVOpt(
                    self.adapter, instance, stg_ftsk=stg_ftsk))

            # Extract the block devices.
            bdms = driver.block_device_info_get_mapping(block_device_info)

            # Determine if there are volumes to detach. If so, remove each
            # volume (within the transaction manager)
            for bdm, vol_drv in self._vol_drv_iter(
                    context, instance, bdms, stg_ftsk=stg_ftsk):
                flow.add(tf_stg.DetachVolume(vol_drv))

            # Detach the disk storage adapters
            flow.add(tf_stg.DetachDisk(self.disk_dvr, instance))

            # Accumulated storage disconnection tasks next
            flow.add(stg_ftsk)

            # Delete the storage disks
            if destroy_disks:
                flow.add(tf_stg.DeleteDisk(self.disk_dvr))

            # TODO(thorst, efried) Add LPAR id based scsi map clean up task
            flow.add(tf_vm.Delete(self.adapter, instance))

            # Build the engine & run!
            tf_base.run(flow, instance=instance)

        try:
            _setup_flow_and_run()
        except exc.InstanceNotFound:
            LOG.debug('VM was not found during destroy operation.',
                      instance=instance)
            return
        except pvm_exc.Error as e:
            LOG.exception("PowerVM error during destroy.", instance=instance)
            # Convert to a Nova exception
            raise exc.InstanceTerminationFailure(reason=str(e))

    def snapshot(self, context, instance, image_id, update_task_state):
        """Snapshots the specified instance.

        :param context: security context
        :param instance: nova.objects.instance.Instance
        :param image_id: Reference to a pre-created image that will hold the
                         snapshot.
        :param update_task_state: Callback function to update the task_state
            on the instance while the snapshot operation progresses. The
            function takes a task_state argument and an optional
            expected_task_state kwarg which defaults to
            nova.compute.task_states.IMAGE_SNAPSHOT. See
            nova.objects.instance.Instance.save for expected_task_state usage.
        """

        if not self.disk_dvr.capabilities.get('snapshot'):
            raise exc.NotSupportedWithOption(
                message=_("The snapshot operation is not supported in "
                          "conjunction with a [powervm]/disk_driver setting "
                          "of %s.") % CONF.powervm.disk_driver)

        self._log_operation('snapshot', instance)

        # Define the flow.
        flow = tf_lf.Flow("snapshot")

        # Notify that we're starting the process.
        flow.add(tf_img.UpdateTaskState(update_task_state,
                                        task_states.IMAGE_PENDING_UPLOAD))

        # Connect the instance's boot disk to the management partition, and
        # scan the scsi bus and bring the device into the management
        # partition.
        flow.add(tf_stg.InstanceDiskToMgmt(self.disk_dvr, instance))

        # Notify that the upload is in progress.
        flow.add(tf_img.UpdateTaskState(
            update_task_state, task_states.IMAGE_UPLOADING,
            expected_state=task_states.IMAGE_PENDING_UPLOAD))

        # Stream the disk to glance.
        flow.add(tf_img.StreamToGlance(context, self.image_api, image_id,
                                       instance))

        # Disconnect the boot disk from the management partition and delete
        # the device.
        flow.add(tf_stg.RemoveInstanceDiskFromMgmt(self.disk_dvr, instance))

        # Run the flow.
        tf_base.run(flow, instance=instance)

    def power_off(self, instance, timeout=0, retry_interval=0):
        """Power off the specified instance.

        :param instance: nova.objects.instance.Instance
        :param timeout: time to wait for GuestOS to shutdown
        :param retry_interval: How often to signal guest while
                               waiting for it to shutdown
        """
        self._log_operation('power_off', instance)
        force_immediate = (timeout == 0)
        timeout = timeout or None
        vm.power_off(self.adapter, instance, force_immediate=force_immediate,
                     timeout=timeout)
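        # For example (illustrative), power_off(instance) issues an immediate
        # hard power-off, while power_off(instance, timeout=120) requests a
        # guest OS shutdown and waits up to 120 seconds.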

    def power_on(self, context, instance, network_info,
                 block_device_info=None, accel_info=None):
        """Power on the specified instance.

        :param instance: nova.objects.instance.Instance
        """
        self._log_operation('power_on', instance)
        vm.power_on(self.adapter, instance)

    def reboot(self, context, instance, network_info, reboot_type,
               block_device_info=None, bad_volumes_callback=None,
               accel_info=None):
        """Reboot the specified instance.

        After this is called successfully, the instance's state
        goes back to power_state.RUNNING. The virtualization
        platform should ensure that the reboot action has completed
        successfully even in cases in which the underlying domain/vm
        is paused or halted/stopped.

        :param instance: nova.objects.instance.Instance
        :param network_info: `nova.network.models.NetworkInfo` object
                             describing the network metadata.
        :param reboot_type: Either a HARD or SOFT reboot
        :param block_device_info: Info pertaining to attached volumes
        :param bad_volumes_callback: Function to handle any bad volumes
                                     encountered
        :param accel_info: List of accelerator request dicts. The exact
                           data struct is doc'd in
                           nova/virt/driver.py::spawn().
        """
        self._log_operation(reboot_type + ' reboot', instance)
        vm.reboot(self.adapter, instance, reboot_type == 'HARD')
        # pypowervm exceptions are sufficient to indicate real failure.
        # Otherwise, pypowervm thinks the instance is up.

    def attach_interface(self, context, instance, image_meta, vif):
        """Attach an interface to the instance."""
        self.plug_vifs(instance, [vif])

    def detach_interface(self, context, instance, vif):
        """Detach an interface from the instance."""
        self.unplug_vifs(instance, [vif])

    def plug_vifs(self, instance, network_info):
        """Plug VIFs into networks."""
        self._log_operation('plug_vifs', instance)

        # Define the flow
        flow = tf_lf.Flow("plug_vifs")

        # Get the LPAR Wrapper
        flow.add(tf_vm.Get(self.adapter, instance))

        # Run the attach
        flow.add(tf_net.PlugVifs(self.virtapi, self.adapter, instance,
                                 network_info))

        # Run the flow
        try:
            tf_base.run(flow, instance=instance)
        except exc.InstanceNotFound:
            raise exc.VirtualInterfacePlugException(
                _("Plug vif failed because instance %s was not found.")
                % instance.name)
        except Exception:
            LOG.exception("PowerVM error plugging vifs.", instance=instance)
            raise exc.VirtualInterfacePlugException(
                _("Plug vif failed because of an unexpected error."))

    def unplug_vifs(self, instance, network_info):
        """Unplug VIFs from networks."""
        self._log_operation('unplug_vifs', instance)

        # Define the flow
        flow = tf_lf.Flow("unplug_vifs")

        # Run the detach
        flow.add(tf_net.UnplugVifs(self.adapter, instance, network_info))

        # Run the flow
        try:
            tf_base.run(flow, instance=instance)
        except exc.InstanceNotFound:
            LOG.warning('VM was not found during unplug operation as it is '
                        'already possibly deleted.', instance=instance)
        except Exception:
            LOG.exception("PowerVM error trying to unplug vifs.",
                          instance=instance)
            raise exc.InterfaceDetachFailed(instance_uuid=instance.uuid)

    def get_vnc_console(self, context, instance):
        """Get connection info for a vnc console.

        :param context: security context
        :param instance: nova.objects.instance.Instance

        :return: An instance of console.type.ConsoleVNC
        """
        self._log_operation('get_vnc_console', instance)
        lpar_uuid = vm.get_pvm_uuid(instance)

        # Build the connection to the VNC.
        host = CONF.vnc.server_proxyclient_address
        # TODO(thorst, efried) Add the x509 certificate support when it lands

        try:
            # Open up a remote vterm
            port = pvm_vterm.open_remotable_vnc_vterm(
                self.adapter, lpar_uuid, host, vnc_path=lpar_uuid)
            # Note that the VNC viewer will wrap the internal_access_path with
            # the HTTP content.
            return console_type.ConsoleVNC(host=host, port=port,
                                           internal_access_path=lpar_uuid)
        except pvm_exc.HttpError as e:
            with excutils.save_and_reraise_exception(logger=LOG) as sare:
                # If the LPAR was not found, raise a more descriptive error
                if e.response.status == 404:
                    sare.reraise = False
                    raise exc.InstanceNotFound(instance_id=instance.uuid)

    def attach_volume(self, context, connection_info, instance, mountpoint,
                      disk_bus=None, device_type=None, encryption=None):
        """Attach the volume to the instance using the connection_info.

        :param context: security context
        :param connection_info: Volume connection information from the block
                                device mapping
        :param instance: nova.objects.instance.Instance
        :param mountpoint: Unused
        :param disk_bus: Unused
        :param device_type: Unused
        :param encryption: Unused
        """
        self._log_operation('attach_volume', instance)

        # Define the flow
        flow = tf_lf.Flow("attach_volume")

        # Build the driver
        vol_drv = volume.build_volume_driver(self.adapter, instance,
                                             connection_info)

        # Add the volume attach to the flow.
        flow.add(tf_stg.AttachVolume(vol_drv))

        # Run the flow
        tf_base.run(flow, instance=instance)

        # The volume connector may have updated the system metadata. Save
        # the instance to persist the data. Spawn/destroy auto saves instance,
        # but the attach does not. Detach does not need this save - as the
        # detach flows do not (currently) modify system metadata. May need
        # to revise in the future as volume connectors evolve.
        instance.save()

    def detach_volume(self, context, connection_info, instance, mountpoint,
                      encryption=None):
        """Detach the volume attached to the instance.

        :param context: security context
        :param connection_info: Volume connection information from the block
                                device mapping
        :param instance: nova.objects.instance.Instance
        :param mountpoint: Unused
        :param encryption: Unused
        """
        self._log_operation('detach_volume', instance)

        # Define the flow
        flow = tf_lf.Flow("detach_volume")

        # Get a volume adapter for this volume
        vol_drv = volume.build_volume_driver(self.adapter, instance,
                                             connection_info)

        # Add a task to detach the volume
        flow.add(tf_stg.DetachVolume(vol_drv))

        # Run the flow
        tf_base.run(flow, instance=instance)

    def extend_volume(self, context, connection_info, instance,
                      requested_size):
        """Extend the disk attached to the instance.

        :param context: security context
        :param dict connection_info: The connection for the extended volume.
        :param nova.objects.instance.Instance instance:
            The instance whose volume gets extended.
        :param int requested_size: The requested new volume size in bytes.
        :return: None
        """

        vol_drv = volume.build_volume_driver(
            self.adapter, instance, connection_info)
        vol_drv.extend_volume()

    def _vol_drv_iter(self, context, instance, bdms, stg_ftsk=None):
        """Yields a bdm and volume driver.

        :param context: security context
        :param instance: nova.objects.instance.Instance
        :param bdms: block device mappings
        :param stg_ftsk: storage FeedTask
        """
        # Get a volume driver for each volume
        for bdm in bdms or []:
            conn_info = bdm.get('connection_info')
            vol_drv = volume.build_volume_driver(self.adapter, instance,
                                                 conn_info,
                                                 stg_ftsk=stg_ftsk)
            yield bdm, vol_drv

    def get_volume_connector(self, instance):
        """Get connector information for the instance for attaching to volumes.

        Connector information is a dictionary representing information about
        the system that will be making the connection.

        :param instance: nova.objects.instance.Instance
        """
        # Put the values in the connector
        connector = {}
        wwpn_list = fcvscsi.wwpns(self.adapter)

        if wwpn_list is not None:
            connector["wwpns"] = wwpn_list
        connector["multipath"] = False
        connector['host'] = CONF.host
        connector['initiator'] = None
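        # Illustrative result (example values):
        #   {'wwpns': ['c05076079cff0e56', ...], 'multipath': False,
        #    'host': 'compute-1', 'initiator': None}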

        return connector
@@ -1,66 +0,0 @@
# Copyright 2014, 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import math

from oslo_serialization import jsonutils

from nova import conf as cfg
from nova.objects import fields


CONF = cfg.CONF

# Power VM hypervisor info
# Normally, the hypervisor version is a string in the form of '8.0.0' and
# converted to an int with nova.virt.utils.convert_version_to_int() however
# there isn't currently a mechanism to retrieve the exact version.
# Complicating this is the fact that nova conductor only allows live migration
# from the source host to the destination if the source is equal to or less
# than the destination version. PowerVM live migration limitations are
# checked by the PowerVM capabilities flags and not specific version levels.
# For that reason, we'll just publish the major level.
IBM_POWERVM_HYPERVISOR_VERSION = 8

# The types of LPARS that are supported.
POWERVM_SUPPORTED_INSTANCES = [
    (fields.Architecture.PPC64, fields.HVType.PHYP, fields.VMMode.HVM),
    (fields.Architecture.PPC64LE, fields.HVType.PHYP, fields.VMMode.HVM)]


def build_host_resource_from_ms(ms_w):
    """Build the host resource dict from a ManagedSystem PowerVM wrapper.

    :param ms_w: The pypowervm System wrapper describing the managed system.
    """
    data = {}
    # Calculate the vcpus
    proc_units = ms_w.proc_units_configurable
    pu_used = float(proc_units) - float(ms_w.proc_units_avail)
    data['vcpus'] = int(math.ceil(float(proc_units)))
    data['vcpus_used'] = int(math.ceil(pu_used))
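    # E.g. (illustrative): 16.00 configurable proc units with 10.50 available
    # gives vcpus = ceil(16.00) = 16 and vcpus_used = ceil(5.50) = 6.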
    data['memory_mb'] = ms_w.memory_configurable
    data['memory_mb_used'] = (ms_w.memory_configurable -
                              ms_w.memory_free)
    data["hypervisor_type"] = fields.HVType.PHYP
    data["hypervisor_version"] = IBM_POWERVM_HYPERVISOR_VERSION
    data["hypervisor_hostname"] = CONF.host
    data["cpu_info"] = jsonutils.dumps({'vendor': 'ibm', 'arch': 'ppc64'})
    data["numa_topology"] = None
    data["supported_instances"] = POWERVM_SUPPORTED_INSTANCES
    stats = {'proc_units': '%.2f' % float(proc_units),
             'proc_units_used': '%.2f' % pu_used,
             'memory_region_size': ms_w.memory_region_size}
    data["stats"] = stats
    return data
@@ -1,62 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Utilities related to glance image management for the PowerVM driver."""

from nova import utils


def stream_blockdev_to_glance(context, image_api, image_id, metadata, devpath):
    """Stream the entire contents of a block device to a glance image.

    :param context: Nova security context.
    :param image_api: Handle to the glance image API.
    :param image_id: UUID of the prepared glance image.
    :param metadata: Dictionary of metadata for the image.
    :param devpath: String path to device file of block device to be uploaded,
                    e.g. "/dev/sde".
    """
    # Make the device file owned by the current user for the duration of the
    # operation.
    with utils.temporary_chown(devpath), open(devpath, 'rb') as stream:
        # Stream it. This is synchronous.
        image_api.update(context, image_id, metadata, stream)
|
|
||||||
|
|
||||||
|
|
||||||
def generate_snapshot_metadata(context, image_api, image_id, instance):
|
|
||||||
"""Generate a metadata dictionary for an instance snapshot.
|
|
||||||
|
|
||||||
:param context: Nova security context.
|
|
||||||
:param image_api: Handle to the glance image API.
|
|
||||||
:param image_id: UUID of the prepared glance image.
|
|
||||||
:param instance: The Nova instance whose disk is to be snapshotted.
|
|
||||||
:return: A dict of metadata suitable for image_api.update.
|
|
||||||
"""
|
|
||||||
image = image_api.get(context, image_id)
|
|
||||||
|
|
||||||
# TODO(esberglu): Update this to v2 metadata
|
|
||||||
metadata = {
|
|
||||||
'name': image['name'],
|
|
||||||
'status': 'active',
|
|
||||||
'disk_format': 'raw',
|
|
||||||
'container_format': 'bare',
|
|
||||||
'properties': {
|
|
||||||
'image_location': 'snapshot',
|
|
||||||
'image_state': 'available',
|
|
||||||
'owner_id': instance.project_id,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return metadata
|
|
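A rough usage sketch of the two helpers above; context, image_api, image_id and instance are stand-ins for the real nova objects, and the device path is invented:

metadata = generate_snapshot_metadata(context, image_api, image_id, instance)
# metadata resembles:
# {'name': <image name>, 'status': 'active', 'disk_format': 'raw',
#  'container_format': 'bare', 'properties': {...}}
stream_blockdev_to_glance(context, image_api, image_id, metadata, '/dev/sde')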
@ -1,237 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
import os
import tempfile

from oslo_log import log as logging
from oslo_utils import excutils
from pypowervm import const as pvm_const
from pypowervm.tasks import scsi_mapper as tsk_map
from pypowervm.tasks import storage as tsk_stg
from pypowervm.tasks import vopt as tsk_vopt
from pypowervm import util as pvm_util
from pypowervm.wrappers import storage as pvm_stg
from pypowervm.wrappers import virtual_io_server as pvm_vios
import retrying
from taskflow import task

from nova.api.metadata import base as instance_metadata
from nova.network import model as network_model
from nova.virt import configdrive
from nova.virt.powervm import vm


LOG = logging.getLogger(__name__)

_LLA_SUBNET = "fe80::/64"
# TODO(efried): CONF these (maybe)
_VOPT_VG = 'rootvg'
_VOPT_SIZE_GB = 1


class ConfigDrivePowerVM(object):

    def __init__(self, adapter):
        """Creates the config drive manager for PowerVM.

        :param adapter: The pypowervm adapter to communicate with the system.
        """
        self.adapter = adapter

        # Validate that the virtual optical exists
        self.vios_uuid, self.vg_uuid = tsk_vopt.validate_vopt_repo_exists(
            self.adapter, vopt_media_volume_group=_VOPT_VG,
            vopt_media_rep_size=_VOPT_SIZE_GB)

    @staticmethod
    def _sanitize_network_info(network_info):
        """Will sanitize the network info for the config drive.

        Newer versions of cloud-init look at the vif type information in
        the network info and utilize it to determine what to do. There are
        a limited number of vif types, and it seems to be built on the idea
        that the neutron vif type is the cloud-init vif type (which is not
        quite right).

        This sanitizes the network info that gets passed into the config
        drive to work properly with cloud-init.
        """
        network_info = copy.deepcopy(network_info)

        # OVS is the only supported vif type. All others (SEA, PowerVM SR-IOV)
        # will default to generic vif.
        for vif in network_info:
            if vif.get('type') != 'ovs':
                LOG.debug('Changing vif type from %(type)s to vif for vif '
                          '%(id)s.', {'type': vif.get('type'),
                                      'id': vif.get('id')})
                vif['type'] = 'vif'
        return network_info
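    # For example, given [{'id': 'vif-1', 'type': 'ovs'},
    #                     {'id': 'vif-2', 'type': 'pvm_sea'}], the first entry
    # is returned unchanged while the second comes back with type 'vif'.
    # (Plain dicts stand in here for nova's VIF model objects.)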
    def _create_cfg_dr_iso(self, instance, injected_files, network_info,
                           iso_path, admin_pass=None):
        """Creates an ISO file that contains the injected files.

        Used for config drive.

        :param instance: The VM instance from OpenStack.
        :param injected_files: A list of file paths that will be injected into
                               the ISO.
        :param network_info: The network_info from the nova spawn method.
        :param iso_path: The absolute file path for the new ISO.
        :param admin_pass: Optional password to inject for the VM.
        """
        LOG.info("Creating config drive.", instance=instance)
        extra_md = {}
        if admin_pass is not None:
            extra_md['admin_pass'] = admin_pass

        # Sanitize the vifs for the network config
        network_info = self._sanitize_network_info(network_info)

        inst_md = instance_metadata.InstanceMetadata(instance,
                                                     content=injected_files,
                                                     extra_md=extra_md,
                                                     network_info=network_info)

        with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb:
            LOG.info("Config drive ISO being built in %s.", iso_path,
                     instance=instance)

            # There may be an OSError exception when creating the config
            # drive. If so, retry the operation before raising.
            @retrying.retry(retry_on_exception=lambda exc: isinstance(
                exc, OSError), stop_max_attempt_number=2)
            def _make_cfg_drive(iso_path):
                cdb.make_drive(iso_path)

            try:
                _make_cfg_drive(iso_path)
            except OSError:
                with excutils.save_and_reraise_exception(logger=LOG):
                    LOG.exception("Config drive ISO could not be built",
                                  instance=instance)
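    # The retry above uses the 'retrying' library: an OSError raised by
    # cdb.make_drive() is caught and the call is attempted again, up to two
    # attempts in total (stop_max_attempt_number=2), before the exception is
    # allowed to propagate to the except clause.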
    def create_cfg_drv_vopt(self, instance, injected_files, network_info,
                            stg_ftsk, admin_pass=None, mgmt_cna=None):
        """Create the config drive virtual optical and attach to VM.

        :param instance: The VM instance from OpenStack.
        :param injected_files: A list of file paths that will be injected into
                               the ISO.
        :param network_info: The network_info from the nova spawn method.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        :param admin_pass: (Optional) password to inject for the VM.
        :param mgmt_cna: (Optional) The management (RMC) CNA wrapper.
        """
        # If there is a management client network adapter, then we should
        # convert that to a VIF and add it to the network info
        if mgmt_cna is not None:
            network_info = copy.deepcopy(network_info)
            network_info.append(self._mgmt_cna_to_vif(mgmt_cna))

        # Pick a file name for when we upload the media to VIOS
        file_name = pvm_util.sanitize_file_name_for_api(
            instance.uuid.replace('-', ''), prefix='cfg_', suffix='.iso',
            max_len=pvm_const.MaxLen.VOPT_NAME)

        # Create and upload the media
        with tempfile.NamedTemporaryFile(mode='rb') as fh:
            self._create_cfg_dr_iso(instance, injected_files, network_info,
                                    fh.name, admin_pass=admin_pass)
            vopt, f_uuid = tsk_stg.upload_vopt(
                self.adapter, self.vios_uuid, fh, file_name,
                os.path.getsize(fh.name))

        # Define the function to build and add the mapping
        def add_func(vios_w):
            LOG.info("Adding cfg drive mapping to Virtual I/O Server %s.",
                     vios_w.name, instance=instance)
            mapping = tsk_map.build_vscsi_mapping(
                None, vios_w, vm.get_pvm_uuid(instance), vopt)
            return tsk_map.add_map(vios_w, mapping)

        # Add the subtask to create the mapping when the FeedTask runs
        stg_ftsk.wrapper_tasks[self.vios_uuid].add_functor_subtask(add_func)
    def _mgmt_cna_to_vif(self, cna):
        """Converts the mgmt CNA to VIF format for network injection."""
        mac = vm.norm_mac(cna.mac)
        ipv6_link_local = self._mac_to_link_local(mac)

        subnet = network_model.Subnet(
            version=6, cidr=_LLA_SUBNET,
            ips=[network_model.FixedIP(address=ipv6_link_local)])
        network = network_model.Network(id='mgmt', subnets=[subnet],
                                        injected='yes')
        return network_model.VIF(id='mgmt_vif', address=mac,
                                 network=network)

    @staticmethod
    def _mac_to_link_local(mac):
        # Convert the address to IPv6. The first step is to separate out the
        # mac address
        splits = mac.split(':')

        # Create EUI-64 id per RFC 4291 Appendix A
        splits.insert(3, 'ff')
        splits.insert(4, 'fe')

        # Create modified EUI-64 id via bit flip per RFC 4291 Appendix A
        splits[0] = "%.2x" % (int(splits[0], 16) ^ 0b00000010)

        # Convert to the IPv6 link local format. The prefix is fe80::. Join
        # the hexes together at every other digit.
        ll = ['fe80:']
        ll.extend([splits[x] + splits[x + 1]
                   for x in range(0, len(splits), 2)])
        return ':'.join(ll)
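    # Worked example of _mac_to_link_local: 'fa:16:3e:12:34:56' -> insert
    # 'ff', 'fe' in the middle -> fa:16:3e:ff:fe:12:34:56 -> flip the
    # universal/local bit of the first octet (0xfa ^ 0x02 = 0xf8) ->
    # 'fe80::f816:3eff:fe12:3456'.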
    def dlt_vopt(self, instance, stg_ftsk):
        """Deletes the virtual optical and scsi mappings for a VM.

        :param instance: The nova instance whose VOpt(s) are to be removed.
        :param stg_ftsk: A FeedTask. The actions to modify the storage will be
                         added as batched functions onto the FeedTask.
        """
        lpar_uuid = vm.get_pvm_uuid(instance)

        # The matching function for find_maps, remove_maps
        match_func = tsk_map.gen_match_func(pvm_stg.VOptMedia)

        # Add a function to remove the mappings
        stg_ftsk.wrapper_tasks[self.vios_uuid].add_functor_subtask(
            tsk_map.remove_maps, lpar_uuid, match_func=match_func)

        # Find the VOpt device based on the mappings
        media_mappings = tsk_map.find_maps(
            stg_ftsk.get_wrapper(self.vios_uuid).scsi_mappings,
            client_lpar_id=lpar_uuid, match_func=match_func)
        media_elems = [x.backing_storage for x in media_mappings]

        def rm_vopt():
            LOG.info("Removing virtual optical storage.",
                     instance=instance)
            vg_wrap = pvm_stg.VG.get(self.adapter, uuid=self.vg_uuid,
                                     parent_type=pvm_vios.VIOS,
                                     parent_uuid=self.vios_uuid)
            tsk_stg.rm_vg_storage(vg_wrap, vopts=media_elems)

        # Add task to remove the media if it exists
        if media_elems:
            stg_ftsk.add_post_execute(task.FunctorTask(rm_vopt))
@ -1,175 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Utilities related to the PowerVM management partition.

The management partition is a special LPAR that runs the PowerVM REST API
service. It itself appears through the REST API as a LogicalPartition of type
aixlinux, but with the is_mgmt_partition property set to True.

The PowerVM Nova Compute service runs on the management partition.
"""
import glob
import os
from os import path

from oslo_concurrency import lockutils
from oslo_log import log as logging
from pypowervm.tasks import partition as pvm_par
import retrying

from nova import exception
import nova.privsep.path


LOG = logging.getLogger(__name__)

_MP_UUID = None


@lockutils.synchronized("mgmt_lpar_uuid")
def mgmt_uuid(adapter):
    """Returns the management partition's UUID."""
    global _MP_UUID
    if not _MP_UUID:
        _MP_UUID = pvm_par.get_this_partition(adapter).uuid
    return _MP_UUID
def discover_vscsi_disk(mapping, scan_timeout=300):
    """Bring a mapped device into the management partition and find its name.

    Based on a VSCSIMapping, scan the appropriate virtual SCSI host bus,
    causing the operating system to discover the mapped device. Find and
    return the path of the newly-discovered device based on its UDID in the
    mapping.

    Note: scanning the bus will cause the operating system to discover *all*
    devices on that bus. However, this method will only return the path for
    the specific device from the input mapping, based on its UDID.

    :param mapping: The pypowervm.wrappers.virtual_io_server.VSCSIMapping
                    representing the mapping of the desired disk to the
                    management partition.
    :param scan_timeout: The maximum number of seconds after scanning to wait
                         for the specified device to appear.
    :return: The udev-generated ("/dev/sdX") name of the discovered disk.
    :raise NoDiskDiscoveryException: If the disk did not appear after the
                                     specified timeout.
    :raise UniqueDiskDiscoveryException: If more than one disk appears with
                                         the expected UDID.
    """
    # Calculate the Linux slot number from the client adapter slot number.
    lslot = 0x30000000 | mapping.client_adapter.lpar_slot_num
    # We'll match the device ID based on the UDID, which is actually the last
    # 32 chars of the field we get from PowerVM.
    udid = mapping.backing_storage.udid[-32:]

    LOG.debug("Trying to discover VSCSI disk with UDID %(udid)s on slot "
              "%(slot)x.", {'udid': udid, 'slot': lslot})

    # Find the special file to scan the bus, and scan it.
    # This glob should yield exactly one result, but use the loop just in case.
    for scanpath in glob.glob(
            '/sys/bus/vio/devices/%x/host*/scsi_host/host*/scan' % lslot):
        # Writing '- - -' to this sysfs file triggers bus rescan
        nova.privsep.path.writefile(scanpath, 'a', '- - -')

    # Now see if our device showed up. If so, we can reliably match it based
    # on its Linux ID, which ends with the disk's UDID.
    dpathpat = '/dev/disk/by-id/*%s' % udid

    # The bus scan is asynchronous. Need to poll, waiting for the device to
    # spring into existence. Stop when glob finds at least one device, or
    # after the specified timeout. Sleep 1/4 second between polls.
    @retrying.retry(retry_on_result=lambda result: not result, wait_fixed=250,
                    stop_max_delay=scan_timeout * 1000)
    def _poll_for_dev(globpat):
        return glob.glob(globpat)
    try:
        disks = _poll_for_dev(dpathpat)
    except retrying.RetryError as re:
        raise exception.NoDiskDiscoveryException(
            bus=lslot, udid=udid, polls=re.last_attempt.attempt_number,
            timeout=scan_timeout)
    # If we get here, _poll_for_dev returned a nonempty list. If not exactly
    # one entry, this is an error.
    if len(disks) != 1:
        raise exception.UniqueDiskDiscoveryException(path_pattern=dpathpat,
                                                     count=len(disks))

    # The by-id path is a symlink. Resolve to the /dev/sdX path
    dpath = path.realpath(disks[0])
    LOG.debug("Discovered VSCSI disk with UDID %(udid)s on slot %(slot)x at "
              "path %(devname)s.",
              {'udid': udid, 'slot': lslot, 'devname': dpath})
    return dpath
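# For example, a client adapter in slot 5 yields lslot 0x30000005, so the glob
# above expands to '/sys/bus/vio/devices/30000005/host*/scsi_host/host*/scan'.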
def remove_block_dev(devpath, scan_timeout=10):
    """Remove a block device from the management partition.

    This method causes the operating system of the management partition to
    delete the device special files associated with the specified block
    device.

    :param devpath: Any path to the block special file associated with the
                    device to be removed.
    :param scan_timeout: The maximum number of seconds after scanning to wait
                         for the specified device to disappear.
    :raise InvalidDevicePath: If the specified device or its 'delete' special
                              file cannot be found.
    :raise DeviceDeletionException: If the deletion was attempted, but the
                                    device special file is still present
                                    afterward.
    """
    # Resolve symlinks, if any, to get to the /dev/sdX path
    devpath = path.realpath(devpath)
    try:
        os.stat(devpath)
    except OSError:
        raise exception.InvalidDevicePath(path=devpath)
    devname = devpath.rsplit('/', 1)[-1]
    delpath = '/sys/block/%s/device/delete' % devname
    try:
        os.stat(delpath)
    except OSError:
        raise exception.InvalidDevicePath(path=delpath)
    LOG.debug("Deleting block device %(devpath)s from the management "
              "partition via special file %(delpath)s.",
              {'devpath': devpath, 'delpath': delpath})
    # Writing '1' to this sysfs file deletes the block device and rescans.
    nova.privsep.path.writefile(delpath, 'a', '1')

    # The bus scan is asynchronous. Need to poll, waiting for the device to
    # disappear. Stop when stat raises OSError (dev file not found) - which is
    # success - or after the specified timeout (which is failure). Sleep 1/4
    # second between polls.
    @retrying.retry(retry_on_result=lambda result: result, wait_fixed=250,
                    stop_max_delay=scan_timeout * 1000)
    def _poll_for_del(statpath):
        try:
            os.stat(statpath)
            return True
        except OSError:
            # Device special file is absent, as expected
            return False
    try:
        _poll_for_del(devpath)
    except retrying.RetryError as re:
        # stat just kept returning (dev file continued to exist).
        raise exception.DeviceDeletionException(
            devpath=devpath, polls=re.last_attempt.attempt_number,
            timeout=scan_timeout)
    # Else stat raised - the device disappeared - all done.
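The poll-until-absent idiom above is reusable on its own. A minimal standalone sketch with the same 'retrying' library (the path and timeout are invented):

import os

import retrying


@retrying.retry(retry_on_result=lambda result: result, wait_fixed=250,
                stop_max_delay=10 * 1000)
def _wait_until_absent(p):
    # A truthy return keeps retrying; False ends the loop successfully.
    return os.path.exists(p)


try:
    _wait_until_absent('/tmp/example-device')
except retrying.RetryError:
    print('still present after 10 seconds')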
@ -1,38 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from taskflow import engines as tf_eng
from taskflow.listeners import timing as tf_tm


LOG = logging.getLogger(__name__)


def run(flow, instance=None):
    """Run a TaskFlow Flow with task timing, logging against the instance.

    :param flow: A taskflow.flow.Flow to run.
    :param instance: A nova instance, for logging.
    :return: The result of taskflow.engines.run(), a dictionary of named
             results of the Flow's execution.
    """
    def log_with_instance(*args, **kwargs):
        """Wrapper for LOG.info(*args, **kwargs, instance=instance)."""
        if instance is not None:
            kwargs['instance'] = instance
        LOG.info(*args, **kwargs)

    eng = tf_eng.load(flow)
    with tf_tm.PrintingDurationListener(eng, printer=log_with_instance):
        return eng.run()
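A hypothetical usage of the run() helper above with a trivial one-task flow (all names are invented):

from taskflow.patterns import linear_flow as lf
from taskflow import task


class Hello(task.Task):
    def execute(self):
        return 'hi'


flow = lf.Flow('demo').add(Hello(provides='greeting'))
run(flow)  # each task's duration is logged through LOG.info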
@ -1,81 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from taskflow import task

from nova.virt.powervm import image


LOG = logging.getLogger(__name__)


class UpdateTaskState(task.Task):

    def __init__(self, update_task_state, task_state, expected_state=None):
        """Invoke the update_task_state callback with the desired arguments.

        :param update_task_state: update_task_state callable passed into
                                  snapshot.
        :param task_state: The new task state (from nova.compute.task_states)
                           to set.
        :param expected_state: Optional. The expected state of the task prior
                               to this request.
        """
        self.update_task_state = update_task_state
        self.task_state = task_state
        self.kwargs = {}
        if expected_state is not None:
            # We only want to pass expected state if it's not None! That's so
            # we take the update_task_state method's default.
            self.kwargs['expected_state'] = expected_state
        super(UpdateTaskState, self).__init__(
            name='update_task_state_%s' % task_state)

    def execute(self):
        self.update_task_state(self.task_state, **self.kwargs)


class StreamToGlance(task.Task):

    """Task around streaming a block device to glance."""

    def __init__(self, context, image_api, image_id, instance):
        """Initialize the flow for streaming a block device to glance.

        Requires: disk_path: Path to the block device file for the instance's
                             boot disk.
        :param context: Nova security context.
        :param image_api: Handle to the glance API.
        :param image_id: UUID of the prepared glance image.
        :param instance: Instance whose backing device is being captured.
        """
        self.context = context
        self.image_api = image_api
        self.image_id = image_id
        self.instance = instance
        super(StreamToGlance, self).__init__(name='stream_to_glance',
                                             requires='disk_path')

    def execute(self, disk_path):
        metadata = image.generate_snapshot_metadata(
            self.context, self.image_api, self.image_id, self.instance)
        LOG.info("Starting stream of boot device (local blockdev %(devpath)s) "
                 "to glance image %(img_id)s.",
                 {'devpath': disk_path, 'img_id': self.image_id},
                 instance=self.instance)
        image.stream_blockdev_to_glance(self.context, self.image_api,
                                        self.image_id, metadata, disk_path)
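As a sketch, the two tasks above compose into a snapshot flow roughly as follows; update_task_state, context, image_api, image_id and instance are stand-ins for values the driver would supply:

from nova.compute import task_states
from taskflow.patterns import linear_flow as lf

flow = lf.Flow('snapshot')
flow.add(UpdateTaskState(update_task_state, task_states.IMAGE_PENDING_UPLOAD))
# 'disk_path' must be provided by an earlier task in the same flow, e.g.
# InstanceDiskToMgmt from the storage tasks further down.
flow.add(StreamToGlance(context, image_api, image_id, instance))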
@ -1,259 +0,0 @@
# Copyright 2015, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import eventlet
from oslo_log import log as logging
from pypowervm.tasks import cna as pvm_cna
from pypowervm.wrappers import managed_system as pvm_ms
from pypowervm.wrappers import network as pvm_net
from taskflow import task

from nova import conf as cfg
from nova import exception
from nova.virt.powervm import vif
from nova.virt.powervm import vm

LOG = logging.getLogger(__name__)
CONF = cfg.CONF

SECURE_RMC_VSWITCH = 'MGMTSWITCH'
SECURE_RMC_VLAN = 4094


class PlugVifs(task.Task):

    """The task to plug the Virtual Network Interfaces to a VM."""

    def __init__(self, virt_api, adapter, instance, network_infos):
        """Create the task.

        Provides 'vm_cnas' - the list of the Virtual Machine's Client Network
        Adapters as they stand after all VIFs are plugged. May be None, in
        which case the Task requiring 'vm_cnas' should discover them afresh.

        :param virt_api: The VirtAPI for the operation.
        :param adapter: The pypowervm adapter.
        :param instance: The nova instance.
        :param network_infos: The network information containing the nova
                              VIFs to create.
        """
        self.virt_api = virt_api
        self.adapter = adapter
        self.instance = instance
        self.network_infos = network_infos or []
        self.crt_network_infos, self.update_network_infos = [], []
        # Cache of CNAs that is filled on initial _vif_exists() call.
        self.cnas = None

        super(PlugVifs, self).__init__(
            name='plug_vifs', provides='vm_cnas', requires=['lpar_wrap'])

    def _vif_exists(self, network_info):
        """Does the instance have a CNA for a given net?

        :param network_info: A network information dict. This method expects
                             it to contain key 'address' (MAC address).
        :return: True if a CNA with the network_info's MAC address exists on
                 the instance. False otherwise.
        """
        if self.cnas is None:
            self.cnas = vm.get_cnas(self.adapter, self.instance)
        vifs = self.cnas

        return network_info['address'] in [vm.norm_mac(v.mac) for v in vifs]

    def execute(self, lpar_wrap):
        # Check to see if the LPAR is OK to add VIFs to.
        modifiable, reason = lpar_wrap.can_modify_io()
        if not modifiable:
            LOG.error("Unable to create VIF(s) for instance in the system's "
                      "current state. The reason from the system is: %s",
                      reason, instance=self.instance)
            raise exception.VirtualInterfaceCreateException()

        # We will have two types of network infos. One is for newly created
        # vifs. The others are those that exist, but should be re-'treated'.
        for network_info in self.network_infos:
            if self._vif_exists(network_info):
                self.update_network_infos.append(network_info)
            else:
                self.crt_network_infos.append(network_info)

        # If there are no vifs to create or update, then just exit immediately.
        if not self.crt_network_infos and not self.update_network_infos:
            return []

        # For existing VIFs that we just need to update, run the plug but do
        # not wait for the neutron event as that likely won't be sent (it was
        # already done).
        for network_info in self.update_network_infos:
            LOG.info("Updating VIF with mac %s for instance.",
                     network_info['address'], instance=self.instance)
            vif.plug(self.adapter, self.instance, network_info, new_vif=False)

        # For the new VIFs, run the creates (and wait for the events back)
        try:
            with self.virt_api.wait_for_instance_event(
                    self.instance, self._get_vif_events(),
                    deadline=CONF.vif_plugging_timeout,
                    error_callback=self._vif_callback_failed):
                for network_info in self.crt_network_infos:
                    LOG.info('Creating VIF with mac %s for instance.',
                             network_info['address'], instance=self.instance)
                    new_vif = vif.plug(
                        self.adapter, self.instance, network_info,
                        new_vif=True)
                    if self.cnas is not None:
                        self.cnas.append(new_vif)
        except eventlet.timeout.Timeout:
            LOG.error('Error waiting for VIF to be created for instance.',
                      instance=self.instance)
            raise exception.VirtualInterfaceCreateException()

        return self.cnas

    def _vif_callback_failed(self, event_name, instance):
        LOG.error('VIF Plug failure for callback on event %s for instance.',
                  event_name, instance=self.instance)
        if CONF.vif_plugging_is_fatal:
            raise exception.VirtualInterfaceCreateException()

    def _get_vif_events(self):
        """Returns the VIF events that need to be received for a VIF plug.

        In order for a VIF plug to be successful, certain events should be
        received from other components within the OpenStack ecosystem. This
        method returns the events neutron needs for a given deploy.
        """
        # See libvirt's driver.py -> _get_neutron_events method for
        # more information.
        if CONF.vif_plugging_is_fatal and CONF.vif_plugging_timeout:
            return [('network-vif-plugged', network_info['id'])
                    for network_info in self.crt_network_infos
                    if not network_info.get('active', True)]

    def revert(self, lpar_wrap, result, flow_failures):
        if not self.network_infos:
            return

        LOG.warning('VIF creation being rolled back for instance.',
                    instance=self.instance)

        # Get the current adapters on the system
        cna_w_list = vm.get_cnas(self.adapter, self.instance)
        for network_info in self.crt_network_infos:
            try:
                vif.unplug(self.adapter, self.instance, network_info,
                           cna_w_list=cna_w_list)
            except Exception:
                LOG.exception("An exception occurred during an unplug in the "
                              "vif rollback. Ignoring.",
                              instance=self.instance)
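# Illustration of PlugVifs._get_vif_events above: with crt_network_infos of
# [{'id': 'vif-1', 'active': False}, {'id': 'vif-2', 'active': True}], the
# method returns [('network-vif-plugged', 'vif-1')] - only inactive VIFs
# produce neutron events worth waiting on.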
class UnplugVifs(task.Task):

    """The task to unplug Virtual Network Interfaces from a VM."""

    def __init__(self, adapter, instance, network_infos):
        """Create the task.

        :param adapter: The pypowervm adapter.
        :param instance: The nova instance.
        :param network_infos: The network information containing the nova
                              VIFs to remove.
        """
        self.adapter = adapter
        self.instance = instance
        self.network_infos = network_infos or []

        super(UnplugVifs, self).__init__(name='unplug_vifs')

    def execute(self):
        # If the LPAR is not in an OK state for deleting, then throw an
        # error up front.
        lpar_wrap = vm.get_instance_wrapper(self.adapter, self.instance)
        modifiable, reason = lpar_wrap.can_modify_io()
        if not modifiable:
            LOG.error("Unable to remove VIFs from instance in the system's "
                      "current state. The reason reported by the system is: "
                      "%s", reason, instance=self.instance)
            raise exception.VirtualInterfaceUnplugException(reason=reason)

        # Get all the current Client Network Adapters (CNA) on the VM itself.
        cna_w_list = vm.get_cnas(self.adapter, self.instance)

        # Walk through the VIFs and delete the corresponding CNA on the VM.
        for network_info in self.network_infos:
            vif.unplug(self.adapter, self.instance, network_info,
                       cna_w_list=cna_w_list)
class PlugMgmtVif(task.Task):

    """The task to plug the Management VIF into a VM."""

    def __init__(self, adapter, instance):
        """Create the task.

        Requires 'vm_cnas' from PlugVifs. If None, this Task will retrieve the
        VM's list of CNAs.

        Provides the mgmt_cna. This may be None if no management device was
        created. This is the CNA of the mgmt vif for the VM.

        :param adapter: The pypowervm adapter.
        :param instance: The nova instance.
        """
        self.adapter = adapter
        self.instance = instance

        super(PlugMgmtVif, self).__init__(
            name='plug_mgmt_vif', provides='mgmt_cna', requires=['vm_cnas'])

    def execute(self, vm_cnas):
        LOG.info('Plugging the Management Network Interface to instance.',
                 instance=self.instance)
        # Determine if we need to create the secure RMC VIF. This should only
        # be needed if there is not a VIF on the secure RMC vSwitch
        vswitch = None
        vswitches = pvm_net.VSwitch.search(
            self.adapter, parent_type=pvm_ms.System.schema_type,
            parent_uuid=self.adapter.sys_uuid, name=SECURE_RMC_VSWITCH)
        if len(vswitches) == 1:
            vswitch = vswitches[0]

        if vswitch is None:
            LOG.warning('No management VIF created for instance due to lack '
                        'of Management Virtual Switch', instance=self.instance)
            return None

        # This next check verifies that there are no existing NICs on the
        # vSwitch, so that the VM does not end up with multiple RMC VIFs.
        if vm_cnas is None:
            has_mgmt_vif = vm.get_cnas(self.adapter, self.instance,
                                       vswitch_uri=vswitch.href)
        else:
            has_mgmt_vif = vswitch.href in [cna.vswitch_uri for cna in vm_cnas]

        if has_mgmt_vif:
            LOG.debug('Management VIF already created for instance',
                      instance=self.instance)
            return None

        lpar_uuid = vm.get_pvm_uuid(self.instance)
        return pvm_cna.crt_cna(self.adapter, None, lpar_uuid, SECURE_RMC_VLAN,
                               vswitch=SECURE_RMC_VSWITCH, crt_vswitch=True)
@ -1,429 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from pypowervm import exceptions as pvm_exc
from pypowervm.tasks import scsi_mapper as pvm_smap
from taskflow import task
from taskflow.types import failure as task_fail

from nova import exception
from nova.virt import block_device
from nova.virt.powervm import media
from nova.virt.powervm import mgmt

LOG = logging.getLogger(__name__)


class AttachVolume(task.Task):

    """The task to attach a volume to an instance."""

    def __init__(self, vol_drv):
        """Create the task.

        :param vol_drv: The volume driver. Ties the storage to a connection
                        type (ex. vSCSI).
        """
        self.vol_drv = vol_drv
        self.vol_id = block_device.get_volume_id(self.vol_drv.connection_info)

        super(AttachVolume, self).__init__(name='attach_vol_%s' % self.vol_id)

    def execute(self):
        LOG.info('Attaching volume %(vol)s.', {'vol': self.vol_id},
                 instance=self.vol_drv.instance)
        self.vol_drv.attach_volume()

    def revert(self, result, flow_failures):
        LOG.warning('Rolling back attachment for volume %(vol)s.',
                    {'vol': self.vol_id}, instance=self.vol_drv.instance)

        # Note that the rollback is *instant*. Resetting the FeedTask ensures
        # immediate rollback.
        self.vol_drv.reset_stg_ftsk()
        try:
            # We attempt to detach in case we 'partially attached'. In
            # the attach scenario, perhaps one of the Virtual I/O Servers
            # was attached. This attempts to clear anything out to make sure
            # the terminate attachment runs smoothly.
            self.vol_drv.detach_volume()
        except exception.VolumeDetachFailed:
            # Does not block due to being in the revert flow.
            LOG.exception("Unable to detach volume %s during rollback.",
                          self.vol_id, instance=self.vol_drv.instance)


class DetachVolume(task.Task):

    """The task to detach a volume from an instance."""

    def __init__(self, vol_drv):
        """Create the task.

        :param vol_drv: The volume driver. Ties the storage to a connection
                        type (ex. vSCSI).
        """
        self.vol_drv = vol_drv
        self.vol_id = self.vol_drv.connection_info['data']['volume_id']

        super(DetachVolume, self).__init__(name='detach_vol_%s' % self.vol_id)

    def execute(self):
        LOG.info('Detaching volume %(vol)s.',
                 {'vol': self.vol_id}, instance=self.vol_drv.instance)
        self.vol_drv.detach_volume()

    def revert(self, result, flow_failures):
        LOG.warning('Reattaching volume %(vol)s on detach rollback.',
                    {'vol': self.vol_id}, instance=self.vol_drv.instance)

        # Note that the rollback is *instant*. Resetting the FeedTask ensures
        # immediate rollback.
        self.vol_drv.reset_stg_ftsk()
        try:
            # We try to reattach the volume here so that it maintains its
            # linkage (in the hypervisor) to the VM. This makes it easier for
            # operators to understand the linkage between the VMs and volumes
            # in error scenarios. This is simply useful for debug purposes
            # if there is an operational error.
            self.vol_drv.attach_volume()
        except exception.VolumeAttachFailed:
            # Does not block due to being in the revert flow. See above.
            LOG.exception("Unable to reattach volume %s during rollback.",
                          self.vol_id, instance=self.vol_drv.instance)


class CreateDiskForImg(task.Task):

    """The Task to create the disk from an image in the storage."""

    def __init__(self, disk_dvr, context, instance, image_meta):
        """Create the Task.

        Provides the 'disk_dev_info' for other tasks. Comes from the disk_dvr
        create_disk_from_image method.

        :param disk_dvr: The storage driver.
        :param context: The context passed into the driver method.
        :param instance: The nova instance.
        :param nova.objects.ImageMeta image_meta:
            The metadata of the image of the instance.
        """
        super(CreateDiskForImg, self).__init__(
            name='create_disk_from_img', provides='disk_dev_info')
        self.disk_dvr = disk_dvr
        self.instance = instance
        self.context = context
        self.image_meta = image_meta

    def execute(self):
        return self.disk_dvr.create_disk_from_image(
            self.context, self.instance, self.image_meta)

    def revert(self, result, flow_failures):
        # If there is no result, or it's a direct failure, then there isn't
        # anything to delete.
        if result is None or isinstance(result, task_fail.Failure):
            return

        # Run the delete. The result is a single disk. Wrap into list
        # as the method works with plural disks.
        try:
            self.disk_dvr.delete_disks([result])
        except pvm_exc.Error:
            # Don't allow revert exceptions to interrupt the revert flow.
            LOG.exception("Disk deletion failed during revert. Ignoring.",
                          instance=self.instance)
class AttachDisk(task.Task):

    """The task to attach the disk to the instance."""

    def __init__(self, disk_dvr, instance, stg_ftsk):
        """Create the Task for the attach disk to instance method.

        Requires disk info through requirement of disk_dev_info (provided by
        crt_disk_from_img)

        :param disk_dvr: The disk driver.
        :param instance: The nova instance.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        """
        super(AttachDisk, self).__init__(
            name='attach_disk', requires=['disk_dev_info'])
        self.disk_dvr = disk_dvr
        self.instance = instance
        self.stg_ftsk = stg_ftsk

    def execute(self, disk_dev_info):
        self.disk_dvr.attach_disk(self.instance, disk_dev_info, self.stg_ftsk)

    def revert(self, disk_dev_info, result, flow_failures):
        try:
            self.disk_dvr.detach_disk(self.instance)
        except pvm_exc.Error:
            # Don't allow revert exceptions to interrupt the revert flow.
            LOG.exception("Disk detach failed during revert. Ignoring.",
                          instance=self.instance)


class DetachDisk(task.Task):

    """The task to detach the disk storage from the instance."""

    def __init__(self, disk_dvr, instance):
        """Creates the Task to detach the storage adapters.

        Provides the stor_adpt_mappings. A list of pypowervm
        VSCSIMappings or VFCMappings (depending on the storage adapter).

        :param disk_dvr: The DiskAdapter for the VM.
        :param instance: The nova instance.
        """
        super(DetachDisk, self).__init__(
            name='detach_disk', provides='stor_adpt_mappings')
        self.instance = instance
        self.disk_dvr = disk_dvr

    def execute(self):
        return self.disk_dvr.detach_disk(self.instance)


class DeleteDisk(task.Task):

    """The task to delete the backing storage."""

    def __init__(self, disk_dvr):
        """Creates the Task to delete the disk storage from the system.

        Requires the stor_adpt_mappings.

        :param disk_dvr: The DiskAdapter for the VM.
        """
        super(DeleteDisk, self).__init__(
            name='delete_disk', requires=['stor_adpt_mappings'])
        self.disk_dvr = disk_dvr

    def execute(self, stor_adpt_mappings):
        self.disk_dvr.delete_disks(stor_adpt_mappings)
class CreateAndConnectCfgDrive(task.Task):

    """The task to create the config drive."""

    def __init__(self, adapter, instance, injected_files,
                 network_info, stg_ftsk, admin_pass=None):
        """Create the Task that creates and connects the config drive.

        Requires the 'mgmt_cna'

        :param adapter: The adapter for the pypowervm API
        :param instance: The nova instance
        :param injected_files: A list of file paths that will be injected into
                               the ISO.
        :param network_info: The network_info from the nova spawn method.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        :param admin_pass (Optional, Default None): Password to inject for the
                                                    VM.
        """
        super(CreateAndConnectCfgDrive, self).__init__(
            name='cfg_drive', requires=['mgmt_cna'])
        self.adapter = adapter
        self.instance = instance
        self.injected_files = injected_files
        self.network_info = network_info
        self.stg_ftsk = stg_ftsk
        self.ad_pass = admin_pass
        self.mb = None

    def execute(self, mgmt_cna):
        self.mb = media.ConfigDrivePowerVM(self.adapter)
        self.mb.create_cfg_drv_vopt(self.instance, self.injected_files,
                                    self.network_info, self.stg_ftsk,
                                    admin_pass=self.ad_pass, mgmt_cna=mgmt_cna)

    def revert(self, mgmt_cna, result, flow_failures):
        # No media builder, nothing to do
        if self.mb is None:
            return

        # Delete the virtual optical media. We don't care if it fails
        try:
            self.mb.dlt_vopt(self.instance, self.stg_ftsk)
        except pvm_exc.Error:
            LOG.exception('VOpt removal (as part of reversion) failed.',
                          instance=self.instance)


class DeleteVOpt(task.Task):

    """The task to delete the virtual optical."""

    def __init__(self, adapter, instance, stg_ftsk=None):
        """Creates the Task to delete the instance's virtual optical media.

        :param adapter: The adapter for the pypowervm API
        :param instance: The nova instance.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        """
        super(DeleteVOpt, self).__init__(name='vopt_delete')
        self.adapter = adapter
        self.instance = instance
        self.stg_ftsk = stg_ftsk

    def execute(self):
        media_builder = media.ConfigDrivePowerVM(self.adapter)
        media_builder.dlt_vopt(self.instance, stg_ftsk=self.stg_ftsk)
class InstanceDiskToMgmt(task.Task):

    """The task to connect an instance's disk to the management partition.

    This task will connect the instance's disk to the management partition and
    discover it. We do these two pieces together because their reversion
    happens in the same order.
    """

    def __init__(self, disk_dvr, instance):
        """Create the Task for connecting boot disk to mgmt partition.

        Provides:
        stg_elem: The storage element wrapper (pypowervm LU, PV, etc.) that was
                  connected.
        vios_wrap: The Virtual I/O Server wrapper from which the storage
                   element was mapped.
        disk_path: The local path to the mapped-and-discovered device, e.g.
                   '/dev/sde'.

        :param disk_dvr: The disk driver.
        :param instance: The nova instance whose boot disk is to be connected.
        """
        super(InstanceDiskToMgmt, self).__init__(
            name='instance_disk_to_mgmt',
            provides=['stg_elem', 'vios_wrap', 'disk_path'])
        self.disk_dvr = disk_dvr
        self.instance = instance
        self.stg_elem = None
        self.vios_wrap = None
        self.disk_path = None

    def execute(self):
        """Map the instance's boot disk and discover it."""

        # Search for boot disk on the NovaLink partition.
        if self.disk_dvr.mp_uuid in self.disk_dvr._vios_uuids:
            dev_name = self.disk_dvr.get_bootdisk_path(
                self.instance, self.disk_dvr.mp_uuid)
            if dev_name is not None:
                return None, None, dev_name

        self.stg_elem, self.vios_wrap = (
            self.disk_dvr.connect_instance_disk_to_mgmt(self.instance))
        new_maps = pvm_smap.find_maps(
            self.vios_wrap.scsi_mappings, client_lpar_id=self.disk_dvr.mp_uuid,
            stg_elem=self.stg_elem)
        if not new_maps:
            raise exception.NewMgmtMappingNotFoundException(
                stg_name=self.stg_elem.name, vios_name=self.vios_wrap.name)

        # new_maps should be length 1, but even if it's not - i.e. we somehow
        # matched more than one mapping of the same dev to the management
        # partition from the same VIOS - it is safe to use the first one.
        mapping = new_maps[0]
        # Scan the SCSI bus, discover the disk, find its canonical path.
        LOG.info("Discovering device and path for mapping of %(dev_name)s "
                 "on the management partition.",
                 {'dev_name': self.stg_elem.name}, instance=self.instance)
        self.disk_path = mgmt.discover_vscsi_disk(mapping)
        return self.stg_elem, self.vios_wrap, self.disk_path

    def revert(self, result, flow_failures):
        """Unmap the disk and then remove it from the management partition.

        We use this order to avoid rediscovering the device in case some other
        thread scans the SCSI bus between when we remove and when we unmap.
        """
        if self.vios_wrap is None or self.stg_elem is None:
            # We never even got connected - nothing to do.
            return
        LOG.warning("Unmapping boot disk %(disk_name)s from the management "
                    "partition via Virtual I/O Server %(vioname)s.",
                    {'disk_name': self.stg_elem.name,
                     'vioname': self.vios_wrap.name}, instance=self.instance)
        self.disk_dvr.disconnect_disk_from_mgmt(self.vios_wrap.uuid,
                                                self.stg_elem.name)

        if self.disk_path is None:
            # We did not discover the disk - nothing else to do.
            return
        LOG.warning("Removing disk %(dpath)s from the management partition.",
                    {'dpath': self.disk_path}, instance=self.instance)
        try:
            mgmt.remove_block_dev(self.disk_path)
        except pvm_exc.Error:
            # Don't allow revert exceptions to interrupt the revert flow.
            LOG.exception("Remove disk failed during revert. Ignoring.",
                          instance=self.instance)
class RemoveInstanceDiskFromMgmt(task.Task):
|
|
||||||
|
|
||||||
"""Unmap and remove an instance's boot disk from the mgmt partition."""
|
|
||||||
|
|
||||||
def __init__(self, disk_dvr, instance):
|
|
||||||
"""Create task to unmap and remove an instance's boot disk from mgmt.
|
|
||||||
|
|
||||||
Requires (from InstanceDiskToMgmt):
|
|
||||||
stg_elem: The storage element wrapper (pypowervm LU, PV, etc.) that was
|
|
||||||
connected.
|
|
||||||
vios_wrap: The Virtual I/O Server wrapper.
|
|
||||||
(pypowervm.wrappers.virtual_io_server.VIOS) from which the
|
|
||||||
storage element was mapped.
|
|
||||||
disk_path: The local path to the mapped-and-discovered device, e.g.
|
|
||||||
'/dev/sde'.
|
|
||||||
:param disk_dvr: The disk driver.
|
|
||||||
:param instance: The nova instance whose boot disk is to be connected.
|
|
||||||
"""
|
|
||||||
self.disk_dvr = disk_dvr
|
|
||||||
self.instance = instance
|
|
||||||
super(RemoveInstanceDiskFromMgmt, self).__init__(
|
|
||||||
name='remove_inst_disk_from_mgmt',
|
|
||||||
requires=['stg_elem', 'vios_wrap', 'disk_path'])
|
|
||||||
|
|
||||||
def execute(self, stg_elem, vios_wrap, disk_path):
|
|
||||||
"""Unmap and remove an instance's boot disk from the mgmt partition.
|
|
||||||
|
|
||||||
Input parameters ('requires') provided by InstanceDiskToMgmt task.
|
|
||||||
:param stg_elem: The storage element wrapper (pypowervm LU, PV, etc.)
|
|
||||||
to be disconnected.
|
|
||||||
:param vios_wrap: The Virtual I/O Server wrapper from which the
|
|
||||||
mapping is to be removed.
|
|
||||||
:param disk_path: The local path to the disk device to be removed, e.g.
|
|
||||||
'/dev/sde'
|
|
||||||
"""
|
|
||||||
# stg_elem is None if boot disk was not mapped to management partition.
|
|
||||||
if stg_elem is None:
|
|
||||||
return
|
|
||||||
LOG.info("Unmapping boot disk %(disk_name)s from the management "
|
|
||||||
"partition via Virtual I/O Server %(vios_name)s.",
|
|
||||||
{'disk_name': stg_elem.name, 'vios_name': vios_wrap.name},
|
|
||||||
instance=self.instance)
|
|
||||||
self.disk_dvr.disconnect_disk_from_mgmt(vios_wrap.uuid, stg_elem.name)
|
|
||||||
LOG.info("Removing disk %(disk_path)s from the management partition.",
|
|
||||||
{'disk_path': disk_path}, instance=self.instance)
|
|
||||||
mgmt.remove_block_dev(disk_path)
|
|
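
# A minimal, self-contained sketch (hypothetical task names, not part of the
# removed driver) of the TaskFlow execute/revert contract the tasks above
# rely on: execute() provides values that later tasks require, and revert()
# undoes completed work in reverse order once a later task in the flow fails.
from taskflow import engines, task
from taskflow.patterns import linear_flow


class Connect(task.Task):
    def __init__(self):
        super(Connect, self).__init__(name='connect', provides='disk_path')

    def execute(self):
        # The return value is stored and injected into any task that lists
        # 'disk_path' in its requires.
        return '/dev/sde'

    def revert(self, result, flow_failures):
        # Runs only if a later task in the flow fails.
        print('cleaning up %s' % result)


class Use(task.Task):
    def __init__(self):
        super(Use, self).__init__(name='use', requires=['disk_path'])

    def execute(self, disk_path):
        raise RuntimeError('boom')  # forces Connect.revert to run


try:
    engines.run(linear_flow.Flow('demo').add(Connect(), Use()))
except RuntimeError:
    pass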
@@ -1,154 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
from pypowervm import exceptions as pvm_exc
from pypowervm.tasks import storage as pvm_stg
from taskflow import task
from taskflow.types import failure as task_fail

from nova.virt.powervm import vm


LOG = logging.getLogger(__name__)


class Get(task.Task):

    """The task for getting a VM entry."""

    def __init__(self, adapter, instance):
        """Creates the Task for getting a VM entry.

        Provides the 'lpar_wrap' for other tasks.

        :param adapter: The adapter for the pypowervm API
        :param instance: The nova instance.
        """
        super(Get, self).__init__(name='get_vm', provides='lpar_wrap')
        self.adapter = adapter
        self.instance = instance

    def execute(self):
        return vm.get_instance_wrapper(self.adapter, self.instance)


class Create(task.Task):
    """The task for creating a VM."""

    def __init__(self, adapter, host_wrapper, instance, stg_ftsk):
        """Creates the Task for creating a VM.

        The revert method only needs to do something for failed rebuilds.
        Since the rebuild and build methods have different flows, it is
        necessary to clean up the destination LPAR on failures during rebuild.

        The revert method is not implemented for build because the compute
        manager calls the driver destroy operation for spawn errors. By
        not deleting the lpar, it's a cleaner flow through the destroy
        operation and accomplishes the same result.

        Any stale storage associated with the new VM's (possibly recycled) ID
        will be cleaned up. The cleanup work will be delegated to the FeedTask
        represented by the stg_ftsk parameter.

        :param adapter: The adapter for the pypowervm API
        :param host_wrapper: The managed system wrapper
        :param instance: The nova instance.
        :param stg_ftsk: FeedTask to defer storage connectivity operations.
        """
        super(Create, self).__init__(name='crt_vm', provides='lpar_wrap')
        self.instance = instance
        self.adapter = adapter
        self.host_wrapper = host_wrapper
        self.stg_ftsk = stg_ftsk

    def execute(self):
        wrap = vm.create_lpar(self.adapter, self.host_wrapper, self.instance)
        # Get rid of any stale storage and/or mappings associated with the new
        # LPAR's ID, so it doesn't accidentally have access to something it
        # oughtn't.
        LOG.info('Scrubbing stale storage.', instance=self.instance)
        pvm_stg.add_lpar_storage_scrub_tasks([wrap.id], self.stg_ftsk,
                                             lpars_exist=True)
        return wrap


class PowerOn(task.Task):
    """The task to power on the instance."""

    def __init__(self, adapter, instance):
        """Create the Task for the power on of the LPAR.

        :param adapter: The pypowervm adapter.
        :param instance: The nova instance.
        """
        super(PowerOn, self).__init__(name='pwr_vm')
        self.adapter = adapter
        self.instance = instance

    def execute(self):
        vm.power_on(self.adapter, self.instance)

    def revert(self, result, flow_failures):
        if isinstance(result, task_fail.Failure):
            # The power on itself failed...can't power off.
            LOG.debug('Power on failed. Not performing power off.',
                      instance=self.instance)
            return

        LOG.warning('Powering off instance.', instance=self.instance)
        try:
            vm.power_off(self.adapter, self.instance, force_immediate=True)
        except pvm_exc.Error:
            # Don't raise revert exceptions.
            LOG.exception("Power-off failed during revert.",
                          instance=self.instance)


class PowerOff(task.Task):
    """The task to power off a VM."""

    def __init__(self, adapter, instance, force_immediate=False):
        """Creates the Task to power off an LPAR.

        :param adapter: The adapter for the pypowervm API
        :param instance: The nova instance.
        :param force_immediate: Boolean. Perform a VSP hard power off.
        """
        super(PowerOff, self).__init__(name='pwr_off_vm')
        self.instance = instance
        self.adapter = adapter
        self.force_immediate = force_immediate

    def execute(self):
        vm.power_off(self.adapter, self.instance,
                     force_immediate=self.force_immediate)


class Delete(task.Task):
    """The task to delete the instance from the system."""

    def __init__(self, adapter, instance):
        """Create the Task to delete the VM from the system.

        :param adapter: The adapter for the pypowervm API.
        :param instance: The nova instance.
        """
        super(Delete, self).__init__(name='dlt_vm')
        self.adapter = adapter
        self.instance = instance

    def execute(self):
        vm.delete_lpar(self.adapter, self.instance)
@@ -1,373 +0,0 @@
# Copyright 2016, 2017 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc

from oslo_log import log
from oslo_serialization import jsonutils
from oslo_utils import excutils
from oslo_utils import importutils
from pypowervm import exceptions as pvm_ex
from pypowervm.tasks import cna as pvm_cna
from pypowervm.tasks import partition as pvm_par
from pypowervm.wrappers import event as pvm_evt

from nova import exception
from nova.network import model as network_model
from nova.virt.powervm import vm

LOG = log.getLogger(__name__)

NOVALINK_VSWITCH = 'NovaLinkVEABridge'

# Provider tag for custom events from this module
EVENT_PROVIDER_ID = 'NOVA_PVM_VIF'

VIF_TYPE_PVM_SEA = 'pvm_sea'
VIF_TYPE_PVM_OVS = 'ovs'
VIF_MAPPING = {VIF_TYPE_PVM_SEA:
               'nova.virt.powervm.vif.PvmSeaVifDriver',
               VIF_TYPE_PVM_OVS:
               'nova.virt.powervm.vif.PvmOvsVifDriver'}


def _build_vif_driver(adapter, instance, vif):
    """Returns the appropriate VIF Driver for the given VIF.

    :param adapter: The pypowervm adapter API interface.
    :param instance: The nova instance.
    :param vif: The virtual interface.
    :return: The appropriate PvmVifDriver for the VIF.
    """
    if vif.get('type') is None:
        LOG.exception("Failed to build vif driver. Missing vif type.",
                      instance=instance)
        raise exception.VirtualInterfacePlugException()

    # Match the vif type to an implementation.
    if VIF_MAPPING.get(vif['type']):
        return importutils.import_object(
            VIF_MAPPING.get(vif['type']), adapter, instance)

    # No matching implementation; raise an error.
    LOG.exception("Failed to build vif driver. Invalid vif type provided.",
                  instance=instance)
    raise exception.VirtualInterfacePlugException()
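
# A minimal sketch of the string-based dispatch used above: oslo's
# importutils.import_object() imports a dotted class path and instantiates
# it. The mapping and target class here are illustrative stand-ins, not the
# real VIF drivers.
from oslo_utils import importutils

DRIVERS = {'ovs': 'collections.OrderedDict'}  # stand-in import path


def build_driver(vif_type):
    path = DRIVERS.get(vif_type)
    if path is None:
        raise ValueError('no driver for vif type %r' % vif_type)
    # Imports the module and calls the class with the supplied args.
    return importutils.import_object(path)


print(type(build_driver('ovs')))  # <class 'collections.OrderedDict'>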


def _push_vif_event(adapter, action, vif_w, instance, vif_type):
    """Push a custom event to the REST server for a vif action (plug/unplug).

    This event prompts the neutron agent to mark the port up or down. It is
    consumed by custom neutron agents (e.g. Shared Ethernet Adapter).

    :param adapter: The pypowervm adapter.
    :param action: The action taken on the vif - either 'plug' or 'unplug'.
    :param vif_w: The pypowervm wrapper of the affected vif (CNA, VNIC, etc.).
    :param instance: The nova instance for the event.
    :param vif_type: The type of event source (pvm_sea, ovs, bridge,
                     pvm_sriov, etc.).
    """
    data = vif_w.href
    detail = jsonutils.dumps(dict(provider=EVENT_PROVIDER_ID, action=action,
                                  mac=vif_w.mac, type=vif_type))
    event = pvm_evt.Event.bld(adapter, data, detail)
    try:
        event = event.create()
        LOG.debug('Pushed custom event for consumption by neutron agent: %s',
                  str(event), instance=instance)
    except Exception:
        with excutils.save_and_reraise_exception(logger=LOG):
            LOG.exception('Custom VIF event push failed. %s', str(event),
                          instance=instance)


def plug(adapter, instance, vif, new_vif=True):
    """Plugs a virtual interface (network) into a VM.

    :param adapter: The pypowervm adapter.
    :param instance: The nova instance object.
    :param vif: The virtual interface to plug into the instance.
    :param new_vif: (Optional, Default: True) If set, indicates that it is
                    a brand new VIF. If False, it indicates that the VIF
                    is already on the client but should be treated on the
                    bridge.
    :return: The wrapper (CNA) representing the plugged virtual network. None
             if the vnet was not created.
    """
    vif_drv = _build_vif_driver(adapter, instance, vif)

    try:
        vnet_w = vif_drv.plug(vif, new_vif=new_vif)
    except pvm_ex.HttpError:
        LOG.exception('VIF plug failed for instance.', instance=instance)
        raise exception.VirtualInterfacePlugException()
    # Other exceptions are (hopefully) custom VirtualInterfacePlugException
    # generated lower in the call stack.

    # Push a custom event if we really plugged the vif.
    if vnet_w is not None:
        _push_vif_event(adapter, 'plug', vnet_w, instance, vif['type'])

    return vnet_w


def unplug(adapter, instance, vif, cna_w_list=None):
    """Unplugs a virtual interface (network) from a VM.

    :param adapter: The pypowervm adapter.
    :param instance: The nova instance object.
    :param vif: The virtual interface to unplug from the instance.
    :param cna_w_list: (Optional, Default: None) The list of Client Network
                       Adapters from pypowervm. Providing this input
                       allows for an improvement in operation speed.
    """
    vif_drv = _build_vif_driver(adapter, instance, vif)
    try:
        vnet_w = vif_drv.unplug(vif, cna_w_list=cna_w_list)
    except pvm_ex.HttpError as he:
        LOG.exception('VIF unplug failed for instance', instance=instance)
        raise exception.VirtualInterfaceUnplugException(reason=he.args[0])

    # Push a custom event if we successfully unplugged the vif.
    if vnet_w:
        _push_vif_event(adapter, 'unplug', vnet_w, instance, vif['type'])


class PvmVifDriver(metaclass=abc.ABCMeta):
    """Represents an abstract class for a PowerVM Vif Driver.

    A VIF Driver understands a given virtual interface type (network). It
    understands how to plug and unplug a given VIF for a virtual machine.
    """

    def __init__(self, adapter, instance):
        """Initializes a VIF Driver.
        :param adapter: The pypowervm adapter API interface.
        :param instance: The nova instance that the vif action will be run
                         against.
        """
        self.adapter = adapter
        self.instance = instance

    @abc.abstractmethod
    def plug(self, vif, new_vif=True):
        """Plugs a virtual interface (network) into a VM.

        :param vif: The virtual interface to plug into the instance.
        :param new_vif: (Optional, Default: True) If set, indicates that it is
                        a brand new VIF. If False, it indicates that the VIF
                        is already on the client but should be treated on the
                        bridge.
        :return: The new vif that was created. Only returned if new_vif is
                 set to True. Otherwise None is expected.
        """
        pass

    def unplug(self, vif, cna_w_list=None):
        """Unplugs a virtual interface (network) from a VM.

        :param vif: The virtual interface to unplug from the instance.
        :param cna_w_list: (Optional, Default: None) The list of Client Network
                           Adapters from pypowervm. Providing this input
                           allows for an improvement in operation speed.
        :return cna_w: The deleted Client Network Adapter or None if the CNA
                       is not found.
        """
        # This is a default implementation that most implementations will
        # require.

        # Need to find the adapters if they were not provided.
        if not cna_w_list:
            cna_w_list = vm.get_cnas(self.adapter, self.instance)

        cna_w = self._find_cna_for_vif(cna_w_list, vif)
        if not cna_w:
            LOG.warning('Unable to unplug VIF with mac %(mac)s. The VIF was '
                        'not found on the instance.',
                        {'mac': vif['address']}, instance=self.instance)
            return None

        LOG.info('Deleting VIF with mac %(mac)s.',
                 {'mac': vif['address']}, instance=self.instance)
        try:
            cna_w.delete()
        except Exception as e:
            LOG.exception('Unable to unplug VIF with mac %(mac)s.',
                          {'mac': vif['address']}, instance=self.instance)
            raise exception.VirtualInterfaceUnplugException(
                reason=str(e))
        return cna_w

    @staticmethod
    def _find_cna_for_vif(cna_w_list, vif):
        """Finds the PowerVM CNA for a given Nova VIF.

        :param cna_w_list: The list of Client Network Adapter wrappers from
                           pypowervm.
        :param vif: The Nova Virtual Interface (virtual network interface).
        :return: The CNA that corresponds to the VIF. None if one is not
                 part of the cna_w_list.
        """
        for cna_w in cna_w_list:
            if vm.norm_mac(cna_w.mac) == vif['address']:
                return cna_w
        return None


class PvmOvsVifDriver(PvmVifDriver):
    """The Open vSwitch VIF driver for PowerVM."""

    def plug(self, vif, new_vif=True):
        """Plugs a virtual interface (network) into a VM.

        Creates a 'peer to peer' connection between the Management partition
        hosting the Linux I/O and the client VM. There will be one trunk
        adapter for a given client adapter.

        The device will be 'up' on the mgmt partition.

        Will make sure that the trunk device has the appropriate metadata (e.g.
        port id) set on it so that the Open vSwitch agent picks it up properly.

        :param vif: The virtual interface to plug into the instance.
        :param new_vif: (Optional, Default: True) If set, indicates that it is
                        a brand new VIF. If False, it indicates that the VIF
                        is already on the client but should be treated on the
                        bridge.
        :return: The new vif that was created. Only returned if new_vif is
                 set to True. Otherwise None is expected.
        """

        # Create the trunk and client adapter.
        lpar_uuid = vm.get_pvm_uuid(self.instance)
        mgmt_uuid = pvm_par.get_this_partition(self.adapter).uuid

        mtu = vif['network'].get_meta('mtu')
        if 'devname' in vif:
            dev_name = vif['devname']
        else:
            dev_name = ("nic" + vif['id'])[:network_model.NIC_NAME_LEN]

        meta_attrs = ','.join([
            'iface-id=%s' % (vif.get('ovs_interfaceid') or vif['id']),
            'iface-status=active',
            'attached-mac=%s' % vif['address'],
            'vm-uuid=%s' % self.instance.uuid])

        if new_vif:
            return pvm_cna.crt_p2p_cna(
                self.adapter, None, lpar_uuid, [mgmt_uuid], NOVALINK_VSWITCH,
                crt_vswitch=True, mac_addr=vif['address'], dev_name=dev_name,
                ovs_bridge=vif['network']['bridge'],
                ovs_ext_ids=meta_attrs, configured_mtu=mtu)[0]
        else:
            # Bug: https://bugs.launchpad.net/nova-powervm/+bug/1731548
            # When a host is rebooted, something is discarding tap devices for
            # VMs deployed with OVS vif. To prevent VMs losing network
            # connectivity, this is fixed by recreating the tap devices during
            # init of the nova compute service, which will call vif plug with
            # new_vif==False.

            # Find the CNA for this vif.
            # TODO(esberglu): Improve performance by caching VIOS wrapper(s)
            # and CNA lists (in case >1 vif per VM).
            cna_w_list = vm.get_cnas(self.adapter, self.instance)
            cna_w = self._find_cna_for_vif(cna_w_list, vif)
            if not cna_w:
                LOG.warning('Unable to plug VIF with mac %s for instance. The '
                            'VIF was not found on the instance.',
                            vif['address'], instance=self.instance)
                return None

            # Find the corresponding trunk adapter.
            trunks = pvm_cna.find_trunks(self.adapter, cna_w)
            for trunk in trunks:
                # Set MTU, OVS external ids, and OVS bridge metadata.
                trunk.configured_mtu = mtu
                trunk.ovs_ext_ids = meta_attrs
                trunk.ovs_bridge = vif['network']['bridge']
                # Updating the trunk adapter will cause NovaLink to reassociate
                # the tap device.
                trunk.update()

    def unplug(self, vif, cna_w_list=None):
        """Unplugs a virtual interface (network) from a VM.

        Extends the base implementation, but before calling it will remove
        the adapter from the Open vSwitch and delete the trunk.

        :param vif: The virtual interface to unplug from the instance.
        :param cna_w_list: (Optional, Default: None) The list of Client Network
                           Adapters from pypowervm. Providing this input
                           allows for an improvement in operation speed.
        :return cna_w: The deleted Client Network Adapter or None if the CNA
                       is not found.
        """
        # Need to find the adapters if they were not provided.
        if not cna_w_list:
            cna_w_list = vm.get_cnas(self.adapter, self.instance)

        # Find the CNA for this vif.
        cna_w = self._find_cna_for_vif(cna_w_list, vif)

        if not cna_w:
            LOG.warning('Unable to unplug VIF with mac %s for instance. The '
                        'VIF was not found on the instance.', vif['address'],
                        instance=self.instance)
            return None

        # Find and delete the trunk adapters.
        trunks = pvm_cna.find_trunks(self.adapter, cna_w)
        for trunk in trunks:
            trunk.delete()

        # Delete the client CNA.
        return super(PvmOvsVifDriver, self).unplug(vif, cna_w_list=cna_w_list)


class PvmSeaVifDriver(PvmVifDriver):
    """The PowerVM Shared Ethernet Adapter VIF Driver."""

    def plug(self, vif, new_vif=True):
        """Plugs a virtual interface (network) into a VM.

        This method simply creates the client network adapter in the VM.

        :param vif: The virtual interface to plug into the instance.
        :param new_vif: (Optional, Default: True) If set, indicates that it is
                        a brand new VIF. If False, it indicates that the VIF
                        is already on the client but should be treated on the
                        bridge.
        :return: The new vif that was created. Only returned if new_vif is
                 set to True. Otherwise None is expected.
        """
        # Do nothing if this is not a new VIF.
        if not new_vif:
            return None

        lpar_uuid = vm.get_pvm_uuid(self.instance)

        # CNAs require a VLAN. The networking-powervm neutron agent puts this
        # in the vif details.
        vlan = int(vif['details']['vlan'])

        LOG.debug("Creating SEA-based VIF with VLAN %s", str(vlan),
                  instance=self.instance)
        cna_w = pvm_cna.crt_cna(self.adapter, None, lpar_uuid, vlan,
                                mac_addr=vif['address'])

        return cna_w
@@ -1,543 +0,0 @@
# Copyright 2014, 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
from oslo_utils import strutils as stru
from pypowervm import exceptions as pvm_exc
from pypowervm.helpers import log_helper as pvm_log
from pypowervm.tasks import power
from pypowervm.tasks import power_opts as popts
from pypowervm.tasks import vterm
from pypowervm import util as pvm_u
from pypowervm.utils import lpar_builder as lpar_bldr
from pypowervm.utils import uuid as pvm_uuid
from pypowervm.utils import validation as pvm_vldn
from pypowervm.wrappers import base_partition as pvm_bp
from pypowervm.wrappers import logical_partition as pvm_lpar
from pypowervm.wrappers import network as pvm_net
from pypowervm.wrappers import shared_proc_pool as pvm_spp

from nova.compute import power_state
from nova import conf
from nova import exception as exc
from nova.i18n import _
from nova.virt import hardware


CONF = conf.CONF
LOG = logging.getLogger(__name__)

_POWERVM_STARTABLE_STATE = (pvm_bp.LPARState.NOT_ACTIVATED,)
_POWERVM_STOPPABLE_STATE = (
    pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING,
    pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN,
    pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING,
    pvm_bp.LPARState.SUSPENDING)
_POWERVM_TO_NOVA_STATE = {
    pvm_bp.LPARState.MIGRATING_RUNNING: power_state.RUNNING,
    pvm_bp.LPARState.RUNNING: power_state.RUNNING,
    pvm_bp.LPARState.STARTING: power_state.RUNNING,
    # Map open firmware state to active since it can be shut down.
    pvm_bp.LPARState.OPEN_FIRMWARE: power_state.RUNNING,
    # It is running until it is off.
    pvm_bp.LPARState.SHUTTING_DOWN: power_state.RUNNING,
    # It is running until the suspend completes.
    pvm_bp.LPARState.SUSPENDING: power_state.RUNNING,

    pvm_bp.LPARState.MIGRATING_NOT_ACTIVE: power_state.SHUTDOWN,
    pvm_bp.LPARState.NOT_ACTIVATED: power_state.SHUTDOWN,

    pvm_bp.LPARState.UNKNOWN: power_state.NOSTATE,
    pvm_bp.LPARState.HARDWARE_DISCOVERY: power_state.NOSTATE,
    pvm_bp.LPARState.NOT_AVAILBLE: power_state.NOSTATE,

    # While resuming, we should be considered suspended still. Only once
    # resumed will we be active (which is represented by the RUNNING state).
    pvm_bp.LPARState.RESUMING: power_state.SUSPENDED,
    pvm_bp.LPARState.SUSPENDED: power_state.SUSPENDED,

    pvm_bp.LPARState.ERROR: power_state.CRASHED}


def get_cnas(adapter, instance, **search):
    """Returns the (possibly filtered) current CNAs on the instance.

    The Client Network Adapters are the Ethernet adapters for a VM.

    :param adapter: The pypowervm adapter.
    :param instance: The nova instance.
    :param search: Keyword arguments for CNA.search. If omitted, all CNAs are
                   returned.
    :return: The CNA wrappers that represent the ClientNetworkAdapters on the
             VM.
    """
    meth = pvm_net.CNA.search if search else pvm_net.CNA.get

    return meth(adapter, parent_type=pvm_lpar.LPAR,
                parent_uuid=get_pvm_uuid(instance), **search)


def get_lpar_names(adp):
    """Get a list of the LPAR names.

    :param adp: A pypowervm.adapter.Adapter instance for the PowerVM API.
    :return: A list of string names of the PowerVM Logical Partitions.
    """
    return [x.name for x in pvm_lpar.LPAR.search(adp, is_mgmt_partition=False)]


def get_pvm_uuid(instance):
    """Get the corresponding PowerVM VM uuid of an instance uuid.

    Maps an OpenStack instance uuid to a PowerVM uuid. The UUIDs of the
    Nova instance and the PowerVM LPAR are mapped 1 to 1. This method runs
    the conversion algorithm against the instance's uuid to produce the
    PowerVM UUID.

    :param instance: nova.objects.instance.Instance.
    :return: The PowerVM UUID for the LPAR corresponding to the instance.
    """
    return pvm_uuid.convert_uuid_to_pvm(instance.uuid).upper()
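
# A hedged illustration (an assumption based on pypowervm's
# convert_uuid_to_pvm, not taken from this diff) of the 1:1 mapping described
# above: the PowerVM API requires the high-order bit of the UUID to be zero,
# so the conversion masks the first nibble and upper-cases the result.
def to_pvm_uuid(nova_uuid):
    first = '%x' % (int(nova_uuid[0], 16) & 7)  # clear the top bit
    return (first + nova_uuid[1:]).upper()


print(to_pvm_uuid('9f036aa3-1e81-4dca-a0ae-e1277ac5e2ae'))
# -> 1F036AA3-1E81-4DCA-A0AE-E1277AC5E2AE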


def get_instance_wrapper(adapter, instance):
    """Get the LPAR wrapper for a given Nova instance.

    :param adapter: The adapter for the pypowervm API.
    :param instance: The nova instance.
    :return: The pypowervm logical_partition wrapper.
    """
    pvm_inst_uuid = get_pvm_uuid(instance)
    try:
        return pvm_lpar.LPAR.get(adapter, uuid=pvm_inst_uuid)
    except pvm_exc.Error as e:
        with excutils.save_and_reraise_exception(logger=LOG) as sare:
            LOG.exception("Failed to retrieve LPAR associated with instance.",
                          instance=instance)
            if e.response is not None and e.response.status == 404:
                sare.reraise = False
                raise exc.InstanceNotFound(instance_id=pvm_inst_uuid)


def power_on(adapter, instance):
    """Powers on a VM.

    :param adapter: A pypowervm.adapter.Adapter.
    :param instance: The nova instance to power on.
    :raises: InstancePowerOnFailure
    """
    # Synchronize power-on and power-off ops on a given instance.
    with lockutils.lock('power_%s' % instance.uuid):
        entry = get_instance_wrapper(adapter, instance)
        # Get the current state and see if we can start the VM.
        if entry.state in _POWERVM_STARTABLE_STATE:
            # Now start the lpar.
            try:
                power.power_on(entry, None)
            except pvm_exc.Error as e:
                LOG.exception("PowerVM error during power_on.",
                              instance=instance)
                raise exc.InstancePowerOnFailure(reason=str(e))


def power_off(adapter, instance, force_immediate=False, timeout=None):
    """Powers off a VM.

    :param adapter: A pypowervm.adapter.Adapter.
    :param instance: The nova instance to power off.
    :param timeout: (Optional, Default: None) How long to wait for the job
                    to complete. If None, the default timeout from
                    pypowervm's power off method is used.
    :param force_immediate: (Optional, Default: False) Whether to shut the
                            VM down immediately via a VSP hard power off.
    :raises: InstancePowerOffFailure
    """
    # Synchronize power-on and power-off ops on a given instance.
    with lockutils.lock('power_%s' % instance.uuid):
        entry = get_instance_wrapper(adapter, instance)
        # Get the current state and see if we can stop the VM.
        LOG.debug("Powering off request for instance in state %(state)s. "
                  "Force Immediate Flag: %(force)s.",
                  {'state': entry.state, 'force': force_immediate},
                  instance=instance)
        if entry.state in _POWERVM_STOPPABLE_STATE:
            # Now stop the lpar.
            try:
                LOG.debug("Power off executing.", instance=instance)
                kwargs = {'timeout': timeout} if timeout else {}
                if force_immediate:
                    power.PowerOp.stop(
                        entry, opts=popts.PowerOffOpts().vsp_hard(), **kwargs)
                else:
                    power.power_off_progressive(entry, **kwargs)
            except pvm_exc.Error as e:
                LOG.exception("PowerVM error during power_off.",
                              instance=instance)
                raise exc.InstancePowerOffFailure(reason=str(e))
        else:
            LOG.debug("Power off not required for instance %(inst)s.",
                      {'inst': instance.name})
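
# A minimal sketch of the per-instance serialization used above: oslo's
# lockutils.lock yields a named lock, so concurrent power-on and power-off
# requests for the same instance run one at a time. The names and function
# here are illustrative, not part of the removed driver.
from oslo_concurrency import lockutils


def toggle_power(instance_uuid, turn_on):
    with lockutils.lock('power_%s' % instance_uuid):
        # The state check and the power operation happen atomically with
        # respect to any other holder of this lock name.
        print('powering %s %s' % ('on' if turn_on else 'off', instance_uuid))


toggle_power('abc-123', True)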


def reboot(adapter, instance, hard):
    """Reboots a VM.

    :param adapter: A pypowervm.adapter.Adapter.
    :param instance: The nova instance to reboot.
    :param hard: Boolean True if hard reboot, False otherwise.
    :raises: InstanceRebootFailure
    """
    # Synchronize power-on and power-off ops on a given instance.
    with lockutils.lock('power_%s' % instance.uuid):
        try:
            entry = get_instance_wrapper(adapter, instance)
            if entry.state != pvm_bp.LPARState.NOT_ACTIVATED:
                if hard:
                    power.PowerOp.stop(
                        entry, opts=popts.PowerOffOpts().vsp_hard().restart())
                else:
                    power.power_off_progressive(entry, restart=True)
            else:
                # pypowervm does NOT throw an exception if "already down".
                # Any other exception from pypowervm is a legitimate failure;
                # let it raise up.
                # If we get here, pypowervm thinks the instance is down.
                power.power_on(entry, None)
        except pvm_exc.Error as e:
            LOG.exception("PowerVM error during reboot.", instance=instance)
            raise exc.InstanceRebootFailure(reason=str(e))


def delete_lpar(adapter, instance):
    """Delete an LPAR.

    :param adapter: The adapter for the pypowervm API.
    :param instance: The nova instance corresponding to the lpar to delete.
    """
    lpar_uuid = get_pvm_uuid(instance)
    # Attempt to delete the VM. To avoid failures due to an open vterm, we
    # will attempt to close the vterm before issuing the delete.
    try:
        LOG.info('Deleting virtual machine.', instance=instance)
        # Ensure any vterms are closed. Will no-op otherwise.
        vterm.close_vterm(adapter, lpar_uuid)
        # Run the LPAR delete.
        resp = adapter.delete(pvm_lpar.LPAR.schema_type, root_id=lpar_uuid)
        LOG.info('Virtual machine delete status: %d', resp.status,
                 instance=instance)
        return resp
    except pvm_exc.HttpError as e:
        with excutils.save_and_reraise_exception(logger=LOG) as sare:
            if e.response and e.response.status == 404:
                # The LPAR is already gone - don't fail.
                sare.reraise = False
                LOG.info('Virtual machine not found.', instance=instance)
            else:
                LOG.error('HttpError deleting virtual machine.',
                          instance=instance)
    except pvm_exc.Error:
        with excutils.save_and_reraise_exception(logger=LOG):
            # Attempting to close the vterm did not help, so raise the
            # exception.
            LOG.error('Virtual machine delete failed: LPARID=%s', lpar_uuid)


def create_lpar(adapter, host_w, instance):
    """Create an LPAR on the host based on the instance.

    :param adapter: The adapter for the pypowervm API.
    :param host_w: The host's System wrapper.
    :param instance: The nova instance.
    :return: The LPAR wrapper response from the API.
    """
    try:
        # Translate the nova flavor into a PowerVM Wrapper Object.
        lpar_b = VMBuilder(host_w, adapter).lpar_builder(instance)
        pending_lpar_w = lpar_b.build()
        # Run validation against it. This is just for nice(r) error messages.
        pvm_vldn.LPARWrapperValidator(pending_lpar_w,
                                      host_w).validate_all()
        # Create it. The API returns a new wrapper with the actual system data.
        return pending_lpar_w.create(parent=host_w)
    except lpar_bldr.LPARBuilderException as e:
        # Raise the BuildAbortException since the LPAR failed to build.
        raise exc.BuildAbortException(instance_uuid=instance.uuid, reason=e)
    except pvm_exc.HttpError as he:
        # Raise the API exception.
        LOG.exception("PowerVM HttpError creating LPAR.", instance=instance)
        raise exc.PowerVMAPIFailed(inst_name=instance.name, reason=he)


def _translate_vm_state(pvm_state):
    """Find the current state of the lpar.

    :param pvm_state: The PowerVM state string for the LPAR.
    :return: The appropriate integer state value from power_state, converted
             from the PowerVM state.
    """
    if pvm_state is None:
        return power_state.NOSTATE
    try:
        return _POWERVM_TO_NOVA_STATE[pvm_state.lower()]
    except KeyError:
        return power_state.NOSTATE


def get_vm_qp(adapter, lpar_uuid, qprop=None, log_errors=True):
    """Returns one or all quick properties of an LPAR.

    :param adapter: The pypowervm adapter.
    :param lpar_uuid: The (PowerVM) UUID for the LPAR.
    :param qprop: The quick property key to return. If specified, that single
                  property value is returned. If None/unspecified, all quick
                  properties are returned in a dictionary.
    :param log_errors: Indicator whether to log REST data after an exception.
    :return: Either a single quick property value or a dictionary of all quick
             properties.
    """
    kwds = dict(root_id=lpar_uuid, suffix_type='quick', suffix_parm=qprop)
    if not log_errors:
        # Remove the log helper from the list of helpers.
        # Note that adapter.helpers returns a copy - the .remove doesn't affect
        # the adapter's original helpers list.
        helpers = adapter.helpers
        try:
            helpers.remove(pvm_log.log_helper)
        except ValueError:
            # It's not an error if we didn't find it.
            pass
        kwds['helpers'] = helpers
    try:
        resp = adapter.read(pvm_lpar.LPAR.schema_type, **kwds)
    except pvm_exc.HttpError as e:
        with excutils.save_and_reraise_exception(logger=LOG) as sare:
            # A 404 error indicates the LPAR has been deleted.
            if e.response and e.response.status == 404:
                sare.reraise = False
                raise exc.InstanceNotFound(instance_id=lpar_uuid)
            # Otherwise the original exception is re-raised.
    return jsonutils.loads(resp.body)


def get_vm_info(adapter, instance):
    """Get the InstanceInfo for an instance.

    :param adapter: The pypowervm.adapter.Adapter for the PowerVM REST API.
    :param instance: nova.objects.instance.Instance object.
    :returns: An InstanceInfo object.
    """
    pvm_uuid = get_pvm_uuid(instance)
    pvm_state = get_vm_qp(adapter, pvm_uuid, 'PartitionState')
    nova_state = _translate_vm_state(pvm_state)
    return hardware.InstanceInfo(nova_state)


def norm_mac(mac):
    """Normalizes a MAC address from pypowervm format to OpenStack.

    That means that the format will be converted to lower case and will
    have colons added.

    :param mac: A pypowervm mac address. Ex. 1234567890AB
    :return: A mac that matches the standard neutron format.
             Ex. 12:34:56:78:90:ab
    """
    # Strip any colons first, in case the mac is already normalized.
    mac = mac.lower().replace(':', '')
    return ':'.join(mac[i:i + 2] for i in range(0, len(mac), 2))


class VMBuilder(object):
    """Converts a Nova Instance/Flavor into a pypowervm LPARBuilder."""
    _PVM_PROC_COMPAT = 'powervm:processor_compatibility'
    _PVM_UNCAPPED = 'powervm:uncapped'
    _PVM_DED_SHAR_MODE = 'powervm:dedicated_sharing_mode'
    _PVM_SHAR_PROC_POOL = 'powervm:shared_proc_pool_name'
    _PVM_SRR_CAPABILITY = 'powervm:srr_capability'

    # Map of PowerVM extra specs to the lpar builder attributes.
    # '' is used for attributes that are not implemented yet.
    # None means there is no direct attribute mapping and the key must
    # be handled individually.
    _ATTRS_MAP = {
        'powervm:min_mem': lpar_bldr.MIN_MEM,
        'powervm:max_mem': lpar_bldr.MAX_MEM,
        'powervm:min_vcpu': lpar_bldr.MIN_VCPU,
        'powervm:max_vcpu': lpar_bldr.MAX_VCPU,
        'powervm:proc_units': lpar_bldr.PROC_UNITS,
        'powervm:min_proc_units': lpar_bldr.MIN_PROC_U,
        'powervm:max_proc_units': lpar_bldr.MAX_PROC_U,
        'powervm:dedicated_proc': lpar_bldr.DED_PROCS,
        'powervm:shared_weight': lpar_bldr.UNCAPPED_WEIGHT,
        'powervm:availability_priority': lpar_bldr.AVAIL_PRIORITY,
        _PVM_UNCAPPED: None,
        _PVM_DED_SHAR_MODE: None,
        _PVM_PROC_COMPAT: None,
        _PVM_SHAR_PROC_POOL: None,
        _PVM_SRR_CAPABILITY: None,
    }

    _DED_SHARING_MODES_MAP = {
        'share_idle_procs': pvm_bp.DedicatedSharingMode.SHARE_IDLE_PROCS,
        'keep_idle_procs': pvm_bp.DedicatedSharingMode.KEEP_IDLE_PROCS,
        'share_idle_procs_active':
            pvm_bp.DedicatedSharingMode.SHARE_IDLE_PROCS_ACTIVE,
        'share_idle_procs_always':
            pvm_bp.DedicatedSharingMode.SHARE_IDLE_PROCS_ALWAYS,
    }

    def __init__(self, host_w, adapter):
        """Initialize the converter.

        :param host_w: The host System wrapper.
        :param adapter: The pypowervm.adapter.Adapter for the PowerVM REST API.
        """
        self.adapter = adapter
        self.host_w = host_w
        kwargs = dict(proc_units_factor=CONF.powervm.proc_units_factor)
        self.stdz = lpar_bldr.DefaultStandardize(host_w, **kwargs)

    def lpar_builder(self, inst):
        """Returns the pypowervm LPARBuilder for a given Nova flavor.

        :param inst: the VM instance
        """
        attrs = self._format_flavor(inst)
        # TODO(thorst, efried): Add in IBMi attributes.
        return lpar_bldr.LPARBuilder(self.adapter, attrs, self.stdz)

    def _format_flavor(self, inst):
        """Returns the pypowervm format of the flavor.

        :param inst: The Nova VM instance.
        :return: A dict that can be used by the LPAR builder.
        """
        # The attrs are what is sent to pypowervm to convert the lpar.
        attrs = {
            lpar_bldr.NAME: pvm_u.sanitize_partition_name_for_api(inst.name),
            # The uuid is only actually set on a create of an LPAR.
            lpar_bldr.UUID: get_pvm_uuid(inst),
            lpar_bldr.MEM: inst.flavor.memory_mb,
            lpar_bldr.VCPU: inst.flavor.vcpus,
            # Set the srr capability to True by default.
            lpar_bldr.SRR_CAPABLE: True}

        # Loop through the extra specs and process the powervm keys.
        for key in inst.flavor.extra_specs.keys():
            # If it is not a valid key, it can be skipped.
            if not self._is_pvm_valid_key(key):
                continue

            # Look for the mapping to the lpar builder.
            bldr_key = self._ATTRS_MAP.get(key)

            # If there is no direct mapping (the mapped value is None), the
            # complex type needs to be derived.
            if bldr_key is None:
                self._build_complex_type(key, attrs, inst.flavor)
            else:
                # We found a direct mapping.
                attrs[bldr_key] = inst.flavor.extra_specs[key]

        return attrs
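
# A simplified, self-contained sketch of the extra-spec dispatch above:
# direct keys copy straight through the attribute map, while keys mapped to
# None are derived separately (as _build_complex_type does). The map and
# values here are illustrative stand-ins, not the full driver mapping.
ATTRS_MAP = {'powervm:proc_units': 'proc_units', 'powervm:uncapped': None}


def format_specs(extra_specs):
    attrs = {}
    for key, val in extra_specs.items():
        if not key.startswith('powervm:') or key not in ATTRS_MAP:
            continue  # non-PowerVM and unknown keys are skipped
        if ATTRS_MAP[key] is None:
            # 'Complex' key: derive the builder value instead of copying it.
            attrs['sharing_mode'] = 'uncapped' if val == 'true' else 'capped'
        else:
            attrs[ATTRS_MAP[key]] = val
    return attrs


print(format_specs({'powervm:proc_units': '0.5', 'powervm:uncapped': 'true',
                    'hw:cpu_policy': 'dedicated'}))
# -> {'proc_units': '0.5', 'sharing_mode': 'uncapped'}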

    def _is_pvm_valid_key(self, key):
        """Returns whether this is a valid PowerVM key.

        :param key: The powervm key.
        :return: True if it is a valid key. False if it is a non-powervm key
                 and should be skipped.
        """
        # If it is not a powervm key, then it is not 'pvm_valid'.
        if not key.startswith('powervm:'):
            return False

        # Check if this is a valid attribute.
        if key not in self._ATTRS_MAP:
            # Could be a key from a future release - warn, but ignore.
            LOG.warning("Unhandled PowerVM key '%s'.", key)
            return False

        return True

    def _build_complex_type(self, key, attrs, flavor):
        """If a key does not map directly, this method derives the right value.

        Some types are complex, in that the flavor may have one key that maps
        to several different attributes in the lpar builder. This method
        handles the complex types.

        :param key: The flavor's key.
        :param attrs: The attribute map to put the value into.
        :param flavor: The Nova instance flavor.
        :return: The value to put in for the key.
        """
        # Map uncapped to sharing mode.
        if key == self._PVM_UNCAPPED:
            attrs[lpar_bldr.SHARING_MODE] = (
                pvm_bp.SharingMode.UNCAPPED
                if stru.bool_from_string(flavor.extra_specs[key], strict=True)
                else pvm_bp.SharingMode.CAPPED)
        elif key == self._PVM_DED_SHAR_MODE:
            # Dedicated sharing modes map directly.
            shr_mode_key = flavor.extra_specs[key]
            mode = self._DED_SHARING_MODES_MAP.get(shr_mode_key)
            if mode is None:
                raise exc.InvalidParameterValue(err=_(
                    "Invalid dedicated sharing mode '%s'!") % shr_mode_key)
            attrs[lpar_bldr.SHARING_MODE] = mode
        elif key == self._PVM_SHAR_PROC_POOL:
            pool_name = flavor.extra_specs[key]
            attrs[lpar_bldr.SPP] = self._spp_pool_id(pool_name)
        elif key == self._PVM_PROC_COMPAT:
            # Handle variants of the supported values.
            attrs[lpar_bldr.PROC_COMPAT] = re.sub(
                r'\+', '_Plus', flavor.extra_specs[key])
        elif key == self._PVM_SRR_CAPABILITY:
            attrs[lpar_bldr.SRR_CAPABLE] = stru.bool_from_string(
                flavor.extra_specs[key], strict=True)
        else:
            # There was no mapping or we didn't handle it. This is a BUG!
            raise KeyError(_(
                "Unhandled PowerVM key '%s'! Please report this bug.") % key)

    def _spp_pool_id(self, pool_name):
        """Returns the shared proc pool id for a given pool name.

        :param pool_name: The shared proc pool name.
        :return: The internal API id for the shared proc pool.
        """
        if (pool_name is None or
                pool_name == pvm_spp.DEFAULT_POOL_DISPLAY_NAME):
            # The default pool is 0.
            return 0

        # Search for the pool with this name.
        pool_wraps = pvm_spp.SharedProcPool.search(
            self.adapter, name=pool_name, parent=self.host_w)

        # Check to make sure there is a pool with the name, and only one pool.
        if len(pool_wraps) > 1:
            msg = (_('Multiple Shared Processing Pools with name %(pool)s.') %
                   {'pool': pool_name})
            raise exc.ValidationError(msg)
        elif len(pool_wraps) == 0:
            msg = (_('Unable to find Shared Processing Pool %(pool)s') %
                   {'pool': pool_name})
            raise exc.ValidationError(msg)

        # Return the singular pool id.
        return pool_wraps[0].id
@@ -1,28 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova import exception
from nova.i18n import _
from nova.virt.powervm.volume import fcvscsi


def build_volume_driver(adapter, instance, conn_info, stg_ftsk=None):
    drv_type = conn_info.get('driver_volume_type')
    if drv_type != 'fibre_channel':
        reason = _("Invalid connection type of %s") % drv_type
        raise exception.InvalidVolume(reason=reason)
    return fcvscsi.FCVscsiVolumeAdapter(adapter, instance, conn_info,
                                        stg_ftsk=stg_ftsk)
@@ -1,468 +0,0 @@
# Copyright 2015, 2018 IBM Corp.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_concurrency import lockutils
from oslo_log import log as logging
from pypowervm import const as pvm_const
from pypowervm.tasks import hdisk
from pypowervm.tasks import partition as pvm_tpar
from pypowervm.tasks import scsi_mapper as tsk_map
from pypowervm.utils import transaction as pvm_tx
from pypowervm.wrappers import storage as pvm_stor
from pypowervm.wrappers import virtual_io_server as pvm_vios
from taskflow import task

from nova import conf as cfg
from nova import exception as exc
from nova.i18n import _
from nova.virt import block_device
from nova.virt.powervm import vm


CONF = cfg.CONF
LOG = logging.getLogger(__name__)

LOCAL_FEED_TASK = 'local_feed_task'
UDID_KEY = 'target_UDID'

# A global variable that will cache the physical WWPNs on the system.
_vscsi_pfc_wwpns = None


@lockutils.synchronized('vscsi_wwpns')
def wwpns(adapter):
    """Builds the WWPNs of the adapters that will connect the ports.

    :return: The list of WWPNs that need to be included in the zone set.
    """
    return pvm_tpar.get_physical_wwpns(adapter, force_refresh=False)


class FCVscsiVolumeAdapter(object):

    def __init__(self, adapter, instance, connection_info, stg_ftsk=None):
        """Initialize the PowerVMVolumeAdapter.

        :param adapter: The pypowervm adapter.
        :param instance: The nova instance that the volume should attach to.
        :param connection_info: The volume connection info generated from the
                                BDM. Used to determine how to attach the
                                volume to the VM.
        :param stg_ftsk: (Optional) The pypowervm transaction FeedTask for the
                         I/O Operations. If provided, the Virtual I/O Server
                         mapping updates will be added to the FeedTask. This
                         defers the updates to some later point in time. If
                         the FeedTask is not provided, the updates will be run
                         immediately when the respective method is executed.
        """
        self.adapter = adapter
        self.instance = instance
        self.connection_info = connection_info
        self.vm_uuid = vm.get_pvm_uuid(instance)
        self.reset_stg_ftsk(stg_ftsk=stg_ftsk)
        self._pfc_wwpns = None

    @property
    def volume_id(self):
        """Method to return the volume id.

        Every driver must implement this method if the default implementation
        will not work for their data.
        """
        return block_device.get_volume_id(self.connection_info)

    def reset_stg_ftsk(self, stg_ftsk=None):
        """Resets the pypowervm transaction FeedTask to a new value.

        The previous updates from the original FeedTask WILL NOT be migrated
        to this new FeedTask.

        :param stg_ftsk: (Optional) The pypowervm transaction FeedTask for the
                         I/O Operations. If provided, the Virtual I/O Server
                         mapping updates will be added to the FeedTask. This
                         defers the updates to some later point in time. If
                         the FeedTask is not provided, the updates will be run
                         immediately when this method is executed.
        """
        if stg_ftsk is None:
            getter = pvm_vios.VIOS.getter(
                self.adapter, xag=[pvm_const.XAG.VIO_SMAP])
            self.stg_ftsk = pvm_tx.FeedTask(LOCAL_FEED_TASK, getter)
        else:
            self.stg_ftsk = stg_ftsk

    def _set_udid(self, udid):
        """This method will set the hdisk udid in the connection_info.

        :param udid: The hdisk target_udid to be stored in system_metadata.
        """
        self.connection_info['data'][UDID_KEY] = udid

    def _get_udid(self):
        """This method will return the hdisk udid stored in connection_info.

        :return: The target_udid associated with the hdisk.
        """
        try:
            return self.connection_info['data'][UDID_KEY]
        except (KeyError, ValueError):
            # It's common to lose our specific data in the BDM. The connection
            # information can be 'refreshed' by operations like live migrate
            # and resize.
            LOG.info('Failed to retrieve target_UDID key from BDM for volume '
                     'id %s', self.volume_id, instance=self.instance)
            return None

    def attach_volume(self):
        """Attaches the volume."""

        # Check if the VM is in a state where the attach is acceptable.
        lpar_w = vm.get_instance_wrapper(self.adapter, self.instance)
        capable, reason = lpar_w.can_modify_io()
        if not capable:
            raise exc.VolumeAttachFailed(
                volume_id=self.volume_id, reason=reason)

        # It's about to get weird. The transaction manager has a list of
        # VIOSes. We could use those, but they only have SCSI mappings (by
        # design). They do not have storage (super expensive).
        #
        # We need the storage xag when we are determining which mappings to
        # add to the system. But we don't want to tie it to the stg_ftsk. If
        # we do, every retry, every etag gather, etc... takes MUCH longer.
        #
        # So we get the VIOSes with the storage xag here, separately, to save
        # the stg_ftsk from potentially having to run it multiple times.
        attach_ftsk = pvm_tx.FeedTask(
            'attach_volume_to_vio', pvm_vios.VIOS.getter(
                self.adapter, xag=[pvm_const.XAG.VIO_STOR,
                                   pvm_const.XAG.VIO_SMAP]))

        # Find valid hdisks and map to VM.
        attach_ftsk.add_functor_subtask(
            self._attach_volume_to_vio, provides='vio_modified',
            flag_update=False)

        ret = attach_ftsk.execute()

        # Check the number of VIOSes.
        vioses_modified = 0
        for result in ret['wrapper_task_rets'].values():
            if result['vio_modified']:
                vioses_modified += 1

        # Validate that a vios was found.
        if vioses_modified == 0:
            msg = (_('Failed to discover valid hdisk on any Virtual I/O '
                     'Server for volume %(volume_id)s.') %
                   {'volume_id': self.volume_id})
            ex_args = {'volume_id': self.volume_id, 'reason': msg}
            raise exc.VolumeAttachFailed(**ex_args)

        self.stg_ftsk.execute()
|
|
||||||
|
|
||||||
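# --- Illustrative sketch (not part of the driver) ---------------------------
# The FeedTask returns per-VIOS results keyed by wrapper UUID; the attach is
# only considered successful if at least one VIOS reported a modification.
# The dict shape below is assumed from the code above; the UUIDs are made up.
ret = {'wrapper_task_rets': {
    'vios-uuid-1': {'vio_modified': True},
    'vios-uuid-2': {'vio_modified': False},
}}

vioses_modified = sum(
    1 for result in ret['wrapper_task_rets'].values()
    if result['vio_modified'])

if vioses_modified == 0:
    raise RuntimeError('no VIOS could attach the volume')
print(vioses_modified)  # -> 1
# --- End sketch --------------------------------------------------------------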
    def _attach_volume_to_vio(self, vios_w):
        """Attempts to attach a volume to a given VIO.

        :param vios_w: The Virtual I/O Server wrapper to attach to.
        :return: True if the volume was attached. False if it was not
                 (e.g. the Virtual I/O Server does not have connectivity
                 to the hdisk).
        """
        status, device_name, udid = self._discover_volume_on_vios(vios_w)

        if hdisk.good_discovery(status, device_name):
            # Found an hdisk on this Virtual I/O Server. Add the action to
            # map it to the VM when the stg_ftsk is executed.
            with lockutils.lock(self.volume_id):
                self._add_append_mapping(vios_w.uuid, device_name,
                                         tag=self.volume_id)

            # Save the UDID for the disk in the connection info. It is
            # used for the detach.
            self._set_udid(udid)
            LOG.debug('Added deferred task to attach device %(device_name)s '
                      'to vios %(vios_name)s.',
                      {'device_name': device_name, 'vios_name': vios_w.name},
                      instance=self.instance)

            # Valid attachment.
            return True

        return False
    def extend_volume(self):
        # The compute node does not need to take any additional steps for the
        # client to see the extended volume.
        pass
    def _discover_volume_on_vios(self, vios_w):
        """Discovers an hdisk on a single VIOS for the volume.

        :param vios_w: VIOS wrapper to process
        :returns: Status of the volume or None
        :returns: Device name or None
        :returns: UDID or None
        """
        # Get the initiator WWPNs, targets and LUN for the given VIOS.
        vio_wwpns, t_wwpns, lun = self._get_hdisk_itls(vios_w)

        # Build the ITL map and discover the hdisks on the Virtual I/O
        # Server (if any).
        itls = hdisk.build_itls(vio_wwpns, t_wwpns, lun)
        if len(itls) == 0:
            LOG.debug('No ITLs for VIOS %(vios)s for volume %(volume_id)s.',
                      {'vios': vios_w.name, 'volume_id': self.volume_id},
                      instance=self.instance)
            return None, None, None

        status, device_name, udid = hdisk.discover_hdisk(self.adapter,
                                                         vios_w.uuid, itls)

        if hdisk.good_discovery(status, device_name):
            LOG.info('Discovered %(hdisk)s on vios %(vios)s for volume '
                     '%(volume_id)s. Status code: %(status)s.',
                     {'hdisk': device_name, 'vios': vios_w.name,
                      'volume_id': self.volume_id, 'status': status},
                     instance=self.instance)
        elif status == hdisk.LUAStatus.DEVICE_IN_USE:
            LOG.warning('Discovered device %(dev)s for volume %(volume)s '
                        'on %(vios)s is in use. Error code: %(status)s.',
                        {'dev': device_name, 'volume': self.volume_id,
                         'vios': vios_w.name, 'status': status},
                        instance=self.instance)

        return status, device_name, udid
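# --- Illustrative sketch (not part of the driver) ---------------------------
# An ITL is an (Initiator WWPN, Target WWPN, LUN) triple describing one
# fabric path to the disk. Conceptually, building the ITLs amounts to a
# cross product of initiators and targets; this plain-Python illustration
# shows the idea, not pypowervm's actual implementation. WWPNs are made up.
from itertools import product

def build_itls(i_wwpns, t_wwpns, lun):
    # One (initiator, target, lun) triple per potential path.
    return [(i, t, lun) for i, t in product(i_wwpns, t_wwpns)]

itls = build_itls(['c0507606d56e0111'],
                  ['500507680245cac4', '500507680245cac5'], 2)
print(itls)
# [('c0507606d56e0111', '500507680245cac4', 2),
#  ('c0507606d56e0111', '500507680245cac5', 2)]
# --- End sketch --------------------------------------------------------------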
    def _get_hdisk_itls(self, vios_w):
        """Returns the mapped ITLs for the hdisk for the given VIOS.

        A PowerVM system may have multiple Virtual I/O Servers to virtualize
        the I/O to the virtual machines. Each Virtual I/O Server may have its
        own set of initiator WWPNs, target WWPNs and LUN on which the hdisk
        is mapped. This method determines and returns the ITLs for the given
        VIOS.

        :param vios_w: A Virtual I/O Server wrapper.
        :return: List of the i_wwpns that are part of the vios_w
        :return: List of the t_wwpns that are part of the vios_w
        :return: Target LUN id of the hdisk for the vios_w
        """
        it_map = self.connection_info['data']['initiator_target_map']
        i_wwpns = it_map.keys()

        active_wwpns = vios_w.get_active_pfc_wwpns()
        vio_wwpns = [x for x in i_wwpns if x in active_wwpns]

        t_wwpns = []
        for it_key in vio_wwpns:
            t_wwpns.extend(it_map[it_key])
        lun = self.connection_info['data']['target_lun']

        return vio_wwpns, t_wwpns, lun
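# --- Illustrative sketch (not part of the driver) ---------------------------
# The filtering above is easiest to see with sample data: only initiators
# that are active physical FC ports on this VIOS survive, and their targets
# are gathered. All WWPN values below are placeholders.
connection_info = {'data': {
    'initiator_target_map': {
        'init-wwpn-1': ['tgt-wwpn-a', 'tgt-wwpn-b'],
        'init-wwpn-2': ['tgt-wwpn-c'],
    },
    'target_lun': 4,
}}

# Pretend only init-wwpn-1 is active on this VIOS.
active_wwpns = {'init-wwpn-1'}

it_map = connection_info['data']['initiator_target_map']
vio_wwpns = [i for i in it_map if i in active_wwpns]
t_wwpns = [t for i in vio_wwpns for t in it_map[i]]
lun = connection_info['data']['target_lun']

print(vio_wwpns, t_wwpns, lun)
# ['init-wwpn-1'] ['tgt-wwpn-a', 'tgt-wwpn-b'] 4
# --- End sketch --------------------------------------------------------------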
    def _add_append_mapping(self, vios_uuid, device_name, tag=None):
        """Update the stg_ftsk to append the mapping to the VIOS.

        :param vios_uuid: The UUID of the vios for the pypowervm adapter.
        :param device_name: The hdisk device name.
        :param tag: String tag to set on the physical volume.
        """
        def add_func(vios_w):
            LOG.info("Adding vSCSI mapping to Physical Volume %(dev)s on "
                     "vios %(vios)s.",
                     {'dev': device_name, 'vios': vios_w.name},
                     instance=self.instance)
            pv = pvm_stor.PV.bld(self.adapter, device_name, tag=tag)
            v_map = tsk_map.build_vscsi_mapping(None, vios_w, self.vm_uuid, pv)
            return tsk_map.add_map(vios_w, v_map)

        self.stg_ftsk.wrapper_tasks[vios_uuid].add_functor_subtask(add_func)
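# --- Illustrative sketch (not part of the driver) ---------------------------
# The closure pattern here -- capture the arguments now, run against a fresh
# wrapper later -- is generic. A dependency-free sketch of the same deferred-
# subtask idea, with a toy stand-in for the FeedTask:
class TinyFeedTask:
    """Toy stand-in: queue callables, run them all on execute()."""
    def __init__(self):
        self._subtasks = []

    def add_functor_subtask(self, func):
        self._subtasks.append(func)

    def execute(self, wrapper):
        # Each subtask receives the (possibly re-fetched) wrapper at run time.
        return [func(wrapper) for func in self._subtasks]

def make_add_mapping(device_name):
    def add_func(vios_w):
        return 'mapped %s on %s' % (device_name, vios_w)
    return add_func

ftsk = TinyFeedTask()
ftsk.add_functor_subtask(make_add_mapping('hdisk4'))
print(ftsk.execute('vios1'))  # ['mapped hdisk4 on vios1']
# --- End sketch --------------------------------------------------------------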
    def detach_volume(self):
        """Detach the volume."""

        # Check if the VM is in a state where the detach is acceptable.
        lpar_w = vm.get_instance_wrapper(self.adapter, self.instance)
        capable, reason = lpar_w.can_modify_io()
        if not capable:
            raise exc.VolumeDetachFailed(
                volume_id=self.volume_id, reason=reason)

        # Run the detach.
        try:
            # See logic in attach_volume for why this new FeedTask is here.
            detach_ftsk = pvm_tx.FeedTask(
                'detach_volume_from_vio', pvm_vios.VIOS.getter(
                    self.adapter, xag=[pvm_const.XAG.VIO_STOR,
                                       pvm_const.XAG.VIO_SMAP]))
            # Find hdisks to detach.
            detach_ftsk.add_functor_subtask(
                self._detach_vol_for_vio, provides='vio_modified',
                flag_update=False)

            ret = detach_ftsk.execute()

            # Warn if no hdisks were detached.
            if not any([result['vio_modified']
                        for result in ret['wrapper_task_rets'].values()]):
                LOG.warning("Detach Volume: Failed to detach the "
                            "volume %(volume_id)s on ANY of the Virtual "
                            "I/O Servers.", {'volume_id': self.volume_id},
                            instance=self.instance)
        except Exception as e:
            LOG.exception('PowerVM error detaching volume from virtual '
                          'machine.', instance=self.instance)
            ex_args = {'volume_id': self.volume_id, 'reason': str(e)}
            raise exc.VolumeDetachFailed(**ex_args)

        self.stg_ftsk.execute()
    def _detach_vol_for_vio(self, vios_w):
        """Removes the volume from a specific Virtual I/O Server.

        :param vios_w: The VIOS wrapper.
        :return: True if a remove action was done against this VIOS. False
                 otherwise.
        """
        LOG.debug("Detach volume %(vol)s from vios %(vios)s",
                  dict(vol=self.volume_id, vios=vios_w.name),
                  instance=self.instance)
        device_name = None
        udid = self._get_udid()
        try:
            if udid:
                # This will only work if vios_w has the Storage XAG.
                device_name = vios_w.hdisk_from_uuid(udid)

            if not udid or not device_name:
                # We lost our BDM data. We'll need to discover it.
                status, device_name, udid = self._discover_volume_on_vios(
                    vios_w)

                # Check if the hdisk is in a bad state in the I/O Server.
                # Subsequent scrub code on future deploys will clean this up.
                if not hdisk.good_discovery(status, device_name):
                    LOG.warning(
                        "Detach Volume: The backing hdisk for volume "
                        "%(volume_id)s on Virtual I/O Server %(vios)s is "
                        "not in a valid state. This may be the result of "
                        "an evacuate.",
                        {'volume_id': self.volume_id, 'vios': vios_w.name},
                        instance=self.instance)
                    return False
        except Exception:
            LOG.exception(
                "Detach Volume: Failed to find disk on Virtual I/O "
                "Server %(vios_name)s for volume %(volume_id)s. Volume "
                "UDID: %(volume_uid)s.",
                {'vios_name': vios_w.name, 'volume_id': self.volume_id,
                 'volume_uid': udid}, instance=self.instance)
            return False

        # We have found the device name.
        LOG.info("Detach Volume: Discovered the device %(hdisk)s "
                 "on Virtual I/O Server %(vios_name)s for volume "
                 "%(volume_id)s. Volume UDID: %(volume_uid)s.",
                 {'hdisk': device_name, 'vios_name': vios_w.name,
                  'volume_id': self.volume_id, 'volume_uid': udid},
                 instance=self.instance)

        # Add the action to remove the mapping when the stg_ftsk is run.
        partition_id = vm.get_vm_qp(self.adapter, self.vm_uuid,
                                    qprop='PartitionID')

        with lockutils.lock(self.volume_id):
            self._add_remove_mapping(partition_id, vios_w.uuid,
                                     device_name)

            # Add a step to also remove the hdisk.
            self._add_remove_hdisk(vios_w, device_name)

        # Found a valid element to remove.
        return True
    def _add_remove_mapping(self, vm_uuid, vios_uuid, device_name):
        """Adds a subtask to remove the storage mapping.

        :param vm_uuid: The UUID of the VM instance
        :param vios_uuid: The UUID of the vios for the pypowervm adapter.
        :param device_name: The hdisk device name.
        """
        def rm_func(vios_w):
            LOG.info("Removing vSCSI mapping from physical volume %(dev)s "
                     "on vios %(vios)s",
                     {'dev': device_name, 'vios': vios_w.name},
                     instance=self.instance)
            removed_maps = tsk_map.remove_maps(
                vios_w, vm_uuid,
                tsk_map.gen_match_func(pvm_stor.PV, names=[device_name]))
            return removed_maps

        self.stg_ftsk.wrapper_tasks[vios_uuid].add_functor_subtask(rm_func)
    def _add_remove_hdisk(self, vio_wrap, device_name):
        """Adds a post-mapping task to remove the hdisk from the VIOS.

        This removal is only done after the mapping updates have completed.

        :param vio_wrap: The Virtual I/O Server wrapper to remove the disk
                         from.
        :param device_name: The hdisk name to remove.
        """
        def rm_hdisk():
            LOG.info("Removing hdisk %(hdisk)s from Virtual I/O Server "
                     "%(vios)s", {'hdisk': device_name, 'vios': vio_wrap.name},
                     instance=self.instance)
            try:
                # Attempt to remove the hdisk.
                hdisk.remove_hdisk(self.adapter, CONF.host, device_name,
                                   vio_wrap.uuid)
            except Exception:
                # If there is a failure, log it, but don't stop the process.
                LOG.exception("There was an error removing the hdisk "
                              "%(disk)s from Virtual I/O Server %(vios)s.",
                              {'disk': device_name, 'vios': vio_wrap.name},
                              instance=self.instance)

        # Only remove the hdisk if the device does not have multiple mappings.
        if not self._check_host_mappings(vio_wrap, device_name):
            name = 'rm_hdisk_%s_%s' % (vio_wrap.name, device_name)
            self.stg_ftsk.add_post_execute(task.FunctorTask(
                rm_hdisk, name=name))
        else:
            LOG.info("hdisk %(disk)s is not removed from Virtual I/O Server "
                     "%(vios)s because it has existing storage mappings",
                     {'disk': device_name, 'vios': vio_wrap.name},
                     instance=self.instance)
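# --- Illustrative sketch (not part of the driver) ---------------------------
# Post-execute tasks guarantee that the hdisk is only deleted after every
# mapping subtask has run. A toy sketch of that ordering, independent of
# taskflow/pypowervm:
class OrderedFeedTask:
    """Toy model: subtasks first, then post-execute hooks, in order."""
    def __init__(self):
        self._subtasks = []
        self._post = []

    def add_functor_subtask(self, func):
        self._subtasks.append(func)

    def add_post_execute(self, func):
        self._post.append(func)

    def execute(self):
        for func in self._subtasks:
            func()              # e.g. remove the vSCSI mappings
        for func in self._post:
            func()              # e.g. remove the hdisk afterwards

ftsk = OrderedFeedTask()
ftsk.add_functor_subtask(lambda: print('unmap hdisk4'))
ftsk.add_post_execute(lambda: print('rmdev hdisk4'))
ftsk.execute()  # 'unmap hdisk4' always prints before 'rmdev hdisk4'
# --- End sketch --------------------------------------------------------------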
    def _check_host_mappings(self, vios_wrap, device_name):
        """Checks if the given hdisk has multiple mappings.

        :param vios_wrap: The Virtual I/O Server wrapper to remove the disk
                          from.
        :param device_name: The hdisk name to remove.
        :return: True if there are multiple instances using the given hdisk
        """
        vios_scsi_mappings = next(v.scsi_mappings for v in self.stg_ftsk.feed
                                  if v.uuid == vios_wrap.uuid)
        mappings = tsk_map.find_maps(
            vios_scsi_mappings, None,
            tsk_map.gen_match_func(pvm_stor.PV, names=[device_name]))

        LOG.debug("%(num)d storage mapping(s) found for %(dev)s on VIOS "
                  "%(vios)s", {'num': len(mappings), 'dev': device_name,
                               'vios': vios_wrap.name}, instance=self.instance)
        # The mapping is still present, as the task feed removes it later.
        return len(mappings) > 1
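# --- Illustrative sketch (not part of the driver) ---------------------------
# The `> 1` threshold is the interesting bit: this VM's own mapping is still
# in the feed at this point, so a second match means another LPAR shares the
# disk and it must not be deleted. A sketch with plain data:
scsi_mappings = [('lpar-a', 'hdisk4'), ('lpar-b', 'hdisk4'),
                 ('lpar-a', 'hdisk7')]  # hypothetical (LPAR, device) pairs

def check_host_mappings(mappings, device_name):
    matches = [m for m in mappings if m[1] == device_name]
    # One match is our own, not-yet-removed mapping; more than one means
    # the hdisk is shared.
    return len(matches) > 1

print(check_host_mappings(scsi_mappings, 'hdisk4'))  # True  (shared)
print(check_host_mappings(scsi_mappings, 'hdisk7'))  # False (safe to remove)
# --- End sketch --------------------------------------------------------------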
releasenotes/notes/remove-powervm-6132cc10255ca205.yaml (new file, 6 lines)
@ -0,0 +1,6 @@
---
upgrade:
  - |
    The powervm virt driver has been removed. The driver was not tested by
    the OpenStack project nor did it have clear maintainers and thus its
    quality could not be ensured.
@ -61,7 +61,6 @@ tooz>=1.58.0 # Apache-2.0
cursive>=0.2.1 # Apache-2.0
retrying>=1.3.3,!=1.3.0 # Apache-2.0
os-service-types>=1.7.0 # Apache-2.0
taskflow>=3.8.0 # Apache-2.0
python-dateutil>=2.7.0 # BSD
futurist>=1.8.0 # Apache-2.0
openstacksdk>=0.35.0 # Apache-2.0
@ -28,8 +28,6 @@ classifiers =
[extras]
osprofiler =
  osprofiler>=1.4.0 # Apache-2.0
powervm =
  pypowervm>=1.1.15 # Apache-2.0
zvm =
  zVMCloudConnector>=1.3.0;sys_platform!='win32' # Apache 2.0 License
hyperv =
@ -69,7 +67,6 @@ nova.api.extra_spec_validators =
null = nova.api.validation.extra_specs.null
os = nova.api.validation.extra_specs.os
pci_passthrough = nova.api.validation.extra_specs.pci_passthrough
powervm = nova.api.validation.extra_specs.powervm
quota = nova.api.validation.extra_specs.quota
resources = nova.api.validation.extra_specs.resources
traits = nova.api.validation.extra_specs.traits
@ -8,7 +8,6 @@ types-paramiko>=0.1.3 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
ddt>=1.2.1 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=3.0.0 # BSD
psycopg2-binary>=2.8 # LGPL/ZPL
PyMySQL>=0.8.0 # MIT License
python-barbicanclient>=4.5.2 # Apache-2.0