This reverts commit e06ad602f3.
This gets us back to Ib0cf5d55750f13d0499a570f14024dca551ed4d4
which was meant to address an issue introduced
by Id188d48609f3d22d14e16c7f6114291d547a8986.
So we essentially had three changes:
1. Hard reboot would blow away volumes and vifs and then wait for the
vifs to be plugged; this caused a problem for some vif types
(linuxbridge was reported) because the event never came and we
timed out.
2. To work around that, a second change was made to simply not wait for
vif plugging events.
3. Since #2 was a bit heavy-handed for a problem that didn't impact
openvswitch, another change was made to only wait for non-bridge vif
types, so we'd wait for OVS.
But it turns out that opendaylight is an OVS vif type and doesn't send
events for plugging the vif, only for binding the port (and we don't
re-bind the port during reboot). There is also a report of this being a
problem for other types of ports, see
If209f77cff2de00f694b01b2507c633ec3882c82.
So rather than try to special-case every possible vif type that could
be impacted by this, we are simply reverting the change so we no longer
wait for vif plugged events during hard reboot.
Note that if we went back to Id188d48609f3d22d14e16c7f6114291d547a8986
and tweaked that to not unplug/plug the vifs we wouldn't have this
problem either, and that change was really meant to deal with an
encrypted volume issue on reboot. But changing that logic is out of the
scope of this change. Alternatively, we could re-bind the port during
reboot but that could have other implications, or neutron could put
something into the port details telling us which vifs will send events
and which won't, but again that's all outside of the scope of this
patch.
Change-Id: Ib3f10706a7191c58909ec1938042ce338df4d499
Closes-Bug: #1755890
In change Ib0cf5d55750f13d0499a570f14024dca551ed4d4, we stopped waiting
for vif plug events during hard reboot and start, because the behavior
of neutron plug events depended on the vif type and we couldn't rely on
the stale network info cache.
This refines the logic so that we skip waiting for vif plug events
only for the bridge vif type, instead of skipping the wait for all
types. We also add a flag to _create_domain_and_network to indicate
that we're in the middle of a reboot and to expect to wait for plug
events for all vif types except the bridge vif type, regardless of the
vif's 'active' status.
We only query network_info from neutron at the beginning of a reboot,
before we've unplugged anything, so the majority of the time, the vifs
will be active=True. The current logic in get_neutron_events will only
expect events for vifs with 'active' status False. This adds a way to
override that if we know we've already unplugged vifs as part of a
reboot.
Conflicts:
nova/virt/libvirt/driver.py
NOTE(lyarwood): Ica323b87fa85a454fca9d46ada3677f18fe50022 and
I13de970c3beed29311d43991115a0c6d28ac14e0 are the source of the above
conflicts in driver.py. The first removed encryptor attach logic from
_create_domain_and_network in Queens while the second altered the
signature of _create_domain_and_network in Queens, removing the reboot
kwarg that is then reintroduced with this change.
Related-Bug: #1744361
Change-Id: Ib08afad3822f2ca95cfeea18d7f4fc4cb407b4d6
(cherry picked from commit aaf37a26d6)
(cherry picked from commit 5a10047f9d)
Originally, in change Id188d48609f3d22d14e16c7f6114291d547a8986 we
added a re-initialization of volumes, encryptors, and vifs to hard
reboot. When creating the libvirt domain and network, we were waiting
for vif plug events from neutron when we replugged the vifs. Then, we
started seeing timeouts in the linuxbridge gate job because compute
was timing out waiting for plug events from neutron during a hard
reboot.
It turns out that the behavior of neutron plug events depends on what
vif type we're using and we're also using a stale network info_cache
throughout the hard reboot code path, so we can't be 100% sure we know
which vifs to wait for plug events from anyway. We coincidentally get
some info_cache refreshes from network-changed events from neutron,
but we shouldn't rely on that.
Ideally, we could do something like wait for an unplug event after we
unplug the vif, then refresh the network_info cache, then wait for the
plug event. BUT, in the case of the os-vif linuxbridge unplug method,
it is a no-op, so I don't think we could expect to get an unplug
event for it (and we don't see any network-vif-unplugged events sent
in the q-svc log for the linuxbridge job during a hard reboot).
Closes-Bug: #1744361
Change-Id: Ib0cf5d55750f13d0499a570f14024dca551ed4d4
(cherry picked from commit 236bb54493)
We call _hard_reboot during reboot, power_on, and
resume_state_on_host_boot. It functions essentially by tearing down as
much of an instance as possible before recreating it, which additionally
makes it useful to operators for attempting automated recovery of
instances in an inconsistent state.
The Libvirt driver would previously only call _destroy and
_undefine_domain when hard rebooting an instance. This would leave vifs
plugged, volumes connected, and encryptors attached on the host. It
also means that when we try to restart the instance, we assume all
these things are correctly configured. If they are not, the instance
may fail to start at all, or may be incorrectly configured when
starting.
For example, consider an instance with an encrypted volume after a
compute host reboot. When we attempt to start the instance, power_on
will call _hard_reboot. The volume will be coincidentally re-attached
as a side-effect of calling _get_guest_xml(!), but when we call
_create_domain_and_network we pass reboot=True, which tells it not to
reattach the encryptor, as it is assumed to be already attached. We
are therefore left presenting the encrypted volume data directly to
the instance without decryption.
The approach in this patch is to ensure we recreate the instance as
fully as possible during hard reboot. This means not passing
vifs_already_plugged and reboot to _create_domain_and_network, which
in turn requires that we fully destroy the instance first. This
addresses the specific problem given in the example, but also a whole
class of potential volume and vif related issues of inconsistent
state.
Because we now always tear down volumes, encryptors, and vifs, we are
relying on the tear down of these things to be idempotent. This
highlighted that detach of the luks and cryptsetup encryptors was not
idempotent. We depend on the fixes for those os-brick drivers.
NOTE(melwitt): Instead of depending on the os-brick changes to handle
the "already detached" scenario during cleanup for the stable
backports, we handle it in the driver since we can't bump g-r for
stable branches.
Closes-bug: #1724573
Change-Id: Id188d48609f3d22d14e16c7f6114291d547a8986
(cherry picked from commit 3f8daf0804)
Nova assumes 'host-model' as the CPU mode for KVM/QEMU setups.
On AArch64 it results in "libvirtError: unsupported configuration: CPU
mode 'host-model' for aarch64 kvm domain on aarch64 host is not
supported by hypervisor" message.
AArch64 lacks 'host-model' support because neither libvirt nor QEMU
are able to tell what the host CPU model exactly is. And there is no
CPU description code for ARM(64) at this point.
So instead we fall back to 'host-passthrough' to get VM instances
running. This will completely break live migration, *unless* all the
Compute nodes (running libvirtd) have *identical* CPUs.
Small summary: https://marcin.juszkiewicz.com.pl/2018/01/04/today-i-was-fighting-with-nova-no-idea-who-won/
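A hedged sketch of the fallback described above (the helper name and call site are hypothetical, not Nova's actual code):

```python
# Hypothetical helper illustrating the commit's fallback rule.
def choose_cpu_mode(configured_mode, host_arch):
    mode = configured_mode or 'host-model'
    if host_arch == 'aarch64' and mode == 'host-model':
        # Neither libvirt nor QEMU can tell the exact host CPU model on
        # AArch64, so 'host-model' is rejected; passthrough works but
        # ties live migration to identical host CPUs.
        return 'host-passthrough'
    return mode
```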
Closes-bug: #1741230
Co-authored-by: Kevin Zhao <Kevin.Zhao@arm.com>
Co-authored-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Change-Id: Iafb5f1790d68489db73b9f0549333108c6426a00
(cherry-picked from commit 8bc7b950b7)
There are two ways of booting VM on AArch64 architecture:
1. UEFI
2. kernel+initrd
No one sane goes for the 2nd option, so hw_firmware_type=uefi has to
be set for every image. Otherwise they simply hang. So let's set it as
the default if no other value for hw_firmware_type is set.
If someone implements their own boot method, they can set
hw_firmware_type to their own value and add support to Nova.
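As a rough illustration of the defaulting rule (the function name is made up; only the hw_firmware_type property itself comes from this change):

```python
# Illustrative sketch: default to UEFI on AArch64 when the image does
# not specify a firmware type.
def effective_firmware_type(image_properties, host_arch):
    fw = image_properties.get('hw_firmware_type')
    if fw is None and host_arch == 'aarch64':
        # AArch64 guests hang without UEFI, so default to it.
        return 'uefi'
    return fw
```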
Closes-Bug: #1740824
Co-authored-by: Kevin Zhao <Kevin.Zhao@arm.com>
Change-Id: I70ad5ecb420b7d469854e8743e38ba27fd204747
(cherry picked from commit 6f54f5c1e3)
Some error messages from qemu-guest-agent are in a localized encoding.
We may get GB2312 (Chinese characters) in qga's error message
from a Chinese version of Windows.
Nova didn't cover this scenario, so we may get errors like:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6
in position 138: ordinal not in range(128)
This patch fixes the issue by using exception_to_unicode.
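The actual fix calls oslo_utils' encodeutils.exception_to_unicode; as an illustration of the decoding problem only, a minimal stand-in might look like:

```python
# Minimal stand-in for the idea behind exception_to_unicode: try a few
# encodings instead of assuming ASCII, so GB2312 bytes don't raise
# UnicodeDecodeError. Not the real oslo_utils implementation.
def error_message_to_text(msg, encodings=('utf-8', 'gb2312')):
    if isinstance(msg, str):
        return msg
    for enc in encodings:
        try:
            return msg.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: never raise, just replace undecodable bytes.
    return msg.decode('utf-8', errors='replace')
```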
Closes-bug: #1727643
Change-Id: I3b8bfaec8af0e9b4859dcfe7e35fc5bb26c208dc
Signed-off-by: Chen Hanxiao <chenhx@certusnet.com.cn>
(cherry picked from commit 13418faa17)
When a user calls the volume-update API, we swap_volume in the libvirt
driver from the old volume attachment to the new volume attachment.
Currently, we're saving the domain XML with the old configuration prior
to updating the volume and upon a soft-reboot request, it results in an
error:
Instance soft reboot failed: Cannot access storage file <old path>
and falls back to a hard reboot, which is like pulling the power cord,
possibly resulting in file system inconsistencies.
This changes to saving the new, updated domain XML after the volume
swap.
Closes-Bug: #1713857
Change-Id: I166cde5ad8b00699e4ec02609f0d7b69236d855d
(cherry picked from commit 5b008c6540)
If we specify block migration, but there are no disks which actually
require block migration we call libvirt's migrateToURI3() with
VIR_MIGRATE_NON_SHARED_INC in flags and an empty migrate_disks in
params. Libvirt interprets this to be the default block migration
behaviour of "block migrate all writeable disks". However,
migrate_disks may only be empty because we filtered attached volumes
out of it, in which case libvirt will block migrate attached volumes.
This is a data corruptor.
This change addresses the issue at the point we call migrateToURI3().
As we never want the default block migration behaviour, we can safely
remove the flag if the list of disks to migrate is empty.
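A sketch of the flag adjustment (the constant mirrors libvirt's VIR_MIGRATE_NON_SHARED_INC; treat the numeric value and helper name as illustrative):

```python
VIR_MIGRATE_NON_SHARED_INC = 128  # illustrative value

def adjust_block_migration_flags(flags, migrate_disks):
    if not migrate_disks:
        # An empty disk list plus this flag means "block migrate all
        # writeable disks" to libvirt, which would copy attached
        # volumes; never rely on that default.
        flags &= ~VIR_MIGRATE_NON_SHARED_INC
    return flags
```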
(cherry picked from commit ea9bf5216b)
nova/tests/unit/virt/libvirt/test_driver.py:
Explicitly asserts byte string destination_xml in
_test_live_migration_block_migration_flags. Not required in master
due to change I85cd9a90.
Change-Id: I9b545ca8aa6dd7b41ddea2d333190c9fbed19bc1
Resolves-bug: #1719362
When confirming a resize, the libvirt driver on the source host checks
to see if the instance base directory (which contains the domain xml
files, etc) exists and if the root disk image does not, it removes the
instance base directory.
However, the root image disk won't exist on local storage for a
volume-backed instance and if the instance base directory is on shared
storage, e.g. NFS or Ceph, between the source and destination host, the
instance base directory is incorrectly deleted.
This adds a check to see if the instance is volume-backed when checking
to see if the instance base directory should be removed from the source
host when confirming a resize.
Change-Id: I29fac80d08baf64bf69e54cf673e55123174de2a
Closes-Bug: #1728603
(cherry picked from commit f02afc6569)
One of the things this commit:
commit 14c38ac0f2
Author: Kashyap Chamarthy <kchamart@redhat.com>
Date: Thu Jul 20 19:01:23 2017 +0200
libvirt: Post-migration, set cache value for Cinder volume(s)
[...]
did was to supposedly remove "duplicate" calls to _set_cache_mode().
But that came back to bite us.
Now, while the Cinder volumes are taken care of w.r.t. handling their
cache value during migration, the above referred commit (14c38ac)
introduced a regression because it disregards the 'disk_cachemodes'
Nova config parameter altogether for boot disks -- i.e. even if
a user set the cache mode to 'writeback', it's ignored and
instead 'none' is set unconditionally.
Add the _set_cache_mode() calls back in _get_guest_storage_config().
Co-Authored-By: melanie witt <melwittt@gmail.com>
Closes-Bug: #1727558
Change-Id: I7370cc2942a6c8c51ab5355b50a9e5666cca042e
(cherry picked from commit 24e79bcbf7)
When ironic updates the instance.flavor to require the new custom
resource class, we really need the allocations to get updated. Easiest
way to do that is to make the resource tracker keep updating allocations
for the ironic virt driver. This can be dropped once the transition to
custom resource classes is complete.
If we were not to claim the extra resources, placement will pick nodes
that already have instances running on them when you boot an instance
with a flavor that only requests the custom resource class. This should
be what all ironic flavors do, before the upgrade to queens is
performed.
Closes-Bug: #1724589
Change-Id: Ibbf65a8d817d359786abcdffa6358089ed1107f6
(cherry picked from commit 5c2b8675e3)
Qemu 2.10 added the requirement of a --force-share flag to qemu-img
info when reading information about a disk that is in use by a
guest. We do this a lot in Nova for operations like gathering
information before live migration.
Up until this point all qemu/libvirt version matching has been solely
inside the libvirt driver; however, all the image manip code was moved
out to nova.virt.images. We need the version of QEMU available there.
This does it by initializing that version during driver init_host. The net
effect is also that broken libvirt connections are figured out
earlier, as there is an active probe for this value.
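A hedged sketch of the gating (helper and constant names are illustrative; only the --force-share flag and the 2.10 threshold come from this change):

```python
QEMU_FORCE_SHARE_MIN = (2, 10, 0)

def qemu_img_info_cmd(path, qemu_version):
    # qemu_version: (major, minor, micro) tuple probed at init_host,
    # or None if QEMU is not available.
    cmd = ['qemu-img', 'info', path, '--output=json']
    if qemu_version is not None and qemu_version >= QEMU_FORCE_SHARE_MIN:
        # Since QEMU 2.10, reading a disk in use by a guest needs this.
        cmd.append('--force-share')
    return cmd
```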
Co-Authored-By: Sean Dague <sean@dague.net>
Change-Id: Iae2962bb86100f03fd3ad9aac3767da876291e74
Closes-Bug: #1718295
(cherry picked from commit 807579755c)
If a new VM is created with the same instance name on the same compute
host, the old existing VM gets destroyed: the guest object is looked
up by instance name, so the returned object is the existing active
instance, which is then destroyed. This destruction is not intended.
This commit looks up the correct guest object by UUID instead.
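The lookup change, sketched (lookupByUUIDString and lookupByName are real libvirt-python calls; the wrapper itself is illustrative):

```python
# Illustrative wrapper: resolve the libvirt domain by UUID, never by
# name, since names can collide across instances while UUIDs cannot.
def get_guest(conn, instance):
    # conn: a libvirt connection; instance has .uuid and .name.
    # The buggy form was conn.lookupByName(instance.name).
    return conn.lookupByUUIDString(instance.uuid)
```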
Co-Authored-By: Maciej Kucia <m.kucia@partner.samsung.com>
Change-Id: Ic6f81dc1f8b3610e181914f6d977652cb6d3f6d0
Closes-Bug: #1712460
Signed-off-by: Wonil Choi <wonil22.choi@samsung.com>
Signed-off-by: Maciej Kucia <m.kucia@partner.samsung.com>
(cherry picked from commit ac4705516a)
Defining the 'keymap' option in libvirt results in the '-k' option being
passed through to QEMU [1][2]. This QEMU option has some uses, primarily
for users interacting with QEMU via stdin on the text console. However,
for users interacting with QEMU via VNC or Spice, like nova users do, it
is strongly recommended to never add the "-k" option. Doing so will
force QEMU to do keymap conversions which are known to be lossy. This
disproportionately affects users with non-US keyboard layouts, who would
be better served by relying on the guest OS to manage this.
In the long term, we would like to deprecate these options. However, we
must do this in three parts. This part allows users to unset the options
and warns users who have them set about the side effects. This change is
intended to be backported. A future change will fully deprecate the
options. Finally, after the deprecation cycle has passed, we can remove
these options in their entirety.
[1] https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L6985-L6986
[2] https://github.com/libvirt/libvirt/blob/v1.2.9-maint/src/qemu/qemu_command.c#L7215-L7216
Change-Id: I6b1d719db0537b0f53768dbb00a5b4d01c85ba3a
Related-Bug: #1682020
(cherry picked from commit d983234288)
This was noticed in a downstream bug when a Nova instance with Cinder
volume (in this case, both the Nova instance storage _and_ Cinder volume
are located on Ceph) is migrated to a target Compute node, the disk
cache value for the Cinder volume gets changed. I.e. the QEMU
command-line for the Cinder volume stored on Ceph turns into the
following:
Pre-migration, QEMU command-line for the Nova instance:
[...] -drive file=rbd:volumes/volume-[...],cache=writeback
Post-migration, QEMU command-line for the Nova instance:
[...] -drive file=rbd:volumes/volume-[...],cache=none
Furthermore, Jason Dillaman from Ceph confirms RBD cache being enabled
pre-migration:
$ ceph --admin-daemon /var/run/qemu/ceph-client.openstack.[...] \
config get rbd_cache
{
"rbd_cache": "true"
}
And disabled, post-migration:
$ ceph --admin-daemon /var/run/qemu/ceph-client.openstack.[...] \
config get rbd_cache
{
"rbd_cache": "false"
}
This change in cache value post-migration causes I/O latency on the
Cinder volume.
From a chat with Daniel Berrangé on IRC: Prior to live migration, Nova
rewrites all the <disk> elements, and passes this updated guest XML
across to target libvirt. And it is never calling _set_cache_mode()
when doing this. So `nova.conf`'s `writeback` setting is getting lost,
leaving the default `cache=none` setting. And this mistake (of leaving
the default cache value as 'none') will of course be corrected when you
reboot the guest on the target later.
So:
- Call _set_cache_mode() in _get_volume_config() method -- because it
is a callback function to _update_volume_xml() in
nova/virt/libvirt/migration.py.
- And remove duplicate calls to _set_cache_mode() in
_get_guest_storage_config() and attach_volume().
- Fix broken unit tests; adjust test_get_volume_config() to reflect
the disk cache mode.
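A minimal sketch of the idea, not Nova's exact implementation (the mapping stands in for nova.conf's disk_cachemodes option, and conf stands in for a LibvirtConfigGuestDisk-like object):

```python
DISK_CACHEMODES = {'block': 'writeback'}  # e.g. parsed from nova.conf

def set_cache_mode(conf):
    # Applying this inside _get_volume_config means the disk XML
    # rewritten for live migration gets the configured cache mode too,
    # instead of silently reverting to 'none'.
    conf.driver_cache = DISK_CACHEMODES.get(conf.source_type, 'none')
    return conf
```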
Thanks: Jason Dillaman of Ceph for observing the change in cache modes
in a downstream bug analysis, Daniel Berrangé for help in
analysis from a Nova libvirt driver POV, and Stefan Hajnoczi
from QEMU for help on I/O latency instrumentation with `perf`.
Closes-bug: 1706083
Change-Id: I4184382b49dd2193d6a21bfe02ea973d02d8b09f
Allow Cinder to use external events to signal a volume extension.
1) Nova will then call os-brick to perform the volume extension
so the host can detect its new size.
2) Compute driver will resize the device in QEMU so instance can detect
the new disk size without rebooting.
This change:
* Adds the 'volume-extended' external event.
The event tag needs to be the extended volume id.
* Bumps the latest microversion to 2.51.
* Exposes non-traceback instance action event details for
non-admins on the microversion. This is needed for the
non-admin API user that initiated the volume extend
operation to be able to tell when the nova-compute side
is complete.
Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
Blueprint: nova-support-attached-volume-extend
Change-Id: If10cffd0dc4c9879f6754ce39bee5fae1d04f474
This commit fixes the issue by adding the address element to the
volumes configuration.
Closes-Bug: #1686116
Change-Id: I701145abc0e300711a01889e8d62b1f7887da120
When using virtio-scsi it is possible to handle up to 256 disks, but
because we do not specifically set the information about the
controller, only 6 can be handled.
This commit fixes the issue by adding the address element to the
disk configuration.
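A hedged sketch of the address assignment (the helper is made up; the drive address fields and the 256-units-per-controller limit are virtio-scsi's):

```python
# Illustrative helper: give each virtio-scsi disk an explicit
# <address> so libvirt does not stop at the implicit controller.
def scsi_disk_address(disk_index, units_per_controller=256):
    return {
        'type': 'drive',
        'controller': disk_index // units_per_controller,
        'bus': 0,
        'target': 0,
        'unit': disk_index % units_per_controller,
    }
```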
Closes-Bug: #1686116
Change-Id: I98e53b378cc99747765066001a0b51880543d2dd
If a volume that was tagged with a device role tag is detached, this
patch deletes its metadata from the instance's device_metadata.
Change-Id: I94ffe4488173f40b31dd15832e5ed3f403b59e1f
Implements: blueprint virt-device-tagged-attach-detach
disk_available_least is the free disk size information of a
hypervisor. It is calculated by the following formula:
disk_available_least = <free disk size> - <total gap between virtual
disk size and actual disk size for all instances>
But stopped instances' virtual disk sizes have not been counted
since the following patch merged in the Juno cycle:
https://review.openstack.org/#/c/105127
So disk_available_least might be larger than the actual free disk
size. As a result, instances might be scheduled beyond the actual
free disk size if stopped instances were on a host.
This patch fixes it: stopped instances' disks are counted again once
this patch merges.
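A worked sketch of the formula above (function and parameter names are illustrative); after this patch the over-commit gap is summed over all instances, stopped ones included:

```python
def disk_available_least(free_disk_gb, instances):
    # instances: iterable of (virtual_size_gb, actual_size_gb) pairs,
    # covering every instance on the host regardless of power state.
    gap = sum(virtual - actual for virtual, actual in instances)
    return free_disk_gb - gap
```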
Change-Id: I8abf6edfa80e3920539e4f6d4906c573f9543b91
Closes-Bug: #1693679
Virtuozzo containers are based on OS-level virtualization.
OS containers don't use swap as an additional disk,
so Virtuozzo doesn't support swap in OpenStack terms.
We should fail the "nova boot" operation if a user
tries to create a Virtuozzo container from a flavor with a swap disk.
In this patch we move the code which checks the swap disk
to the beginning of the function.
We should check the swap disk before creating the ephemeral and root
disks, because in case of error we don't need to delete created disks.
Closes-Bug: 1696697
Change-Id: I1a17d9af3050fe70d49adfb7e65d6b292b6a52f5
Xen's console is attached to hvc0 in PV guests [1]. If an image does
not set it up in its grub configuration, the instance may not boot
properly. Nova is setting the os_cmdline for other hypervisors, so it
should do the same for Xen. This way libvirt will configure the domain
properly.
[1] https://wiki.xen.org/wiki/Xen_FAQ_Console
Change-Id: I7600c6e966ab3829185d008077463e9689b9afd5
Closes-Bug: 1691190
Previously the libvirt driver would always assume that it was only
detaching devices (volumes or virtual interfaces) from a persistent
domain however that is not always the case.
For example when rolling back from a live migration an attempt is made
to detach volumes from the transient destination domain that is being
cleaned up. This attempt would fail with the previous assumption of the
domain being persistent in place.
This change introduces a simple call to has_persistent_configuration
within detach_device_with_retry to confirm the state of the domain
before attempting to detach.
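A sketch of the flag selection with illustrative constant values (real code uses libvirt's VIR_DOMAIN_AFFECT_LIVE / VIR_DOMAIN_AFFECT_CONFIG constants and Nova's guest wrapper):

```python
AFFECT_LIVE = 1    # illustrative values
AFFECT_CONFIG = 2

def detach_flags(guest):
    flags = AFFECT_LIVE
    if guest.has_persistent_configuration():
        # Only touch the persistent config when there is one; a
        # transient domain (e.g. during live migration rollback)
        # has none, and asking for it makes the detach fail.
        flags |= AFFECT_CONFIG
    return flags
```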
Closes-Bug: #1669857
Closes-Bug: #1696125
Change-Id: I95948721a0119f5f54dbe50d4455fd47d422164b
This enhances the warning we log when we timeout waiting
for the network-vif-plugged event from Neutron. It includes
the vm_state and task_state for context on the instance operation
since this is done for more than just the initial create, it's
also for things like rebuild.
The explicit instance uuid is also removed from the message since
sending the instance kwarg to LOG.warning logs the instance uuid
automatically. And _LW is also dropped since we no longer translate
log messages.
Change-Id: I6daf1569cba2cfcb4e8da0b46c91d5251c9c6740
Related-Bug: #1694371
Previously, in swap_volume, the VIR_DOMAIN_BLOCK_REBASE_COPY_DEV flag
was not passed to virDomainBlockRebase. In the case of iSCSI-backed
disks, this caused the XML to change from <source dev=/dev/iscsi/lun>
to <source file=/dev/iscsi/lun>. This was a problem because
/dev/iscsi/lun is not a regular file. This patch passes the
VIR_DOMAIN_BLOCK_REBASE_COPY_DEV flag to virDomainBlockRebase, causing
the correct <source dev=/dev/iscsi/lun> to be generated upon
volume-update.
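The flag logic, sketched (the constants mirror libvirt's block-rebase flags but treat the numeric values as illustrative):

```python
VIR_DOMAIN_BLOCK_REBASE_COPY = 8       # illustrative values
VIR_DOMAIN_BLOCK_REBASE_COPY_DEV = 32

def rebase_flags(target_is_block_dev):
    flags = VIR_DOMAIN_BLOCK_REBASE_COPY
    if target_is_block_dev:
        # Tells libvirt the rebase target is a block device, so it
        # emits <source dev=...> rather than <source file=...>.
        flags |= VIR_DOMAIN_BLOCK_REBASE_COPY_DEV
    return flags
```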
Change-Id: I868a0dae0baf8cded9c7c5807ea63ffc5eec0c5e
Closes-bug: 1691195
This was missed during I6cca3b5ddd387dac86750be70f49c764a1be2fca
where the migrate_data kwarg was removed from the destroy() method
in the virt drivers.
The existing test was already brittle in that it just asserted that
the kwarg was passed to the mocked method. This change at least makes
the test a bit less brittle by validating the signature. There should
be a better way to do this with autospeccing and mock, but I couldn't
figure it out.
Change-Id: Iaec2db92b4aa0e6d069c9c5071900e2627683e4e
Closes-Bug: #1694834
New objects are children of the NovaObject class. Such a parent
gives us good control of object versioning. It also
forces contributors to bump an object's version in case of
any changes. These objects will be used to transmit data
via RPC. The old objects were used only for storing data;
during RPC transmission they were transformed into dictionaries.
That is not the right approach because it causes problems
when adding new diagnostics fields.
Version of compute RPC was bumped.
Also some changes were made in libvirt driver method
get_instance_diagnostics(). These changes were made to fill
new fields of 'Diagnostics' object.
Implementation of method get_instance_diagnostics()
for other virt drivers will be changed in future patches.
blueprint: restore-vm-diagnostics
Change-Id: I5e2d116df045f803c654f7810b939b5fc859cfbf
The hide_hypervisor_id image property makes libvirt set a domain XML
element that hides the KVM hypervisor signature
("KVMKVMKVM\0\0\0") from the guest.
According to the commit message in QEMU repository [1]:
"The latest Nvidia driver (337.88) specifically checks
for KVM as the hypervisor and reports Code 43 for the
driver in a Windows guest when found. Removing or
changing the KVM signature is sufficient for the driver
to load and work."
DocImpact: New feature ``img_hide_hypervisor_id`` image property should be
added in the glance-property-keys page of the cli-reference docs [2].
[1]: http://git.qemu.org/?p=qemu.git;a=commitdiff;h=f522d2a
[2]: https://docs.openstack.org/cli-reference/glance-property-keys.html
Implements: blueprint add-kvm-hidden-feature
Co-Authored-By: Adam Kijak <adam.kijak@corp.ovh.com>
Change-Id: Ie8227fececa40e502aaa39d77de2a1cd0cd72682
The migrate_data kwarg was added to the virt driver destroy()
method in change bc45c56f10 but
was never actually used, i.e. nothing passes migrate_data to
the destroy() method. The compute manager passes migrate_data to
the cleanup() method. This change just removes the unused kwarg
so it's not confusing for new virt drivers like powervm.
Change-Id: I6cca3b5ddd387dac86750be70f49c764a1be2fca
Cinder deprecated the GlusterFS volume driver
in Newton and deleted it in Ocata:
I10c576602dd0e65947d1a1af5d04b8ada54f4625
Since it's unused, unmaintained and unsupported in
Cinder we should remove it from the libvirt driver.
This also removes the related configuration options.
A note is left in the code since I'm unsure if anything
relies on checking for the netfs disk source_protocol which
was added at the same time as the glusterfs support in
Ic6dd861b40b692b25df67c9d5b63fd436c690fde.
Change-Id: I2745f5578646ec994b53f6b5c0a5f62637b0948a
_ensure_console_log_for_instance[1] ensures the VM's console.log
exists. A change[2] updated it to succeed if the file exists without
nova being able to read it (which typically happens when libvirt
rewrites the uid/gid), by ignoring EPERM errors.
The method should also ignore EACCES errors. Indeed, EACCES is raised
when an action is not permitted because of insufficient permissions,
whereas EPERM is raised when an action is not permitted at all.
[1] nova.virt.libvirt.driver
[2] https://review.openstack.org/392643
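A sketch of the errno handling (the helper name is made up; the errno distinction is the point of the change):

```python
import errno

def console_log_error_is_ignorable(exc):
    # Both errnos mean nova may not touch a console.log that libvirt
    # chowned: EACCES is "insufficient permissions", EPERM is "not
    # permitted at all".
    return exc.errno in (errno.EPERM, errno.EACCES)
```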
Closes-Bug: #1691831
Change-Id: Ifc075a0fd91fc87651fcb306d6439be5369009b6
This patch adds support for tagged volume attachment to the libvirt
driver.
Change-Id: I8b475992b881db08cf1354299cc86042413074cc
Implements: blueprint virt-device-tagged-attach-detach
This patch adds support for tagged network interface attachment to the
libvirt driver.
Change-Id: Ib86b2b8a5e784e998da930d3f48b6bc64aead6c3
Implements: blueprint virt-device-tagged-attach-detach