[network] Rework the SR-IOV guide

Rework the SR-IOV guide to provide some additional clarity:

* Rework sections to emphasise some content
* Rewrap lines
* Correct some grammar nits
* Don't reference older versions of OpenStack: the docs are already
  versioned
* Rename the document to be consistent with other sections in the
  "Advanced" section
* Clarify the expectation that an init-system script be used to start
  the sriov-nic-agent automatically

Change-Id: I0792f16de23dd4628c1aca33cf89fdeef36940a9
Related-bug: #1508557
Stephen Finucane 2016-06-15 11:31:32 +01:00 committed by Stephen Finucane

.. _config-sriov:

======
SR-IOV
======

The purpose of this page is to describe how to enable SR-IOV functionality
available in OpenStack (using OpenStack Networking) as of the Juno release. It
is intended to serve as a guide for how to configure OpenStack Networking and
OpenStack Compute to create SR-IOV ports.

The basics
~~~~~~~~~~

PCI-SIG Single Root I/O Virtualization and Sharing (SR-IOV) functionality is
available in OpenStack since the Juno release. The SR-IOV specification
defines a standardized mechanism to virtualize PCIe devices. This mechanism
can virtualize a single PCIe Ethernet controller to appear as multiple PCIe
devices. Each device can be directly assigned to an instance, bypassing the
hypervisor and virtual switch layer. As a result, users are able to achieve
low latency and near-line wire speed.

The following terms are used throughout this document:

.. list-table::
   :header-rows: 1

   * - Term
     - Definition
   * - PF
     - Physical Function. The physical Ethernet controller that supports
       SR-IOV.
   * - VF
     - Virtual Function. The virtual PCIe device created from a physical
       Ethernet controller.
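
Once VFs have been created (described later in this guide), you can inspect
the PF/VF relationship on a host through sysfs, where each ``virtfn`` symbolic
link under the PF's device directory points at the PCI address of one VF. A
quick check, assuming ``eth3`` is the PF:

.. code-block:: console

   # ls /sys/class/net/eth3/device/virtfn*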

SR-IOV agent
------------

The SR-IOV agent allows you to set the admin state of ports, configure port
security (enable and disable spoof checking), and configure QoS rate limiting.
You must include the SR-IOV agent on each compute node using SR-IOV ports.

.. note::

   The SR-IOV agent was optional before Mitaka, and was not enabled by default
   before Liberty.

.. note::

   The ability to control port security and QoS rate limit settings was added
   in Liberty.
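
Once the agent is configured and running (described later in this guide), it
is reported as a ``NIC Switch agent`` in the agent list. A quick check, with
the output abridged and the hostname illustrative:

.. code-block:: console

   $ neutron agent-list | grep 'NIC Switch agent'
   | ... | NIC Switch agent | compute-1 | :-) | True | neutron-sriov-nic-agent |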

Supported Ethernet controllers
------------------------------

The following manufacturers are known to work:

- Intel
- Mellanox
- QLogic

For information on **Mellanox SR-IOV Ethernet cards** see:
`Mellanox: How To Configure SR-IOV VFs
<https://community.mellanox.com/docs/DOC-1484>`_

For information on **QLogic SR-IOV Ethernet cards** see:
`User's Guide OpenStack Deployment with SR-IOV Configuration
<http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_SR-IOV.pdf>`_

Using SR-IOV interfaces
~~~~~~~~~~~~~~~~~~~~~~~

In order to enable SR-IOV, the following steps are required:

#. Create Virtual Functions (Compute)
#. Whitelist PCI devices in nova-compute (Compute)
#. Configure neutron-server (Controller)
#. Configure nova-scheduler (Controller)
#. Enable neutron sriov-agent (Compute)

We recommend using VLAN provider networks for segregation. This way you can
combine instances without SR-IOV ports and instances with SR-IOV ports on a
single network.

.. note::

   Throughout this guide, ``eth3`` is used as the PF and ``physnet2`` is used
   as the provider network configured as a VLAN range. These ports may vary in
   different environments.
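
For example, an administrator could create such a VLAN provider network on
``physnet2`` as follows. The network name ``net04``, reused later in this
guide, and the segmentation ID are illustrative:

.. code-block:: console

   $ neutron net-create net04 --provider:network_type vlan \
     --provider:physical_network physnet2 --provider:segmentation_id 100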

Create Virtual Functions (Compute)
----------------------------------

Create the VFs for the network interface that will be used for SR-IOV. We use
``eth3`` as PF, which is also used as the interface for the VLAN provider
network and has access to the private networks of all machines.

.. note::

   The steps detail how to create VFs using **Intel SR-IOV Ethernet cards** on
   an Intel system. Steps may differ for different hardware configurations.

#. Ensure SR-IOV and VT-d are enabled in BIOS.

#. Enable IOMMU in Linux by adding ``intel_iommu=on`` to the kernel
   parameters, for example, using GRUB.
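
   For example, on Ubuntu systems using GRUB, edit ``/etc/default/grub``:

   .. code-block:: ini

      GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

   Then regenerate the GRUB configuration and reboot. Other distributions
   provide similar mechanisms:

   .. code-block:: console

      # update-grub
      # reboot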

#. On each compute node, create the VFs via the PCI SYS interface:

   .. code-block:: console

      # echo '8' > /sys/class/net/eth3/device/sriov_numvfs

   .. note::

      On some PCI devices, observe that when changing the amount of VFs you
      receive the error ``Device or resource busy``. In this case, you must
      first set ``sriov_numvfs`` to ``0``, then set it to your new value.
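
      For example:

      .. code-block:: console

         # echo '0' > /sys/class/net/eth3/device/sriov_numvfs
         # echo '8' > /sys/class/net/eth3/device/sriov_numvfs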

   .. warning::

      Alternatively, you can create VFs by passing the ``max_vfs`` parameter
      to the kernel module of your network interface. However, the
      ``max_vfs`` parameter has been deprecated, so the PCI SYS interface is
      the preferred method.

      You can determine the maximum number of VFs a PF can support:

      .. code-block:: console

         # cat /sys/class/net/eth3/device/sriov_totalvfs
         63

#. Verify that the VFs have been created and are in ``up`` state:

   .. code-block:: console

      # lspci | grep Ethernet
      82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
      82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
      82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
      82:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

   .. code-block:: console

      # ip link show eth3
      8: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
          link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
          vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
          vf 7 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

   If the interfaces are down, set them to ``up`` before launching a guest,
   otherwise the instance will fail to spawn:

   .. code-block:: console

      # ip link set eth3 up

#. Persist created VFs on reboot:

   .. code-block:: console

      # echo "echo '8' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local

   .. note::

      The suggested way of making PCI SYS settings persistent is through
      the ``sysfsutils`` tool. However, this is not available by default on
      many major distributions.
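
      Where ``sysfsutils`` is available, a sketch of a persistent setting in
      ``/etc/sysfs.conf``, the file read by ``sysfsutils`` at boot (the path
      comes from that package):

      .. code-block:: ini

         class/net/eth3/device/sriov_numvfs = 8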

Whitelist PCI devices in nova-compute (Compute)
-----------------------------------------------

#. Configure which PCI devices the ``nova-compute`` service may use. Edit
   the ``nova.conf`` file:

   .. code-block:: ini

      [DEFAULT]
      pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}

   This tells the Compute service that all VFs belonging to ``eth3`` are
   allowed to be passed through to instances and belong to the provider
   network ``physnet2``.

   Alternatively the ``pci_passthrough_whitelist`` parameter also supports
   whitelisting by:

   - PCI address: The address uses the same syntax as in ``lspci`` and an
     asterisk (*) can be used to match anything.

     .. code-block:: ini

        pci_passthrough_whitelist = { "address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]", "physical_network": "physnet2" }

     For example, to match any domain, bus 0a, slot 00, and all functions:

     .. code-block:: ini

        pci_passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }

   - PCI ``vendor_id`` and ``product_id`` as displayed by the Linux utility
     ``lspci``.

     .. code-block:: ini

        pci_passthrough_whitelist = { "vendor_id": "<id>", "product_id": "<id>", "physical_network": "physnet2" }

   If the device defined by the PCI address or ``devname`` corresponds to an
   SR-IOV PF, all VFs under the PF will match the entry. Multiple
   ``pci_passthrough_whitelist`` entries per host are supported.
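
   For example, a sketch of multiple whitelist entries covering two PFs on two
   different provider networks. The second interface and network name are
   illustrative:

   .. code-block:: ini

      pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2" }
      pci_passthrough_whitelist = { "devname": "eth4", "physical_network": "physnet3" }
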
#. Restart the ``nova-compute`` service for the changes to go into effect.

.. _configure_sriov_neutron_server:

Configure neutron-server (Controller)
-------------------------------------

#. Add ``sriovnicswitch`` as mechanism driver. Edit the ``ml2_conf.ini`` file
   on each controller:

   .. code-block:: ini

      mechanism_drivers = openvswitch,sriovnicswitch

#. Identify the ``vendor_id`` and ``product_id`` of the VFs on each compute
   node:

   .. code-block:: console

      # lspci -nn | grep -i ethernet
      87:10.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
      87:10.3 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)

#. Add the VF as a supported device. In this case, the ``vendor_id`` is
   ``8086`` and the ``product_id`` is ``10ed``. Edit the ``ml2_conf_sriov.ini``
   file on each controller:

   .. code-block:: ini

      supported_pci_vendor_devs = 8086:10ed

#. Add the newly configured ``ml2_conf_sriov.ini`` file as a parameter to the
   ``neutron-server`` service. Edit the appropriate initialization script to
   configure the ``neutron-server`` service to load the SR-IOV configuration
   file:

   .. code-block:: ini

      --config-file /etc/neutron/neutron.conf
      --config-file /etc/neutron/plugin.ini
      --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini

#. Restart the ``neutron-server`` service.

Configure nova-scheduler (Controller)
-------------------------------------

#. On every controller node running the ``nova-scheduler`` service, enable the
   ``PciPassthroughFilter`` by adding it to the ``scheduler_default_filters``
   parameter. Also add a new line setting the ``scheduler_available_filters``
   parameter to ``all_filters`` under the ``[DEFAULT]`` section in
   ``nova.conf``:

   .. code-block:: ini

      [DEFAULT]
      scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
      scheduler_available_filters = nova.scheduler.filters.all_filters

#. Restart the ``nova-scheduler`` service.

Enable neutron sriov-agent (Compute)
------------------------------------

#. Install the SR-IOV agent.

#. Edit the ``sriov_agent.ini`` file on each compute node. For example:

   .. code-block:: ini

      [securitygroup]
      firewall_driver = neutron.agent.firewall.NoopFirewallDriver

      [sriov_nic]
      physical_device_mappings = physnet2:eth3
      exclude_devices =

   .. note::

      The ``physical_device_mappings`` parameter is not limited to be a 1-1
      mapping between physical networks and NICs. This enables you to map the
      same physical network to more than one NIC. For example, if ``physnet2``
      is connected to ``eth3`` and ``eth4``, then
      ``physnet2:eth3,physnet2:eth4`` is a valid option.

   The ``exclude_devices`` parameter is empty; therefore, all the VFs
   associated with ``eth3`` may be configured by the agent. To exclude
   specific VFs, add them to the ``exclude_devices`` parameter as follows:

   .. code-block:: ini

      exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2

#. Ensure the neutron sriov-agent runs successfully:

   .. code-block:: console

      # neutron-sriov-nic-agent \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/sriov_agent.ini

#. Enable the neutron sriov-agent service.

   If installing from source, you must configure a daemon file for the init
   system manually.
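
   A minimal sketch of such a unit file for systemd-based distributions
   follows. The installation path of the agent binary varies, and
   distribution packages normally ship an equivalent service file:

   .. code-block:: ini

      [Unit]
      Description=OpenStack neutron SR-IOV NIC agent

      [Service]
      ExecStart=/usr/local/bin/neutron-sriov-nic-agent \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/sriov_agent.ini
      User=neutron

      [Install]
      WantedBy=multi-user.target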

FDB L2 agent extension (Optional)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Forwarding DataBase (FDB) population is an L2 agent extension to OVS agent or
Linux bridge. Its objective is to update the FDB table for existing instances
using normal ports. This enables communication between SR-IOV instances and
normal instances. The use cases of the FDB population extension are:

* Direct port and normal port instances reside on the same compute node.

* Direct port instance that uses floating IP address and network node
  are located on the same host.

For additional information describing the problem, refer to:
`Virtual switching technologies and Linux bridge.
<http://events.linuxfoundation.org/sites/events/files/slides/LinuxConJapan2014_makita_0.pdf>`_

#. Edit the ``ovs_agent.ini`` or ``linuxbridge_agent.ini`` file on each
   compute node. For example:

   .. code-block:: ini

      [agent]
      extensions = fdb

#. Add the FDB section and the ``shared_physical_device_mappings`` parameter.
   This parameter maps each physical port to its physical network name. Each
   physical network can be mapped to several ports:

   .. code-block:: ini

      [FDB]
      shared_physical_device_mappings = physnet1:p1p1, physnet1:p1p2

Launching instances with SR-IOV ports
-------------------------------------

Once configuration is complete, you can launch instances with SR-IOV ports.

#. Get the ``id`` of the network where you want the SR-IOV port to be created:

   .. code-block:: console

      $ net_id=`neutron net-show net04 | grep "\ id\ " | awk '{ print $4 }'`

#. Create the SR-IOV port. ``vnic_type=direct`` is used here, but other
   options include ``normal``, ``direct-physical``, and ``macvtap``:

   .. code-block:: console

      $ port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`

#. Create the instance. Specify the SR-IOV port created in step two for the
   NIC:

   .. code-block:: console

      $ nova boot --flavor m1.large --image ubuntu_14.04 --nic port-id=$port_id test-sriov
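
   Once the instance is active, you can confirm that the port was bound with
   the requested VNIC type; the ``binding:vnic_type`` field should read
   ``direct`` (output abridged):

   .. code-block:: console

      $ neutron port-show sriov_port | grep vnic_type
      | binding:vnic_type | direct |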

.. note::

   There are two ways to attach VFs to an instance. You can create an SR-IOV
   port or use the ``pci_alias`` in the Compute service. For more information
   about using ``pci_alias``, refer to `nova-api configuration`_.

SR-IOV with InfiniBand
~~~~~~~~~~~~~~~~~~~~~~

To use SR-IOV with InfiniBand, among other requirements, the ``ebrctl``
utility must be available on the compute nodes and allowed by the nova
rootwrap configuration. Check that ``ebrctl`` is listed in one of the rootwrap
filter files and, if not, add it to the ``[Filters]`` section of the
``/etc/nova/rootwrap.d/compute.filters`` file:

.. code-block:: ini

   [Filters]
   ebrctl: CommandFilter, ebrctl, root

Known limitations
~~~~~~~~~~~~~~~~~

* When using Quality of Service (QoS), ``max_burst_kbps`` (burst over
  ``max_kbps``) is not supported. In addition, ``max_kbps`` is rounded to
  Mbps.

* Security groups are not supported when using SR-IOV; thus, the firewall
  driver must be disabled. This can be done in the ``neutron.conf`` file:

  .. code-block:: ini

     [securitygroup]
     firewall_driver = neutron.agent.firewall.NoopFirewallDriver

* SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must
  use the CLI or API to configure SR-IOV interfaces.

* Live migration is not supported for instances with SR-IOV ports.

.. Links

.. _`nova-api configuration`: http://docs.openstack.org/admin-guide/compute-pci-passthrough.html#configure-nova-api-controller