Remove SR-IOV support

This was decided at the PTG. The code is old, unmaintained and untested,
and most likely no longer works. Moreover, it gave us a hard dependency
on grpcio and protobuf, which are fairly problematic packages in Python
and have caused us all sorts of headaches.

Change-Id: I0c8c91cdd3e1284e7a3c1e9fe04b4c0fbbde7e45
Michał Dulko 2022-06-29 10:56:32 +02:00
parent 20707b0330
commit 04d4439606
36 changed files with 17 additions and 2640 deletions

View File

@@ -406,10 +406,14 @@ rules:
     resources:
     - endpoints
     - pods
-    - nodes
     - services
     - services/status
     - namespaces
+  - apiGroups:
+    - ""
+    verbs: ["get", "list", "watch"]
+    resources:
+    - nodes
   - apiGroups:
     - openstack.org
     verbs: ["*"]
@@ -470,7 +474,6 @@ rules:
     verbs: ["*"]
     resources:
     - pods
-    - nodes
   - apiGroups:
     - openstack.org
     verbs: ["*"]

View File

@@ -54,8 +54,7 @@ Multi-VIF driver
 The new type of drivers which is used to call other VIF-drivers to attach
 additional interfaces to Pods. The main aim of this kind of drivers is to get
 additional interfaces from the Pods definition, then invoke real VIF-drivers
-like neutron-vif, nested-macvlan or sriov to retrieve the VIF objects
-accordingly.
+like neutron-vif, nested-macvlan to retrieve the VIF objects accordingly.

 All Multi-VIF drivers should be derived from class *MultiVIFDriver*. And all
 should implement the *request_additional_vifs* method which returns a list of
@@ -87,7 +86,7 @@ Option in config file might look like this:

 .. code-block:: ini

     [kubernetes]
-    multi_vif_drivers = sriov, additional_subnets
+    multi_vif_drivers = additional_subnets

 Or like this:
@@ -122,26 +121,6 @@ additional subnets requests might look like:
         ]'

-SRIOV Driver
-~~~~~~~~~~~~
-
-SRIOV driver gets pod object from Multi-vif driver, according to parsed
-information (sriov requests) by Multi-vif driver. It should return a list of
-created vif objects. Method request_vif() has unified interface with
-PodVIFDriver as a base class.
-
-Here's how a Pod Spec with sriov requests might look like:
-
-.. code-block:: yaml
-
-    spec:
-      containers:
-      - name: vf-container
-        image: vf-image
-        resources:
-          requests:
-            pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1
-
 Specific ports support
 ----------------------

View File

@@ -36,7 +36,6 @@ Design Specs
    specs/pike/contrail_support
    specs/pike/fuxi_kubernetes
-   specs/pike/sriov
    specs/queens/network_policy
    specs/rocky/npwg_spec_support
    specs/stein/vhostuser

View File

@@ -43,9 +43,7 @@ This section describes how you can install and configure kuryr-kubernetes
    testing_nested_connectivity
    containerized
    multi_vif_with_npwg_spec
-   sriov
    testing_udp_services
-   testing_sriov_functional
    testing_sctp_services
    listener_timeouts
    multiple_tenants

View File

@@ -1,294 +0,0 @@
.. _sriov:

=============================
How to configure SR-IOV ports
=============================

The current approach to SR-IOV relies on the `sriov-device-plugin`_. While
creating pods with SR-IOV, the sriov-device-plugin should be turned on on all
nodes. To use an SR-IOV port on a baremetal or VM installation the following
steps should be done:

#. Create OpenStack networks and subnets for SR-IOV. The following steps
   should be done with admin rights.

   .. code-block:: console

      $ openstack network create --share --provider-physical-network physnet22 --provider-network-type vlan --provider-segment 3501 vlan-sriov-net-1
      $ openstack network create --share --provider-physical-network physnet23 --provider-network-type vlan --provider-segment 3502 vlan-sriov-net-2
      $ openstack subnet create --network vlan-sriov-net-1 --subnet-range 192.168.2.0/24 vlan-sriov-subnet-1
      $ openstack subnet create --network vlan-sriov-net-2 --subnet-range 192.168.3.0/24 vlan-sriov-subnet-2

   Subnet ids of ``vlan-sriov-subnet-1`` and ``vlan-sriov-subnet-2`` will be
   used later in NetworkAttachmentDefinition.
#. Add sriov section into kuryr.conf.

   .. code-block:: ini

      [sriov]
      default_physnet_subnets = physnet22:<UUID of vlan-sriov-subnet-1>,physnet23:<UUID of vlan-sriov-subnet-2>
      device_plugin_resource_prefix = intel.com
      physnet_resource_mappings = physnet22:physnet22,physnet23:physnet23
      resource_driver_mappings = physnet22:vfio-pci,physnet23:vfio-pci
#. Prepare NetworkAttachmentDefinition objects. Apply a
   NetworkAttachmentDefinition with the "sriov" driverType inside, as
   described in the `NPWG spec`_.

   .. code-block:: yaml

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: "sriov-net1"
        annotations:
          openstack.org/kuryr-config: '{
            "subnetId": "UUID of vlan-sriov-subnet-1",
            "driverType": "sriov"
            }'

   .. code-block:: yaml

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: "sriov-net2"
        annotations:
          openstack.org/kuryr-config: '{
            "subnetId": "UUID of vlan-sriov-subnet-2",
            "driverType": "sriov"
            }'
   Use the following yaml to create a pod with two additional SR-IOV
   interfaces:

   .. code-block:: yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-sriov
        labels:
          app: nginx-sriov
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx-sriov
        template:
          metadata:
            labels:
              app: nginx-sriov
            annotations:
              k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2
          spec:
            containers:
            - securityContext:
                privileged: true
                capabilities:
                  add:
                  - SYS_ADMIN
                  - IPC_LOCK
                  - SYS_NICE
                  - SYS_RAWIO
              name: nginx-sriov
              image: nginx:1.13.8
              resources:
                requests:
                  intel.com/physnet22: '1'
                  intel.com/physnet23: '1'
                  cpu: "2"
                  memory: "512Mi"
                  hugepages-2Mi: 512Mi
                limits:
                  intel.com/physnet22: '1'
                  intel.com/physnet23: '1'
                  cpu: "2"
                  memory: "512Mi"
                  hugepages-2Mi: 512Mi
              volumeMounts:
              - name: dev
                mountPath: /dev
              - name: hugepage
                mountPath: /hugepages
              - name: sys
                mountPath: /sys
            volumes:
            - name: dev
              hostPath:
                path: /dev
                type: Directory
            - name: hugepage
              emptyDir:
                medium: HugePages
            - name: sys
              hostPath:
                path: /sys

   In the above example two SR-IOV devices will be attached to the pod. The
   first one is described in the sriov-net1 NetworkAttachmentDefinition, the
   second one in sriov-net2. They may have different subnetIds. It is
   necessary to mount the host's ``/dev`` and ``/hugepages`` directories into
   the pod to allow it to use vfio devices. ``privileged: true`` is necessary
   only when the node is a virtual machine; for a baremetal node this option
   is not needed. The same applies to the ``IPC_LOCK`` capability and the
   other ones: they are necessary only when the node is a virtual machine.
#. Specify resource names

   The resource names *intel.com/physnet22* and *intel.com/physnet23* used in
   the above example are defined by the `SRIOV network device plugin for
   Kubernetes`_. A resource name must match the ``^\[a-zA-Z0-9_\]+$`` regular
   expression. To be able to work with arbitrary resource names,
   physnet_resource_mappings and device_plugin_resource_prefix in the [sriov]
   section of the kuryr-controller configuration file should be filled in.
   The default value of device_plugin_resource_prefix is ``intel.com``, the
   same as in the SR-IOV network device plugin. If the SR-IOV network device
   plugin was started with a -resource-prefix value different from
   ``intel.com``, then that value must be set in
   device_plugin_resource_prefix, otherwise kuryr-kubernetes will not find
   the resource.

   Assume we have the following SR-IOV network device plugin config (defined
   by the -config-file option):
   .. code-block:: json

      {
          "resourceList":
          [
              {
                  "resourceName": "physnet22",
                  "rootDevices": ["0000:02:00.0"],
                  "sriovMode": true,
                  "deviceType": "vfio"
              },
              {
                  "resourceName": "physnet23",
                  "rootDevices": ["0000:02:00.1"],
                  "sriovMode": true,
                  "deviceType": "vfio"
              }
          ]
      }
   The config file above describes two physical devices mapped onto two
   resources. Virtual functions from these devices will be used for pods.
   We defined ``physnet22`` and ``physnet23`` as resource names, and assume
   we started sriovdp with -resource-prefix intel.com. The PCI address of the
   ens6 interface is "0000:02:00.0" and the PCI address of the ens8 interface
   is "0000:02:00.1". If we assign 8 VFs to ens6 and 8 VFs to ens8 and launch
   the SR-IOV network device plugin, we can see the following state of
   kubernetes:
   .. code-block:: console

      $ kubectl get node node1 -o json | jq '.status.allocatable'
      {
        "cpu": "4",
        "ephemeral-storage": "269986638772",
        "hugepages-1Gi": "8Gi",
        "hugepages-2Mi": "0Gi",
        "intel.com/physnet22": "8",
        "intel.com/physnet23": "8",
        "memory": "7880620Ki",
        "pods": "1k"
      }
   If you use a virtual machine as your worker node, then it is necessary to
   use sriov-device-plugin version 3.1, because it provides the selectors
   that are needed to separate the particular VFs that are passed into the
   VM. A config file for sriov-device-plugin may look like:
   .. code-block:: json

      {
          "resourceList": [{
                  "resourceName": "physnet22",
                  "selectors": {
                      "vendors": ["8086"],
                      "devices": ["1520"],
                      "pfNames": ["ens6"]
                  }
              },
              {
                  "resourceName": "physnet23",
                  "selectors": {
                      "vendors": ["8086"],
                      "devices": ["1520"],
                      "pfNames": ["ens8"]
                  }
              }
          ]
      }
   We defined the ``physnet22`` resource name, which maps to the ``ens6``
   interface, the first virtual function passed into the VM. The same goes
   for ``physnet23``: it maps to the ``ens8`` interface. It is important to
   note that when using a virtual machine we should specify the names of the
   passed virtual functions as physical devices. Thus we expect sriov-dp to
   annotate different pci addresses for each resource:
   .. code-block:: console

      $ kubectl get node node1 -o json | jq '.status.allocatable'
      {
        "cpu": "4",
        "ephemeral-storage": "269986638772",
        "hugepages-2Mi": "2Gi",
        "intel.com/physnet22": "1",
        "intel.com/physnet23": "1",
        "memory": "7880620Ki"
      }
#. Enable Kubelet Pod Resources feature

   To use SR-IOV functionality properly it is necessary to enable the Kubelet
   Pod Resources feature. Pod Resources is a service provided by Kubelet via
   a gRPC server that allows requesting the list of resources allocated for
   each pod and container on the node. These resources are devices allocated
   by k8s device plugins. The service was implemented mainly for monitoring
   purposes, but it is also suitable for the SR-IOV binding driver, allowing
   it to know which VF was allocated for a particular container.

   To enable the Pod Resources service, add ``--feature-gates
   KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``. This file
   could look like:

   .. code-block:: bash

      KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"

   Note that it is important to set the right value for the
   ``kubelet_root_dir`` parameter in ``kuryr.conf``. By default it is
   ``/var/lib/kubelet``. In case of using containerized CNI it is necessary
   to mount the ``'kubelet_root_dir'/pod-resources`` directory into the CNI
   container.

   To use this feature add ``enable_pod_resource_service`` into kuryr.conf:

   .. code-block:: ini

      [sriov]
      enable_pod_resource_service = True
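
   As an editorial illustration (not part of the original guide): a minimal
   sketch of a Pod Resources consumer using grpcio, assuming Python bindings
   generated from the Kubernetes podresources v1alpha1 ``api.proto`` are
   available as ``api_pb2``/``api_pb2_grpc``. This gRPC dependency is what
   the commit message above refers to.

   .. code-block:: python

      import grpc

      # Assumed to be generated from Kubernetes' podresources v1alpha1 proto.
      from api_pb2 import ListPodResourcesRequest
      from api_pb2_grpc import PodResourcesListerStub

      # Kubelet serves Pod Resources on a unix socket under kubelet_root_dir.
      SOCKET = 'unix:/var/lib/kubelet/pod-resources/kubelet.sock'


      def list_pod_resources():
          with grpc.insecure_channel(SOCKET) as channel:
              stub = PodResourcesListerStub(channel)
              response = stub.List(ListPodResourcesRequest())
          for pod in response.pod_resources:
              for container in pod.containers:
                  for dev in container.devices:
                      # Each entry maps a device-plugin resource to the PCI
                      # addresses allocated to this container.
                      print(pod.name, container.name,
                            dev.resource_name, list(dev.device_ids))


      if __name__ == '__main__':
          list_pod_resources()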
#. Use privileged user

   To make neutron ports active, kuryr-k8s makes requests to the neutron API
   to update ports with binding:profile information. Because of this it is
   necessary to perform these actions with a privileged user with admin
   rights.
#. Use vfio devices in containers

   To use vfio devices inside containers it is necessary to load the vfio-pci
   module. Remember that if the worker node is a virtual machine then the
   module should be loaded without iommu support:

   .. code-block:: bash

      rmmod vfio_pci
      rmmod vfio_iommu_type1
      rmmod vfio
      modprobe vfio enable_unsafe_noiommu_mode=1
      modprobe vfio-pci
.. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html
.. _sriov-device-plugin: https://docs.google.com/document/d/1D3dJeUUmta3sMzqw8JtWFoG2rvcJiWitVro9bsfUTEw
.. _SRIOV network device plugin for Kubernetes: https://github.com/intel/sriov-network-device-plugin

View File

@@ -1,247 +0,0 @@
===========================
Testing SRIOV functionality
===========================

Following the steps explained on :ref:`sriov`, make sure that you have
already created and applied a ``NetworkAttachmentDefinition`` containing a
``sriov`` driverType. Also make sure that the `sriov-device-plugin`_ is
enabled on the nodes.

A ``NetworkAttachmentDefinition`` containing a ``sriov`` driverType might
look like:

.. code-block:: yaml

   apiVersion: "k8s.cni.cncf.io/v1"
   kind: NetworkAttachmentDefinition
   metadata:
     name: "net-sriov"
     annotations:
       openstack.org/kuryr-config: '{
         "subnetId": "88d0b025-2710-4f02-a348-2829853b45da",
         "driverType": "sriov"
         }'

Here ``88d0b025-2710-4f02-a348-2829853b45da`` is the id of a precreated
subnet that is expected to be used for SR-IOV ports:
.. code-block:: console

   $ neutron subnet-show 88d0b025-2710-4f02-a348-2829853b45da
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "192.168.2.2", "end": "192.168.2.254"} |
   | cidr              | 192.168.2.0/24                                   |
   | created_at        | 2018-11-21T10:57:34Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | True                                             |
   | gateway_ip        | 192.168.2.1                                      |
   | host_routes       |                                                  |
   | id                | 88d0b025-2710-4f02-a348-2829853b45da             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | sriov_subnet                                     |
   | network_id        | 2f8b9103-e9ec-47fa-9617-0fb9deacfc00             |
   | project_id        | 92a4d7734b17486ba24e635bc7fad595                 |
   | revision_number   | 2                                                |
   | service_types     |                                                  |
   | subnetpool_id     |                                                  |
   | tags              |                                                  |
   | tenant_id         | 92a4d7734b17486ba24e635bc7fad595                 |
   | updated_at        | 2018-11-21T10:57:34Z                             |
   +-------------------+--------------------------------------------------+
#. Create a deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
   interface (apart from the default one). The deployment definition file
   might look like:

   .. code-block:: yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        name: nginx-sriov
      spec:
        replicas: 1
        template:
          metadata:
            name: nginx-sriov
            labels:
              app: nginx-sriov
            annotations:
              k8s.v1.cni.cncf.io/networks: net-sriov
          spec:
            containers:
            - name: nginx-sriov
              image: nginx
              resources:
                requests:
                  intel.com/sriov: '1'
                  cpu: "1"
                  memory: "512Mi"
                limits:
                  intel.com/sriov: '1'
                  cpu: "1"
                  memory: "512Mi"

   Here ``net-sriov`` is the name of the ``NetworkAttachmentDefinition``
   created before.
#. Create the deployment with the following command:

   .. code-block:: console

      $ kubectl create -f <DEFINITION_FILE_NAME>

#. Wait for the pod to get to Running phase.

   .. code-block:: console

      $ kubectl get pods
      NAME                           READY     STATUS    RESTARTS   AGE
      nginx-sriov-558db554d7-rvpxs   1/1       Running   0          1m

#. If your image contains ``iputils`` (for example, the busybox image), you
   can attach to the pod and check that the correct interface has been
   attached to the Pod.

   .. code-block:: console

      $ kubectl get pod
      $ kubectl exec -it nginx-sriov-558db554d7-rvpxs -- /bin/bash
      $ ip a
   You should see the default and eth1 interfaces. eth1 is the SR-IOV VF
   interface.

   .. code-block:: console

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      3: eth0@if43: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
          link/ether fa:16:3e:1a:c0:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.0.9/24 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fe1a:c043/64 scope link
             valid_lft forever preferred_lft forever
      13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether fa:16:3e:b3:2e:70 brd ff:ff:ff:ff:ff:ff
          inet 192.168.2.6/24 scope global eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fea8:55af/64 scope link
             valid_lft forever preferred_lft forever
   Alternatively you can log in to the k8s worker and do the same from the
   host system. Use the following command to find out the ID of the running
   SR-IOV container:

   .. code-block:: console

      $ docker ps

   Suppose that the ID of the created container is ``eb4e10f38763``.
   Use the following command to get the PID of that container:

   .. code-block:: console

      $ docker inspect --format {{.State.Pid}} eb4e10f38763

   Suppose that the output of the previous command is below:

   .. code-block:: console

      32609

   Use the following command to get the interfaces of the container:

   .. code-block:: console

      $ nsenter -n -t 32609 ip a
   You should see the default and eth1 interfaces. eth1 is the SR-IOV VF
   interface.

   .. code-block:: console

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      3: eth0@if43: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
          link/ether fa:16:3e:1a:c0:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.0.9/24 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fe1a:c043/64 scope link
             valid_lft forever preferred_lft forever
      13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether fa:16:3e:b3:2e:70 brd ff:ff:ff:ff:ff:ff
          inet 192.168.2.6/24 scope global eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::f816:3eff:fea8:55af/64 scope link
             valid_lft forever preferred_lft forever

   In our example the sriov interface has address 192.168.2.6.
#. Use the neutron CLI to check that a port with that exact address has been
   created in neutron:

   .. code-block:: console

      $ openstack port list | grep 192.168.2.6

   Suppose that the previous command returns a list with one openstack port
   that has ID ``545ec21d-6bfc-4179-88c6-9dacaf435ea7``. You can see its
   information with the following command:

   .. code-block:: console

      $ openstack port show 545ec21d-6bfc-4179-88c6-9dacaf435ea7
      +-----------------------+----------------------------------------------------------------------------+
      | Field                 | Value                                                                      |
      +-----------------------+----------------------------------------------------------------------------+
      | admin_state_up        | UP                                                                         |
      | allowed_address_pairs |                                                                            |
      | binding_host_id       | novactl                                                                    |
      | binding_profile       |                                                                            |
      | binding_vif_details   | port_filter='True'                                                         |
      | binding_vif_type      | hw_veb                                                                     |
      | binding_vnic_type     | direct                                                                     |
      | created_at            | 2018-11-26T09:13:07Z                                                       |
      | description           |                                                                            |
      | device_id             | 7ab02cf9-f15b-11e8-bdf4-525400152cf3                                       |
      | device_owner          | compute:kuryr:sriov                                                        |
      | dns_assignment        | None                                                                       |
      | dns_name              | None                                                                       |
      | extra_dhcp_opts       |                                                                            |
      | fixed_ips             | ip_address='192.168.2.6', subnet_id='88d0b025-2710-4f02-a348-2829853b45da' |
      | id                    | 545ec21d-6bfc-4179-88c6-9dacaf435ea7                                       |
      | ip_address            | None                                                                       |
      | mac_address           | fa:16:3e:b3:2e:70                                                          |
      | name                  | default/nginx-sriov-558db554d7-rvpxs                                       |
      | network_id            | 2f8b9103-e9ec-47fa-9617-0fb9deacfc00                                       |
      | option_name           | None                                                                       |
      | option_value          | None                                                                       |
      | port_security_enabled | False                                                                      |
      | project_id            | 92a4d7734b17486ba24e635bc7fad595                                           |
      | qos_policy_id         | None                                                                       |
      | revision_number       | 5                                                                          |
      | security_groups       | 1e7bb965-2ad5-4a09-a5ac-41aa466af25b                                       |
      | status                | DOWN                                                                       |
      | subnet_id             | None                                                                       |
      | updated_at            | 2018-11-26T09:13:07Z                                                       |
      +-----------------------+----------------------------------------------------------------------------+
   The port has the name of the pod, ``compute:kuryr:sriov`` as the device
   owner and the 'direct' vnic_type. Verify that the IP and MAC addresses of
   the port match the ones on the container. Currently the
   neutron-sriov-nic-agent does not properly detect SR-IOV ports assigned to
   containers. This means that direct ports in neutron will always remain in
   *DOWN* state. This doesn't affect the feature in any way other than
   cosmetically.

.. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc

View File

@@ -1,291 +0,0 @@
..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

   Convention for heading levels in Neutron devref:
   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ~~~~~~~  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4
   (Avoid deeper levels because they do not render well.)


Kuryr Kubernetes SR-IOV Integration
===================================

https://blueprints.launchpad.net/kuryr-kubernetes/+spec/kuryr-kubernetes-sriov-support

This spec proposes an approach to allow kuryr-kubernetes manage pods that
require SR-IOV interfaces.
Problem Description
-------------------

SR-IOV (Single-root input/output virtualization) is a technique that allows a
single physical PCIe device to be shared across several clients (VMs or
otherwise). Each such network card would have a single PF (physical function)
and multiple VFs (Virtual Functions), essentially appearing as multiple PCIe
devices. These VFs can then be passed through to VMs, bypassing the
hypervisor and virtual switch. This allows performance comparable to
non-virtualized environments. SR-IOV support is present in nova and neutron,
see docs [#]_.

It is possible to implement a similar approach within Kubernetes. Since
Kubernetes uses separate network namespaces for Pods, it is possible to
implement pass-through simply by assigning a VF device to the desired Pod's
namespace.

There are several challenges that this task poses:

* SR-IOV interfaces are limited and not every Pod would require them. This
  means that a Pod should be able to request 0 (zero) or more VFs. Since not
  all Pods will require VFs, these interfaces should be optional.

* For SR-IOV support to be practical the Pods should be able to request
  multiple VFs, possibly from multiple PFs. It's important to note that
  Kubernetes only stores information about a single IP address per Pod,
  however it does not restrict configuring additional network interfaces
  and/or IP addresses for it.

* Different PFs may map to different neutron physical networks (physnets).
  Pods need to be able to request VFs from a specific physnet, and physnet
  information (vlan id, specifically) should be passed to the CNI for
  configuration.

* Kubernetes does not have any knowledge about the SR-IOV interfaces on the
  Node it runs on. This can be mitigated by utilising the Opaque Integer
  Resources [#2d]_ feature from the 1.5.x and later series.

* This feature would be limited to bare metal installations of Kubernetes,
  since it's currently impossible to manage VFs of a PF inside a VM. (There
  is work to allow this in newer kernels, but latest stable kernels do not
  support it yet.)
Proposed Change
---------------

The proposed solution consists of two major parts: add SR-IOV capabilities
to the VIF handler of the kuryr-kubernetes controller and enhance the CNI to
allow it to associate VFs to Pods.

Pod scheduling and resource management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since Kubernetes is the one who actually schedules the Pods on a Node, we
need a way to tell it that a particular node is capable of handling
SR-IOV-enabled Pods. There are several techniques in Kubernetes that allow
limiting where a pod should be scheduled (i.e. Labels and NodeSelectors,
Taints and Tolerations), but only Opaque Integer Resources [#2d]_ (OIR)
allow exact bookkeeping of VFs. This spec proposes to use a predefined OIR
pattern to track VFs on a node::

    pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-<PHYSNET_NAME>

For example to request VFs for ``physnet2`` it would be::

    pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2

It will be the deployer's duty to set these resources during node setup.
``kubectl`` does not support setting OIR as of yet, so it has to be done as
a PATCH request to the Kubernetes API. For example to add 7 VFs from
``physnet2`` to ``k8s-node-1`` one would issue the following request::

    curl --header "Content-Type: application/json-patch+json" \
         --request PATCH \
         --data '[{"op": "add", "path":
                   "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-sriov-vf-physnet2",
                   "value": "7"}]' \
         http://k8s-master:8080/api/v1/nodes/k8s-node-1/status

For more information please refer to the OIR docs. [#2d]_

This process may be automated using Node Feature Discovery [#]_
or a similar service, however these details are out of the scope of this
spec.
Here's how a Pod Spec with such requests might look:

.. code-block:: yaml

    spec:
      containers:
      - name: vf-container
        image: vf-image
        resources:
          requests:
            pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1
      - name: vf-other-container
        image: vf-other-image
        resources:
          requests:
            pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet2: 1
            pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-physnet3: 1

These requests are per-container and the total amount of VFs should be
totalled for the Pod, the same way Kubernetes does it. The example above
would require 2 VFs from ``physnet2`` and 1 from ``physnet3``.

An important note should be made about kubernetes Init Containers [#]_. If
we decide that it is important to support requests from Init Containers,
they would have to be treated differently. Init Containers are designed to
run sequentially, so we would need to scan them and get the maximum request
value across all of them, as the sketch below illustrates.
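
The following sketch (illustrative, not part of the original spec) shows the
bookkeeping described above, assuming Pod manifests in Kubernetes API dict
form: VF requests are summed across regular containers, while init
containers, which run sequentially, only contribute their maximum.

.. code-block:: python

    OIR_PREFIX = 'pod.alpha.kubernetes.io/opaque-int-resource-sriov-vf-'


    def vf_requests(pod_spec):
        """Total requested VFs per physnet for a Pod spec dict."""
        totals = {}

        def oir_items(container):
            requests = container.get('resources', {}).get('requests', {})
            for key, value in requests.items():
                if key.startswith(OIR_PREFIX):
                    yield key[len(OIR_PREFIX):], int(value)

        for container in pod_spec.get('containers', []):
            for physnet, count in oir_items(container):
                totals[physnet] = totals.get(physnet, 0) + count

        # Init containers run one at a time, so only their maximum counts.
        for container in pod_spec.get('initContainers', []):
            for physnet, count in oir_items(container):
                totals[physnet] = max(totals.get(physnet, 0), count)

        return totals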
Requesting SR-IOV ports
~~~~~~~~~~~~~~~~~~~~~~~

To implement SR-IOV capabilities the current VIF handler will be modified to
handle multiple VIFs. As a prerequisite of this the following changes have
to be implemented:

Multi-VIF capabilities of generic handler
+++++++++++++++++++++++++++++++++++++++++

Instead of storing a single VIF in the annotation, the VIFHandler would
store a dict that maps the desired interface name to a VIF object. As an
alternative we could store VIFs in a list, but a dict gives finer control
over interface naming. Both the handler and the CNI would have to be
modified to understand this new format of the annotation. The CNI may also
be kept backward-compatible, i.e. understand the old single-VIF format.

Even though this functionality is not a part of SR-IOV handling, it acts as
a prerequisite and would be implemented as part of this spec.
SR-IOV capabilities of generic handler
++++++++++++++++++++++++++++++++++++++

The handler would read the OIR requests of a scheduled Pod and see if the
Pod has requested any SR-IOV VFs. (NOTE: at this point the Pod should
already be scheduled to a node, meaning there are enough available VFs on
that node.) The handler would ask the SR-IOV driver for a sufficient number
of ``direct`` ports from neutron and pass them on to the CNI via
annotations. Network information should also include the network's VLAN
info, to set up the VF VLAN.

SR-IOV functionality requires additional knowledge of neutron subnets. The
controller needs to know the subnet where it would allocate direct ports
for a certain physnet. This can be solved by adding a config setting that
maps physnets to a default neutron subnet. It might look like this:

.. code-block:: ini

    default_physnet_subnets = "physnet2:e603a1cc-57e5-40fe-9af1-9fbb30905b10,physnet3:0919e15a-b619-440c-a07e-bb5a28c11a75"

Alternatively we can request this information from neutron. However, since
there can be multiple networks within a single physnet and multiple subnets
within a single network, there is a lot of space for ambiguity. Finally we
can combine the approaches: request info from neutron only if it's not set
in the config, as sketched below.
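
A rough sketch of that combined lookup (illustrative only;
``neutron_lookup`` is a hypothetical helper, not an existing kuryr or
neutron API):

.. code-block:: python

    def subnet_for_physnet(physnet, configured_mapping, neutron_lookup):
        """Prefer the static mapping, fall back to a Neutron query."""
        subnet_id = configured_mapping.get(physnet)
        if subnet_id is None:
            # Ambiguous territory: several networks/subnets may live in one
            # physnet, so a fallback query needs a tie-breaking policy.
            subnet_id = neutron_lookup(physnet)
        return subnet_id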
Kuryr-cni
~~~~~~~~~

On the CNI side we will implement a CNI binding driver for SR-IOV ports.
Since this work will be based on top of multi-vif support for both CNI and
controller, no additional format changes would be implemented.

The driver would configure the VF and pass it to the Pod's namespace. It
would scan the ``/sys/class/net/<PF>/device`` directory for available
virtual functions and pass the acquired device to the Pod's namespace (see
the sysfs sketch below). The driver would need to know which devices map to
which physnets. Therefore we would introduce a config setting
``physical_device_mappings``, identical to neutron-sriov-nic-agent's
setting. It might look like:

.. code-block:: ini

    physical_device_mappings = "physnet2:enp1s0f0,physnet3:enp1s0f1"

As an alternative to storing this setting in ``kuryr.conf`` we may store it
in the ``/etc/cni/net.d/kuryr.conf`` file or in a kubernetes node
annotation.
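
For illustration (not the spec's implementation), enumerating a PF's VFs
via the sysfs layout the driver would scan:

.. code-block:: python

    import os


    def list_vfs(pf):
        """Return (virtfnN, PCI address) pairs for a physical function."""
        device_dir = '/sys/class/net/{}/device'.format(pf)
        vfs = []
        for entry in sorted(os.listdir(device_dir)):
            if entry.startswith('virtfn'):
                # Each virtfnN entry is a symlink to the VF's PCI device.
                pci = os.path.basename(
                    os.readlink(os.path.join(device_dir, entry)))
                vfs.append((entry, pci))
        return vfs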
Caveats
~~~~~~~

* The current implementation does not concern itself with setting the active
  status of the Port on the neutron side. It is not required for the feature
  to function properly, but may be undesired from an operator's standpoint.
  Doing so may require some additional integration with
  neutron-sriov-nic-agent and verification. There is a concern that
  neutron-sriov-nic-agent does not detect port status correctly all the
  time.

Optional 2-Phase Approach
~~~~~~~~~~~~~~~~~~~~~~~~~

The initial implementation followed an alternative path, where SR-IOV
functionality was implemented as a separate handler/CNI. This sparked
several design discussions, where the community agreed that a multi-VIF
handler is preferred over the multi-handler approach. However, if
implementing the multi-vif handler proves to be lengthy and difficult we
may go with a 2-phase approach. First phase: polish and merge the initial
implementation. Second phase: implement the multi-vif approach and convert
the sriov-handler to use it.
Alternatives
~~~~~~~~~~~~

* It is possible to implement SR-IOV functionality as a separate handler.
  In this scenario both handlers would listen to Pod events and would handle
  them separately. They would have to use different annotation keys inside
  the Pod object. The CNI would have to be able to handle both annotation
  keys.

* Since this feature is only practical for bare metal, we could implement it
  entirely on the CNI side (i.e. the CNI would request ports from neutron).
  However this would introduce an alternative control flow.

* It is also possible to implement a separate CNI that would use static
  configuration, compatible with neutron's, much like [#]_. This would
  eliminate the need to talk to neutron at all, but would put the burden of
  configuring network information on multiple nodes on the deployer. This
  may however be desirable for some installations and may be considered as
  an option. At the same time, in this scenario there would be little to no
  code shared between this CNI and regular kuryr-kubernetes. In this case it
  feels like the code would be more suited to a separate project than
  kuryr-kubernetes.

* As an alternative we may implement a separate kuryr-sriov-cni that would
  only handle SR-IOV requests. This would allow a more granular approach and
  would decouple SR-IOV functionality from the main code. Implementing a
  kuryr-sriov-cni would mean, however, that operators would need to pick one
  of the implementations (kuryr-cni vs kuryr-sriov-cni) or use something
  like multus-cni [#]_ or CNI-Genie [#]_ to allow them to work together.
Assignee(s)
~~~~~~~~~~~

Primary assignee:
  Zaitsev Kirill

Work Items
~~~~~~~~~~

* Implement Multi-VIF handler/CNI
* Implement SR-IOV capabilities
* Implement CNI SR-IOV handler
* Active state monitoring for kuryr-sriov direct ports
* Document deployment procedure for kuryr-sriov support

Possible Further Work
~~~~~~~~~~~~~~~~~~~~~

* It may be desirable to be able to request specific ports from a neutron
  subnet in the Pod Spec. This functionality may be extended to normal
  VIFs, beyond the SR-IOV handler.
* It may be desirable to add an option to assign network info to VFs
  statically.
References
----------
.. [#] https://docs.openstack.org/ocata/networking-guide/config-sriov.html
.. [#2d] https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#opaque-integer-resources-alpha-feature
.. [#] https://github.com/kubernetes-incubator/node-feature-discovery
.. [#] https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
.. [#] https://github.com/hustcat/sriov-cni
.. [#] https://github.com/Intel-Corp/multus-cni
.. [#] https://github.com/Huawei-PaaS/CNI-Genie

View File

@@ -29,13 +29,11 @@ from openstack import utils as os_utils

 from kuryr_kubernetes import config
 from kuryr_kubernetes import k8s_client
-from kuryr_kubernetes.pod_resources import client as pr_client

 _clients = {}
 _NEUTRON_CLIENT = 'neutron-client'
 _KUBERNETES_CLIENT = 'kubernetes-client'
 _OPENSTACKSDK = 'openstacksdk'
-_POD_RESOURCES_CLIENT = 'pod-resources-client'


 def get_network_client():
@@ -50,10 +48,6 @@ def get_kubernetes_client() -> k8s_client.K8sClient:
     return _clients[_KUBERNETES_CLIENT]


-def get_pod_resources_client():
-    return _clients[_POD_RESOURCES_CLIENT]
-
-
 def get_compute_client():
     return _clients[_OPENSTACKSDK].compute

@@ -189,8 +183,3 @@ def setup_openstacksdk():
     conn.network.delete_trunk_subports = partial(_delete_trunk_subports,
                                                  conn.network)
     _clients[_OPENSTACKSDK] = conn
-
-
-def setup_pod_resources_client():
-    root_dir = config.CONF.sriov.kubelet_root_dir
-    _clients[_POD_RESOURCES_CLIENT] = pr_client.PodResourcesClient(root_dir)

View File

@@ -16,7 +16,6 @@
 CLI interface for kuryr status commands.
 """

-import copy
 import sys
 import textwrap
 import traceback
@@ -31,10 +30,7 @@ from oslo_serialization import jsonutils
 from kuryr_kubernetes import clients
 from kuryr_kubernetes import config
 from kuryr_kubernetes import constants
-from kuryr_kubernetes import exceptions
 from kuryr_kubernetes import objects
-from kuryr_kubernetes.objects import vif
-from kuryr_kubernetes import utils
 from kuryr_kubernetes import version

 CONF = config.CONF
@@ -108,8 +104,6 @@ class UpgradeCommands(object):
                 if obj.obj_name() != objects.vif.PodState.obj_name():
                     old_count += 1
-                elif not self._has_valid_sriov_annot(obj):
-                    old_count += 1

         if malformed_count == 0 and old_count == 0:
             return UpgradeCheckResult(0, 'All annotations are updated.')
@@ -149,96 +143,11 @@ class UpgradeCommands(object):

         return max(res.code for res in check_results)

-    def _convert_annotations(self, test_fn, update_fn):
-        updated_count = 0
-        not_updated_count = 0
-        malformed_count = 0
-        pods = self.k8s.get('/api/v1/pods')['items']
-        for pod in pods:
-            try:
-                obj = self._get_annotation(pod)
-                if not obj:
-                    # NOTE(dulek): We ignore pods without annotation, those
-                    #              probably are hostNetworking.
-                    continue
-            except Exception:
-                malformed_count += 1
-                continue
-
-            if test_fn(obj):
-                obj = update_fn(obj)
-                serialized = obj.obj_to_primitive()
-
-                try:
-                    ann = {
-                        constants.K8S_ANNOTATION_VIF:
-                            jsonutils.dumps(serialized)
-                    }
-                    self.k8s.annotate(
-                        utils.get_res_link(pod), ann,
-                        pod['metadata']['resourceVersion'])
-                except exceptions.K8sClientException:
-                    print('Error when updating annotation for pod %s/%s' %
-                          (pod['metadata']['namespace'],
-                           pod['metadata']['name']))
-                    not_updated_count += 1
-
-                updated_count += 1
-
-        t = prettytable.PrettyTable(['Stat', 'Number'],
-                                    hrules=prettytable.ALL)
-        t.align = 'l'
-        cells = [['Updated annotations', updated_count],
-                 ['Malformed annotations', malformed_count],
-                 ['Annotations left', not_updated_count]]
-        for cell in cells:
-            t.add_row(cell)
-        print(t)
-
-    def _has_valid_sriov_annot(self, state):
-        for obj in state.vifs.values():
-            if obj.obj_name() != objects.vif.VIFSriov.obj_name():
-                continue
-            if hasattr(obj, 'pod_name') and hasattr(obj, 'pod_link'):
-                continue
-            return False
-        return True
-
-    def _convert_sriov(self, state):
-        new_state = copy.deepcopy(state)
-        for iface, obj in new_state.additional_vifs.items():
-            if obj.obj_name() != objects.vif.VIFSriov.obj_name():
-                continue
-            if hasattr(obj, 'pod_name') and hasattr(obj, 'pod_link'):
-                continue
-            new_obj = objects.vif.VIFSriov()
-            new_obj.__dict__ = obj.__dict__.copy()
-            new_state.additional_vifs[iface] = new_obj
-        return new_state
-
     def update_annotations(self):
-        def test_fn(obj):
-            return (obj.obj_name() != objects.vif.PodState.obj_name() or
-                    not self._has_valid_sriov_annot(obj))
-
-        def update_fn(obj):
-            if obj.obj_name() != objects.vif.PodState.obj_name():
-                return vif.PodState(default_vif=obj)
-            return self._convert_sriov(obj)
-
-        self._convert_annotations(test_fn, update_fn)
+        pass

     def downgrade_annotations(self):
-        # NOTE(danil): There is no need to downgrade sriov vifs
-        # when annotations has old format. After downgrade annotations
-        # will have only one default vif and it could not be sriov vif
-        def test_fn(obj):
-            return obj.obj_name() == objects.vif.PodState.obj_name()
-
-        def update_fn(obj):
-            return obj.default_vif
-
-        self._convert_annotations(test_fn, update_fn)
+        pass


 def print_version():

View File

@@ -24,8 +24,6 @@ from pyroute2 import netns as pyroute_netns
 from stevedore import driver as stv_driver

 from kuryr_kubernetes.cni import utils as cni_utils
-from kuryr_kubernetes import config
-from kuryr_kubernetes import constants
 from kuryr_kubernetes import utils

 _BINDING_NAMESPACE = 'kuryr_kubernetes.cni.binding'
@@ -130,25 +128,6 @@ def _need_configure_l3(vif):
             return vif.port_profile.l3_setup
         # NOTE(danil): by default kuryr-kubernetes has to setup l3
         return True
-    # NOTE(danil): sriov vif. Figure out what driver should compute it
-    physnet = vif.physnet
-    mapping_res = config.CONF.sriov.physnet_resource_mappings
-    try:
-        resource = mapping_res[physnet]
-    except KeyError:
-        LOG.exception("No resource name for physnet %s", physnet)
-        raise
-    mapping_driver = config.CONF.sriov.resource_driver_mappings
-    try:
-        driver_name = mapping_driver[resource]
-    except KeyError:
-        LOG.exception("No driver for resource_name %s", resource)
-        raise
-    if driver_name in constants.USERSPACE_DRIVERS:
-        LOG.info("_configure_l3 will not be called for vif %s "
-                 "because of it's driver", vif)
-        return False
-    # NOTE(danil): sriov vif computed by kernel driver
     return True

View File

@ -1,386 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from kuryr.lib._i18n import _
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
import pyroute2
from kuryr_kubernetes import clients
from kuryr_kubernetes.cni.binding import base as b_base
from kuryr_kubernetes import config
from kuryr_kubernetes import constants
from kuryr_kubernetes import exceptions
from kuryr_kubernetes.handlers import health
from kuryr_kubernetes import utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class VIFSriovDriver(health.HealthHandler, b_base.BaseBindingDriver):
def __init__(self):
super().__init__()
self._lock = None
def release_lock_object(func):
def wrapped(self, *args, **kwargs):
try:
return func(self, *args, **kwargs)
finally:
if self._lock and self._lock.acquired:
self._lock.release()
return wrapped
@release_lock_object
def connect(self, vif, ifname, netns, container_id):
pci_info = self._process_vif(vif, ifname, netns)
if config.CONF.sriov.enable_node_annotations:
self._save_pci_info(vif.id, pci_info)
def disconnect(self, vif, ifname, netns, container_id):
# NOTE(k.zaitsev): when netns is deleted the interface is
# returned automatically to host netns. We may reset
# it to all-zero state
self._return_device_driver(vif)
if config.CONF.sriov.enable_node_annotations:
self._remove_pci_info(vif.id)
def _process_vif(self, vif, ifname, netns):
pr_client = clients.get_pod_resources_client()
pod_resources_list = pr_client.list()
resources = pod_resources_list.pod_resources
resource_name = self._get_resource_by_physnet(vif.physnet)
driver = self._get_driver_by_res(resource_name)
resource = self._make_resource(resource_name)
LOG.debug("Vif %s will correspond to pci device belonging to "
"resource %s", vif, resource)
pod_devices = self._get_pod_devices(vif.pod_link)
pod_resource = None
container_devices = None
for res in resources:
if res.name == vif.pod_name:
pod_resource = res
break
if not pod_resource:
raise exceptions.CNIError(
"No resources are discovered for pod {}".format(vif.pod_name))
LOG.debug("Looking for PCI device used by kubelet service and not "
"used by pod %s yet ...", vif.pod_name)
for container in pod_resource.containers:
try:
container_devices = container.devices
except Exception:
LOG.warning("No devices in container %s", container.name)
continue
for dev in container_devices:
if dev.resource_name != resource:
continue
for pci in dev.device_ids:
if pci in pod_devices:
continue
LOG.debug("Appropriate PCI device %s is found", pci)
pci_info = self._compute_pci(pci, driver, vif.pod_link,
vif, ifname, netns)
return pci_info
def _get_resource_by_physnet(self, physnet):
mapping = config.CONF.sriov.physnet_resource_mappings
try:
resource_name = mapping[physnet]
except KeyError:
LOG.exception("No resource name for physnet %s", physnet)
raise
return resource_name
def _make_resource(self, res_name):
res_prefix = config.CONF.sriov.device_plugin_resource_prefix
return res_prefix + '/' + res_name
def _get_driver_by_res(self, resource_name):
mapping = config.CONF.sriov.resource_driver_mappings
try:
driver = mapping[resource_name]
except KeyError:
LOG.exception("No driver for resource_name %s", resource_name)
raise
return driver
def _compute_pci(self, pci, driver, pod_link, vif, ifname, netns):
vf_name, vf_index, pf, pci_info = self._get_vf_info(pci, driver)
pci_info['physical_network'] = vif.physnet
if driver in constants.USERSPACE_DRIVERS:
LOG.info("PCI device %s will be rebinded to userspace network "
"driver %s", pci, driver)
if vf_index and pf:
self._set_vf_mac(pf, vf_index, vif.address)
if vif.network.should_provide_vlan:
vlan_id = vif.network.vlan
self._set_vf_vlan(pf, vf_index, vlan_id)
old_driver = self._bind_device(pci, driver)
else:
LOG.info("PCI device %s will be moved to container's net ns %s",
pci, netns)
self._move_to_netns(ifname, netns, vif, vf_name, vf_index, pf)
old_driver = driver
self._annotate_device(pod_link, pci, old_driver, driver, vif.id)
return pci_info
def _move_to_netns(self, ifname, netns, vif, vf_name, vf_index, pf):
if vf_index and pf:
if vif.network.should_provide_vlan:
vlan_id = vif.network.vlan
self._set_vf_vlan(pf, vf_index, vlan_id)
self._set_vf_mac(pf, vf_index, vif.address)
with b_base.get_ipdb() as h_ipdb, b_base.get_ipdb(netns) as c_ipdb:
with h_ipdb.interfaces[vf_name] as host_iface:
host_iface.net_ns_fd = utils.convert_netns(netns)
with c_ipdb.interfaces[vf_name] as iface:
iface.ifname = ifname
iface.mtu = vif.network.mtu
iface.up()
def _get_vf_info(self, pci, driver):
vf_sys_path = '/sys/bus/pci/devices/{}/net/'.format(pci)
if not os.path.exists(vf_sys_path):
if driver not in constants.USERSPACE_DRIVERS:
raise OSError(_("No vf name for device {}").format(pci))
vf_name = None
else:
vf_names = os.listdir(vf_sys_path)
vf_name = vf_names[0]
pfysfn_path = '/sys/bus/pci/devices/{}/physfn/net/'.format(pci)
# If physical function is not specified in VF's directory then
# this VF belongs to current VM node
if not os.path.exists(pfysfn_path):
LOG.info("Current device %s is a virtual function which is "
"passed into VM. Getting it's pci info", vf_name)
pci_info = self._get_vf_pci_info(pci)
return vf_name, None, None, pci_info
pf_names = os.listdir(pfysfn_path)
pf_name = pf_names[0]
nvfs = self._get_total_vfs(pf_name)
pf_sys_path = '/sys/class/net/{}/device'.format(pf_name)
for vf_index in range(nvfs):
virtfn_path = os.path.join(pf_sys_path,
'virtfn{}'.format(vf_index))
vf_pci = os.path.basename(os.readlink(virtfn_path))
if vf_pci == pci:
pci_info = self._get_pci_info(pf_name, vf_index)
return vf_name, vf_index, pf_name, pci_info
return None, None, None, None
def _get_vf_pci_info(self, pci):
vendor_path = '/sys/bus/pci/devices/{}/vendor'.format(pci)
with open(vendor_path) as vendor_file:
# vendor_full contains a hex value (e.g. 0x8086)
vendor_full = vendor_file.read()
vendor = vendor_full.split('x')[1].strip()
device_path = '/sys/bus/pci/devices/{}/device'.format(pci)
LOG.info("Full path to device which is being processed",
device_path)
with open(device_path) as device_file:
# device_full contains a hex value (e.g. 0x1520)
device_full = device_file.read()
device = device_full.split('x')[1].strip()
pci_vendor_info = '{}:{}'.format(vendor, device)
return {'pci_slot': pci,
'pci_vendor_info': pci_vendor_info}
def _bind_device(self, pci, driver, old_driver=None):
if not old_driver:
old_driver_path = '/sys/bus/pci/devices/{}/driver'.format(pci)
old_driver_link = os.readlink(old_driver_path)
old_driver = os.path.basename(old_driver_link)
if old_driver not in constants.MELLANOX_DRIVERS:
unbind_path = '/sys/bus/pci/drivers/{}/unbind'.format(old_driver)
bind_path = '/sys/bus/pci/drivers/{}/bind'.format(driver)
override = "/sys/bus/pci/devices/{}/driver_override".format(pci)
with open(unbind_path, 'w') as unbind_fd:
unbind_fd.write(pci)
with open(override, 'w') as override_fd:
override_fd.write("\00")
with open(override, 'w') as override_fd:
override_fd.write(driver)
with open(bind_path, 'w') as bind_fd:
bind_fd.write(pci)
LOG.info("Device %s was binded on driver %s. Old driver is %s",
pci, driver, old_driver)
return old_driver
def _annotate_device(self, pod_link, pci, old_driver, new_driver, port_id):
k8s = clients.get_kubernetes_client()
pod_devices = self._get_pod_devices(pod_link)
pod_devices[pci] = {
constants.K8S_ANNOTATION_OLD_DRIVER: old_driver,
constants.K8S_ANNOTATION_CURRENT_DRIVER: new_driver,
constants.K8S_ANNOTATION_NEUTRON_PORT: port_id
}
pod_devices = jsonutils.dumps(pod_devices)
LOG.debug("Trying to annotate pod %s with pci %s, old driver %s "
"and new driver %s", pod_link, pci, old_driver, new_driver)
k8s.annotate(pod_link,
{constants.K8S_ANNOTATION_PCI_DEVICES: pod_devices})
def _get_pod_devices(self, pod_link):
k8s = clients.get_kubernetes_client()
pod = k8s.get(pod_link)
annotations = pod['metadata']['annotations']
try:
json_devices = annotations[constants.K8S_ANNOTATION_PCI_DEVICES]
devices = jsonutils.loads(json_devices)
except KeyError:
devices = {}
except Exception as ex:
LOG.exception("Exception while getting annotations: %s", ex)
LOG.debug("Pod %s has devices %s", pod_link, devices)
return devices
def _return_device_driver(self, vif):
if not hasattr(vif, 'pod_link'):
return
pod_devices = self._get_pod_devices(vif.pod_link)
for pci, info in pod_devices.items():
if info[constants.K8S_ANNOTATION_NEUTRON_PORT] == vif.id:
if (info[constants.K8S_ANNOTATION_OLD_DRIVER] !=
info[constants.K8S_ANNOTATION_CURRENT_DRIVER]):
LOG.debug("Driver of device %s should be changed back",
pci)
self._bind_device(
pci,
info[constants.K8S_ANNOTATION_OLD_DRIVER],
info[constants.K8S_ANNOTATION_CURRENT_DRIVER]
)
def _get_pci_info(self, pf, vf_index):
vendor_path = '/sys/class/net/{}/device/virtfn{}/vendor'.format(
pf, vf_index)
with open(vendor_path) as vendor_file:
vendor_full = vendor_file.read()
vendor = vendor_full.split('x')[1].strip()
device_path = '/sys/class/net/{}/device/virtfn{}/device'.format(
pf, vf_index)
with open(device_path) as device_file:
device_full = device_file.read()
device = device_full.split('x')[1].strip()
pci_vendor_info = '{}:{}'.format(vendor, device)
vf_path = '/sys/class/net/{}/device/virtfn{}'.format(
pf, vf_index)
pci_slot_path = os.readlink(vf_path)
pci_slot = pci_slot_path.split('/')[1]
return {'pci_slot': pci_slot,
'pci_vendor_info': pci_vendor_info}
def _save_pci_info(self, neutron_port, port_pci_info):
k8s = clients.get_kubernetes_client()
annot_name = self._make_annotation_name(neutron_port)
nodename = utils.get_node_name()
LOG.info("Trying to annotate node %s with pci info %s",
nodename, port_pci_info)
k8s.patch_node_annotations(nodename, annot_name, port_pci_info)
def _remove_pci_info(self, neutron_port):
k8s = clients.get_kubernetes_client()
annot_name = self._make_annotation_name(neutron_port)
nodename = utils.get_node_name()
LOG.info("Trying to delete pci info for port %s on node %s",
neutron_port, nodename)
k8s.remove_node_annotations(nodename, annot_name)
def _make_annotation_name(self, neutron_port):
annot_name = constants.K8S_ANNOTATION_NODE_PCI_DEVICE_INFO
annot_name = annot_name.replace('/', '~1')
annot_name = annot_name + '-' + neutron_port
return annot_name
def _acquire(self, path):
if self._lock and self._lock.acquired:
raise RuntimeError(_("Attempting to lock {} when {} "
"is already locked.").format(path, self._lock))
self._lock = lockutils.InterProcessLock(path=path)
return self._lock.acquire()
def _release(self):
if not self._lock:
raise RuntimeError(_("Attempting release an empty lock"))
return self._lock.release()
def _get_total_vfs(self, pf):
"""Read /sys information for configured number of VFs of a PF"""
pf_sys_path = '/sys/class/net/{}/device'.format(pf)
total_fname = os.path.join(pf_sys_path, 'sriov_numvfs')
try:
with open(total_fname) as total_f:
data = total_f.read()
except IOError:
LOG.warning("Could not open %s. No VFs for %s", total_fname, pf)
return 0
nvfs = 0
try:
nvfs = int(data.strip())
except ValueError:
LOG.warning("Could not parse %s from %s. No VFs for %s", data,
total_fname, pf)
return 0
LOG.debug("PF %s has %s VFs", pf, nvfs)
return nvfs
def _set_vf_mac(self, pf, vf_index, mac):
LOG.debug("Setting VF MAC: pf = %s, vf_index = %s, mac = %s",
pf, vf_index, mac)
ip = pyroute2.IPRoute()
pf_index = ip.link_lookup(ifname=pf)[0]
try:
ip.link("set", index=pf_index, vf={"vf": vf_index, "mac": mac})
except pyroute2.NetlinkError:
LOG.exception("Unable to set mac for VF %s on pf %s",
vf_index, pf)
raise
def _set_vf_vlan(self, pf, vf_index, vlan_id):
LOG.debug("Setting VF VLAN: pf = %s, vf_index = %s, vlan_id = %s",
pf, vf_index, vlan_id)
ip = pyroute2.IPRoute()
pf_index = ip.link_lookup(ifname=pf)[0]
try:
ip.link("set", index=pf_index, vf={"vf": vf_index,
"vlan": vlan_id})
except pyroute2.NetlinkError:
LOG.exception("Unable to set vlan for VF %s on pf %s",
vf_index, pf)
raise

View File

@@ -302,8 +302,6 @@ class CNIDaemonServiceManager(cotyledon.ServiceManager):
         os_vif.initialize()
         clients.setup_kubernetes_client()
-        if CONF.sriov.enable_pod_resource_service:
-            clients.setup_pod_resources_client()

         self.manager = multiprocessing.Manager()
         registry = self.manager.dict()  # For Watcher->Server communication.

View File

@@ -17,7 +17,6 @@ from kuryr.lib import config as lib_config
 from oslo_config import cfg
 from oslo_log import log as logging

-from kuryr_kubernetes import constants
 from kuryr_kubernetes import version

 LOG = logging.getLogger(__name__)
@@ -300,45 +299,6 @@ nested_vif_driver_opts = [
                default=3),
 ]

-DEFAULT_PHYSNET_SUBNET_MAPPINGS = {}
-DEFAULT_DEVICE_MAPPINGS = []
-sriov_opts = [
-    cfg.StrOpt('kubelet_root_dir',
-               help=_("The root directory of the Kubelet daemon"),
-               default='/var/lib/kubelet'),
-    cfg.BoolOpt('enable_pod_resource_service',
-                help=_("Enable PodResources service"),
-                default=False),
-    cfg.DictOpt('default_physnet_subnets',
-                help=_("A mapping of default subnets for certain physnets "
-                       "in a form of physnet-name:<SUBNET-ID>"),
-                default=DEFAULT_PHYSNET_SUBNET_MAPPINGS),
-    cfg.DictOpt('physnet_resource_mappings',
-                help=_("A mapping of physnets for certain sriov dp "
-                       "resource name in a form of "
-                       "physnet-name:resource name. "
-                       "Resource name is listed in sriov device plugin "
-                       "configuation file."),
-                default=DEFAULT_PHYSNET_SUBNET_MAPPINGS),
-    cfg.StrOpt('device_plugin_resource_prefix',
-               help=_("This prefix is used by sriov-network-device-plugin "
-                      "It concatenates with resource suffix defined in "
-                      "sriov device plugin configuration file."),
-               default=constants.K8S_SRIOV_PREFIX),
-    cfg.DictOpt('resource_driver_mappings',
-                help=_("A mappping driver names for certain resource "
-                       "names. Expected that device of VIF related to "
-                       "exact physnet should be binded on specified driver."),
-                default=DEFAULT_PHYSNET_SUBNET_MAPPINGS),
-    cfg.BoolOpt('enable_node_annotations',
-                help=_("Enable node annotations. This option allows to "
-                       "set annotations required by neutron to set active "
-                       "state of ports. This option is useless when "
-                       "sriov-nic-agent is not running on node."),
-                default=False),
-]
-
 vhostuser = [
     cfg.StrOpt('mount_point',
                help=_("Path where vhost-user port will be created "
@@ -367,7 +327,6 @@ CONF.register_opts(neutron_defaults, group='neutron_defaults')
 CONF.register_opts(octavia_defaults, group='octavia_defaults')
 CONF.register_opts(cache_defaults, group='cache_defaults')
 CONF.register_opts(nested_vif_driver_opts, group='pod_vif_nested')
-CONF.register_opts(sriov_opts, group='sriov')
 CONF.register_opts(vhostuser, group='vhostuser')
 CONF.register_opts(prometheus_exporter_opts, "prometheus_exporter")

View File

@@ -68,12 +68,6 @@ K8S_ANNOTATION_NPWG_NETWORK = K8S_ANNOTATION_NPWG_PREFIX + '/networks'
 K8S_ANNOTATION_NPWG_CRD_SUBNET_ID = 'subnetId'
 K8S_ANNOTATION_NPWG_CRD_DRIVER_TYPE = 'driverType'

-K8S_ANNOTATION_NODE_PCI_DEVICE_INFO = 'openstack.org/kuryr-pci-info'
-K8S_ANNOTATION_PCI_DEVICES = K8S_ANNOTATION_PREFIX + '-pci-devices'
-K8S_ANNOTATION_OLD_DRIVER = 'old_driver'
-K8S_ANNOTATION_CURRENT_DRIVER = 'current_driver'
-K8S_ANNOTATION_NEUTRON_PORT = 'neutron_id'
-
 K8S_ANNOTATION_HEADLESS_SERVICE = 'service.kubernetes.io/headless'
 K8S_ANNOTATION_CONFIG_SOURCE = 'kubernetes.io/config.source'
@@ -93,7 +87,6 @@ CNI_TIMEOUT_CODE = 200
 CNI_DELETED_POD_SENTINEL = None

 KURYR_PORT_NAME = 'kuryr-pool-port'
-KURYR_VIF_TYPE_SRIOV = 'sriov'

 OCTAVIA_L2_MEMBER_MODE = "L2"
 OCTAVIA_L3_MEMBER_MODE = "L3"
@@ -110,14 +103,9 @@ VIF_POOL_SHOW = '/showPool'

 DEFAULT_IFNAME = 'eth0'

-K8S_SRIOV_PREFIX = 'intel.com'
-
 K8S_OPERATOR_IN = 'in'
 K8S_OPERATOR_NOT_IN = 'notin'
 K8S_OPERATOR_DOES_NOT_EXIST = 'doesnotexist'
 K8S_OPERATOR_EXISTS = 'exists'

-USERSPACE_DRIVERS = ['vfio-pci', 'uio', 'uio_pci_generic', 'igb_uio']
-MELLANOX_DRIVERS = ['mlx4_core', 'mlx5_core']
-
 LEFTOVER_RM_POOL_SIZE = 5

View File

@@ -1,190 +0,0 @@
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from kuryr.lib import constants as kl_const
from oslo_config import cfg
from oslo_log import log as logging

from kuryr_kubernetes import clients
from kuryr_kubernetes import config
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import neutron_vif
from kuryr_kubernetes.controller.drivers import utils as c_utils
from kuryr_kubernetes import os_vif_util as ovu
from kuryr_kubernetes import utils

LOG = logging.getLogger(__name__)
CONF = cfg.CONF


def sriov_make_resource(prefix, res_name):
    return prefix + "/" + res_name


class SriovVIFDriver(neutron_vif.NeutronPodVIFDriver):
    """Provides VIFs for SRIOV VF interfaces."""

    ALIAS = 'sriov_pod_vif'

    def __init__(self):
        self._physnet_subnet_mapping = self._get_physnet_subnet_mapping()
        self._physnet_resname_mapping = self._get_physnet_resname_mapping()
        self._res_prefix = config.CONF.sriov.device_plugin_resource_prefix

    def request_vif(self, pod, project_id, subnets, security_groups):
        pod_name = pod['metadata']['name']
        os_net = clients.get_network_client()
        vif_plugin = 'sriov'
        subnet_id = next(iter(subnets))
        physnet = self._get_physnet_for_subnet_id(subnet_id)
        LOG.debug("Pod {} handling {}".format(pod_name, physnet))

        amount = self._get_remaining_sriov_vfs(pod, physnet)
        if not amount:
            LOG.error("SRIOV VIF request failed due to lack of "
                      "available VFs for the current pod creation")
            return None

        rq = self._get_port_request(pod, project_id,
                                    subnets, security_groups)

        port = os_net.create_port(**rq)
        self._check_port_binding([port])
        if not self._tag_on_creation:
            c_utils.tag_neutron_resources([port])
        vif = ovu.neutron_to_osvif_vif(vif_plugin, port, subnets)
        vif.physnet = physnet
        vif.pod_name = pod_name
        vif.pod_link = utils.get_res_link(pod)

        LOG.debug("{} vifs are available for the pod {}".format(
            amount, pod_name))

        self._reduce_remaining_sriov_vfs(pod, physnet)
        return vif

    def activate_vif(self, vif, **kwargs):
        vif.active = True

    def _get_physnet_subnet_mapping(self):
        physnets = config.CONF.sriov.default_physnet_subnets

        result = {}
        for name, subnet_id in physnets.items():
            result[subnet_id] = name
        return result

    def _get_physnet_resname_mapping(self):
        resources = config.CONF.sriov.physnet_resource_mappings

        result = {}
        if resources:
            for physnet_name, resource_name in resources.items():
                result[physnet_name] = resource_name
        else:
            for k, v in self._physnet_subnet_mapping.items():
                result[v] = v
        return result

    def _get_driver_by_res(self, resource_name):
        mapping = config.CONF.sriov.resource_driver_mappings
        try:
            driver = mapping[resource_name]
        except KeyError:
            LOG.exception("No driver for resource_name %s", resource_name)
            raise
        return driver

    def _get_physnet_for_subnet_id(self, subnet_id):
        """Returns an appropriate physnet for exact subnet_id from mapping"""
        try:
            physnet = self._physnet_subnet_mapping[subnet_id]
        except KeyError:
            LOG.error("No mapping for subnet {} in {}".format(
                subnet_id, self._physnet_subnet_mapping))
            raise
        return physnet

    def _get_remaining_sriov_vfs(self, pod, physnet):
        """Returns the number of remaining vfs.

        Returns the number of remaining vfs from the initial number that
got allocated for the current pod. This information is stored in
the pod object.
"""
containers = pod['spec']['containers']
total_amount = 0
sriov_resource_name = self._physnet_resname_mapping.get(physnet, None)
if not sriov_resource_name:
LOG.error("No mapping for physnet {} in {}".format(
physnet, self._physnet_resname_mapping))
return 0
sriov_resource_name = sriov_make_resource(self._res_prefix,
sriov_resource_name)
for container in containers:
try:
requests = container['resources']['requests']
amount_value = requests[sriov_resource_name]
total_amount += int(amount_value)
except KeyError:
continue
return total_amount
def _reduce_remaining_sriov_vfs(self, pod, physnet):
"""Reduces number of available vfs for request"""
sriov_resource_name = self._physnet_resname_mapping.get(physnet, None)
driver = self._get_driver_by_res(sriov_resource_name)
if not sriov_resource_name:
LOG.error("No mapping for physnet {} in {}".format(
physnet, self._physnet_resname_mapping))
return
containers = pod['spec']['containers']
sriov_resource_name = sriov_make_resource(self._res_prefix,
sriov_resource_name)
for container in containers:
try:
requests = container['resources']['requests']
num_of_sriov = int(requests[sriov_resource_name])
if num_of_sriov == 0:
continue
requests[sriov_resource_name] = str(num_of_sriov - 1)
if driver in constants.USERSPACE_DRIVERS:
break
except KeyError:
continue
def _get_port_request(self, pod, project_id, subnets, security_groups):
port_req_body = {
'project_id': project_id,
'name': c_utils.get_port_name(pod),
'network_id': c_utils.get_network_id(subnets),
'fixed_ips': ovu.osvif_to_neutron_fixed_ips(subnets),
'device_owner': kl_const.DEVICE_OWNER + ':sriov',
'device_id': c_utils.get_device_id(pod),
'admin_state_up': True,
'binding:vnic_type': 'direct',
'binding:host_id': c_utils.get_host_id(pod),
}
if security_groups:
port_req_body['security_groups'] = security_groups
if self._tag_on_creation:
tags = CONF.neutron_defaults.resource_tags
if tags:
port_req_body['tags'] = tags
return port_req_body
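The heart of the removed driver is the VF accounting above: the resource key is the device plugin prefix joined with the resource name, and the requested amounts are summed across the pod's containers. A self-contained sketch of that bookkeeping, using a hypothetical pod spec:

.. code-block:: python

    # Resource key as sriov_make_resource() built it, e.g. 'intel.com/sriov'.
    resource_key = 'intel.com' + '/' + 'sriov'

    pod_spec = {  # hypothetical pod spec, for illustration only
        'containers': [
            {'resources': {'requests': {resource_key: '2'}}},
            {'resources': {'requests': {'cpu': '500m'}}},
        ]
    }

    total_vfs = 0
    for container in pod_spec['containers']:
        try:
            total_vfs += int(container['resources']['requests'][resource_key])
        except KeyError:
            continue  # this container requests no VFs of this resource
    print(total_vfs)  # 2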


@ -627,33 +627,6 @@ def get_namespace(namespace_name):
return None
def update_port_pci_info(pod, vif):
node = get_host_id(pod)
annot_port_pci_info = get_port_annot_pci_info(node, vif.id)
os_net = clients.get_network_client()
LOG.debug("Neutron port %s is updated with binding:profile info %s",
vif.id, annot_port_pci_info)
os_net.update_port(vif.id, binding_profile=annot_port_pci_info)
def get_port_annot_pci_info(nodename, neutron_port):
k8s = clients.get_kubernetes_client()
annot_name = constants.K8S_ANNOTATION_NODE_PCI_DEVICE_INFO
annot_name = annot_name + '-' + neutron_port
node_info = k8s.get('/api/v1/nodes/{}'.format(nodename))
annotations = node_info['metadata']['annotations']
try:
json_pci_info = annotations[annot_name]
pci_info = jsonutils.loads(json_pci_info)
except KeyError:
pci_info = {}
except Exception:
LOG.exception('Exception when reading annotations '
'%s and converting from json', annot_name)
return pci_info
def get_endpoints_targets(name, namespace):
kubernetes = clients.get_kubernetes_client()
target_ips = []
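The removed helpers above kept the PCI details of a VF in a per-port node annotation named 'openstack.org/kuryr-pci-info' suffixed with the Neutron port ID. A minimal sketch of the lookup, with a made-up port ID and PCI value:

.. code-block:: python

    import json

    annot_prefix = 'openstack.org/kuryr-pci-info'
    port_id = '4b8c7f2e-9d31-4a41-8f1e-2d8f06a4b702'  # hypothetical port ID
    # Hypothetical node annotations as returned by the Kubernetes API.
    annotations = {annot_prefix + '-' + port_id:
                   '{"pci_slot": "0000:03:10.1"}'}

    try:
        pci_info = json.loads(annotations[annot_prefix + '-' + port_id])
    except KeyError:
        pci_info = {}
    print(pci_info)  # {'pci_slot': '0000:03:10.1'}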


@ -99,7 +99,6 @@ VIF_TYPE_TO_DRIVER_MAPPING = {
'VIFBridge': 'neutron-vif',
'VIFVlanNested': 'nested-vlan',
'VIFMacvlanNested': 'nested-macvlan',
'VIFSriov': 'sriov',
'VIFDPDKNested': 'nested-dpdk',
'VIFVHostUser': 'neutron-vif',
}


@ -87,12 +87,6 @@ class KuryrPortHandler(k8s_base.ResourceEventHandler):
try:
for ifname, data in vifs.items():
if (data['vif'].plugin == constants.KURYR_VIF_TYPE_SRIOV and
oslo_cfg.CONF.sriov.enable_node_annotations):
pod_node = kuryrport_crd['spec']['podNodeName']
# TODO(gryf): This probably will need adaptation, so it will
# add information to CRD instead of the pod.
driver_utils.update_port_pci_info(pod_node, data['vif'])
if not data['vif'].active:
try:
self._drv_vif_pool.activate_vif(data['vif'], pod=pod,


@ -198,34 +198,6 @@ class K8sClient(object):
self._raise_from_response(response)
return response.json().get('status')
def patch_node_annotations(self, node, annotation_name, value):
content_type = 'application/json-patch+json'
path = '{}/nodes/{}/'.format(constants.K8S_API_BASE, node)
value = jsonutils.dumps(value)
url, header = self._get_url_and_header(path, content_type)
data = [{'op': 'add',
'path': '/metadata/annotations/{}'.format(annotation_name),
'value': value}]
response = self.session.patch(url, data=jsonutils.dumps(data),
headers=header)
self._raise_from_response(response)
return response.json().get('status')
def remove_node_annotations(self, node, annotation_name):
content_type = 'application/json-patch+json'
path = '{}/nodes/{}/'.format(constants.K8S_API_BASE, node)
url, header = self._get_url_and_header(path, content_type)
data = [{'op': 'remove',
'path': '/metadata/annotations/{}'.format(annotation_name)}]
response = self.session.patch(url, data=jsonutils.dumps(data),
headers=header)
self._raise_from_response(response)
return response.json().get('status')
def post(self, path, body):
LOG.debug("Post %(path)s: %(body)s", {'path': path, 'body': body})
url = self._base_url + path
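Both removed helpers issued application/json-patch+json requests against the node object. A sketch of the patch bodies they produced, with a made-up annotation name and value; note that RFC 6902 expects '/' inside a path segment to be escaped as '~1', which the removed code did not do:

.. code-block:: python

    import json

    annotation = 'example.org~1kuryr-demo'  # hypothetical name, '/' escaped
    add_patch = [{'op': 'add',
                  'path': '/metadata/annotations/' + annotation,
                  'value': json.dumps({'pci_slot': '0000:03:10.1'})}]
    remove_patch = [{'op': 'remove',
                     'path': '/metadata/annotations/' + annotation}]
    print(json.dumps(add_patch, indent=2))
    print(json.dumps(remove_patch, indent=2))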


@ -70,21 +70,6 @@ class VIFMacvlanNested(obj_osvif.VIFBase):
}
@obj_base.VersionedObjectRegistry.register
class VIFSriov(obj_osvif.VIFDirect):
# This is OVO based SRIOV vif.
# Version 1.0: Initial version
# Version 1.1: Added pod_name field and pod_link field.
VERSION = '1.1'
fields = {
# physnet of the VIF
'physnet': obj_fields.StringField(),
'pod_name': obj_fields.StringField(),
'pod_link': obj_fields.StringField(),
}
@obj_base.VersionedObjectRegistry.register
class VIFDPDKNested(obj_osvif.VIFNestedDPDK):
# This is OVO based DPDK Nested vif.


@ -37,7 +37,6 @@ _kuryr_k8s_opts = [
('health_server', health.health_server_opts),
('cni_health_server', cni_health.cni_health_server_opts),
('namespace_subnet', namespace_subnet.namespace_subnet_driver_opts),
('sriov', config.sriov_opts),
]


@ -42,23 +42,3 @@ class NoOpPlugin(PluginBase):
def unplug(self, vif, instance_info):
pass
class SriovPlugin(PluginBase):
"""Sriov Plugin to be used with sriov VIFS"""
def describe(self):
return objects.host_info.HostPluginInfo(
plugin_name='sriov',
vif_info=[
objects.host_info.HostVIFInfo(
vif_object_name=objects.vif.VIFDirect.__name__,
min_version="1.0",
max_version="1.0"),
])
def plug(self, vif, instance_info):
pass
def unplug(self, vif, instance_info):
pass


@ -360,34 +360,6 @@ def neutron_to_osvif_vif_nested_macvlan(neutron_port, subnets):
vif_name=_get_vif_name(neutron_port))
def neutron_to_osvif_vif_sriov(vif_plugin, os_port, subnets):
"""Converts Neutron port to VIF object for SRIOV containers.
:param vif_plugin: name of the os-vif plugin to use (e.g. 'noop')
:param os_port: openstack.network.v2.port.Port object
:param subnets: subnet mapping as returned by PodSubnetsDriver.get_subnets
:return: osv_vif VIFSriov object
"""
details = os_port.binding_vif_details or {}
network = _make_vif_network(os_port, subnets)
vlan_name = network.vlan if network.should_provide_vlan else ''
vif = k_vif.VIFSriov(
id=os_port.id,
address=os_port.mac_address,
network=network,
has_traffic_filtering=details.get('port_filter', False),
preserve_on_delete=False,
active=_is_port_active(os_port),
plugin=vif_plugin,
mode='passthrough',
vlan_name=vlan_name,
vif_name=_get_vif_name(os_port),
)
return vif
def neutron_to_osvif_vif_dpdk(os_port, subnets, pod):
"""Converts Neutron port to VIF object for nested dpdk containers.


@ -1,40 +0,0 @@
// Generated from kubernetes/pkg/kubelet/apis/podresources/v1alpha1/api.proto
// To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions
// from doc/source/devref/updating_pod_resources_api.rst.
syntax = 'proto3';
package v1alpha1;
// PodResourcesLister is a service provided by the kubelet that provides information about the
// node resources consumed by pods and containers on the node
service PodResourcesLister {
rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
}
// ListPodResourcesRequest is the request made to the PodResourcesLister service
message ListPodResourcesRequest {}
// ListPodResourcesResponse is the response returned by List function
message ListPodResourcesResponse {
repeated PodResources pod_resources = 1;
}
// PodResources contains information about the node resources assigned to a pod
message PodResources {
string name = 1;
string namespace = 2;
repeated ContainerResources containers = 3;
}
// ContainerResources contains information about the resources assigned to a container
message ContainerResources {
string name = 1;
repeated ContainerDevices devices = 2;
}
// ContainerDevices contains information about the devices assigned to a container
message ContainerDevices {
string resource_name = 1;
repeated string device_ids = 2;
}
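The generated api_pb2.py and api_pb2_grpc.py modules below came out of this file. One way to regenerate them, assuming grpcio-tools is installed (the authoritative steps lived in doc/source/devref/updating_pod_resources_api.rst):

.. code-block:: python

    from grpc_tools import protoc

    # Equivalent to invoking protoc with the gRPC Python plugin; run from
    # the repository root so the import path below resolves.
    protoc.main([
        'grpc_tools.protoc',
        '-I.',
        '--python_out=.',
        '--grpc_python_out=.',
        'kuryr_kubernetes/pod_resources/api.proto',
    ])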


@ -1,273 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: kuryr_kubernetes/pod_resources/api.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='kuryr_kubernetes/pod_resources/api.proto',
package='v1alpha1',
syntax='proto3',
serialized_options=None,
serialized_pb=_b('\n(kuryr_kubernetes/pod_resources/api.proto\x12\x08v1alpha1\"\x19\n\x17ListPodResourcesRequest\"I\n\x18ListPodResourcesResponse\x12-\n\rpod_resources\x18\x01 \x03(\x0b\x32\x16.v1alpha1.PodResources\"a\n\x0cPodResources\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x11\n\tnamespace\x18\x02 \x01(\t\x12\x30\n\ncontainers\x18\x03 \x03(\x0b\x32\x1c.v1alpha1.ContainerResources\"O\n\x12\x43ontainerResources\x12\x0c\n\x04name\x18\x01 \x01(\t\x12+\n\x07\x64\x65vices\x18\x02 \x03(\x0b\x32\x1a.v1alpha1.ContainerDevices\"=\n\x10\x43ontainerDevices\x12\x15\n\rresource_name\x18\x01 \x01(\t\x12\x12\n\ndevice_ids\x18\x02 \x03(\t2e\n\x12PodResourcesLister\x12O\n\x04List\x12!.v1alpha1.ListPodResourcesRequest\x1a\".v1alpha1.ListPodResourcesResponse\"\x00\x62\x06proto3')
)
_LISTPODRESOURCESREQUEST = _descriptor.Descriptor(
name='ListPodResourcesRequest',
full_name='v1alpha1.ListPodResourcesRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=54,
serialized_end=79,
)
_LISTPODRESOURCESRESPONSE = _descriptor.Descriptor(
name='ListPodResourcesResponse',
full_name='v1alpha1.ListPodResourcesResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='pod_resources', full_name='v1alpha1.ListPodResourcesResponse.pod_resources', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=81,
serialized_end=154,
)
_PODRESOURCES = _descriptor.Descriptor(
name='PodResources',
full_name='v1alpha1.PodResources',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='v1alpha1.PodResources.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='namespace', full_name='v1alpha1.PodResources.namespace', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='containers', full_name='v1alpha1.PodResources.containers', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=156,
serialized_end=253,
)
_CONTAINERRESOURCES = _descriptor.Descriptor(
name='ContainerResources',
full_name='v1alpha1.ContainerResources',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='v1alpha1.ContainerResources.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='devices', full_name='v1alpha1.ContainerResources.devices', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=255,
serialized_end=334,
)
_CONTAINERDEVICES = _descriptor.Descriptor(
name='ContainerDevices',
full_name='v1alpha1.ContainerDevices',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resource_name', full_name='v1alpha1.ContainerDevices.resource_name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='device_ids', full_name='v1alpha1.ContainerDevices.device_ids', index=1,
number=2, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=336,
serialized_end=397,
)
_LISTPODRESOURCESRESPONSE.fields_by_name['pod_resources'].message_type = _PODRESOURCES
_PODRESOURCES.fields_by_name['containers'].message_type = _CONTAINERRESOURCES
_CONTAINERRESOURCES.fields_by_name['devices'].message_type = _CONTAINERDEVICES
DESCRIPTOR.message_types_by_name['ListPodResourcesRequest'] = _LISTPODRESOURCESREQUEST
DESCRIPTOR.message_types_by_name['ListPodResourcesResponse'] = _LISTPODRESOURCESRESPONSE
DESCRIPTOR.message_types_by_name['PodResources'] = _PODRESOURCES
DESCRIPTOR.message_types_by_name['ContainerResources'] = _CONTAINERRESOURCES
DESCRIPTOR.message_types_by_name['ContainerDevices'] = _CONTAINERDEVICES
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
ListPodResourcesRequest = _reflection.GeneratedProtocolMessageType('ListPodResourcesRequest', (_message.Message,), dict(
DESCRIPTOR = _LISTPODRESOURCESREQUEST,
__module__ = 'kuryr_kubernetes.pod_resources.api_pb2'
# @@protoc_insertion_point(class_scope:v1alpha1.ListPodResourcesRequest)
))
_sym_db.RegisterMessage(ListPodResourcesRequest)
ListPodResourcesResponse = _reflection.GeneratedProtocolMessageType('ListPodResourcesResponse', (_message.Message,), dict(
DESCRIPTOR = _LISTPODRESOURCESRESPONSE,
__module__ = 'kuryr_kubernetes.pod_resources.api_pb2'
# @@protoc_insertion_point(class_scope:v1alpha1.ListPodResourcesResponse)
))
_sym_db.RegisterMessage(ListPodResourcesResponse)
PodResources = _reflection.GeneratedProtocolMessageType('PodResources', (_message.Message,), dict(
DESCRIPTOR = _PODRESOURCES,
__module__ = 'kuryr_kubernetes.pod_resources.api_pb2'
# @@protoc_insertion_point(class_scope:v1alpha1.PodResources)
))
_sym_db.RegisterMessage(PodResources)
ContainerResources = _reflection.GeneratedProtocolMessageType('ContainerResources', (_message.Message,), dict(
DESCRIPTOR = _CONTAINERRESOURCES,
__module__ = 'kuryr_kubernetes.pod_resources.api_pb2'
# @@protoc_insertion_point(class_scope:v1alpha1.ContainerResources)
))
_sym_db.RegisterMessage(ContainerResources)
ContainerDevices = _reflection.GeneratedProtocolMessageType('ContainerDevices', (_message.Message,), dict(
DESCRIPTOR = _CONTAINERDEVICES,
__module__ = 'kuryr_kubernetes.pod_resources.api_pb2'
# @@protoc_insertion_point(class_scope:v1alpha1.ContainerDevices)
))
_sym_db.RegisterMessage(ContainerDevices)
_PODRESOURCESLISTER = _descriptor.ServiceDescriptor(
name='PodResourcesLister',
full_name='v1alpha1.PodResourcesLister',
file=DESCRIPTOR,
index=0,
serialized_options=None,
serialized_start=399,
serialized_end=500,
methods=[
_descriptor.MethodDescriptor(
name='List',
full_name='v1alpha1.PodResourcesLister.List',
index=0,
containing_service=None,
input_type=_LISTPODRESOURCESREQUEST,
output_type=_LISTPODRESOURCESRESPONSE,
serialized_options=None,
),
])
_sym_db.RegisterServiceDescriptor(_PODRESOURCESLISTER)
DESCRIPTOR.services_by_name['PodResourcesLister'] = _PODRESOURCESLISTER
# @@protoc_insertion_point(module_scope)


@ -1,48 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from kuryr_kubernetes.pod_resources import api_pb2 as kuryr__kubernetes_dot_pod__resources_dot_api__pb2
class PodResourcesListerStub(object):
"""PodResourcesLister is a service provided by the kubelet that provides information about the
node resources consumed by pods and containers on the node
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.List = channel.unary_unary(
'/v1alpha1.PodResourcesLister/List',
request_serializer=kuryr__kubernetes_dot_pod__resources_dot_api__pb2.ListPodResourcesRequest.SerializeToString,
response_deserializer=kuryr__kubernetes_dot_pod__resources_dot_api__pb2.ListPodResourcesResponse.FromString,
)
class PodResourcesListerServicer(object):
"""PodResourcesLister is a service provided by the kubelet that provides information about the
node resources consumed by pods and containers on the node
"""
def List(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_PodResourcesListerServicer_to_server(servicer, server):
rpc_method_handlers = {
'List': grpc.unary_unary_rpc_method_handler(
servicer.List,
request_deserializer=kuryr__kubernetes_dot_pod__resources_dot_api__pb2.ListPodResourcesRequest.FromString,
response_serializer=kuryr__kubernetes_dot_pod__resources_dot_api__pb2.ListPodResourcesResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'v1alpha1.PodResourcesLister', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
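For context, it is the kubelet that serves this API. A minimal sketch of a server built from the stubs above, assuming the removed modules were still importable (the socket path is made up):

.. code-block:: python

    from concurrent import futures

    import grpc

    from kuryr_kubernetes.pod_resources import api_pb2_grpc

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    api_pb2_grpc.add_PodResourcesListerServicer_to_server(
        api_pb2_grpc.PodResourcesListerServicer(), server)
    server.add_insecure_port('unix:/tmp/pod-resources.sock')
    server.start()
    server.wait_for_termination()

The base servicer answers every call with UNIMPLEMENTED; a real server would subclass it and implement List().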


@ -1,43 +0,0 @@
# Copyright (c) 2019 Samsung Electronics Co.,Ltd
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
import grpc
from kuryr_kubernetes.pod_resources import api_pb2
from kuryr_kubernetes.pod_resources import api_pb2_grpc
LOG = log.getLogger(__name__)
POD_RESOURCES_SOCKET = '/pod-resources/kubelet.sock'
class PodResourcesClient(object):
def __init__(self, kubelet_root_dir):
socket = 'unix:' + kubelet_root_dir + POD_RESOURCES_SOCKET
LOG.debug("Creating PodResourcesClient on socket: %s", socket)
self._channel = grpc.insecure_channel(socket)
self._stub = api_pb2_grpc.PodResourcesListerStub(self._channel)
def list(self):
try:
response = self._stub.List(api_pb2.ListPodResourcesRequest())
LOG.debug("PodResourceResponse: %s", response)
return response
except grpc.RpcError as e:
LOG.error("ListPodResourcesRequest failed: %s", e)
raise
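A usage sketch for this client, assuming the module were still present and the standard kubelet root directory:

.. code-block:: python

    from kuryr_kubernetes.pod_resources.client import PodResourcesClient

    client = PodResourcesClient('/var/lib/kubelet')  # typical kubelet root dir
    response = client.list()
    for pod in response.pod_resources:
        for container in pod.containers:
            for device in container.devices:
                print(pod.name, container.name,
                      device.resource_name, list(device.device_ids))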


@ -146,42 +146,3 @@ class TestStatusCmd(test_base.TestCase):
ann_objs.append('{}')
self._test__check_annotations(ann_objs, 1)
def _test__convert_annotations(self, method, calls):
self.cmd.k8s.annotate = mock.Mock()
ann_objs = [
('foo',
vif.PodState(default_vif=vif.VIFMacvlanNested(vif_name='foo'))),
('bar', vif.VIFMacvlanNested(vif_name='bar')),
]
ann_objs = [(name, jsonutils.dumps(ann.obj_to_primitive()))
for name, ann in ann_objs]
pods = {
'items': [
{
'apiVersion': 'v1',
'kind': 'Pod',
'metadata': {
'annotations': {
constants.K8S_ANNOTATION_VIF: ann
},
'name': name,
'resourceVersion': 1,
}
} for name, ann in ann_objs
]
}
self.cmd.k8s = mock.Mock(get=mock.Mock(return_value=pods))
method()
for args in calls:
self.cmd.k8s.annotate.assert_any_call(*args)
def test_update_annotations(self):
self._test__convert_annotations(self.cmd.update_annotations,
[('/api/v1/pods/bar', mock.ANY, 1)])
def test_downgrade_annotations(self):
self._test__convert_annotations(self.cmd.downgrade_annotations,
[('/api/v1/pods/foo', mock.ANY, 1)])


@ -25,9 +25,7 @@ from oslo_utils import uuidutils
from kuryr_kubernetes.cni.binding import base
from kuryr_kubernetes.cni.binding import nested
from kuryr_kubernetes.cni.binding import sriov
from kuryr_kubernetes.cni.binding import vhostuser
from kuryr_kubernetes import constants as k_const
from kuryr_kubernetes import exceptions
from kuryr_kubernetes import objects
from kuryr_kubernetes.tests import base as test_base
@ -296,208 +294,6 @@ class TestNestedMacvlanDriver(TestDriverMixin, test_base.TestCase):
self._test_disconnect()
class TestSriovDriver(TestDriverMixin, test_base.TestCase):
def setUp(self):
super(TestSriovDriver, self).setUp()
self.vif = fake._fake_vif(objects.vif.VIFSriov)
self.vif.physnet = 'physnet2'
self.pci_info = mock.MagicMock()
self.vif.pod_link = 'pod_link'
self.vif.pod_name = 'pod_1'
self.pci = mock.Mock()
self.device_ids = ['pci_dev_1']
self.device = mock.Mock()
self.device.device_ids = self.device_ids
self.device.resource_name = 'intel.com/sriov'
self.cont_devs = [self.device]
self.container = mock.Mock()
self.container.devices = self.cont_devs
self.pod_containers = [self.container]
self.pod_resource = mock.Mock()
self.pod_resource.containers = self.pod_containers
self.pod_resource.name = 'pod_1'
self.resources = [self.pod_resource]
CONF.set_override('physnet_resource_mappings', 'physnet2:sriov',
group='sriov')
self.addCleanup(CONF.clear_override, 'physnet_resource_mappings',
group='sriov')
CONF.set_override('device_plugin_resource_prefix', 'intel.com',
group='sriov')
CONF.set_override('resource_driver_mappings', 'sriov:igbvf',
group='sriov')
CONF.set_override('enable_node_annotations', True,
group='sriov')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_save_pci_info')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_process_vif')
def test_connect(self, m_proc_vif, m_save_pci):
m_proc_vif.return_value = self.pci_info
self._test_connect()
m_save_pci.assert_called_once_with(self.vif.id, self.pci_info)
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_save_pci_info')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_process_vif')
def test_connect_no_annotations(self, m_proc_vif, m_save_pci):
m_proc_vif.return_value = self.pci_info
CONF.set_override('enable_node_annotations', False,
group='sriov')
self._test_connect()
m_save_pci.assert_not_called()
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_return_device_driver')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_remove_pci_info')
def test_disconnect(self, m_remove_pci, m_return_device):
m_remove_pci.return_value = None
m_return_device.return_value = None
self._test_disconnect()
m_remove_pci.assert_called_once_with(self.vif.id)
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_return_device_driver')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_remove_pci_info')
def test_disconnect_no_annotations(self, m_remove_pci, m_return_device):
m_return_device.return_value = None
m_remove_pci.return_value = None
CONF.set_override('enable_node_annotations', False,
group='sriov')
self._test_disconnect()
m_remove_pci.assert_not_called()
@mock.patch('kuryr_kubernetes.clients.get_pod_resources_client')
@mock.patch('kuryr_kubernetes.cni.binding.sriov.VIFSriovDriver.'
'_get_resource_by_physnet')
def test_process_vif(self, m_get_res_ph, m_get_prc):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
m_driver._make_resource.return_value = 'intel.com/sriov'
m_driver._get_pod_devices.return_value = ['pci_dev_2']
m_driver._get_driver_by_res.return_value = 'igbvf'
m_driver._compute_pci.return_value = self.pci_info
pod_resources_list = mock.Mock()
pod_resources_list.pod_resources = self.resources
pod_resources_client = mock.Mock()
pod_resources_client.list.return_value = pod_resources_list
m_get_prc.return_value = pod_resources_client
self.assertEqual(self.pci_info, cls._process_vif(m_driver, self.vif,
self.ifname,
self.netns))
m_driver._compute_pci.assert_called_once_with('pci_dev_1', 'igbvf',
self.vif.pod_link,
self.vif, self.ifname,
self.netns)
def test_get_resource_by_physnet(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
self.assertEqual(
'sriov', cls._get_resource_by_physnet(m_driver, self.vif.physnet))
def test_make_resource(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
self.assertEqual('intel.com/sriov', cls._make_resource(m_driver,
'sriov'))
def test_get_driver_by_res(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
self.assertEqual('igbvf', cls._get_driver_by_res(m_driver, 'sriov'))
def test_compute_pci_vfio(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
CONF.set_override('resource_driver_mappings', 'sriov:vfio-pci',
group='sriov')
vf_name = 'enp3s0s1'
vf_index = '1'
pf = 'enp1s0f1'
new_driver = 'vfio-pci'
old_driver = 'igbvf'
m_driver._get_vf_info.return_value = (vf_name, vf_index, pf,
self.pci_info)
m_driver._bind_device.return_value = old_driver
self.assertEqual(self.pci_info, cls._compute_pci(m_driver, self.pci,
new_driver,
self.vif.pod_link,
self.vif,
self.ifname,
self.netns))
m_driver._get_vf_info.assert_called_once_with(self.pci, new_driver)
m_driver._set_vf_mac.assert_called_once_with(pf, vf_index,
self.vif.address)
m_driver._bind_device.assert_called_once_with(self.pci, new_driver)
m_driver._annotate_device.assert_called_once_with(self.vif.pod_link,
self.pci, old_driver,
new_driver,
self.vif.id)
def test_compute_pci_netdevice(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
CONF.set_override('resource_driver_mappings', 'sriov:igbvf',
group='sriov')
vf_name = 'enp3s0s1'
vf_index = '1'
pf = 'enp1s0f1'
new_driver = 'igbvf'
m_driver._get_vf_info.return_value = (vf_name, vf_index, pf,
self.pci_info)
m_driver._move_to_netns.return_value = self.pci_info
self.assertEqual(self.pci_info, cls._compute_pci(m_driver, self.pci,
new_driver,
self.vif.pod_link,
self.vif,
self.ifname,
self.netns))
m_driver._get_vf_info.assert_called_once_with(self.pci, new_driver)
m_driver._move_to_netns.assert_called_once_with(self.ifname,
self.netns, self.vif,
vf_name, vf_index, pf)
m_driver._annotate_device.assert_called_once_with(self.vif.pod_link,
self.pci, new_driver,
new_driver,
self.vif.id)
def test_return_device_driver(self):
cls = sriov.VIFSriovDriver
m_driver = mock.Mock(spec=cls)
self.vif.id = 'id'
old_driver = 'igbvf'
new_driver = 'vfio-pci'
pci = 'pci_dev_1'
m_driver._get_pod_devices.return_value = {
pci: {
k_const.K8S_ANNOTATION_NEUTRON_PORT: 'id',
k_const.K8S_ANNOTATION_OLD_DRIVER: old_driver,
k_const.K8S_ANNOTATION_CURRENT_DRIVER: new_driver
}
}
cls._return_device_driver(m_driver, self.vif)
m_driver._bind_device.assert_called_once_with(pci, old_driver,
new_driver)
class TestVHostUserDriver(TestDriverMixin, test_base.TestCase):
def setUp(self):
super(TestVHostUserDriver, self).setUp()


@ -1,217 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
import uuid
from kuryr_kubernetes.controller.drivers import sriov as drvs
from kuryr_kubernetes.tests import base as test_base
from kuryr_kubernetes.tests import fake
from kuryr_kubernetes.tests.unit import kuryr_fixtures as k_fix
from kuryr_kubernetes import constants as k_const
from kuryr_kubernetes import os_vif_util as ovu
from kuryr_kubernetes import utils
from oslo_config import cfg as oslo_cfg
SRIOV_RESOURCE_NAME_A = "sriov_a"
SRIOV_RESOURCE_NAME_B = "sriov_b"
AMOUNT_FOR_SUBNET_A = 2
AMOUNT_FOR_SUBNET_B = 3
SRIOV_PHYSNET_A = "physnet_a"
SRIOV_PHYSNET_B = "physnet_b"
class TestSriovVIFDriver(test_base.TestCase):
def setUp(self):
super(TestSriovVIFDriver, self).setUp()
self._res_map = {SRIOV_PHYSNET_A: SRIOV_RESOURCE_NAME_A,
SRIOV_PHYSNET_B: SRIOV_RESOURCE_NAME_B}
sriov_request = {drvs.sriov_make_resource(k_const.K8S_SRIOV_PREFIX,
SRIOV_RESOURCE_NAME_A): (
str(AMOUNT_FOR_SUBNET_A)),
drvs.sriov_make_resource(k_const.K8S_SRIOV_PREFIX,
SRIOV_RESOURCE_NAME_B): (
str(AMOUNT_FOR_SUBNET_B))}
self._pod = fake.get_k8s_pod()
self._pod['status'] = {'phase': k_const.K8S_POD_STATUS_PENDING}
self._pod['spec'] = {
'hostNetwork': False,
'nodeName': 'hostname',
'containers': [
{
'resources': {
'requests': sriov_request
}
}
]
}
def test_activate_vif(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
vif = mock.Mock()
vif.active = False
cls.activate_vif(m_driver, vif)
self.assertEqual(True, vif.active)
@mock.patch('kuryr_kubernetes.os_vif_util.osvif_to_neutron_fixed_ips')
@mock.patch.object(ovu, 'neutron_to_osvif_vif')
def test_request_vif(self, m_to_vif, m_to_fips):
cls = drvs.SriovVIFDriver
cls._tag_on_creation = True
m_driver = mock.Mock(spec=cls)
os_net = self.useFixture(k_fix.MockNetworkClient()).client
project_id = mock.sentinel.project_id
fixed_ips = mock.sentinel.fixed_ips
m_to_fips.return_value = fixed_ips
network = mock.sentinel.Network
subnet_id = str(uuid.uuid4())
subnets = {subnet_id: network}
security_groups = mock.sentinel.security_groups
port_fixed_ips = mock.sentinel.port_fixed_ips
port_id = mock.sentinel.port_id
port = {
'fixed_ips': port_fixed_ips,
'id': port_id
}
port_request = {'fake_req': mock.sentinel.port_request}
m_driver._get_port_request.return_value = port_request
vif = mock.sentinel.vif
m_to_vif.return_value = vif
os_net.create_port.return_value = port
utils.get_subnet.return_value = subnets
self.assertEqual(vif, cls.request_vif(m_driver, self._pod, project_id,
subnets, security_groups))
os_net.create_port.assert_called_once_with(**port_request)
@mock.patch('kuryr_kubernetes.os_vif_util.osvif_to_neutron_fixed_ips')
@mock.patch.object(ovu, 'neutron_to_osvif_vif')
def test_request_vif_not_enough_vfs(self, m_to_vif, m_to_fips):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
m_driver._get_remaining_sriov_vfs.return_value = 0
os_net = self.useFixture(k_fix.MockNetworkClient()).client
project_id = mock.sentinel.project_id
network = mock.sentinel.Network
subnet_id = str(uuid.uuid4())
subnets = {subnet_id: network}
security_groups = mock.sentinel.security_groups
self.assertIsNone(cls.request_vif(m_driver, self._pod, project_id,
subnets, security_groups))
os_net.create_port.assert_not_called()
def test_get_sriov_num_vf(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
m_driver._physnet_resname_mapping = self._res_map
m_driver._res_prefix = k_const.K8S_SRIOV_PREFIX
amount = cls._get_remaining_sriov_vfs(m_driver, self._pod,
SRIOV_PHYSNET_A)
self.assertEqual(amount, AMOUNT_FOR_SUBNET_A)
amount = cls._get_remaining_sriov_vfs(m_driver, self._pod,
SRIOV_PHYSNET_B)
self.assertEqual(amount, AMOUNT_FOR_SUBNET_B)
def test_reduce_remaining_sriov_vfs(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
m_driver._physnet_resname_mapping = self._res_map
m_driver._res_prefix = k_const.K8S_SRIOV_PREFIX
cls._reduce_remaining_sriov_vfs(m_driver, self._pod, SRIOV_PHYSNET_A)
amount = cls._get_remaining_sriov_vfs(m_driver, self._pod,
SRIOV_PHYSNET_A)
self.assertEqual(amount, AMOUNT_FOR_SUBNET_A - 1)
cls._reduce_remaining_sriov_vfs(m_driver, self._pod, SRIOV_PHYSNET_B)
amount = cls._get_remaining_sriov_vfs(m_driver, self._pod,
SRIOV_PHYSNET_B)
self.assertEqual(amount, AMOUNT_FOR_SUBNET_B - 1)
def test_get_physnet_subnet_mapping(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
subnet_id = str(uuid.uuid4())
oslo_cfg.CONF.set_override('default_physnet_subnets',
'physnet10_4:'+str(subnet_id),
group='sriov')
mapping = cls._get_physnet_subnet_mapping(m_driver)
self.assertEqual(mapping, {subnet_id: 'physnet10_4'})
def test_get_physnet_resname_mapping(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
oslo_cfg.CONF.set_override('physnet_resource_mappings',
SRIOV_PHYSNET_A + ':' +
SRIOV_RESOURCE_NAME_A + ',' +
SRIOV_PHYSNET_B + ':' +
SRIOV_RESOURCE_NAME_B,
group='sriov')
mapping = cls._get_physnet_resname_mapping(m_driver)
self.assertEqual(mapping, self._res_map)
def test_empty_physnet_resname_mapping(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
empty_res_map = {SRIOV_PHYSNET_A: SRIOV_PHYSNET_A,
SRIOV_PHYSNET_B: SRIOV_PHYSNET_B}
subnet_id = str(uuid.uuid4())
subnet_id_2 = str(uuid.uuid4())
m_driver._physnet_subnet_mapping = {subnet_id: SRIOV_PHYSNET_A,
subnet_id_2: SRIOV_PHYSNET_B}
mapping = cls._get_physnet_resname_mapping(m_driver)
self.assertEqual(mapping, empty_res_map)
def test_get_physnet_for_subnet_id(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
subnet_id = str(uuid.uuid4())
m_driver._physnet_subnet_mapping = {subnet_id: 'physnet10_4'}
physnet = cls._get_physnet_for_subnet_id(m_driver, subnet_id)
self.assertEqual(physnet, 'physnet10_4')
def test_get_physnet_for_subnet_id_error(self):
cls = drvs.SriovVIFDriver
m_driver = mock.Mock(spec=cls)
subnet_id = str(uuid.uuid4())
m_driver._physnet_subnet_mapping = {}
self.assertRaises(KeyError, cls._get_physnet_for_subnet_id,
m_driver, subnet_id)


@ -270,33 +270,6 @@ class TestKuryrPortHandler(test_base.TestCase):
update_crd.assert_called_once_with(self._kp, self._vifs)
@mock.patch('kuryr_kubernetes.controller.drivers.vif_pool.MultiVIFPool.'
'activate_vif')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'update_port_pci_info')
@mock.patch('kuryr_kubernetes.clients.get_kubernetes_client')
@mock.patch('kuryr_kubernetes.controller.drivers.base.MultiVIFDriver.'
'get_enabled_drivers')
def test_on_present_sriov(self, ged, get_k8s_client, update_port_pci_info,
activate_vif):
ged.return_value = [self._driver]
kp = kuryrport.KuryrPortHandler()
self._vif2.plugin = constants.KURYR_VIF_TYPE_SRIOV
self._vif2.active = True
self._kp['status']['vifs'] = {
'eth0': {'default': True,
'vif': self._vif2.obj_to_primitive()},
'eth1': {'default': False,
'vif': self._vif1.obj_to_primitive()}}
CONF.set_override('enable_node_annotations', True, group='sriov')
self.addCleanup(CONF.clear_override, 'enable_node_annotations',
group='sriov')
activate_vif.side_effect = os_exc.ResourceNotFound()
kp.on_present(self._kp)
update_port_pci_info.assert_called_once_with(self._host, self._vif2)
@mock.patch('kuryr_kubernetes.controller.drivers.default_project.'
'DefaultPodProjectDriver.get_project')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')


@ -42,7 +42,6 @@ LOG = log.getLogger(__name__)
VALID_MULTI_POD_POOLS_OPTS = {'noop': ['neutron-vif',
'nested-vlan',
'nested-macvlan',
'sriov',
'nested-dpdk'],
'neutron': ['neutron-vif'],
'nested': ['nested-vlan'],


@ -0,0 +1,8 @@
---
prelude: >
In this release we're removing SR-IOV support completely from Kuryr. The
motivation is that the code is not tested upstream or maintained. Moreover,
the preferred way of attaching additional SR-IOV ports is to use Multus.
upgrade:
- |
Support for SR-IOV additional ports is removed in this release.


@ -26,6 +26,4 @@ PrettyTable>=0.7.2 # BSD
pyroute2>=0.5.7;sys_platform!='win32' # Apache-2.0 (+ dual licensed GPL2)
retrying!=1.3.0,>=1.2.3 # Apache-2.0
stevedore>=1.20.0 # Apache-2.0
grpcio>=1.25.0 # Apache-2.0
protobuf>=3.6.0 # 3-Clause BSD
prometheus_client>=0.6.0 # Apache-2.0


@ -26,7 +26,6 @@ oslo.config.opts =
os_vif =
noop = kuryr_kubernetes.os_vif_plug_noop:NoOpPlugin
sriov = kuryr_kubernetes.os_vif_plug_noop:SriovPlugin
console_scripts =
kuryr-k8s-controller = kuryr_kubernetes.cmd.eventlet.controller:start
@ -37,7 +36,6 @@ console_scripts =
kuryr_kubernetes.vif_translators =
ovs = kuryr_kubernetes.os_vif_util:neutron_to_osvif_vif_ovs
sriov = kuryr_kubernetes.os_vif_util:neutron_to_osvif_vif_sriov
vhostuser = kuryr_kubernetes.os_vif_util:neutron_to_osvif_vif_ovs
kuryr_kubernetes.cni.binding =
@ -47,7 +45,6 @@ kuryr_kubernetes.cni.binding =
VIFVHostUser = kuryr_kubernetes.cni.binding.vhostuser:VIFVHostUserDriver
VIFVlanNested = kuryr_kubernetes.cni.binding.nested:VlanDriver
VIFMacvlanNested = kuryr_kubernetes.cni.binding.nested:MacvlanDriver
VIFSriov = kuryr_kubernetes.cni.binding.sriov:VIFSriovDriver
kuryr_kubernetes.controller.drivers.pod_project =
default = kuryr_kubernetes.controller.drivers.default_project:DefaultPodProjectDriver
@ -87,7 +84,6 @@ kuryr_kubernetes.controller.drivers.pod_vif =
neutron-vif = kuryr_kubernetes.controller.drivers.neutron_vif:NeutronPodVIFDriver
nested-vlan = kuryr_kubernetes.controller.drivers.nested_vlan_vif:NestedVlanPodVIFDriver
nested-macvlan = kuryr_kubernetes.controller.drivers.nested_macvlan_vif:NestedMacvlanPodVIFDriver
sriov = kuryr_kubernetes.controller.drivers.sriov:SriovVIFDriver
nested-dpdk = kuryr_kubernetes.controller.drivers.nested_dpdk_vif:NestedDpdkPodVIFDriver
kuryr_kubernetes.controller.drivers.endpoints_lbaas =
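These setuptools entry points are how Kuryr resolves driver aliases at runtime. A sketch of loading one of the remaining pod_vif drivers through stevedore (driver name chosen for illustration):

.. code-block:: python

    from stevedore import driver

    mgr = driver.DriverManager(
        namespace='kuryr_kubernetes.controller.drivers.pod_vif',
        name='neutron-vif',
        invoke_on_load=True,
    )
    print(type(mgr.driver))  # NeutronPodVIFDriver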