diff --git a/README.md b/README.md
index 3878be185..923338dd0 100644
--- a/README.md
+++ b/README.md
@@ -181,6 +181,15 @@ This plugin creates device plugin endpoints based on the configurations given in
                 "isRdma": true
             }
         },
+        {
+            "resourceName": "ct6dx_vdpa_vhost",
+            "selectors": {
+                "vendors": ["15b3"],
+                "devices": ["101e"],
+                "drivers": ["mlx5_core"],
+                "vdpaType": "vhost"
+            }
+        },
         {
             "resourceName": "intel_fpga",
             "deviceType": "accelerator",
@@ -233,6 +242,7 @@ This selector is applicable when "deviceType" is "netDevice"(note: this is defau
 | "ddpProfiles" | N | A map of device selectors | `string` list Default: `null` | "ddpProfiles": ["GTPv1-C/U IPv4/IPv6 payload"] |
 | "isRdma" | N | Mount RDMA resources | `bool` values `true` or `false` Default: `false` | "isRdma": `true` |
 | "needVhostNet"| N | Share /dev/vhost-net | `bool` values `true` or `false` Default: `false` | "needVhostNet": `true` |
+| "vdpaType" | N | The type of vDPA device (virtio, vhost) | `string` values `vhost` or `virtio` Default: `null` | "vdpaType": "vhost" |
 
 [//]: # (The tables above generated using: https://ozh.github.io/ascii-tables/)
 
diff --git a/docs/README.md b/docs/README.md
index d463d4f74..e965c7727 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -7,3 +7,4 @@ This page contains supplimentary documention that users may find useful for vari
 * [Running RDMA application in Kubernetes](rdma/)
 * [SR-IOV Network Device Plugin with DDP](ddp/)
 * [Using node specific config file for running device plugin DaemonSet](config-file)
+* [Using vDPA devices in Kubernetes](vdpa/)
diff --git a/docs/vdpa/README.md b/docs/vdpa/README.md
new file mode 100644
index 000000000..4622e84c6
--- /dev/null
+++ b/docs/vdpa/README.md
@@ -0,0 +1,40 @@
+# Using vDPA devices in Kubernetes
+## Introduction to vDPA
+vDPA (Virtio DataPath Acceleration) is a technology that enables the acceleration of virtIO devices while allowing the implementers of such devices (e.g. NIC vendors) to use their own control plane.
+The consumers of the virtIO devices (VMs or containers) interact with the devices using the standard virtIO datapath and virtio-compatible control paths (virtIO, vhost). While the data plane is mapped directly to the accelerator device, the control plane is translated by the vDPA kernel framework.
+
+The vDPA kernel framework is composed of a vdpa bus (/sys/bus/vdpa), vdpa devices (/sys/bus/vdpa/devices) and vdpa drivers (/sys/bus/vdpa/drivers). Currently, two vdpa drivers are implemented:
+* virtio_vdpa: Exposes the device as a virtio-net netdev
+* vhost_vdpa: Exposes the device as a vhost-vdpa device. This device uses an extension of the vhost-net protocol to allow userspace applications to access the rings directly
+
+For more information about the vDPA framework, read the article on [LWN.net](https://lwn.net/Articles/816063/) or the blog series written by one of the main authors ([part 1](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-1-vdpa-bus-abstracting-hardware), [part 2](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-2-vdpa-bus-drivers-kernel-subsystem-interactions), [part 3](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-3-usage-vms-and-containers))
+
+## vDPA Management
+Currently, the management of vDPA devices is performed using the sysfs interface exposed by the vDPA framework. However, to decouple the management of vdpa devices from the SR-IOV Device Plugin functionality, this low-level management is done in an external library called [go-vdpa](https://github.com/redhat-virtio-net/govdpa).
+
+In the context of the SR-IOV Device Plugin and the SR-IOV CNI, the current plan is to support only 1:1 mappings between SR-IOV VFs and vDPA devices, even though the vDPA framework might support 1:N mappings.
+
+## Tested NICs
+* Mellanox ConnectX®-6 DX
+
+## vDPA device creation
+Create a vdpa device using the vdpa management tool integrated into iproute2, e.g.:
+
+    $ vdpa mgmtdev show
+    pci/0000:65:00.2:
+      supported_classes net
+    $ vdpa dev add name vdpa2 mgmtdev pci/0000:65:00.2
+    $ vdpa dev list
+    vdpa2: type network mgmtdev pci/0000:65:00.2 vendor_id 5555 max_vqs 16 max_vq_size 256
+
+## Bind the desired vdpa driver
+The vdpa bus works similarly to the pci bus. To unbind a driver from a device, run:
+
+    echo ${DEV_NAME} > /sys/bus/vdpa/devices/${DEV_NAME}/driver/unbind
+
+To bind a driver to a device, run:
+
+    echo ${DEV_NAME} > /sys/bus/vdpa/drivers/${DRIVER_NAME}/bind
+
+## Configure the SR-IOV Device Plugin
+See the sample [configMap](configMap.yaml) for an example of how to configure a vDPA device.
diff --git a/docs/vdpa/configMap.yaml b/docs/vdpa/configMap.yaml
new file mode 100644
index 000000000..70e99444f
--- /dev/null
+++ b/docs/vdpa/configMap.yaml
@@ -0,0 +1,30 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: sriovdp-config
+  namespace: kube-system
+data:
+  config.json: |
+    {
+        "resourceList": [
+            {
+                "resourceName": "vdpa_mlx_virtio",
+                "selectors": {
+                    "vendors": ["15b3"],
+                    "devices": ["101e"],
+                    "drivers": ["mlx5_core"],
+                    "vdpaType": "virtio"
+                }
+            },
+            {
+                "resourceName": "vdpa_mlx_vhost",
+                "selectors": {
+                    "vendors": ["15b3"],
+                    "devices": ["101e"],
+                    "drivers": ["mlx5_core"],
+                    "vdpaType": "vhost"
+                }
+            }
+        ]
+    }
+
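As a rough illustration of how the `vdpaType` selector introduced in this change maps onto the sysfs layout documented in docs/vdpa/README.md, the Go sketch below walks /sys/bus/vdpa/devices and classifies each device as `virtio` or `vhost` from the driver it is bound to. This is a minimal sketch, not the plugin's or go-vdpa's actual code; the helper name `vdpaTypeOf` and the driver-name-to-type mapping (virtio_vdpa → "virtio", vhost_vdpa → "vhost") are assumptions made for illustration.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// vdpaTypeOf classifies a vdpa device by the kernel driver it is bound to,
// following the sysfs layout described above (/sys/bus/vdpa/devices/<dev>/driver
// is a symlink to the bound driver). The mapping below is an illustrative
// assumption that mirrors the "vdpaType" selector values.
func vdpaTypeOf(devName string) (string, error) {
	link, err := os.Readlink(filepath.Join("/sys/bus/vdpa/devices", devName, "driver"))
	if err != nil {
		return "", err // no such device, or no driver currently bound
	}
	switch filepath.Base(link) {
	case "virtio_vdpa":
		return "virtio", nil
	case "vhost_vdpa":
		return "vhost", nil
	default:
		return "", fmt.Errorf("unknown vdpa driver %q for device %s", filepath.Base(link), devName)
	}
}

func main() {
	// List every device on the vdpa bus and print its type, e.g. "vdpa2: vhost".
	entries, err := os.ReadDir("/sys/bus/vdpa/devices")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		t, err := vdpaTypeOf(e.Name())
		if err != nil {
			fmt.Printf("%s: unbound or unknown (%v)\n", e.Name(), err)
			continue
		}
		fmt.Printf("%s: %s\n", e.Name(), t)
	}
}
```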