docs: document vdpa device type
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
amorenoz committed Oct 29, 2021
1 parent 06a59bc commit f3681b5
Showing 4 changed files with 109 additions and 0 deletions.
10 changes: 10 additions & 0 deletions README.md
@@ -181,6 +181,15 @@ This plugin creates device plugin endpoints based on the configurations given in
            "isRdma": true
        }
    },
    {
        "resourceName": "ct6dx_vdpa_vhost",
        "selectors": {
            "vendors": ["15b3"],
            "devices": ["101e"],
            "drivers": ["mlx5_core"],
            "vdpaType": "vhost"
        }
    },
    {
        "resourceName": "intel_fpga",
        "deviceType": "accelerator",
@@ -233,6 +242,7 @@ This selector is applicable when "deviceType" is "netDevice" (note: this is the default).
| "ddpProfiles" | N | A map of device selectors | `string` list Default: `null` | "ddpProfiles": ["GTPv1-C/U IPv4/IPv6 payload"] |
| "isRdma" | N | Mount RDMA resources | `bool` values `true` or `false` Default: `false` | "isRdma": `true` |
| "needVhostNet"| N | Share /dev/vhost-net | `bool` values `true` or `false` Default: `false` | "needVhostNet": `true` |
| "vdpaType" | N | The type of vDPA device (virtio, vhost) | `string` values `vhost` or `virtio` Default: `null` | "vdpaType": "vhost" |


[//]: # (The tables above generated using: https://ozh.github.io/ascii-tables/)
1 change: 1 addition & 0 deletions docs/README.md
@@ -7,3 +7,4 @@ This page contains supplementary documentation that users may find useful for various use cases.
* [Running RDMA application in Kubernetes](rdma/)
* [SR-IOV Network Device Plugin with DDP](ddp/)
* [Using node specific config file for running device plugin DaemonSet](config-file)
* [Using vDPA devices in Kubernetes](vdpa/)
68 changes: 68 additions & 0 deletions docs/vdpa/README.md
@@ -0,0 +1,68 @@
# Using vDPA devices in Kubernetes
## Introduction to vDPA
vDPA (Virtio DataPath Acceleration) is a technology that enables the acceleration
of virtIO devices while allowing the implementers of such devices
(e.g. NIC vendors) to use their own control plane.

The consumers of the virtIO devices (VMs or containers) interact with the devices
using the standard virtIO datapath and virtio-compatible control paths (virtIO, vhost).
While the data plane is mapped directly to the accelerator device, the control plane
is translated by the vDPA kernel framework.

The vDPA kernel framework is composed of a vdpa bus (/sys/bus/vdpa), vdpa devices
(/sys/bus/vdpa/devices) and vdpa drivers (/sys/bus/vdpa/drivers).
Currently, two vdpa drivers are implemented:
* virtio_vdpa: Exposes the device as a virtio-net netdev
* vhost_vdpa: Exposes the device as a vhost-vdpa device. This device uses an extension
of the vhost-net protocol to allow userspace applications to access the rings directly
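As an illustration (not part of the original docs), the driver currently bound to each vdpa device can be inspected through the same sysfs hierarchy. A minimal sketch, assuming a Linux host; `driver_of` is a hypothetical helper name used only here:

```shell
#!/bin/sh
# Resolve the driver name behind a sysfs 'driver' symlink
# (it points into /sys/bus/vdpa/drivers/<name>).
driver_of() {
  basename "$(readlink -f "$1")"
}

# List every vdpa device on the bus and its bound driver, if any.
for dev in /sys/bus/vdpa/devices/*; do
  [ -e "$dev" ] || continue            # bus absent or no devices
  name=$(basename "$dev")
  if [ -e "$dev/driver" ]; then
    echo "$name -> $(driver_of "$dev/driver")"
  else
    echo "$name -> (unbound)"
  fi
done
```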

For more information about the vDPA framework, read the article on
[LWN.net](https://lwn.net/Articles/816063/) or the blog series written by one of the
main authors ([part 1](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-1-vdpa-bus-abstracting-hardware),
[part 2](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-2-vdpa-bus-drivers-kernel-subsystem-interactions),
[part 3](https://www.redhat.com/en/blog/vdpa-kernel-framework-part-3-usage-vms-and-containers)).

## vDPA Management
Currently, the management of vDPA devices is performed using the sysfs interface exposed
by the vDPA Framework. However, in order to decouple the management of vdpa devices from
the SR-IOV Device Plugin functionality, this low-level management is done in an external
library called [go-vdpa](https://github.com/redhat-virtio-net/govdpa).

In the context of the SR-IOV Device Plugin and the SR-IOV CNI, the current plan is to
support only 1:1 mappings between SR-IOV VFs and vDPA devices despite the fact that
the vDPA Framework might support 1:N mappings.

## Tested NICs:
* Mellanox ConnectX®-6 DX

## Prerequisites
* Linux Kernel >= 5.12
* iproute >= 5.14

## vDPA device creation
Load the vdpa kernel modules if they are not already present:

```
$ modprobe vdpa
$ modprobe virtio-vdpa
$ modprobe vhost-vdpa
```
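To confirm the modules actually loaded, a quick check can be done against `/proc/modules` (this sketch is an addition for illustration; `mod_loaded` is a hypothetical helper, and module names appear there with underscores rather than dashes):

```shell
#!/bin/sh
# Return success if the named kernel module is currently loaded.
mod_loaded() {
  grep -qw "^$1" /proc/modules
}

for mod in vdpa virtio_vdpa vhost_vdpa; do
  if mod_loaded "$mod"; then
    echo "$mod: loaded"
  else
    echo "$mod: not loaded"
  fi
done
```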

Create a vdpa device using the vdpa management tool integrated into iproute2, e.g.:

```
$ vdpa mgmtdev show
pci/0000:65:00.2:
  supported_classes net
$ vdpa dev add name vdpa2 mgmtdev pci/0000:65:00.2
$ vdpa dev list
vdpa2: type network mgmtdev pci/0000:65:00.2 vendor_id 5555 max_vqs 16 max_vq_size 256
```

## Bind the desired vdpa driver
The vdpa bus works similarly to the pci bus. To unbind a driver from a device, run:

```
echo ${DEV_NAME} > /sys/bus/vdpa/devices/${DEV_NAME}/driver/unbind
```

To bind a driver to a device, run:

```
echo ${DEV_NAME} > /sys/bus/vdpa/drivers/${DRIVER_NAME}/bind
```
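Putting the two steps together, here is a hedged dry-run sketch (the device and driver names are example values, not taken from a real host) that prints the rebind commands for review instead of writing to sysfs; drop the outer `echo`s to actually apply them:

```shell
#!/bin/sh
# Dry run: show the commands that would move ${DEV_NAME} to vhost_vdpa.
DEV_NAME=vdpa2            # example device name
DRIVER_NAME=vhost_vdpa    # example target driver
UNBIND_PATH="/sys/bus/vdpa/devices/${DEV_NAME}/driver/unbind"
BIND_PATH="/sys/bus/vdpa/drivers/${DRIVER_NAME}/bind"

# Print rather than execute; remove the outer echo to write to sysfs.
echo "echo ${DEV_NAME} > ${UNBIND_PATH}"
echo "echo ${DEV_NAME} > ${BIND_PATH}"
```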

## Configure the SR-IOV Device Plugin
See the sample [configMap](configMap.yaml) for an example of how to configure a vDPA device.
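Once the device plugin advertises the resource, a workload can request it like any other extended resource. The following pod spec is a sketch added for illustration, assuming the plugin's default `intel.com` resource prefix and a `vdpa_mlx_vhost` resource name; the image name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vdpa-pod
spec:
  containers:
  - name: app
    image: registry.example.com/vdpa-app   # placeholder image
    resources:
      requests:
        intel.com/vdpa_mlx_vhost: '1'
      limits:
        intel.com/vdpa_mlx_vhost: '1'
```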
30 changes: 30 additions & 0 deletions docs/vdpa/configMap.yaml
@@ -0,0 +1,30 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
        "resourceList": [
            {
                "resourceName": "vdpa_mlx_virtio",
                "selectors": {
                    "vendors": ["15b3"],
                    "devices": ["101e"],
                    "drivers": ["mlx5_core"],
                    "vdpaType": "virtio"
                }
            },
            {
                "resourceName": "vdpa_mlx_vhost",
                "selectors": {
                    "vendors": ["15b3"],
                    "devices": ["101e"],
                    "drivers": ["mlx5_core"],
                    "vdpaType": "vhost"
                }
            }
        ]
    }
