Add support for running inside a VM with vfio-noiommu #272

Merged: 4 commits, Nov 18, 2020
14 changes: 14 additions & 0 deletions README.md
@@ -23,6 +23,7 @@
- [Command line arguments](#command-line-arguments)
- [Assumptions](#assumptions)
- [Workflow](#workflow)
- [Virtual Deployments](#virtual-deployments)
- [Example deployments](#example-deployments)
- [Deploy the Device Plugin](#deploy-the-device-plugin)
- [Deploy SR-IOV workloads when Multus is used](#deploy-sr-iov-workloads-when-multus-is-used)
@@ -51,6 +52,7 @@ The SR-IOV network device plugin is Kubernetes device plugin for discovering and
- Detects Kubelet restarts and auto-re-register
- Detects Link status (for Linux network devices) and updates associated VFs health accordingly
- Extensible to support new device types with minimal effort if not already supported
- Works within virtual deployments of Kubernetes that do not have virtualized-iommu support (VFIO No-IOMMU support)

To deploy workloads with SR-IOV VF this plugin needs to work together with the following two CNI components:

@@ -366,6 +368,18 @@ $ kubectl get node node1 -o json | jq '.status.allocatable'
}

```
## Virtual Deployments

The SR-IOV network device plugin supports running in a virtualized environment. However, not all device selectors are
applicable: the VFs are passed through to the VM without any association to their respective PF, so any selector that
relies on the association between a VF and its PF will not work. As a result, the _pfNames_ and _rootDevices_ extended
selectors cannot be used in a virtual deployment. The common selector _pciAddress_ can be used to select the virtual
device.

### Virtual environments with no IOMMU

The SR-IOV network device plugin supports allocating VFIO devices in a virtualized environment without a virtualized IOMMU.
For more information, refer to [the DPDK virtual deployment guide](./docs/dpdk/README-virt.md).

## Example deployments

141 changes: 141 additions & 0 deletions docs/dpdk/README-virt.md
@@ -0,0 +1,141 @@
# Running DPDK applications in a Kubernetes virtual environment without virtualized IOMMU support

## Pre-requisites

In virtual deployments of Kubernetes where the underlying virtualization platform does not support a virtualized IOMMU, the VFIO driver needs to be loaded with a special
option. Create the file **/etc/modprobe.d/vfio-noiommu.conf** with the following contents:

````
# cat /etc/modprobe.d/vfio-noiommu.conf
options vfio enable_unsafe_noiommu_mode=1
````
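Once the vfio module has been (re)loaded with this file in place, the option can be sanity-checked through the module parameter exposed in sysfs. A minimal check, assuming the vfio module is already loaded:

````
# Should print "Y" when unsafe no-IOMMU mode is enabled
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y
````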

With the above option, vfio devices on the virtual host (VM) will be created in the form:

````
/dev/vfio/noiommu-0
/dev/vfio/noiommu-1
...
````

The presence of noiommu-* devices is automatically detected by the sriov-device-plugin. The noiommu-N devices will be mounted **inside** the pod at their expected/normal location:

````
/dev/vfio/0
/dev/vfio/1
...
````
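As an illustration of this mapping, the sketch below assumes a single VF in IOMMU group 0 on the VM and a pod named `testpmd` (as in the example deployment later in this document) that has been allocated the device:

````
# On the VM (worker node), the no-IOMMU group appears with the "noiommu-" prefix
ls /dev/vfio
noiommu-0  vfio

# Inside the pod, the device plugin maps it back to the conventional path
kubectl exec testpmd -- ls /dev/vfio
0  vfio
````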
It should be noted that without an IOMMU there is no way to ensure safe use of DMA. When *enable_unsafe_noiommu_mode* is used, CAP_SYS_RAWIO privileges are necessary to work with groups and
containers using this mode.

> Note: The most common use case for a direct VF is with the **DPDK** framework, which requires the use of privileged containers.

Using this mode, specifically binding a device without a native IOMMU group to a VFIO bus driver, will taint the kernel. No-IOMMU support is provided only for the vfio-pci bus driver,
but it remains available for users who want userspace drivers even under these conditions.

### Hugepages
DPDK applications require hugepages memory. Please refer to the [Hugepages section](http://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) of the DPDK getting started guide.

Make sure that the virtual environment is enabled for creating VMs with hugepage support.

Kubernetes nodes can only advertise a single size of pre-allocated hugepages, which means that even though a system can have both 2M and 1G hugepages, Kubernetes will only recognize the default hugepage size as a schedulable resource. Workloads can request hugepages using resource requests and limits that specify the `hugepages-2Mi` or `hugepages-1Gi` resource names.

> One important thing to note here is that when requesting hugepage resources, either memory or CPU resource requests need to be specified.

For more information on hugepage support in Kubernetes please see [here](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/).
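As a quick check, pre-allocated hugepages of the default size should appear under the node's allocatable resources. The sketch below assumes a node named `node1` and abbreviates the output; it mirrors the `kubectl ... | jq` example in the main README:

````
kubectl get node node1 -o json | jq '.status.allocatable'
{
  "cpu": "8",
  "hugepages-1Gi": "8Gi",
  ...
}
````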


### VF drivers
DPDK applications require devices to be bound to a supported DPDK backend driver.
* For Intel® x700 series NICs `vfio-pci` is required.
* For Mellanox ConnectX®-4 Lx, ConnectX®-5 Adapters `mlx5_core` or `mlx5_ib` is required.

Pods using native-bifurcating devices/drivers (e.g. Mellanox/mlx5_*) do not need to run with privilege. Pods using non-bifurcating devices/drivers (e.g. Intel/vfio-pci) need to run with privilege.

### Privileges
Certain privileges are required for a DPDK application to function properly in a Kubernetes Pod. The level of privilege depends on the application and the host device driver attached (as mentioned above). When running in an environment without a fully virtualized IOMMU, the *enable_unsafe_noiommu_mode* option of vfio must be enabled by creating a modprobe.d file:

````
# cat /etc/modprobe.d/vfio-noiommu.conf
options vfio enable_unsafe_noiommu_mode=1
````

With `vfio-pci`, an application must run in a privileged Pod with the **IPC_LOCK** and **CAP_SYS_RAWIO** capabilities.

# Example deployment
> **Review comment (Member):** Since this feature does not work with the SR-IOV CNI, can you add an example CNI to use and state that this feature doesn't work with the SR-IOV CNI? Thank you.

> **Review comment (Member):** I tried with host-device, but that also failed. I am getting:
> `failed to find host device: no net directory under pci device 0000:00:06.0`
> Is there a patch outstanding on a CNI to fix this for vfio-pci devices?

> **Review comment (Contributor):** So at this point no CNI is expected to be used, if I understood correctly.

This directory includes a sample deployment YAML file showing how to deploy a DPDK application in Kubernetes with a **privileged** Pod (_pod_testpmd_virt.yaml_).

## Deploy Virtual machines with attached VFs

1. Depending on the virtualization environment, create a network that supports SR-IOV. Configure the VF as per your requirements:
- Trusted On/Off
- Spoof-Checking On/Off

In a virtual environment, some VF characteristics are set by the underlying virtualization platform and are used 'as is' inside the VM. A virtual deployment does not have access to the VF's associated PF.

2. Attach the VFs or associated ports to the VM

## Check that environment supports VFIO and hugepages memory

1. After deployment of the VM, confirm that the hugepage size parameter (e.g. `default_hugepagesz`) is present on the kernel command line:
````
sh-4.4# cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt1)/ostree/rhcos-92d66d9df4cafad87abd888fd1b22fd1d890e86bc2ad8b9009bb9faa4f403a95/vmlinuz-4.18.0-193.24.1.el8_2.dt1.x86_64 rhcos.root=crypt_rootfs random.trust_cpu=on console=tty0 console=ttyS0,115200n8 rd.luks.options=discard ostree=/ostree/boot.1/rhcos/92d66d9df4cafad87abd888fd1b22fd1d890e86bc2ad8b9009bb9faa4f403a95/0 ignition.platform.id=openstack nohz=on nosoftlockup skew_tick=1 intel_pstate=disable intel_iommu=on iommu=pt rcu_nocbs=2-3 tuned.non_isolcpus=00000003 default_hugepagesz=1G nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0
````
2. On the desired worker node, check the pre-allocated hugepages:

````
cat /proc/meminfo | grep -i hugepage
AnonHugePages: 245760 kB
ShmemHugePages: 0 kB
HugePages_Total: 8
HugePages_Free: 8
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
````
You should see your requested hugepage size and a non-zero HugePages_Total.

3. Confirm that hugepages memory is allocated and hugetlbfs is mounted:
```
# cat /proc/meminfo | grep -i hugepage
HugePages_Total: 16
HugePages_Free: 16
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB

# mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)

```

4. Load the vfio-pci module with unsafe no-IOMMU mode enabled:

````
# echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
````

````
modprobe vfio-pci
````

5. For non-bifurcating devices/drivers, bind the appropriate interfaces (VFs) to the vfio-pci driver. You can use either `driverctl` or [`dpdk-devbind.py`](https://github.com/DPDK/dpdk/blob/master/usertools/dpdk-devbind.py) to bind/unbind drivers using the devices' PCI addresses; see [here](https://dpdk-guide.gitlab.io/dpdk-guide/setup/binding.html) for more information on NIC driver bindings. A sketch of the binding step is shown below the list.

Native-bifurcating devices/drivers can stay with the default binding.
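A minimal sketch of the binding step using `dpdk-devbind.py`; the PCI address 0000:00:07.0 is only an example and must be replaced with the address of the VF attached to your VM:

````
# Load the driver (with the no-IOMMU option configured as described above)
modprobe vfio-pci

# Bind the VF to vfio-pci and verify the result
dpdk-devbind.py --bind=vfio-pci 0000:00:07.0
dpdk-devbind.py --status-dev net
````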

# Performance
It is worth mentioning that to achieve maximum performance from a DPDK application, the following are required:

1. The application process needs to be pinned to dedicated, isolated CPUs. Detailing how to achieve this is out of scope for this document. You can refer to [CPU Manager for Kubernetes](https://github.com/intel/CPU-Manager-for-Kubernetes), which provides such functionality in Kubernetes. In the virtualized case, CPU pinning and isolation must be considered at the physical layer as well as the virtual layer.

2. All application resources (CPUs, devices, and memory) should come from the same NUMA locality. In the virtualized case, NUMA locality of the VM is controlled by the underlying virtualization platform.

# Usage

>When consuming a VFIO device in a virtual environment, a secondary network is not required as network configuration for the underlying VF should be performed at the hypervisor level.

An example of a noiommu deployment is shown in [pod_testpmd_virt.yaml](pod_testpmd_virt.yaml). The configMap for the example is shown in [configMap-virt.yaml](configMap-virt.yaml).
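A minimal deployment sketch using the files in this directory (the resource names and the image reference in the Pod spec must match your environment):

````
kubectl apply -f configMap-virt.yaml
kubectl apply -f pod_testpmd_virt.yaml
kubectl get pod testpmd
````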


35 changes: 35 additions & 0 deletions docs/dpdk/configMap-virt.yaml
@@ -0,0 +1,35 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intelnics_radio_downlink",
          "selectors": {
            "drivers": [
              "vfio-pci"
            ],
            "pciAddresses": [
              "0000:00:09.0",
              "0000:00:0a.0"
            ]
          }
        },
        {
          "resourceName": "intelnics_radio_uplink",
          "selectors": {
            "drivers": [
              "vfio-pci"
            ],
            "pciAddresses": [
              "0000:00:07.0",
              "0000:00:08.0"
            ]
          }
        }
      ]
    }
32 changes: 32 additions & 0 deletions docs/dpdk/pod_testpmd_virt.yaml
@@ -0,0 +1,32 @@
apiVersion: v1
kind: Pod
metadata:
  name: testpmd
spec:
  containers:
  - name: testpmd
    image: <DPDK testpmd image>
    securityContext:
      # This application is DPDK-based
      privileged: true
    resources:
      requests:
        openshift.io/intelnics_radio_downlink: "1"
        openshift.io/intelnics_radio_uplink: "1"
        memory: 1000Mi
        hugepages-1Gi: 2Gi
        cpu: '1'
      limits:
        openshift.io/intelnics_radio_downlink: "1"
        openshift.io/intelnics_radio_uplink: "1"
        hugepages-1Gi: 2Gi
        cpu: '1'
        memory: 2000Mi
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
      readOnly: False
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages

> **Review comment (Contributor)** on `privileged: true`: So in the example pod, should it be privileged, or will it generally work with CAP_IPC_LOCK and CAP_SYS_RAWIO? I'd prefer to list the minimum capabilities.

8 changes: 4 additions & 4 deletions pkg/resources/vfioResource.go
@@ -50,13 +50,13 @@ func (rp *vfioResource) GetDeviceSpecs(pciAddr string) []*pluginapi.DeviceSpec {
Permissions: "mrw",
})

vfioDev, err := utils.GetVFIODeviceFile(pciAddr)
vfioDevHost, vfioDevContainer, err := utils.GetVFIODeviceFile(pciAddr)
if err != nil {
glog.Errorf("GetDeviceSpecs(): error getting vfio device file for device: %s", pciAddr)
glog.Errorf("GetDeviceSpecs(): error getting vfio device file for device: %s, %s", pciAddr, err.Error())
} else {
devSpecs = append(devSpecs, &pluginapi.DeviceSpec{
HostPath: vfioDev,
ContainerPath: vfioDev,
HostPath: vfioDevHost,
ContainerPath: vfioDevContainer,
Permissions: "mrw",
})
}
Expand Down
21 changes: 18 additions & 3 deletions pkg/utils/utils.go
@@ -259,7 +259,7 @@ func ValidResourceName(name string) bool {
 }

 // GetVFIODeviceFile returns a vfio device files for vfio-pci bound PCI device's PCI address
-func GetVFIODeviceFile(dev string) (devFile string, err error) {
+func GetVFIODeviceFile(dev string) (devFileHost string, devFileContainer string, err error) {
 	// Get iommu group for this device
 	devPath := filepath.Join(sysBusPci, dev)
 	_, err = os.Lstat(devPath)
@@ -290,8 +290,23 @@ func GetVFIODeviceFile(dev string) (devFile string, err error) {
 		err = fmt.Errorf("GetVFIODeviceFile(): error reading symlink to iommu_group %v", err)
 		return
 	}
-
-	devFile = filepath.Join("/dev/vfio", filepath.Base(linkName))
+	devFileContainer = filepath.Join("/dev/vfio", filepath.Base(linkName))
+	devFileHost = devFileContainer
+
+	// Get a file path to the iommu group name
+	namePath := filepath.Join(linkName, "name")
+	// Read the iommu group name
+	// The name file will not exist on baremetal
+	vfioName, errName := ioutil.ReadFile(namePath)
+	if errName == nil {
+		vName := strings.TrimSpace(string(vfioName))
+
+		// if the iommu group name == vfio-noiommu then we are in a VM, adjust path to vfio device
+		if vName == "vfio-noiommu" {
+			linkName = filepath.Join(filepath.Dir(linkName), "noiommu-"+filepath.Base(linkName))
+			devFileHost = filepath.Join("/dev/vfio", filepath.Base(linkName))
+		}
+	}

 	return
 }
5 changes: 3 additions & 2 deletions pkg/utils/utils_test.go
@@ -296,8 +296,9 @@ var _ = Describe("In the utils package", func() {
DescribeTable("getting VFIO device file",
func(fs *FakeFilesystem, device, expected string, shouldFail bool) {
defer fs.Use()()
actual, err := GetVFIODeviceFile(device)
Expect(actual).To(Equal(expected))
//TODO: adapt test to running in a virtual environment
actualHost, _, err := GetVFIODeviceFile(device)
Expect(actualHost).To(Equal(expected))
assertShouldFail(err, shouldFail)
},
Entry("could not get directory information for device",
Expand Down