Add support for running inside a VM with vfio-noiommu #272
# Running DPDK applications in a Kubernetes virtual environment

## Pre-requisites

### Hugepages
DPDK applications require hugepage memory. Please refer to the [Hugepages section](http://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) of the DPDK getting started guide for details on hugepages in DPDK.

Make sure that the virtual environment is enabled for creating VMs with hugepage support.

Kubernetes nodes can only advertise a single size of pre-allocated hugepages. This means that even though a system can have both 2M and 1G hugepages, Kubernetes will only recognize the default hugepages as schedulable resources. Workloads can request hugepages using resource requests and limits that specify the `hugepages-2Mi` or `hugepages-1Gi` resource references.

> One important thing to note here is that when requesting hugepage resources, either memory or CPU resource requests also need to be specified.

For more information on hugepage support in Kubernetes please see [here](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/).
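For example, a container spec fragment requesting 1Gi hugepages alongside the required memory request might look like the following sketch (sizes are illustrative):

```yaml
resources:
  requests:
    memory: 1Gi          # a memory (or CPU) request must accompany hugepages
    hugepages-1Gi: 2Gi
  limits:
    memory: 1Gi
    hugepages-1Gi: 2Gi
```
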
### VF drivers
DPDK applications require devices to be attached to a supported DPDK backend driver.
* For Intel® x700 series NICs, `vfio-pci` is required.
* For Mellanox ConnectX®-4 Lx and ConnectX®-5 adapters, `mlx5_core` or `mlx5_ib` is required.
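To check which kernel driver a given VF is currently bound to, you can inspect the device by its PCI address (the address `0000:00:07.0` below is a placeholder; substitute your VF's address):

```
lspci -nnk -s 0000:00:07.0
readlink /sys/bus/pci/devices/0000:00:07.0/driver
```
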
Native-bifurcating devices/drivers (e.g. Mellanox `mlx5_*`) do not need to run with privilege. With non-bifurcating devices/drivers (e.g. Intel `vfio-pci`), the Pods need to run with privilege.

### Privileges
Certain privileges are required for a DPDK application to function properly in a Kubernetes Pod. The level of privilege depends on the application and on the host device driver attached (as mentioned above). When running in an environment without a fully virtualized IOMMU, the *enable_unsafe_noiommu_mode* option of vfio must be enabled by creating a modprobe.d file:

````
# cat /etc/modprobe.d/vfio-noiommu.conf
options vfio enable_unsafe_noiommu_mode=1
````

With `vfio-pci`, an application must run in a privileged Pod with the **IPC_LOCK** and **CAP_SYS_RAWIO** capabilities.
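As a sketch, the corresponding container-level `securityContext` might look like this (note that Kubernetes capability names drop the `CAP_` prefix, so `CAP_SYS_RAWIO` is requested as `SYS_RAWIO`):

```yaml
securityContext:
  privileged: true
  capabilities:
    add: ["IPC_LOCK", "SYS_RAWIO"]
```
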
# Example deployment

> **Review comment:** Since this feature does not work with the SR-IOV CNI, can you add an example CNI to use and state that this feature doesn't work with the SR-IOV CNI? Thank you.
>
> **Reply:** I tried with host-device, but that also failed. Is there a patch outstanding on a CNI to fix this for vfio-pci devices?
>
> **Reply:** So at this point no CNI is expected to be used, if I understood correctly.

This directory includes sample deployment yaml files showing how to deploy a DPDK application in Kubernetes, either in a privileged Pod with an SR-IOV VF attached to the vfio-pci driver (for non-bifurcating NIC devices/drivers) or in a non-privileged Pod with the VF attached to the default driver (for native-bifurcating devices/drivers). See [this](https://doc.dpdk.org/guides/howto/flow_bifurcation.html) for more information.
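As a sketch of such a deployment for the non-bifurcating (vfio-pci) case, combining the pieces above — the pod name, image, and resource pool name `intel.com/intel_sriov_dpdk` are illustrative and depend on your device plugin configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpmd-pod                      # hypothetical name
spec:
  containers:
  - name: testpmd
    image: dpdk-app:latest               # placeholder image
    securityContext:
      privileged: true                   # required for vfio-pci (non-bifurcating)
      capabilities:
        add: ["IPC_LOCK", "SYS_RAWIO"]
    resources:
      requests:
        intel.com/intel_sriov_dpdk: '1'  # illustrative resource pool name
        hugepages-1Gi: 2Gi
        memory: 1Gi
      limits:
        intel.com/intel_sriov_dpdk: '1'
        hugepages-1Gi: 2Gi
        memory: 1Gi
```
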
## Deploy virtual machines with attached VFs

1. Depending on the virtualization environment, create a network that supports SR-IOV. Configure the VF as per your requirements:
   - Trusted On/Off
   - Spoof-checking On/Off

   In a virtual environment, some VF characteristics are set by the underlying virtualization platform and are used 'as is' inside the VM. A virtual deployment does not have access to the VF's associated PF.
2. Attach the VFs or associated ports to the VM.

## Check that the environment supports VFIO and hugepages memory

1. After deploying the VM, confirm that the `default_hugepagesz` kernel parameter is present:
````
sh-4.4# cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt1)/ostree/rhcos-92d66d9df4cafad87abd888fd1b22fd1d890e86bc2ad8b9009bb9faa4f403a95/vmlinuz-4.18.0-193.24.1.el8_2.dt1.x86_64 rhcos.root=crypt_rootfs random.trust_cpu=on console=tty0 console=ttyS0,115200n8 rd.luks.options=discard ostree=/ostree/boot.1/rhcos/92d66d9df4cafad87abd888fd1b22fd1d890e86bc2ad8b9009bb9faa4f403a95/0 ignition.platform.id=openstack nohz=on nosoftlockup skew_tick=1 intel_pstate=disable intel_iommu=on iommu=pt rcu_nocbs=2-3 tuned.non_isolcpus=00000003 default_hugepagesz=1G nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0
````
2. On the desired worker node, check the hugepages information in `/proc/meminfo`:

````
cat /proc/meminfo | grep -i hugepage
AnonHugePages:    245760 kB
ShmemHugePages:        0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
````
You should see your requested hugepage size and a non-zero HugePages_Total.

3. Confirm that hugepages memory is allocated and mounted:
```
# cat /proc/meminfo | grep -i hugepage
HugePages_Total:      16
HugePages_Free:       16
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

# mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
```
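If hugetlbfs is not already mounted, it can be mounted manually, e.g. for 1G pages (a sketch; the mount point may differ on your distribution):

```
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
```
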
4. Load the vfio-pci module with unsafe no-IOMMU mode enabled:

````
# echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
````

```
modprobe vfio-pci
```
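After loading the module, you can verify that no-IOMMU mode took effect (assuming a kernel built with VFIO no-IOMMU support):

```
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y
```
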
5. For non-bifurcating devices/drivers, bind the appropriate interfaces (VFs) to the vfio-pci driver. You can use either `driverctl` or [`dpdk-devbind.py`](https://github.com/DPDK/dpdk/blob/master/usertools/dpdk-devbind.py) to bind/unbind drivers using the devices' PCI addresses, as shown in the example below. Please see [here](https://dpdk-guide.gitlab.io/dpdk-guide/setup/binding.html) for more information on NIC driver bindings.

   Native-bifurcating devices/drivers can stay with the default binding.
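For example (the PCI address `0000:00:07.0` is a placeholder; substitute your VF's address):

```
# persistent override via driverctl
driverctl set-override 0000:00:07.0 vfio-pci

# or with dpdk-devbind.py
dpdk-devbind.py --bind=vfio-pci 0000:00:07.0
dpdk-devbind.py --status
```
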
# Performance
It is worth mentioning that to achieve maximum performance from a DPDK application, the following are required:

1. The application process needs to be pinned to dedicated, isolated CPUs. Detailing how to achieve this is out of scope for this document. You can refer to [CPU Manager for Kubernetes](https://github.com/intel/CPU-Manager-for-Kubernetes), which provides such functionality in Kubernetes. In the virtualized case, CPU pinning and isolation must be considered at the physical layer as well as the virtual layer.

2. All application resources (CPUs, devices and memory) should come from the same NUMA locality. In the virtualized case, NUMA locality is controlled by the underlying virtualization platform for the VM.

> **Review comment:** This seems to me a rather specialized use case. IMO we should move this section under docs/dpdk/README-virt: we can have a general description here and point to docs/dpdk/README-virt. We should also explain that not all selectors are applicable, e.g.:
>
> **Virtual environments**
>
> The SR-IOV network device plugin supports running in a virtualized environment; however, not all device selectors are applicable, as the VFs are passed through to the VM without any association to their respective PF. Hence, any device selector that relies on the association between a VF and its PF will not work. The following selectors will not work in a virtualized environment: `pfNames`, `rootDevices`.
>
> **Virtual environments with no IOMMU**
>
> The SR-IOV network device plugin supports allocating VFIO devices in a virtualized environment without a virtualized IOMMU. For more information refer to {add link}.

> **Review comment (P.S.):** I realize we are missing some documentation on consuming SR-IOV resources in a virtualized environment in general; however, that's not related to this PR.