[video: video-0.mp4]

We asked an AI what it thinks about our SocketCAN Kubernetes plugin, and this was its answer...
What we are still trying to figure out is: what's with all the lighthouses??
This plugin enables you to use hardware-backed and virtual SocketCAN interfaces inside your Kubernetes Pods. `vcan` allows processes inside the pod to communicate with each other using the full Linux SocketCAN API. If you have a real CAN adapter in your embedded system, you can use this plugin to make it available inside your Kubernetes deployment.
Assuming you have a microk8s Kubernetes cluster, you can install the SocketCAN plugin:

```
microk8s kubectl apply -f https://raw.githubusercontent.com/Collabora/k8s-socketcan/main/k8s-socketcan-daemonset.yaml
microk8s kubectl wait --for=condition=ready pod -l name=k8s-socketcan
```
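Once the DaemonSet pods are ready, the plugin advertises its resources on each node. As a quick sanity check (a hedged example; the exact output depends on your kubectl version):

```sh
# the k8s.collabora.com/vcan resource should show up under the node's
# Capacity and Allocatable sections
microk8s kubectl describe node | grep k8s.collabora.com
```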
NOTE: Using it with other k8s providers should require only an adjustment to the init container script to add a new search path for the `containerd.sock` control socket and/or install the `vcan` kernel module.
Next, you can create a simple Pod that has two `vcan` interfaces enabled:

```
microk8s kubectl apply -f https://raw.githubusercontent.com/Collabora/k8s-socketcan/main/k8s-socketcan-client-example.yaml
microk8s kubectl wait --for=condition=ready pod k8s-socketcan-client-example
```
Afterwards, you can run these two commands in two separate terminals to verify it's working correctly:

```
microk8s kubectl exec -it k8s-socketcan-client-example -- candump vcan0
microk8s kubectl exec -it k8s-socketcan-client-example -- cansend vcan0 5A1#11.2233.44556677.88
```
If everything goes according to plan, the frame sent by `cansend` should show up in the `candump` terminal.
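Roughly, the `candump` output should look something like this (a sketch based on can-utils' default output format; exact spacing may differ):

```
  vcan0  5A1   [8]  11 22 33 44 55 66 77 88
```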
Adding SocketCAN support to an existing Pod is as easy as adding a resource limit in the container spec:

```yaml
resources:
  limits:
    k8s.collabora.com/vcan: 1
```
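For context, a minimal sketch of a complete Pod using this limit might look like the following (the Pod name, container name, and image are illustrative, not taken from the project's examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-can-app          # illustrative name
spec:
  containers:
    - name: app
      image: ubuntu:22.04   # any image will do; install can-utils for candump/cansend
      command: ["sleep", "infinity"]
      resources:
        limits:
          k8s.collabora.com/vcan: 2   # request two vcan interfaces
```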
We support Azure AKS, microk8s and k3s out-of-the-box. To use it with other k8s providers (keeping in mind the Limitations) you should look into the init container script (sketched below), which:

- installs and loads the `vcan` kernel module
- searches for a `containerd.sock` in a few well-known paths and makes a symlink
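To illustrate, here is a rough, hypothetical sketch of what such a script does; the real script ships in the DaemonSet manifest, and the search paths and link target below are our assumptions:

```sh
#!/bin/sh
# Load the vcan kernel module on the node (assumes the module is installed).
modprobe vcan

# Search a few well-known locations for the containerd control socket and
# symlink the first match to the path the plugin expects (illustrative).
for sock in /run/containerd/containerd.sock \
            /var/snap/microk8s/common/run/containerd.sock \
            /run/k3s/containerd/containerd.sock; do
    if [ -S "$sock" ]; then
        ln -sf "$sock" /containerd.sock
        break
    fi
done
```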
If you add `tail -f /dev/null` at the end of the script you will be able to `kubectl exec` into the container and have a look around to verify the environment is created properly.
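For example (assuming microk8s, a single-node cluster, and the `name=k8s-socketcan` label from the DaemonSet):

```sh
# open a shell in the plugin pod to inspect the environment;
# with multiple nodes, pick one pod from the list instead
microk8s kubectl exec -it $(microk8s kubectl get pods -l name=k8s-socketcan -o name) -- sh
```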
The SocketCAN device plugin also supports hardware CAN interfaces, which is useful if you want to use (for example) k3s to manage your embedded system software. It allows you to improve security by moving a SocketCAN network interface into a single container, fully isolating it from any other applications on the system. It's a perfect solution if you have a daemon that arbitrates all access to the CAN bus and you wish to containerize it.
To move a hardware CAN interface into a Pod you have to modify the DaemonSet to specify the names of the interfaces you wish to make available. The names should be passed as a space-separated list in the `SOCKETCAN_DEVICES` environment variable:

```yaml
containers:
  - name: k8s-socketcan
    image: ghcr.io/collabora/k8s-socketcan:latest
    env:
      - name: SOCKETCAN_DEVICES
        value: "can1 can2"
```
Afterwards, in the client container definition, instead of `k8s.collabora.com/vcan` you can specify the name of the interface you wish to use (adding the `socketcan-` prefix to make sure it's unambiguous):

```yaml
resources:
  limits:
    k8s.collabora.com/socketcan-can1: 1
```
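Once the Pod is running, you can check that the hardware interface has actually moved into the Pod's network namespace (a hedged example; substitute your Pod's name, and note the image needs iproute2 and can-utils installed):

```sh
# the hardware interface should now be visible inside the pod...
microk8s kubectl exec -it <your-pod> -- ip link show can1
# ...and usable with the usual can-utils tooling
microk8s kubectl exec -it <your-pod> -- candump can1
```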
SocketCAN official documentation is a little bit scattered around the Internet, but we found these two presentations by Oliver Hartkopp from Volkswagen to be invaluable for understanding the motivation behind, and the architecture of, the SocketCAN subsystem:

- The CAN Subsystem of the Linux Kernel (includes discussion of the kernel interfaces, C code examples, usage of the "firewall" filters on CAN frames)
- Design & separation of CAN applications (discusses the `vxcan` interface pairs and SocketCAN usage inside of namespaces/containers)
Other resources:
- SocketCAN - The official CAN API of the Linux kernel by Marc Kleine-Budde from Pengutronix
- python-can library
The plugin requires kernel support for SocketCAN (compiled in or as a module) on the cluster Nodes. This is a package installation away on Ubuntu (so microk8s and Azure AKS work great) but unfortunately does not seem possible at all on Alpine (so, for example, Rancher Desktop <= v1.1.1 does not work).
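On Ubuntu, something like the following should be enough (a sketch assuming the stock kernel; the exact package carrying `vcan.ko` is our assumption and may vary between releases):

```sh
# the extra-modules package ships vcan.ko for the running kernel
sudo apt-get install linux-modules-extra-$(uname -r)
sudo modprobe vcan
```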
Currently each Pod gets its own isolated virtual SocketCAN network. There is no support for bridging this to other Pods on the same node or to other nodes in the cluster. Adding local bridging would be possible with the `vxcan` functionality in the kernel and the `cangw` tool. Transparent bridging to other cluster nodes over the network should be possible manually with cannelloni. Pull requests to automate either of these cases are more than welcome.
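For the curious, a rough, hypothetical sketch of manual node-local bridging with `vxcan` and `cangw` (nothing here is provided by the plugin; interface names are illustrative):

```sh
# create a vxcan pair; each end can later be moved into a different
# network namespace (and hence a different Pod)
sudo ip link add vxcan0 type vxcan peer name vxcan1
sudo ip link set vxcan0 up
sudo ip link set vxcan1 up

# use the kernel CAN gateway to forward frames arriving on vcan0 to vxcan0
# (-e echoes gatewayed frames, which is recommended on virtual interfaces)
sudo cangw -A -s vcan0 -d vxcan0 -e
```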
Currently, the plugin only works with clusters based on containerd, which includes most production clusters but not Docker Desktop (we recommend using microk8s instead). Pull requests to support `dockerd` are of course welcome.
This project was inspired by the k8s-device-plugin-socketcan project by Matthias Preu, but it was written from scratch and has some significant improvements:

- it has a valid Open Source license (MIT)
- it supports `containerd` (which is used by default in most k8s clusters, like AKS, these days) instead of `dockerd`
- it is capable of handling multiple Pods starting at the same time, which avoids head-of-the-line blocking issues when you have Pods that take a long time to start
- it supports exclusive use of real CAN interfaces
Neither project currently supports sharing a single SocketCAN interface among multiple Pods.