Kube-OVN, a CNCF Sandbox Level Project, integrates OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises, with a rich feature set and easy operation.
The Kube-OVN community is waiting for your participation!
- Namespaced Subnets: Each Namespace can have a unique Subnet (backed by a Logical Switch). Pods within the Namespace have IP addresses allocated from that Subnet. Multiple Namespaces can also share a Subnet (see the example manifests after this list).
- Vlan/Underlay Support: In addition to the overlay network, Kube-OVN also supports underlay and VLAN mode networking for better performance and direct connectivity with the physical network.
- VPC Support: Multi-tenant network with independent address spaces, where each tenant has its own network infrastructure such as EIPs, NAT gateways, security groups and load balancers.
- Static IP Addresses for Workloads: Allocate dynamic or static IP addresses to workloads.
- Multi-Cluster Network: Connect different Kubernetes/OpenStack clusters into one L3 network.
- Troubleshooting Tools: Handy tools to diagnose, trace, monitor and dump container network traffic to help troubleshoot complicated network issues.
- Prometheus & Grafana Integration: Expose network quality metrics such as pod/node/service/dns connectivity and latency in Prometheus format.
- ARM Support: Kube-OVN can run on x86_64 and arm64 platforms.
- Subnet Isolation: A Subnet can be configured to deny any traffic from source IP addresses outside the Subnet, with a whitelist for specific IP addresses and IP ranges.
- Network Policy: Implements the networking.k8s.io/NetworkPolicy API with high-performance OVN ACLs.
- DualStack IP Support: Pods can run in IPv4-only, IPv6-only or dual-stack mode.
- Pod NAT and EIP: Manage pod external traffic and external IPs like a traditional VM.
- IPAM for Multi NIC: A cluster-wide IPAM for CNI plugins other than Kube-OVN, such as macvlan/vlan/host-device, so they can take advantage of Kube-OVN's subnet and static IP allocation functions.
- Dynamic QoS: Configure Pod/Gateway Ingress/Egress traffic rate limits on the fly.
- Embedded Load Balancers: Replace kube-proxy with the OVN embedded high performance distributed L2 Load Balancer.
- Distributed Gateways: Every Node can act as a Gateway to provide external network connectivity.
- Namespaced Gateways: Every Namespace can have a dedicated Gateway for Egress traffic.
- Direct External Connectivity: Pod IPs can be exposed to the external network directly.
- BGP Support: Pod/Subnet IPs can be advertised to the external network via the BGP routing protocol.
- Traffic Mirror: Duplicate container network traffic for monitoring, diagnosis and replay.
- Hardware Offload: Boost network performance and save CPU resources by offloading the OVS flow table to hardware.
- DPDK Support: DPDK applications can now run in Pods with OVS-DPDK.
- Policy-based QoS
- High performance kernel datapath
- Namespaced VPC
- Kubevirt/Kata optimization
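As a concrete illustration of namespaced Subnets, subnet isolation, static IPs and QoS, below is a minimal sketch of a Subnet resource and an annotated Pod. It assumes the kubeovn.io/v1 Subnet CRD and the ovn.kubernetes.io Pod annotations; the names, CIDRs and rate values are illustrative, and exact fields may vary across Kube-OVN versions.

```yaml
# Sketch of a namespaced, isolated Subnet; values are illustrative.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: app-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16
  gateway: 10.66.0.1
  excludeIps:
  - 10.66.0.1..10.66.0.10
  namespaces:               # Namespaces bound to this Subnet
  - app-ns
  private: true             # deny traffic from outside the Subnet
  allowSubnets:             # whitelisted CIDRs that may still reach it
  - 10.16.0.0/16
---
# A Pod requesting a static IP and ingress/egress rate limits via annotations.
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod
  namespace: app-ns
  annotations:
    ovn.kubernetes.io/ip_address: 10.66.0.100   # static IP from the Subnet above
    ovn.kubernetes.io/ingress_rate: "10"        # Mbit/s
    ovn.kubernetes.io/egress_rate: "5"          # Mbit/s
spec:
  containers:
  - name: app
    image: nginx:alpine
```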
The Switch, Router and Firewall shown in the diagram below are all distributed across all Nodes; there is no single point of failure in the in-cluster network.
Kube-OVN offers Prometheus integration with Grafana dashboards to visualize network quality.
Kube-OVN is easy to install, with all necessary components/dependencies included. If you already have a Kubernetes cluster without any CNI plugin, please refer to the Installation Guide.
If you want to install Kubernetes from scratch, you can try kubespray, or for Chinese users, kubeasz, to deploy a production-ready Kubernetes cluster with Kube-OVN embedded.
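For reference, a typical quick install is a single script applied to an existing cluster. The sketch below assumes the install.sh script shipped in the Kube-OVN repository; the exact branch/URL depends on the release you choose, so check the Installation Guide for the correct one.

```bash
# A minimal sketch of a one-script install; the URL/branch is an assumption,
# pick the one matching your desired Kube-OVN release.
wget https://raw.githubusercontent.com/kubeovn/kube-ovn/master/dist/images/install.sh
# Optionally edit variables such as POD_CIDR/SVC_CIDR in install.sh before running.
bash install.sh
```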
- Namespaced Subnets
- Subnet Isolation
- Static IP
- Pod NAT and EIP
- Dynamic QoS
- Subnet Gateway and Direct connect
- Pod Gateway
- Multi-Cluster Network
- Network interconnection with OpenStack
- BGP support
- Multi NIC Support
- Hardware Offload
- Vlan/Underlay Support
- DPDK Support
- Traffic Mirror
- IPv6
- DualStack
- VPC
- Tracing/Diagnose/Dump Traffic with Kubectl Plugin (sample commands after this list)
- Prometheus Integration
- Metrics
- Performance Tuning
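To illustrate the tracing/diagnosis tooling referenced in the list above, here are a few sample invocations of the kubectl-ko plugin. The pod name and addresses are placeholders, and argument forms may differ by version, so consult the plugin document above.

```bash
# Check the overall health of Kube-OVN components, nodes and subnets.
kubectl ko diagnose all

# Trace how an ICMP packet from a pod to a destination IP is forwarded.
kubectl ko trace default/my-pod 8.8.8.8 icmp

# Dump a pod's traffic; extra arguments are passed through to tcpdump.
kubectl ko tcpdump default/my-pod -nn icmp

# Inspect the OVN northbound database (logical switches, routers, load balancers).
kubectl ko nbctl show
```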
We are looking forward to your PR!
- Q: How about the scalability of Kube-OVN?

  A: We have simulated 200 Nodes with 10k Pods using kubemark, and it works fine. Some community users have deployed a cluster with 250+ Nodes and 3k+ Pods in production. We have not yet reached the limit, but we don't have enough resources to find it.
- Q: What's the Addressing/IPAM? Node-specific or cluster-wide?

  A: Kube-OVN uses a cluster-wide IPAM; a Pod's address can float to any node in the cluster.
- Q: What's the encapsulation?

  A: For overlay mode, Kube-OVN uses Geneve/VXLAN to encapsulate packets between nodes. For VLAN/Underlay mode there is no encapsulation.
Different CNI implementations have different function scopes and network topologies; no single implementation can resolve all network problems. In this section, we compare Kube-OVN with some other options to help users assess which network fits their infrastructure.
ovn-kubernetes is developed by the OVN community to integrate OVN with Kubernetes. As both projects use OVN/OVS as the data plane, they share similar function sets and architecture. The main differences come from the network topology and gateway implementation.
ovn-kubernetes implements a subnet-per-node network topology. That means each node has a fixed CIDR range, and IP allocation is handled by each node when the pod is started by kubelet.
Kube-OVN implements a subnet-per-namespace network topology. That means a CIDR can span all nodes in the cluster, and IP allocation is handled centrally by kube-ovn-controller. Kube-OVN can then apply many network configurations at the subnet level, such as cidr, gateway, exclude_ips, nat and so on. This topology also gives Kube-OVN more control over how IPs are allocated; on top of it, Kube-OVN can allocate static IPs for workloads.
We believe the subnet-per-namespace topology gives more flexibility for the network to evolve.
On the gateway side, ovn-kubernetes uses the native OVN gateway concept to control traffic. The native OVN gateway relies on a dedicated NIC, or needs to move the NIC's IP to another device in order to bind the NIC to the OVS bridge. This implementation can achieve better performance; however, not all environments meet the network requirements, especially in the cloud.
Kube-OVN implements the gateway functions with policy routing, ipset and iptables, entirely in software, which fits more infrastructures and gives more flexibility.
Calico is an open-source networking and network security solution for containers, virtual machines, and native host-based workloads. It's known for its good performance and security policy.
The main difference from a design point of view is the encapsulation method. Calico uses no encapsulation or lightweight IPIP encapsulation, while Kube-OVN uses Geneve to encapsulate packets. No encapsulation achieves better network performance in both throughput and latency. However, since this method exposes the pod network directly to the underlay network, it comes with a burden in deployment and maintenance. In some managed network environments where BGP and IPIP are not allowed, encapsulation is a must.
Using encapsulation lowers the requirements on the networking infrastructure and logically isolates containers from the underlay network. Overlay technology can be used to build more complex network concepts, like routers, gateways and VPCs. For performance, OVS can make use of hardware offload and DPDK to improve throughput and latency.
Kube-OVN can also work in non-encapsulation mode, using underlay switches to switch packets, or hardware offload to achieve better performance than the kernel datapath.
In terms of function set, Kube-OVN offers additional abilities like static IP, QoS and traffic mirroring. The Subnet in Kube-OVN and the IPPool in Calico share a similar function set.