This page describes a suite of Kubernetes e2e tests that will be used for Kubernetes IPv6 Continuous Integration (CI) testing, and can also be used for testing IPv6 networking conformance on an IPv6-only, multi-node Kubernetes cluster.
The test cases included in this suite are all "out-of-the-box" Kubernetes e2e test cases; that is, they are available upstream. Running this test suite is therefore a matter of selecting the right test cases with "--ginkgo.focus" and "--ginkgo.skip" regular expressions on the command line, as described in the Kubernetes e2e test documentation, plus a "--num-nodes=2" flag for the Kubernetes services e2e tests that require two or more worker nodes.
Some of the steps described below assume the topology shown in the following diagram, but various other topologies can be tested with slight variations in the steps:
It is expected that the list of test cases that are included in this suite will grow over time to improve test coverage and make the testing more comprehensive. Some things to consider before adding a test case:
- Does the test work in an IPv6 cluster (i.e., has it been debugged)?
- Is it a meaningful test of IPv6 functionality?
- Is it fairly quick to run? (important for keeping CI test queues reasonable)
If there is an IPv4-specific Kubernetes e2e networking test case that should be excluded from testing on an IPv6-only cluster, then the test should be marked as being an IPv4 test case by adding the following tag in the test description:
[Feature:Networking-IPv4]
For example, there is a 'ping 8.8.8.8' test that has been disabled for IPv6 testing as follows:
```go
It("should provide Internet connection for containers [Feature:Networking-IPv4]", func() {
```
Any tests with this tag can be excluded from e2e testing by including "IPv4" as part of the --ginkgo.skip regular expression on the e2e test command line (see "e2e Test Command Line" below).
Conversely, if there is an IPv6-specific Kubernetes e2e networking test case that should be excluded from testing on an IPv4-only cluster, then the test case should be marked with the following tag in the test description:
[Feature:Networking-IPv6][Experimental]
For example:
```go
It("should provide Internet connection for containers [Feature:Networking-IPv6][Experimental]", func() {
```
Any test with this tag can be excluded from e2e testing by including "IPv6" as part of the --ginkgo.skip regular expression on the e2e test command line (see "e2e Test Command Line" below).
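As a quick illustration (not part of the e2e framework itself), the effect of such a skip regex can be sketched with grep, since --ginkgo.skip behaves like a regular-expression match against the full test description:

```shell
# Sample test descriptions (a hypothetical list for illustration only).
tests='should provide Internet connection for containers [Feature:Networking-IPv4]
should function for intra-pod communication: http [Conformance]
should provide Internet connection for containers [Feature:Networking-IPv6][Experimental]'

# Excluding "IPv4" drops only the IPv4-tagged test, leaving the rest:
remaining=$(echo "$tests" | grep -Ev 'IPv4')
echo "$remaining"
```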
There are a few methods that can be used to set up a multi-node, IPv6-only Kubernetes cluster:
- Use the scripts from the Mirantis/kubeadm-dind-cluster repo to create a containerized, multi-node IPv6-only cluster running either on a local Linux host or on a Google Compute Engine (GCE) instance. For example, to create a containerized IPv6 cluster on a local host:

  ```shell
  cd
  git clone https://github.com/Mirantis/kubeadm-dind-cluster.git
  cd $HOME/kubeadm-dind-cluster
  export REMOTE_DNS64_V4SERVER=<your-local-IPv4-DNS-server-IP-address>
  export IP_MODE=ipv6
  ./fixed/dind-cluster-v1.10.sh up
  ```
- Manually configure a multi-node cluster on bare metal nodes or VMs using the step-by-step instructions in the kube-v6 repo. These instructions can be easily modified, for example, to bring up an IPv6-only cluster with your favorite (IPv6-capable) CNI plugin, or with your topology of choice.
- Use scripts from the Lazyjack repo to instantiate a multi-node, IPv6-only Kubernetes cluster on bare-metal nodes.
Running the IPv6 e2e Test Suite on a Local, Containerized Cluster That was Instantiated via Mirantis/kubeadm-dind-cluster
After spinning up a local, containerized IPv6-only cluster using Mirantis/kubeadm-dind-cluster, the IPv6 e2e test suite can be run either via the dind-cluster.sh script:
```shell
../dind-cluster.sh e2e 'Networking|Services' 'IPv4|DNS|Networking-Performance|Federation|functioning NodePort|preserve source pod'
```
or by running the tests through the Kubernetes e2e utility, e.g.:
```shell
cd $GOPATH/src/k8s.io/kubernetes
go run hack/e2e.go -- --provider=local --v 4 --test --test_args="--ginkgo.focus=Networking|Services --ginkgo.skip=IPv4|DNS|Networking-Performance|Federation|functioning\sNodePort|preserve\ssource\spod --num-nodes=2"
```
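The focus/skip strings can get unwieldy. One way to keep them readable is to assemble them in shell variables first; a sketch (the regex values are the ones used above, with `\s` escapes standing in for spaces inside test names):

```shell
# Build up the ginkgo filter regexes used in the command above.
FOCUS='Networking|Services'
SKIP='IPv4|DNS|Networking-Performance|Federation|functioning\sNodePort|preserve\ssource\spod'
TEST_ARGS="--ginkgo.focus=${FOCUS} --ginkgo.skip=${SKIP} --num-nodes=2"

# Echo rather than execute, so the final command can be inspected first:
echo go run hack/e2e.go -- --provider=local --v 4 --test --test_args=\"${TEST_ARGS}\"
```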
The following instructions explain how to connect from a Linux-based build/test server to a multi-node, IPv6-only Kubernetes cluster that is instantiated on bare-metal nodes or VMs. These instructions rely on connectivity to the cluster via the Kubernetes API service IP, although any IPv6 address on the Kubernetes master node that is accessible by the build/test server can be used instead (using the associated API port 6443 instead of service port 443).
If you haven't already done so, copy the Kubernetes config file and the kubectl binary from your kube-master to a Linux host that will function as an external build/test server. The following script assumes that you have password-less ssh access to kube-master for user "kube" (but no scp access for root):
```shell
#!/bin/bash
KUBE_USER=kube
KUBE_MASTER=kube-master

echo "ssh into $KUBE_MASTER and copy Kubernetes config and kubectl to $KUBE_USER home directory"
# Note: \$(id -u) and \$(id -g) are escaped so that they expand on kube-master,
# not on the build server.
ssh $KUBE_USER@$KUBE_MASTER << EOT
mkdir -p /home/$KUBE_USER/.kube
yes | sudo cp -f /etc/kubernetes/admin.conf /home/$KUBE_USER/.kube/config
sudo chown \$(id -u):\$(id -g) /home/$KUBE_USER/.kube/config
yes | sudo cp -f /bin/kubectl /home/$KUBE_USER/.kube
EOT

echo "scp Kubernetes config from $KUBE_MASTER to $HOME/.kube/config"
mkdir -p $HOME/.kube
scp $KUBE_USER@$KUBE_MASTER:/home/$KUBE_USER/.kube/config $HOME/.kube
chown $(id -u):$(id -g) $HOME/.kube/config
scp $KUBE_USER@$KUBE_MASTER:/home/$KUBE_USER/.kube/kubectl $HOME/.kube
sudo cp $HOME/.kube/kubectl /bin/kubectl
```
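As a quick sanity check before using the copied config, the server URL in the kubeconfig should use bracketed IPv6 literal syntax. A sketch of the check (the kubeconfig content here is a hypothetical minimal example; in practice, grep your real $HOME/.kube/config the same way):

```shell
# Hypothetical minimal kubeconfig fragment, used only to illustrate the check.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://[fd00:1234::1]:443
  name: kubernetes
EOF

# IPv6 API server addresses must be enclosed in square brackets:
server=$(grep -o 'https://\[[^]]*\]:[0-9]*' /tmp/sample-kubeconfig)
echo "$server"
```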
Confirm that you can access the Kubernetes API server from the build server using the kubectl client:
```
some-user@build-server:~$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
kube-master     Ready     master    6d        v1.9.0-alpha.0.ipv6.0.1+23df37a5a1b7d7-dirty
kube-minion-1   Ready     <none>    6d        v1.9.0-alpha.0.ipv6.0.1+23df37a5a1b7d7-dirty
kube-minion-2   Ready     <none>    6d        v1.9.0-alpha.0.ipv6.0.1+23df37a5a1b7d7-dirty
some-user@build-server:~$
```
If you don't get a response, check that you've copied the Kubernetes config file correctly from the kube-master to $HOME/.kube/config (previous step), and check that you have the required routes from your build node to the Kubernetes API service IP.
A quick check of TCP connectivity to the API service IP can be made with curl:

```
some-user@build-server:~$ curl -g [fd00:1234::1]:443 | od -c -a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14    0    14    0     0    265      0 --:--:-- --:--:-- --:--:--   269
0000000 025 003 001  \0 002 002  \n
        nak etx soh nul stx stx  nl
0000007
some-user@build-server:~$
```
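The seven bytes in the od dump above are a TLS alert record: byte 025 (0x15) is the TLS alert content type, 003 001 is protocol version TLS 1.0, \0 002 is a 2-byte payload length, and 002 \n (2, 10) is a fatal "unexpected_message" alert, which is the expected reply when plaintext HTTP is sent to a TLS port. In other words, this output actually confirms that the API server is reachable and speaking TLS. The record can be reproduced locally:

```shell
# Reconstruct the same 7-byte TLS alert (fatal, unexpected_message) and dump it
# the same way the curl example above does:
alert=$(printf '\025\003\001\000\002\002\012' | od -An -c | tr -s ' ')
echo "$alert"
```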
If you don't get a response, check that you have the required routes from your build node to the Kubernetes API service IP.
The e2e tests can be run as follows:
```shell
export KUBECONFIG=$HOME/.kube/config
export KUBE_MASTER=local
export KUBE_MASTER_IP="[fd00:1234::1]:443"
export KUBERNETES_CONFORMANCE_TEST=n
cd $GOPATH/src/k8s.io/kubernetes
go run hack/e2e.go -- --provider=local --v --test --test_args="--host=https://[fd00:1234::1]:443 --ginkgo.focus=Networking|Services --ginkgo.skip=IPv4|DNS|Networking-Performance|Federation|functioning\sNodePort|preserve\ssource\spod --num-nodes=2"
```
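The --host value is simply the Kubernetes API service IP in bracketed IPv6 form; a small sketch of composing it from the address used in the exports above (substitute your own cluster's service IP):

```shell
# Compose the --host argument from the API service IP of this example topology.
API_IP='fd00:1234::1'
HOST="https://[${API_IP}]:443"
echo "$HOST"
```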
An explanation of some of the fields used in this command set:
- Kubernetes API Service IP: fd00:1234::1
- INCLUDE test cases with the following phrases/words in their descriptions:
- "Networking"
- "Services"
- But EXCLUDE test cases with the following phrases/words in their descriptions:
- "IPv4"
- "DNS"
- "Networking-Performance"
- "Federation"
- "functioning NodePort"
- "preserve source pod"
- Number of worker nodes to use for testing: 2 (Min required for some service tests)
There is a Kubernetes pull request under review that introduces pre-commit and post-commit Kubernetes IPv6 CI test jobs.
Once this PR is merged, the Kubernetes IPv6 CI test jobs will run the test suite described in this documentation.
Description | Sample Test Time (seconds)* |
---|---|
Network Connectivity Test Cases | |
[It] should function for node-pod communication: udp [Conformance] | 63.557 |
[It] should function for node-pod communication: http [Conformance] | 61.757 |
[It] should function for intra-pod communication: http [Conformance] | 55.885 |
[It] should function for intra-pod communication: udp [Conformance] | 59.329 |
[It] should provide unchanging, static URL paths for kubernetes api services | 9.420 |
Services Test Cases | |
[It] should function for pod-Service: udp | 66.270 |
[It] should be able to change the type from ExternalName to ClusterIP | 8.993 |
[It] should update endpoints: udp | 149.118 |
[It] should be able to change the type from ClusterIP to ExternalName | 9.112 |
[It] should update nodePort: udp [Slow] | 167.113 |
[It] should function for node-Service: http | 68.807 |
[It] should check NodePort out-of-range | 9.091 |
[It] should be able to change the type from ExternalName to NodePort | 9.079 |
[It] should check kube-proxy urls | 62.688 |
[It] should function for pod-Service: http | 86.556 |
[It] should use same NodePort with same port but different protocols | 9.205 |
[It] should serve multiport endpoints from pods [Conformance] | 74.567 |
[It] should update nodePort: http [Slow] | 152.895 |
[It] should release NodePorts on delete | 17.233 |
[It] should be able to change the type from NodePort to ExternalName | 9.308 |
[It] should function for node-Service: udp | 91.738 |
[It] should update endpoints: http | 155.233 |
[It] should prevent NodePort collisions | 9.892 |
[It] should provide secure master service [Conformance] | 9.044 |
[It] should be able to update NodePorts with two same port numbers but different protocols | 6.273 |
[It] should create endpoints for unready pods ** | 27.295 |
[It] should function for endpoint-Service: http | 73.971 |
[It] should function for endpoint-Service: udp | 82.004 |
[It] should serve a basic endpoint from pods [Conformance] | 61.248 |
[It] should function for client IP based session affinity: http | 85.018 |
TOTAL TEST TIME | 30 min 20 secs |
* Sample test times are a rough guideline. These test times were taken on a fairly slow virtualized Kubernetes cluster: CentOS VirtualBox guests on an Ubuntu 16.04 host.
** The "should create endpoints for unready pods" test exhibits occasional failures, with failures occurring in about 5% to 10% of the runs. These (somewhat rare) failures are under investigation.
Test Area | Description | Comment/Issue |
---|---|---|
DNS | [It] should provide DNS for services [Conformance] | kubernetes/kubernetes#62883 |
Network Connectivity | [It] should provide Internet connection for containers [Feature:Networking-IPv6][Experimental] | Intermittently failing, with failures occurring in about 20-30% of test runs. The failures appear to be correlated with a kube-dns crash loop that occasionally occurs. |
Performance | [It] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients) | Intermittently failing. Could be an issue with insufficient performance for containerized Kubernetes clusters on GCE instances. |
Test Area | Description | Comment |
---|---|---|
Services | [It] should preserve source pod IP for traffic thru service cluster IP | Masquerading is enabled in Bridge CNI plugin, so pod source IPs will not be preserved |
Services | [It] should be able to create a functioning NodePort service | For testing on a GCE instance, only IPv4 external IPs are available. Therefore, stateless NAT46 would be required to test NodePort service on an IPv6-only cluster using an external, IPv4 address. |
Test Area | Description |
---|---|
Service LoadBalancer | [It] should support simple GET on Ingress ips [Feature:ServiceLoadBalancer] |
NEW TEST TO BE WRITTEN | [It] should support kube-dns probes of type SRV [Feature:KubeDNS] |