
Using jobSpec for queue e2e. #438

Merged · 1 commit merged on Oct 16, 2018

Conversation

k82cn
Contributor

@k82cn k82cn commented Oct 15, 2018

Signed-off-by: Da K. Ma klaus1982.cn@gmail.com

part of #425
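
At a high level, the change has the queue e2e cases describe each test job through a small jobSpec helper instead of building the pods inline. The sketch below only illustrates that shape; the type name, fields, and defaults are assumptions for illustration and are not copied from this PR's diff.

// Illustrative sketch only -- names and fields are assumptions, not this PR's actual code.
package e2e

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// jobSpec describes one batch job that a queue e2e case asks the scheduler to place.
type jobSpec struct {
	name      string          // job name
	namespace string          // namespace used as the queue
	image     string          // container image for each task pod
	replicas  int32           // number of task pods in the job
	req       v1.ResourceList // per-task resource request
}

// defaultJobSpec returns a small job suitable for most queue tests.
func defaultJobSpec(name, namespace string) jobSpec {
	return jobSpec{
		name:      name,
		namespace: namespace,
		image:     "busybox",
		replicas:  2,
		req: v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("100m"),
			v1.ResourceMemory: resource.MustParse("64Mi"),
		},
	}
}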

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Oct 15, 2018
@k8s-ci-robot k8s-ci-robot requested a review from jinzhejz October 15, 2018 05:40
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: k82cn

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 15, 2018
@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   726k      0 --:--:-- --:--:-- --:--:--  727k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m9.058s
user	0m0.544s
sys	0m0.396s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 0b7f99080cd3
Removing intermediate container 6c5c1f0db7c5
Successfully built 0b7f99080cd3
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I1015 05:44:07.590424     527 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:44:07.619621     527 kernel_validator.go:81] Validating kernel version
I1015 05:44:07.619764     527 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001769 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: 6myh15.cdq6etasugohnpge
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token 6myh15.cdq6etasugohnpge --discovery-token-ca-cert-hash sha256:7fd73a7d2dad424c4b464161b092f2fe5eb7d757f3a1435ac957e6f707b702d9


real	1m6.722s
user	0m5.692s
sys	0m0.152s
f660de12b1d9
854c7ed8451d
001fca6c9f86
3440411c137b
f0747da904b4
c2e1f942ca3e
03353f647f14
4102a0ea42a1
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 6myh15.cdq6etasugohnpge --discovery-token-ca-cert-hash sha256:7fd73a7d2dad424c4b464161b092f2fe5eb7d757f3a1435ac957e6f707b702d9
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 6myh15.cdq6etasugohnpge --discovery-token-ca-cert-hash sha256:7fd73a7d2dad424c4b464161b092f2fe5eb7d757f3a1435ac957e6f707b702d9
Initializing machine ID from random generator.
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 6myh15.cdq6etasugohnpge --discovery-token-ca-cert-hash sha256:7fd73a7d2dad424c4b464161b092f2fe5eb7d757f3a1435ac957e6f707b702d9
Initializing machine ID from random generator.
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m40.905s
user	0m0.652s
sys	0m0.368s

real	0m40.918s
user	0m0.692s
sys	0m0.364s

real	0m41.140s
user	0m0.652s
sys	0m0.372s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 04bae8991b0e
 ---> ae05f400db05
 ---> f4f6f0516804
Removing intermediate container 0927499ba075
Successfully built 04bae8991b0e
Removing intermediate container f39604cb4ed1
Removing intermediate container aec04b973bad
Successfully built ae05f400db05
Successfully built f4f6f0516804
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:48:22.482550     501 kernel_validator.go:81] Validating kernel version
I1015 05:48:22.482843     501 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:48:22.736592     503 kernel_validator.go:81] Validating kernel version
I1015 05:48:22.737318     503 kernel_validator.go:96] Validating kernel config
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:48:22.763017     502 kernel_validator.go:81] Validating kernel version
I1015 05:48:22.767993     502 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "6myh15" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m28.743s
user	0m0.572s
sys	0m0.096s
* Node joined: 3

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m28.674s
user	0m0.612s
sys	0m0.056s
* Node joined: 2

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m28.956s
user	0m0.588s
sys	0m0.064s
* Node joined: 1
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-g6h7k" deleted
pod "kube-proxy-jbvgn" deleted
pod "kube-proxy-jmr59" deleted
pod "kube-proxy-nj4p5" deleted
NAME                        READY     STATUS              RESTARTS   AGE
etcd-kube-master            1/1       Running             0          2m
kube-dns-86c47599bd-92q86   3/3       Terminating         0          19s
kube-proxy-5nmhn            0/1       ContainerCreating   0          1s
kube-proxy-894bl            1/1       Running             0          1s
kube-proxy-kk8vt            1/1       Running             0          2s
kube-proxy-p4pms            0/1       ContainerCreating   0          1s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
...[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
.....................[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    5m        v1.11.0
kube-node-1   Ready     <none>    1m        v1.11.0
kube-node-2   Ready     <none>    1m        v1.11.0
kube-node-3   Ready     <none>    1m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
84f10a67c03f
* Removing container: e0871ec1e422
39c10fd:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (052026b1-d03e-11e8-a72d-46a9a39c10fd:kube-system/kube-proxy-kk8vt): job 7cec8acf-d03d-11e8-ba3e-46a9a39c10fd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (2d46bf9b-d03e-11e8-819e-46a9a39c10fd:kube-system/kubernetes-dashboard-54f47d4878-lth67): job 006232f5-d03e-11e8-a72d-46a9a39c10fd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (05593b9d-d03e-11e8-a72d-46a9a39c10fd:kube-system/kube-proxy-894bl): job 7cec8acf-d03d-11e8-ba3e-46a9a39c10fd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 05:50:30.700645   19603 reclaim.go:42] Enter Reclaim ...
I1015 05:50:30.700652   19603 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 05:50:30.700662   19603 reclaim.go:189] Leaving Reclaim ...
I1015 05:50:30.700666   19603 allocate.go:42] Enter Allocate ...
I1015 05:50:30.700671   19603 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:50:30.700677   19603 allocate.go:155] Leaving Allocate ...
I1015 05:50:30.700682   19603 preempt.go:44] Enter Preempt ...
I1015 05:50:30.700688   19603 preempt.go:145] Leaving Preempt ...
I1015 05:50:30.700692   19603 session.go:103] Close Session 398e, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (2d4a0a75-d03e-11e8-819e-46a9a39c10fd:kube-system/kube-dns-57f756cc64-qnkh8): job 00f6c7f8-d03e-11e8-a72d-46a9a39c10fd, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
I1015 05:50:32.812655   19603 reclaim.go:42] Enter Reclaim ...
I1015 05:50:32.812664   19603 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 05:50:32.812677   19603 reclaim.go:189] Leaving Reclaim ...
I1015 05:50:32.812683   19603 allocate.go:42] Enter Allocate ...
I1015 05:50:32.812690   19603 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:50:32.812700   19603 allocate.go:155] Leaving Allocate ...
I1015 05:50:32.812706   19603 preempt.go:44] Enter Preempt ...
I1015 05:50:32.812714   19603 preempt.go:145] Leaving Preempt ...
I1015 05:50:32.812721   19603 session.go:103] Close Session 3ad0f4df-d03e-11e8-afcd-42010a140028
make: *** [e2e] Error 2
TravisBuddy Request Identifier: 3cd434d0-d03e-11e8-8e29-6bd38fa4cac4

@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   735k      0 --:--:-- --:--:-- --:--:--  742k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m7.169s
user	0m0.576s
sys	0m0.368s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> a2d2f4845630
Removing intermediate container 441703d1cf1b
Successfully built a2d2f4845630
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I1015 05:47:31.209853     524 feature_gate.go:230] feature gates: &{map[]}
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:47:31.240738     524 kernel_validator.go:81] Validating kernel version
I1015 05:47:31.240815     524 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 40.501661 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: nnwl3w.6obata665yvgscms
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token nnwl3w.6obata665yvgscms --discovery-token-ca-cert-hash sha256:6b0ebef0ddf527b933b06ea6f8bdb58fd83e85c13a10f8cafb59cf38a6d2a92b


real	1m5.577s
user	0m4.564s
sys	0m0.192s
898dfc4a725a
59e1237051e1
1c6f960631c2
f49137fa86d0
8ed14d0f5da6
be5aedfb04f5
91f08d4e48b0
fd1b59be46a3
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token nnwl3w.6obata665yvgscms --discovery-token-ca-cert-hash sha256:6b0ebef0ddf527b933b06ea6f8bdb58fd83e85c13a10f8cafb59cf38a6d2a92b
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token nnwl3w.6obata665yvgscms --discovery-token-ca-cert-hash sha256:6b0ebef0ddf527b933b06ea6f8bdb58fd83e85c13a10f8cafb59cf38a6d2a92b
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token nnwl3w.6obata665yvgscms --discovery-token-ca-cert-hash sha256:6b0ebef0ddf527b933b06ea6f8bdb58fd83e85c13a10f8cafb59cf38a6d2a92b
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m40.425s
user	0m0.628s
sys	0m0.372s

real	0m40.802s
user	0m0.612s
sys	0m0.368s

real	0m40.466s
user	0m0.624s
sys	0m0.348s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 89435653b934
 ---> 2cb001d10482
 ---> fa663f9cc94b
Removing intermediate container 464452c0a934
Successfully built 89435653b934
Removing intermediate container 76ab8d5f385a
Successfully built 2cb001d10482
Removing intermediate container 74d29900c9ec
Successfully built fa663f9cc94b
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:51:45.673399     504 kernel_validator.go:81] Validating kernel version
I1015 05:51:45.673699     504 kernel_validator.go:96] Validating kernel config
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:51:45.944912     503 kernel_validator.go:81] Validating kernel version
I1015 05:51:45.945028     503 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:51:46.111824     502 kernel_validator.go:81] Validating kernel version
I1015 05:51:46.111921     502 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "nnwl3w" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m29.104s
user	0m0.580s
sys	0m0.052s
* Node joined: 1
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m29.432s
user	0m0.544s
sys	0m0.080s
* Node joined: 2
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m31.388s
user	0m0.544s
sys	0m0.076s
* Node joined: 3
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-4fx2c" deleted
pod "kube-proxy-d98lt" deleted
pod "kube-proxy-qkgdt" deleted
pod "kube-proxy-vs59m" deleted
WARNING: cluster glitch: proxy pods aren't removed; pods may 'blink' for some time after restore
NAME               READY     STATUS    RESTARTS   AGE
etcd-kube-master   1/1       Running   0          2m
kube-proxy-59v54   1/1       Running   0          24s
kube-proxy-wmzxg   1/1       Running   0          20s
kube-proxy-zfw2x   1/1       Running   0          28s
kube-proxy-zqcrk   1/1       Running   0          25s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
...[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
...................[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    5m        v1.11.0
kube-node-1   Ready     <none>    2m        v1.11.0
kube-node-2   Ready     <none>    2m        v1.11.0
kube-node-3   Ready     <none>    2m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
# github.com/kubernetes-sigs/kube-batch/test/e2e
test/e2e/util.go:183:6: no new variables on left side of :=
FAIL	github.com/kubernetes-sigs/kube-batch/test/e2e [build failed]
* Removing container: 9a5fa8c0fdde
9a5fa8c0fdde
* Removing container: d8c1b3c7121d
d8c1b3c7121d
* Removing container: fdaf637125bf
fdaf637125bf
* Removing container: 971f88f65d16
971f88f65d16
hack/run-e2e.sh: line 32: 20175 Killed                  nohup ${KA_BIN}/kube-batch --kubeconfig ${HOME}/.kube/config --enable-namespace-as-queue=${ENABLE_NAMESPACES_AS_QUEUE} --logtostderr --v ${LOG_LEVEL} > scheduler.log 2>&1
====================================================================================
=============================>>>>> Scheduler Logs <<<<<=============================
====================================================================================
I1015 05:54:18.947692   20175 flags.go:52] FLAG: --alsologtostderr="false"
I1015 05:54:18.947739   20175 flags.go:52] FLAG: --enable-namespace-as-queue="false"
I1015 05:54:18.947746   20175 flags.go:52] FLAG: --kubeconfig="/home/travis/.kube/config"
I1015 05:54:18.947752   20175 flags.go:52] FLAG: --leader-elect="false"
I1015 05:54:18.947757   20175 flags.go:52] FLAG: --lock-object-namespace=""
I1015 05:54:18.947762   20175 flags.go:52] FLAG: --log-backtrace-at=":0"
I1015 05:54:18.947770   20175 flags.go:52] FLAG: --log-dir=""
I1015 05:54:18.947775   20175 flags.go:52] FLAG: --log-flush-frequency="5s"
I1015 05:54:18.947781   20175 flags.go:52] FLAG: --logtostderr="true"
I1015 05:54:18.947786   20175 flags.go:52] FLAG: --master=""
I1015 05:54:18.947790   20175 flags.go:52] FLAG: --schedule-period="1s"
I1015 05:54:18.947795   20175 flags.go:52] FLAG: --scheduler-conf=""
I1015 05:54:18.947799   20175 flags.go:52] FLAG: --scheduler-name="kube-batch"
I1015 05:54:18.947807   20175 flags.go:52] FLAG: --stderrthreshold="2"
I1015 05:54:18.947812   20175 flags.go:52] FLAG: --v="3"
I1015 05:54:18.947816   20175 flags.go:52] FLAG: --vmodule=""
I1015 05:54:18.949749   20175 reflector.go:202] Starting reflector *v1beta1.PodDisruptionBudget (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 05:54:18.949778   20175 reflector.go:240] Listing and watching *v1beta1.PodDisruptionBudget from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 05:54:18.950732   20175 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 05:54:18.950750   20175 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 05:54:18.953583   20175 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 05:54:18.953602   20175 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 05:54:18.955780   20175 reflector.go:202] Starting reflector *v1alpha1.PodGroup (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 05:54:18.955797   20175 reflector.go:240] Listing and watching *v1alpha1.PodGroup from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 05:54:18.956457   20175 reflector.go:202] Starting reflector *v1alpha1.Queue (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:249
I1015 05:54:18.956475   20175 reflector.go:240] Listing and watching *v1alpha1.Queue from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:249
I1015 05:54:18.978529   20175 event_handlers.go:171] Added pod <kube-system/kube-proxy-wmzxg> into cache.
I1015 05:54:18.978579   20175 event_handlers.go:171] Added pod <kube-system/kube-proxy-zfw2x> into cache.
I1015 05:54:18.978621   20175 event_handlers.go:171] Added pod <kube-system/kube-proxy-zqcrk> into cache.
I1015 05:54:18.978633   20175 event_handlers.go:171] Added pod <kube-system/kubernetes-dashboard-54f47d4878-h22jf> into cache.
I1015 05:54:18.978643   20175 event_handlers.go:171] Added pod <default/kube-apiserver-kube-master> into cache.
I1015 05:54:18.978650   20175 event_handlers.go:171] Added pod <default/kube-scheduler-kube-master> into cache.
I1015 05:54:18.978659   20175 event_handlers.go:171] Added pod <kube-system/etcd-kube-master> into cache.
I1015 05:54:18.978667   20175 event_handlers.go:171] Added pod <kube-system/kube-proxy-59v54> into cache.
I1015 05:54:18.978679   20175 event_handlers.go:171] Added pod <kube-system/kube-dns-57f756cc64-fhh5s> into cache.
I1015 05:54:19.050460   20175 cache.go:466] The scheduling spec of Job <f59e1e49-d03d-11e8-af10-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:19.050502   20175 cache.go:466] The scheduling spec of Job <7a852437-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:19.050521   20175 cache.go:466] The scheduling spec of Job <7b1b5383-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:19.050558   20175 cache.go:485] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:19.050581   20175 session.go:86] Open Session c1aa2610-d03e-11e8-a9d8-42010a140091 with <0> Job and <0> Queues
I1015 05:54:19.050615   20175 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 05:54:19.050675   20175 scheduler.go:87] Session c1aa2610-d03e-11e8-a9d8-42010a140091: 
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (81213492-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zfw2x): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7f0e0f-d03e-11e8-8b83-2252b927b0ae:kube-system/kubernetes-dashboard-54f47d4878-h22jf): job 7a852437-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (2a374d17-d03e-11e8-9241-2252b927b0ae:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 1: Task (44c5f050-d03e-11e8-9241-2252b927b0ae:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 2: Task (85e23ca5-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-wmzxg): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 3: Task (31d42d8f-d03e-11e8-9241-2252b927b0ae:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing [... log truncated ...] spec of Job <7a852437-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:20.051586   20175 cache.go:466] The scheduling spec of Job <7b1b5383-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:20.051595   20175 cache.go:485] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:20.051606   20175 session.go:86] Open Session c242e773-d03e-11e8-a9d8-42010a140091 with <0> Job and <0> Queues
I1015 05:54:20.051692   20175 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 05:54:20.051786   20175 scheduler.go:87] Session c242e773-d03e-11e8-a9d8-42010a140091: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (85e23ca5-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-wmzxg): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (31d42d8f-d03e-11e8-9241-2252b927b0ae:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 2: Task (2a374d17-d03e-11e8-9241-2252b927b0ae:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 3: Task (44c5f050-d03e-11e8-9241-2252b927b0ae:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (82e76d34-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zqcrk): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (bb7ee03c-d03e-11e8-8b83-2252b927b0ae:kube-system/kube-dns-57f756cc64-fhh5s): job 7b1b5383-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
	 1: Task (832bf4f7-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-59v54): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (81213492-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zfw2x): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7f0e0f-d03e-11e8-8b83-2252b927b0ae:kube-system/kubernetes-dashboard-54f47d4878-h22jf): job 7a852437-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 05:54:20.051980   20175 reclaim.go:42] Enter Reclaim ...
I1015 05:54:20.052025   20175 reclaim.go:50] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:20.052039   20175 reclaim.go:189] Leaving Reclaim ...
I1015 05:54:20.052045   20175 allocate.go:42] Enter Allocate ...
I1015 05:54:20.052051   20175 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:54:20.052061   20175 a[... log truncated ...]
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (44c5f050-d03e-11e8-9241-2252b927b0ae:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (85e23ca5-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-wmzxg): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 2: Task (31d42d8f-d03e-11e8-9241-2252b927b0ae:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (2a374d17-d03e-11e8-9241-2252b927b0ae:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (82e76d34-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zqcrk): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (832bf4f7-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-59v54): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7ee03c-d03e-11e8-8b83-2252b927b0ae:kube-system/kube-dns-57f756cc64-fhh5s): job 7b1b5383-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
I1015 05:54:21.052641   20175 reclaim.go:42] Enter Reclaim ...
I1015 05:54:21.052650   20175 reclaim.go:50] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:21.052662   20175 reclaim.go:189] Leaving Reclaim ...
I1015 05:54:21.052668   20175 allocate.go:42] Enter Allocate ...
I1015 05:54:21.052675   20175 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:54:21.052685   20175 allocate.go:155] Leaving Allocate ...
I1015 05:54:21.052691   20175 preempt.go:44] Enter Preempt ...
I1015 05:54:21.052700   20175 preempt.go:145] Leaving Preempt ...
I1015 05:54:21.052707   20175 session.go:103] Close Session c2db9cc0-d03e-11e8-a9d8-42010a140091
I1015 05:54:22.052885   20175 cache.go:466] The scheduling spec of Job <7a852437-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:22.052927   20175 cache.go:466] The scheduling spec of Job <7b1b5383-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:22.052936   20175 cache.go:466] The scheduling spec of Job <f59e1e49-d03d-11e8-af10-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:22.052947   20175 cache.go:485] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:22.052958   20175 session.go:86] Open Session c374497f-d03e-11e8-a9d8-42010a140091 with <0> Job and <0> Queues
I1015 05:54:22.052983   20175 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 05:54:22.053001   20175 scheduler.go:87] Session c374497f-d03e-11e8-a9d8-42010a140091: 
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (832bf4f7-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-59v54): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7ee03c-d03e-11e8-8b83-2252b927b0ae:kube-system/kube-dns-57f756cc64-fhh5s): job 7b1b5383-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (81213492-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zfw2x): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7f0e0f-d03e-11e8-8b83-2252b927b0ae:kube-system/kubernetes-dashboard-54f47d4878-h22jf): job 7a852437-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (31d42d8f-d03e-11e8-9241-2252b927b0ae:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 1: Task (2a374d17-d03e-11e8-9241-2252b927b0ae:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (44c5f050-d03e-11e8-9241-2252b927b0ae:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 3: Task (85e23ca5-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-wmzxg): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (82e76d34-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zqcrk): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 05:54:22.053130   20175 reclaim.go:42] Enter Reclaim ...
I1015 05:54:22.053139   20175 reclaim.go:50] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:22.053150   20175 reclaim.go:189] Leaving Reclaim ...
I1015 05:54:22.053157   20175 allocate.go:42] Enter Allocate ...
I1015 05:54:22.053164   20175 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:54:22.053173   20175 allocate.go:155] Leaving Allocate ...
I1015 05:54:22.053179   20175 preempt.go:44] Enter Preempt ...
I1015 05:54:22.053187   20175 preempt.go:145] Leaving Preempt ...
I1015 05:54:22.053193   20175 session.go:103] Close Session c374497f-d03e-11e8-a9d8-42010a140091
I1015 05:54:23.053384   20175 cache.go:466] The scheduling spec of Job <f59e1e49-d03d-11e8-af10-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:23.053424   20175 cache.go:466] The scheduling spec of Job <7a852437-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:23.053475   20175 cache.go:466] The scheduling spec of Job <7b1b5383-d03e-11e8-9241-2252b927b0ae:/> is nil, ignore it.
I1015 05:54:23.053488   20175 cache.go:485] There are <0> Jobs and <0> Q[... log truncated ...]0, memory 0.00, GPU 0.00>
	 0: Task (832bf4f7-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-59v54): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7ee03c-d03e-11e8-8b83-2252b927b0ae:kube-system/kube-dns-57f756cc64-fhh5s): job 7b1b5383-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (81213492-d03e-11e8-9241-2252b927b0ae:kube-system/kube-proxy-zfw2x): job f59e1e49-d03d-11e8-af10-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (bb7f0e0f-d03e-11e8-8b83-2252b927b0ae:kube-system/kubernetes-dashboard-54f47d4878-h22jf): job 7a852437-d03e-11e8-9241-2252b927b0ae, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 05:54:23.053659   20175 reclaim.go:42] Enter Reclaim ...
I1015 05:54:23.053667   20175 reclaim.go:50] There are <0> Jobs and <0> Queues in total for scheduling.
I1015 05:54:23.053680   20175 reclaim.go:189] Leaving Reclaim ...
I1015 05:54:23.053687   20175 allocate.go:42] Enter Allocate ...
I1015 05:54:23.053694   20175 allocate.go:61] Try to allocate resource to 0 Queues
I1015 05:54:23.053703   20175 allocate.go:155] Leaving Allocate ...
I1015 05:54:23.053709   20175 preempt.go:44] Enter Preempt ...
I1015 05:54:23.053717   20175 preempt.go:145] Leaving Preempt ...
I1015 05:54:23.053724   20175 session.go:103] Close Session c40cf3e3-d03e-11e8-a9d8-42010a140091
make: *** [e2e] Error 2
TravisBuddy Request Identifier: c58508e0-d03e-11e8-8e29-6bd38fa4cac4
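
A note on the failure above: every session in this log opens with <0> Job and <0> Queues, and each workload is skipped with "The scheduling spec of Job <...> is nil, ignore it.", which is consistent with the make: *** [e2e] Error 2 at the end. kube-batch only treats pods as a schedulable job once a PodGroup (the scheduling spec) exists for them. A minimal sketch of such an object, assuming the v1alpha1 schema behind the podgroups.scheduling.incubator.k8s.io CRD registered later in this thread; the name and minMember value are illustrative only, not the values used by the queue e2e's jobSpec:

  kubectl apply -f - <<'EOF'
  apiVersion: scheduling.incubator.k8s.io/v1alpha1
  kind: PodGroup
  metadata:
    name: qj-1           # hypothetical name, not taken from the e2e
  spec:
    minMember: 2         # assumed field: minimal number of pods the group needs before scheduling
  EOF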

@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   746k      0 --:--:-- --:--:-- --:--:--  750k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m9.155s
user	0m0.484s
sys	0m0.376s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 1fbf789b4151
Removing intermediate container 69fb98ac0973
Successfully built 1fbf789b4151
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I1015 05:59:07.913572     526 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 05:59:07.954813     526 kernel_validator.go:81] Validating kernel version
I1015 05:59:07.955023     526 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 40.005336 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: duw6y9.q87kbajaciji2q3m
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token duw6y9.q87kbajaciji2q3m --discovery-token-ca-cert-hash sha256:4548da38674eb12c47a1eae63ba5a7808a06f479c0d915a0518258f0cf8073c9


real	1m6.403s
user	0m5.108s
sys	0m0.192s
9f85e45702b6
c9bce72d34ff
83e8a2c61e64
92568c618110
a980fe3e67d9
f49802f98571
26889484bf1a
108beda874fe
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token duw6y9.q87kbajaciji2q3m --discovery-token-ca-cert-hash sha256:4548da38674eb12c47a1eae63ba5a7808a06f479c0d915a0518258f0cf8073c9
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token duw6y9.q87kbajaciji2q3m --discovery-token-ca-cert-hash sha256:4548da38674eb12c47a1eae63ba5a7808a06f479c0d915a0518258f0cf8073c9
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token duw6y9.q87kbajaciji2q3m --discovery-token-ca-cert-hash sha256:4548da38674eb12c47a1eae63ba5a7808a06f479c0d915a0518258f0cf8073c9
Initializing machine ID from random generator.
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m40.434s
user	0m0.664s
sys	0m0.360s

real	0m40.582s
user	0m0.680s
sys	0m0.312s

real	0m40.686s
user	0m0.664s
sys	0m0.324s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 49baf49b713a
 ---> c8c0ceaf4d76
 ---> 60365aba557f
Removing intermediate container 797de6f35255
Successfully built 49baf49b713a
Removing intermediate container 0b7dcdfd6cc1
Removing intermediate container 838caa0bb222
Successfully built 60365aba557f
Successfully built c8c0ceaf4d76
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
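
The RequiredIPVSKernelModulesAvailable warning above (repeated once per joining node below) only means kube-proxy falls back from the IPVS proxier; preflight errors are ignored in this run and this does not appear to be what fails the build. For completeness, a sketch of the fix the message points at, using the module names from the warning itself; whether the kernel backing the DIND containers allows loading them is an assumption:

  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    sudo modprobe "$m"   # load each IPVS-related module named in the warning
  done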

Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:03:21.637928     513 kernel_validator.go:81] Validating kernel version
I1015 06:03:21.638007     513 kernel_validator.go:96] Validating kernel config
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:03:21.965018     508 kernel_validator.go:81] Validating kernel version
I1015 06:03:21.965386     508 kernel_validator.go:96] Validating kernel config
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:03:21.983167     505 kernel_validator.go:81] Validating kernel version
I1015 06:03:21.985086     505 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "duw6y9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m19.163s
user	0m0.528s
sys	0m0.084s
* Node joined: 3

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m19.013s
user	0m0.500s
sys	0m0.080s
* Node joined: 1

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m19.046s
user	0m0.508s
sys	0m0.060s
* Node joined: 2
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-c9jsk" deleted
pod "kube-proxy-gpjn6" deleted
pod "kube-proxy-krlf7" deleted
pod "kube-proxy-lw7gb" deleted
NAME                        READY     STATUS        RESTARTS   AGE
etcd-kube-master            1/1       Running       0          2m
kube-dns-86c47599bd-kwm4t   3/3       Terminating   0          48s
kube-proxy-25752            1/1       Running       0          14s
kube-proxy-dtxmv            1/1       Running       0          15s
kube-proxy-gb2vd            1/1       Running       0          19s
kube-proxy-l2wsf            1/1       Running       0          15s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
..........[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
...............................................[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    6m        v1.11.0
kube-node-1   Ready     <none>    2m        v1.11.0
kube-node-2   Ready     <none>    2m        v1.11.0
kube-node-3   Ready     <none>    2m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
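
With the two CRDs above registered, the queue e2e can create PodGroup and Queue objects directly. A minimal Queue sketch, assuming the v1alpha1 schema behind queues.scheduling.incubator.k8s.io exposes a weight field, presumably consumed by the proportion plugin seen in these scheduler logs; both the name and the weight are illustrative only:

  kubectl apply -f - <<'EOF'
  apiVersion: scheduling.incubator.k8s.io/v1alpha1
  kind: Queue
  metadata:
    name: q1             # hypothetical queue name
  spec:
    weight: 1            # assumed field used for proportional resource sharing
  EOF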
d1f718458471
* Removing container: 2afb3cd9c057
2afb3cd9c057
* Removing container: 7875d6851415
7875d6851415
====================================================================================
=============================>>>>> Scheduler Logs <<<<<=============================
====================================================================================
I1015 06:06:28.495847   21582 flags.go:52] FLAG: --alsologtostderr="false"
I1015 06:06:28.495895   21582 flags.go:52] FLAG: --enable-namespace-as-queue="true"
I1015 06:06:28.495903   21582 flags.go:52] FLAG: --kubeconfig="/home/travis/.kube/config"
I1015 06:06:28.495908   21582 flags.go:52] FLAG: --leader-elect="false"
I1015 06:06:28.495912   21582 flags.go:52] FLAG: --lock-object-namespace=""
I1015 06:06:28.495915   21582 flags.go:52] FLAG: --log-backtrace-at=":0"
I1015 06:06:28.495921   21582 flags.go:52] FLAG: --log-dir=""
I1015 06:06:28.495925   21582 flags.go:52] FLAG: --log-flush-frequency="5s"
I1015 06:06:28.497815   21582 flags.go:52] FLAG: --logtostderr="true"
I1015 06:06:28.497822   21582 flags.go:52] FLAG: --master=""
I1015 06:06:28.497825   21582 flags.go:52] FLAG: --schedule-period="1s"
I1015 06:06:28.497829   21582 flags.go:52] FLAG: --scheduler-conf=""
I1015 06:06:28.497832   21582 flags.go:52] FLAG: --scheduler-name="kube-batch"
I1015 06:06:28.497851   21582 flags.go:52] FLAG: --stderrthreshold="2"
I1015 06:06:28.497855   21582 flags.go:52] FLAG: --v="3"
I1015 06:06:28.497858   21582 flags.go:52] FLAG: --vmodule=""
I1015 06:06:28.499402   21582 reflector.go:202] Starting reflector *v1beta1.PodDisruptionBudget (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 06:06:28.499424   21582 reflector.go:240] Listing and watching *v1beta1.PodDisruptionBudget from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 06:06:28.499809   21582 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 06:06:28.499818   21582 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 06:06:28.500396   21582 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 06:06:28.500407   21582 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 06:06:28.500924   21582 reflector.go:202] Starting reflector *v1alpha1.PodGroup (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 06:06:28.500936   21582 reflector.go:240] Listing and watching *v1alpha1.PodGroup from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 06:06:28.501536   21582 reflector.go:202] Starting reflector *v1.Namespace (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:247
I1015 06:06:28.501559   21582 reflector.go:240] Listing and watching *v1.Namespace from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:247
I1015 06:06:28.535902   21582 event_handlers.go:171] Added pod <kube-system/kube-proxy-25752> into cache.
I1015 06:06:28.536231   21582 event_handlers.go:171] Added pod <kube-system/kube-proxy-gb2vd> into cache.
I1015 06:06:28.536290   21582 event_handlers.go:171] Added pod <kube-system/kube-proxy-l2wsf> into cache.
I1015 06:06:28.536299   21582 event_handlers.go:171] Added pod <default/kube-scheduler-kube-master> into cache.
I1015 06:06:28.536307   21582 event_handlers.go:171] Added pod <kube-system/etcd-kube-master> into cache.
I1015 06:06:28.536315   21582 event_handlers.go:171] Added pod <kube-system/kube-proxy-dtxmv> into cache.
I1015 06:06:28.536325   21582 event_handlers.go:171] Added pod <kube-system/kubernetes-dashboard-54f47d4878-hpv6f> into cache.
I1015 06:06:28.536338   21582 event_handlers.go:171] Added pod <kube-system/kube-dns-57f756cc64-qp6wj> into cache.
I1015 06:06:28.536357   21582 event_handlers.go:171] Added pod <default/kube-controller-manager-kube-master> into cache.
I1015 06:06:28.536483   21582 event_handlers.go:171] Added pod <default/kube-apiserver-kube-master> into cache.
I1015 06:06:28.599630   21582 cache.go:466] The scheduling spec of Job <9561d68a-d03f-11e8-87ff-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:28.599657   21582 cache.go:466] The scheduling spec of Job <12d7c640-d040-11e8-98c4-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:28.599668   21582 cache.go:466] The scheduling spec of Job <13fc7c87-d040-11e8-98c4-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:28.599675   21582 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:06:28.599682   21582 session.go:86] Open Session 7482869f-d040-11e8-a3f8-42010a1400ae with <0> Job and <3> Queues
I1015 06:06:28.599699   21582 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:06:28.59975[... log truncated ...] reclaim.go:189] Leaving Reclaim ...
I1015 06:06:29.600313   21582 allocate.go:42] Enter Allocate ...
I1015 06:06:29.600317   21582 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:06:29.600323   21582 allocate.go:155] Leaving Allocate ...
I1015 06:06:29.600327   21582 preempt.go:44] Enter Preempt ...
I1015 06:06:29.600332   21582 preempt.go:145] Leaving Preempt ...
I1015 06:06:29.600336   21582 session.go:103] Close Session 751b2f43-d040-11e8-a3f8-42010a1400ae
I1015 06:06:30.600528   21582 cache.go:466] The scheduling spec of Job <12d7c640-d040-11e8-98c4-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:30.600557   21582 cache.go:466] The scheduling spec of Job <13fc7c87-d040-11e8-98c4-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:30.600566   21582 cache.go:466] The scheduling spec of Job <9561d68a-d03f-11e8-87ff-06e096b6f2d4:/> is nil, ignore it.
I1015 06:06:30.600578   21582 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:06:30.600589   21582 session.go:86] Open Session 75b3d5cd-d040-11e8-a3f8-42010a1400ae with <0> Job and <3> Queues
I1015 06:06:30.600620   21582 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:06:30.600640   21582 scheduler.go:87] Session 75b3d5cd-d040-11e8-a3f8-42010a1400ae: 
Node (kube-master): idle <cpu 1450.00, memory 7738339328.00, GPU 0.00>, used <cpu 550.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d85abf5f-d03f-11e8-98c4-06e096b6f2d4:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (6a84a7d3-d04[... log truncated ...]] Open Session 764c80e9-d040-11e8-a3f8-42010a1400ae with <0> Job and <3> Queues
I1015 06:06:31.601277   21582 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:06:31.601309   21582 scheduler.go:87] Session 764c80e9-d040-11e8-a3f8-42010a1400ae: 
Node (kube-master): idle <cpu 1450.00, memory 7738339328.00, GPU 0.00>, used <cpu 550.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (6a84a7d3-d040-11e8-b219-06e096b6f2d4:default/kube-controller-manager-kube-master): job , status Running, pri 1, resreq cpu 200.00, memory 0.00, GPU 0.00
	 1: Task (cd2f0aaa-d03f-11e8-98c4-06e096b6f2d4:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 2: Task (1f50e35a-d040-11e8-98c4-06e096b6f2d4:kube-system/kube-proxy-gb2vd): job 9561d68a-d03f-11e8-87ff-06e096b6f2d4, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 3: Task (cd2f1955-d03f-11e8-98c4-06e096b6f2d4:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 4: Task (d85abf5f-d03f-11e8-98c4-06e096b6f2d4:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (21da0449-d040-11e8-98c4-06e096b6f2d4:kube-system/kube-proxy-l2wsf): job 9561d68a-d03f-11e8-87ff-06e096b6f2d4, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (52895750-d040-11e8-b219-06e09[... log truncated ...]1561   21582 session.go:103] Close Session 764c80e9-d040-11e8-a3f8-42010a1400ae
make: *** [e2e] Error 2
TravisBuddy Request Identifier: 78565bd0-d040-11e8-8e29-6bd38fa4cac4
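
Unlike the first build, this scheduler was started with --enable-namespace-as-queue="true" and its cache watches *v1.Namespace (cache.go:247) rather than *v1alpha1.Queue, so the sessions report <3> Queues even though no Queue objects were created; each namespace is apparently treated as a queue. Under that flag, giving a test its own queue should be just a matter of creating a namespace, for example:

  kubectl create namespace queue-e2e   # hypothetical namespace; it should appear as an extra queue in the next session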

@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   559k      0 --:--:-- --:--:-- --:--:--  562k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m9.020s
user	0m0.612s
sys	0m0.368s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 6cad7f1850e6
Removing intermediate container 66ed81b1f76c
Successfully built 6cad7f1850e6
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I1015 06:18:28.776312     532 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:18:28.809510     532 kernel_validator.go:81] Validating kernel version
I1015 06:18:28.809813     532 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.501861 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: 0t5gx2.8heiqw2i6l18u0nn
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token 0t5gx2.8heiqw2i6l18u0nn --discovery-token-ca-cert-hash sha256:adaa4f55acaec179d70c27cec872f276874e0b6e654ce8fc07d4c078c2c80e12


real	1m6.342s
user	0m5.556s
sys	0m0.216s
d3b82060617f
274ee090e908
58fa420b3405
4606f7738e7f
cbaf6771b935
7d40057ce8af
f5c52c5e3e95
f878c6ef8ff6
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 0t5gx2.8heiqw2i6l18u0nn --discovery-token-ca-cert-hash sha256:adaa4f55acaec179d70c27cec872f276874e0b6e654ce8fc07d4c078c2c80e12
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 0t5gx2.8heiqw2i6l18u0nn --discovery-token-ca-cert-hash sha256:adaa4f55acaec179d70c27cec872f276874e0b6e654ce8fc07d4c078c2c80e12
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 0t5gx2.8heiqw2i6l18u0nn --discovery-token-ca-cert-hash sha256:adaa4f55acaec179d70c27cec872f276874e0b6e654ce8fc07d4c078c2c80e12
Initializing machine ID from random generator.
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m41.443s
user	0m0.656s
sys	0m0.408s

real	0m41.015s
user	0m0.668s
sys	0m0.388s

real	0m41.117s
user	0m0.668s
sys	0m0.436s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 78b209e3b82c
 ---> b7cf85c3cc64
 ---> 3b042f0097c1
Removing intermediate container bdaa087137e4
Successfully built 78b209e3b82c
Removing intermediate container 1b76b6058b8f
Removing intermediate container 41d19147ae27
Successfully built 3b042f0097c1
Successfully built b7cf85c3cc64
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:22:44.457851     514 kernel_validator.go:81] Validating kernel version
I1015 06:22:44.458352     514 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:22:44.684139     503 kernel_validator.go:81] Validating kernel version
I1015 06:22:44.684240     503 kernel_validator.go:96] Validating kernel config
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:22:44.691645     508 kernel_validator.go:81] Validating kernel version
I1015 06:22:44.691848     508 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "0t5gx2" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m26.892s
user	0m0.612s
sys	0m0.052s
* Node joined: 1
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m26.854s
user	0m0.568s
sys	0m0.108s
* Node joined: 2
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m28.979s
user	0m0.580s
sys	0m0.064s
* Node joined: 3
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-557td" deleted
pod "kube-proxy-8bqgs" deleted
pod "kube-proxy-lscnv" deleted
pod "kube-proxy-xgttl" deleted
WARNING: cluster glitch: proxy pods aren't removed; pods may 'blink' for some time after restore
NAME               READY     STATUS    RESTARTS   AGE
etcd-kube-master   1/1       Running   0          2m
kube-proxy-586qv   1/1       Running   0          26s
kube-proxy-h57kh   1/1       Running   0          20s
kube-proxy-w6tcb   1/1       Running   0          22s
kube-proxy-zqsjv   1/1       Running   0          22s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
...[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
...............[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    5m        v1.11.0
kube-node-1   Ready     <none>    1m        v1.11.0
kube-node-2   Ready     <none>    1m        v1.11.0
kube-node-3   Ready     <none>    1m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
=== RUN   TestE2E
Running Suite: kube-batch Test Suite
====================================
Random Seed: 1539584717
Will run 11 of 11 specs

le BestEffort Job [It]
  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/job.go:217

  Expected error:
      <*errors.StatusError | 0xc420446480>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
              Status: "Failure",
              Message: "object is being deleted: namespaces \"test\" already exists",
              Reason: "AlreadyExists",
              Details: {Name: "test", Group: "", Kind: "namespaces", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 409,
          },
      }
      object is being deleted: namespaces "test" already exists
  not to have occurred

  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/util.go:95
------------------------------
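The failures reported above and below all hit the same 409: the apiserver refuses to recreate the `test` namespace because the one deleted by the previous spec is still terminating ("object is being deleted"). Below is a minimal sketch of a guard the e2e setup could use, assuming a client-go clientset and the context-free client-go API of that era; the helper name is hypothetical and this is not necessarily the fix this PR makes.

```go
package e2e

import (
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace waits for any terminating namespace with the same name
// to disappear, then creates a fresh one, so back-to-back specs do not race
// the namespace controller.
func createTestNamespace(kc kubernetes.Interface, name string) (*v1.Namespace, error) {
	if err := wait.Poll(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := kc.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // previous namespace is fully gone
		}
		return false, nil // still exists (likely Terminating); keep polling
	}); err != nil {
		return nil, err
	}
	return kc.CoreV1().Namespaces().Create(&v1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	})
}
```

Polling until the old namespace is actually gone avoids racing namespace deletion between specs, which is what the repeated AlreadyExists/409 responses below suggest is happening.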
• Failure [0.006 seconds]
Predicates E2E Test
/home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/queue.go:26
  Reclaim [It]
  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/queue.go:27

  Expected error:
      <*errors.StatusError | 0xc4204466c0>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
              Status: "Failure",
              Message: "object is being deleted: namespaces \"test\" already exists",
              Reason: "AlreadyExists",
              Details: {Name: "test", Group: "", Kind: "namespaces", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 409,
          },
      }
      object is being deleted: namespaces "test" already exists
  not to have occurred

  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/util.go:95
------------------------------
• Failure [0.008 seconds]
Predicates E2E Test
/home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:28
  NodeAffinity [It]
  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:29

  Expected error:
      <*errors.StatusError | 0xc42014d050>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
              Status: "Failure",
              Message: "object is being deleted: namespaces \"test\" already exists",
              Reason: "AlreadyExists",
              Details: {Name: "test", Group: "", Kind: "namespaces", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 409,
          },
      }
      object is being deleted: namespaces "test" already exists
  not to have occurred

  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/util.go:95
------------------------------
• Failure [0.010 seconds]
Predicates E2E Test
/home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:28
  Hostport [It]
  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:78

  Expected error:
      <*errors.StatusError | 0xc420446a20>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
              Status: "Failure",
              Message: "object is being deleted: namespaces \"test\" already exists",
              Reason: "AlreadyExists",
              Details: {Name: "test", Group: "", Kind: "namespaces", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 409,
          },
      }
      object is being deleted: namespaces "test" already exists
  not to have occurred

  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/util.go:95
------------------------------
• Failure [0.010 seconds]
Predicates E2E Test
/home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:28
  Pod Affinity [It]
  /home/travis/gopath/src/github.com/kubernetes-sigs/kube-batch/test/e2e/predicates.go:106

  Expected error:
      <*errors.StatusError | 0xc420446cf0>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
              Status: "Failure",
              Message: "object is being deleted: namespaces \"test\" already exists",
              Reason: "AlreadyExists",
              Details: {Name: "test", Group: "", Kind: "namespaces", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 409,
          },
      }
      object is being deleted: names
hack/run-e2e.sh: line 32: 19890 Killed                  nohup ${KA_BIN}/kube-batch --kubeconfig ${HOME}/.kube/config --enable-namespace-as-queue=${ENABLE_NAMESPACES_AS_QUEUE} --logtostderr --v ${LOG_LEVEL} > scheduler.log 2>&1
* Removing container: 772c717d2986
772c717d2986
* Removing container: eb3e93448c15
  19890 reclaim.go:189] Leaving Reclaim ...
I1015 06:25:11.194519   19890 allocate.go:42] Enter Allocate ...
I1015 06:25:11.194526   19890 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:25:11.194534   19890 allocate.go:155] Leaving Allocate ...
I1015 06:25:11.194540   19890 preempt.go:44] Enter Preempt ...
I1015 06:25:11.194548   19890 preempt.go:145] Leaving Preempt ...
I1015 06:25:11.194555   19890 session.go:103] Close Session 11a0dc8f-d043-11e8-9209-42010a14004d
I1015 06:25:12.194778   19890 cache.go:466] The scheduling spec of Job <cced6eae-d042-11e8-b234-fac05722f16d:/> is nil, ignore it.
I1015 06:25:12.194819   19890 cache.go:466] The scheduling spec of Job <cd9f62de-d042-11e8-b234-fac05722f16d:/> is nil, ignore it.
I1015 06:25:12.194828   19890 cache.go:466] The scheduling spec of Job <49477bef-d042-11e8-86e2-fac05722f16d:/> is nil, ignore it.
I1015 06:25:12.194838   19890 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:25:12.194849   19890 session.go:86] Open Session 123988fe-d043-11e8-9209-42010a14004d with <0> Job and <3> Queues
I1015 06:25:12.194873   19890 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:25:12.194892   19890 scheduler.go:87] Session 123988fe-d043-11e8-9209-42010a14004d: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (815c7a78-d042-11e8-b234-fac05722f16d:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (d2f180.00, GPU 0.00>
	 0: Task (d59b0c35-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-w6tcb): job 49477bef-d042-11e8-86e2-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d592b0ed-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-zqsjv): job 49477bef-d042-11e8-86e2-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (0ade47f9-d043-11e8-a484-fac05722f16d:kube-system/kube-dns-57f756cc64-vdzp4): job cd9f62de-d042-11e8-b234-fac05722f16d, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d6d90e20-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-h57kh): job 49477bef-d042-11e8-86e2-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (0adf292b-d043-11e8-a484-fac05722f16d:kube-system/kubernetes-dashboard-54f47d4878-nzmc6): job cced6eae-d042-11e8-b234-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (815c7a78-d042-11e8-b234-fac05722f16d:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (d2f187d2-d042-11e8-b234-fac05722f1736   19890 session.go:86] Open Session 1534eaf4-d043-11e8-9209-42010a14004d with <0> Job and <3> Queues
I1015 06:25:17.197806   19890 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:25:17.197828   19890 scheduler.go:87] Session 1534eaf4-d043-11e8-9209-42010a14004d: 
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d59b0c35-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-w6tcb): job 49477bef-d042-11e8-86e2-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d592b0ed-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-zqsjv): job 49477bef-d042-11e8-86e2-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (0ade47f9-d043-11e8-a484-fac05722f16d:kube-system/kube-dns-57f756cc64-vdzp4): job cd9f62de-d042-11e8-b234-fac05722f16d, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (0adf292b-d043-11e8-a484-fac05722f16d:kube-system/kubernetes-dashboard-54f47d4878-nzmc6): job cced6eae-d042-11e8-b234-fac05722f16d, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (d6d90e20-d042-11e8-b234-fac05722f16d:kube-system/kube-proxy-h57kh)te resource to 5 tasks of Job <test/qj-1>
I1015 06:25:18.199273   19890 priority.go:97] Priority TaskOrder: <test/qj-1-0-dts78> QoS is Burstable, <test/qj-1-0-ndjrb> QoS is Burstable
I1015 06:25:18.199279   19890 priority.go:47] Priority TaskOrder: <test/qj-1-0-dts78> prority is 1, <test/qj-1-0-ndjrb> priority is 1
I1015 06:25:18.199289   19890 priority.go:97] Priority TaskOrder: <test/qj-1-0-ndjrb> QoS is Burstable, <test/qj-1-0-d9b92> QoS is Burstable
I1015 06:25:18.199295   19890 priority.go:47] Priority TaskOrder: <test/qj-1-0-ndjrb> prority is 1, <test/qj-1-0-d9b92> priority is 1
I1015 06:25:18.199301   19890 priority.go:97] Priority TaskOrder: <test/qj-1-0-5c82x> QoS is Burstable, <test/qj-1-0-d9b92> QoS is Burstable
I1015 06:25:18.199307   19890 priority.go:47] Priority TaskOrder: <test/qj-1-0-5c82x> prority is 1, <test/qj-1-0-d9b92> priority is 1
I1015 06:25:18.199314   19890 allocate.go:102] There are <4> nodes for Job <test/qj-1>
I1015 06:25:18.199321   19890 allocate.go:106] Considering Task <test/qj-1-0-cgkzq> on node <kube-node-1>: <cpu 1000.00, memory 0.00, GPU 0.00> vs. <cpu 2000.00, memory 7738339328.00, GPU 0.00>
I1015 06:25:18.199343   19890 predicates.go:121] NodeSelect predicates Task <test/qj-1-0-cgkzq> on Node <kube-node-1>: fit true, err <nil>
I1015 06:25:18.199349   19890 predicates.go:135] HostPorts predicates Task <test/qj-1-0-cgkzq> on Node <kube-node-1>: fit true, err <nil>
I1015 06:25:18.199355   19890 predicates.go:149] Toleration/Taint predicates Task <test/qj-1-0-cgkzq> on Node <kube-node-1>: fit true, err <nil>
I1015 06:25:18.199363   19890 predicates.go:164] Pod Affinity/Anti-Affi049   19890 predicates.go:164] Pod Affinity/Anti-Affinity predicates Task <test/qj-1-0-dts78> on Node <kube-node-3>: fit true, err <nil>
I1015 06:25:18.200056   19890 allocate.go:118] Binding Task <test/qj-1-0-dts78> to node <kube-node-3>
I1015 06:25:18.200065   19890 session.go:164] After allocated Task <test/qj-1-0-dts78> to Node <kube-node-3>: idle <cpu 1000.00, memory 7738339328.00, GPU 0.00>, used <cpu 1000.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
I1015 06:25:18.200078   19890 drf.go:146] DRF AllocateFunc: task <test/qj-1-0-dts78>, resreq <cpu 1000.00, memory 0.00, GPU 0.00>,  share <0.5>
I1015 06:25:18.200087   19890 proportion.go:202] Proportion AllocateFunc: task <test/qj-1-0-dts78>, resreq <cpu 1000.00, memory 0.00, GPU 0.00>,  share <0.8>
I1015 06:25:18.200101   19890 allocate.go:78] Try to allocate resource to Jobs in Queue <test>
I1015 06:25:18.200108   19890 allocate.go:95] Try to allocate resource to 1 tasks of Job <test/qj-1>
I1015 06:25:18.200114   19890 allocate.go:102] There are <4> nodes for Job <test/qj-1>
I1015 06:25:18.200121   19890 allocate.go:106] Considering Task <test/qj-1-0-5c82x> on node <kube-node-1>: <cpu 1000.00, memory 0.00, GPU 0.00> vs. <cpu 0.00, memory 7738339328.00, GPU 0.00>
I1015 06:25:18.200146   19890 predicates.go:121] NodeSelect predicates Task <test/qj-1-0-5c82x> on Node <kube-node-1>: fit true, err <nil>
I1015 06:25:18.200153   19890 predicates.go:135] HostPorts predicates Task <test/qj-1-0-5c82x> on Node <kube-node-1>: fit true, err <nil>
I1015 06:25:18.200159   19890 predicates.go:149] Toleration/Taint predicates Task <test/qj-1-0-5c82x>
I1015 06:25:21.202793   19890 reclaim.go:90] Queue <test> is overused, ignore it.
I1015 06:25:21.202802   19890 reclaim.go:189] Leaving Reclaim ...
I1015 06:25:21.202809   19890 allocate.go:42] Enter Allocate ...
I1015 06:25:21.202821   19890 allocate.go:57] Added Job <test/qj-1> into Queue <test>
I1015 06:25:21.202828   19890 allocate.go:61] Try to allocate resource to 1 Queues
I1015 06:25:21.202835   19890 allocate.go:72] Queue <test> is overused, ignore it.
I1015 06:25:21.202843   19890 allocate.go:155] Leaving Allocate ...
I1015 06:25:21.202850   19890 preempt.go:44] Enter Preempt ...
I1015 06:25:21.202855   19890 preempt.go:56] Added Queue <test> for Job <test/qj-1>
I1015 06:25:21.202862   19890 preempt.go:81] No preemptors in Queue <test>, break.
I1015 06:25:21.202869   19890 preempt.go:145] Leaving Preempt ...
I1015 06:25:21.202876   19890 session.go:103] Close Session 1797fbbb-d043-11e8-9209-42010a14004d
I1015 06:25:21.247307   19890 event_handlers.go:228] Deleted pod <test/qj-1-0-ndjrb> from cache.
make: *** [e2e] Error 1
TravisBuddy Request Identifier: 199017a0-d043-11e8-8e29-6bd38fa4cac4

@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   724k      0 --:--:-- --:--:-- --:--:--  720k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m8.500s
user	0m0.564s
sys	0m0.444s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> ce0b224681ec
Removing intermediate container 5497f5419621
Successfully built ce0b224681ec
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[init] using Kubernetes version: v1.11.0
I1015 06:42:00.138934     533 feature_gate.go:230] feature gates: &{map[]}
[preflight] running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:42:00.173528     533 kernel_validator.go:81] Validating kernel version
I1015 06:42:00.173728     533 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 38.001846 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: 1irgdc.iarwk3rwk44xaqj4
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token 1irgdc.iarwk3rwk44xaqj4 --discovery-token-ca-cert-hash sha256:99cfd9d62a8a5c92ca9f4780450472c4523b2c1e2c2e6f925cc2fcd7f88ec9bf


real	1m4.582s
user	0m6.600s
sys	0m0.192s
822c62b6f167
ffd154563925
022317dbb000
799b90532693
70ac5532e448
fbe9bba5177c
5c131c516239
6852a632691c
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 1irgdc.iarwk3rwk44xaqj4 --discovery-token-ca-cert-hash sha256:99cfd9d62a8a5c92ca9f4780450472c4523b2c1e2c2e6f925cc2fcd7f88ec9bf
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 1irgdc.iarwk3rwk44xaqj4 --discovery-token-ca-cert-hash sha256:99cfd9d62a8a5c92ca9f4780450472c4523b2c1e2c2e6f925cc2fcd7f88ec9bf
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token 1irgdc.iarwk3rwk44xaqj4 --discovery-token-ca-cert-hash sha256:99cfd9d62a8a5c92ca9f4780450472c4523b2c1e2c2e6f925cc2fcd7f88ec9bf
Initializing machine ID from random generator.
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m41.146s
user	0m0.584s
sys	0m0.448s

real	0m41.395s
user	0m0.612s
sys	0m0.420s

real	0m41.681s
user	0m0.632s
sys	0m0.396s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 381f74bf7422
 ---> dacf83b683b1
 ---> e2e9f15fc85b
Removing intermediate container 685c68250ff9
Successfully built dacf83b683b1
Removing intermediate container cab4852b29e1
Successfully built e2e9f15fc85b
Removing intermediate container e34e899c53f0
Successfully built 381f74bf7422
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:46:14.553808     502 kernel_validator.go:81] Validating kernel version
I1015 06:46:14.554167     502 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:46:14.631381     502 kernel_validator.go:81] Validating kernel version
I1015 06:46:14.631573     502 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1015 06:46:14.978595     513 kernel_validator.go:81] Validating kernel version
I1015 06:46:14.979171     513 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "1irgdc" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m24.175s
user	0m0.512s
sys	0m0.100s
* Node joined: 2

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m24.056s
user	0m0.564s
sys	0m0.092s
* Node joined: 3

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m24.366s
user	0m0.584s
sys	0m0.092s
* Node joined: 1
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-j5nzp" deleted
pod "kube-proxy-r7dpm" deleted
pod "kube-proxy-tzg6l" deleted
pod "kube-proxy-x4b2q" deleted
NAME                        READY     STATUS              RESTARTS   AGE
etcd-kube-master            1/1       Running             0          2m
kube-dns-86c47599bd-p45rd   0/3       Terminating         0          23s
kube-proxy-7wdxd            1/1       Running             0          4s
kube-proxy-lv8wj            1/1       Running             0          5s
kube-proxy-pff87            1/1       Running             0          2s
kube-proxy-xd56g            0/1       ContainerCreating   0          1s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
..........[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
.....[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    5m        v1.11.0
kube-node-1   Ready     <none>    1m        v1.11.0
kube-node-2   Ready     <none>    1m        v1.11.0
kube-node-3   Ready     <none>    1m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
# github.com/kubernetes-sigs/kube-batch/test/e2e
test/e2e/util.go:179:6: undefined: enableNamespaceAsQueue
FAIL	github.com/kubernetes-sigs/kube-batch/test/e2e [build failed]
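The second run fails earlier: the e2e package no longer compiles because util.go references enableNamespaceAsQueue, which is not defined in the package. One hypothetical way to provide it is to derive it from the same ENABLE_NAMESPACES_AS_QUEUE switch that hack/run-e2e.sh already passes to the scheduler (visible in the line just below); this is an assumption for illustration, not the change the PR ultimately makes.

```go
package e2e

import (
	"os"
	"strconv"
)

// enableNamespaceAsQueue mirrors the scheduler's --enable-namespace-as-queue
// flag (default "true" in the flag dump further down) for the e2e helpers.
var enableNamespaceAsQueue = func() bool {
	if v, err := strconv.ParseBool(os.Getenv("ENABLE_NAMESPACES_AS_QUEUE")); err == nil {
		return v
	}
	return true
}()
```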
hack/run-e2e.sh: line 32: 19554 Killed                  nohup ${KA_BIN}/kube-batch --kubeconfig ${HOME}/.kube/config --enable-namespace-as-queue=${ENABLE_NAMESPACES_AS_QUEUE} --logtostderr --v ${LOG_LEVEL} > scheduler.log 2>&1
* Removing container: e74535909066
e74535909066
* Removing container: 124af2875a21
124af2875a21
* Removing container: 6d1f4f3960f4
6d1f4f3960f4
* Removing container: 66e1c579377c
66e1c579377c
====================================================================================
=============================>>>>> Scheduler Logs <<<<<=============================
====================================================================================
I1015 06:48:12.154186   19554 flags.go:52] FLAG: --alsologtostderr="false"
I1015 06:48:12.154260   19554 flags.go:52] FLAG: --enable-namespace-as-queue="true"
I1015 06:48:12.154271   19554 flags.go:52] FLAG: --kubeconfig="/home/travis/.kube/config"
I1015 06:48:12.154280   19554 flags.go:52] FLAG: --leader-elect="false"
I1015 06:48:12.154286   19554 flags.go:52] FLAG: --lock-object-namespace=""
I1015 06:48:12.154292   19554 flags.go:52] FLAG: --log-backtrace-at=":0"
I1015 06:48:12.154300   19554 flags.go:52] FLAG: --log-dir=""
I1015 06:48:12.154307   19554 flags.go:52] FLAG: --log-flush-frequency="5s"
I1015 06:48:12.154315   19554 flags.go:52] FLAG: --logtostderr="true"
I1015 06:48:12.154321   19554 flags.go:52] FLAG: --master=""
I1015 06:48:12.154341   19554 flags.go:52] FLAG: --schedule-period="1s"
I1015 06:48:12.154347   19554 flags.go:52] FLAG: --scheduler-conf=""
I1015 06:48:12.154353   19554 flags.go:52] FLAG: --scheduler-name="kube-batch"
I1015 06:48:12.154359   19554 flags.go:52] FLAG: --stderrthreshold="2"
I1015 06:48:12.154365   19554 flags.go:52] FLAG: --v="3"
I1015 06:48:12.154371   19554 flags.go:52] FLAG: --vmodule=""
I1015 06:48:12.157517   19554 reflector.go:202] Starting reflector *v1.Namespace (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:247
I1015 06:48:12.157627   19554 reflector.go:240] Listing and watching *v1.Namespace from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:247
I1015 06:48:12.158706   19554 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 06:48:12.159253   19554 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:242
I1015 06:48:12.159543   19554 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 06:48:12.159559   19554 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:243
I1015 06:48:12.160146   19554 reflector.go:202] Starting reflector *v1alpha1.PodGroup (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 06:48:12.160162   19554 reflector.go:240] Listing and watching *v1alpha1.PodGroup from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:244
I1015 06:48:12.159048   19554 reflector.go:202] Starting reflector *v1beta1.PodDisruptionBudget (0s) from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 06:48:12.161017   19554 reflector.go:240] Listing and watching *v1beta1.PodDisruptionBudget from github.com/kubernetes-sigs/kube-batch/pkg/scheduler/cache/cache.go:241
I1015 06:48:12.199700   19554 event_handlers.go:171] Added pod <kube-system/kube-proxy-pff87> into cache.
I1015 06:48:12.199733   19554 event_handlers.go:171] Added pod <kube-system/kube-proxy-xd56g> into cache.
I1015 06:48:12.199756   19554 event_handlers.go:171] Added pod <kube-system/kube-dns-57f756cc64-65wbl> into cache.
I1015 06:48:12.199767   19554 event_handlers.go:171] Added pod <default/kube-scheduler-kube-master> into cache.
I1015 06:48:12.199784   19554 event_handlers.go:171] Added pod <kube-system/kubernetes-dashboard-54f47d4878-5vqws> into cache.
I1015 06:48:12.199794   19554 event_handlers.go:171] Added pod <kube-system/kube-proxy-7wdxd> into cache.
I1015 06:48:12.199805   19554 event_handlers.go:171] Added pod <kube-system/kube-proxy-lv8wj> into cache.
I1015 06:48:12.199813   19554 event_handlers.go:171] Added pod <default/kube-apiserver-kube-master> into cache.
I1015 06:48:12.199823   19554 event_handlers.go:171] Added pod <kube-system/etcd-kube-master> into cache.
I1015 06:48:12.257244   19554 cache.go:466] The scheduling spec of Job <91755d6c-d045-11e8-b5c7-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:12.257285   19554 cache.go:466] The scheduling spec of Job <1482ab8e-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:12.257293   19554 cache.go:466] The scheduling spec of Job <1358abaa-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:12.257313   19554 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:12.257324   19554 session.go:86] Open Session 48ce5bbc-d046-11e8-82dc-42010a140024 with <0> Job and <3> Queues
I1015 06:48:12.257346   19554 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:48:12.257397   19554 scheduler.go:87] Session 48ce5bbc-d046-11e8-82dc-42010a140024: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1bda3e09-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-pff87): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (cb5efd88-d045-11e8-9725-e6c65ec6592f:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (d4e84bde-d045-11e8-9725-e6c65ec6592f:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (c8ecd7b5-d045-11e8-9725-e6c65ec6592f:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a69464b-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-7wdxd): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a249d00-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-lv8wj): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (45747840-d046-11e8-af5b-e6c65ec6592f:kube-system/kube-dns-57f756cc64-65wbl): job 1482ab8e-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (45761302-d046-11e8-af5b-e6c65ec6592f:kube-system/kubernetes-dashboard-54f47d4878-5vqws): job 1358abaa-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (1c8a9430-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-xd56g): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 06:48:12.257553   19554 reclaim.go:42] Enter Reclaim ...
I1015 06:48:12.257561   19554 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:12.257572   19554 reclaim.go:189] Leaving Reclaim ...
I1015 06:48:12.257579   19554 allocate.go:42] Enter Allocate ...
I1015 06:48:12.257586   19554 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:48:12.257594   19554 allocate.go:155] Leaving Allocate ...
I1015 06:48:12.257600   19554 preempt.go:44] Enter Preempt ...
I1015 06:48:12.257608   19554 preempt.go:145] Leaving Preempt ...
I1015 06:48:12.257615   19554 session.go:103] Close Session 48ce5bbc-d046-11e8-82dc-42010a140024
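For reference, the scheduler log above traces one complete scheduling cycle: a session is opened with the current jobs and queues, the Reclaim, Allocate and Preempt actions run in order, and the session is closed before the next --schedule-period tick (1s in the flag dump). Below is a rough sketch of that loop, inferred only from these log lines; the type and function names are illustrative assumptions, not kube-batch's real API.

```go
package sketch

import "time"

// Session stands in for the per-cycle snapshot of jobs, queues and nodes.
type Session struct{}

// Action is a single scheduling step; the log shows Reclaim, Allocate and
// Preempt being run in that order inside every session.
type Action interface {
	Name() string
	Execute(ssn *Session)
}

// runScheduler repeats the open -> actions -> close cycle on every tick,
// matching the "Open Session ... / Close Session ..." pairs in the log.
func runScheduler(period time.Duration, open func() *Session, actions []Action, closeFn func(*Session)) {
	for range time.Tick(period) {
		ssn := open()
		for _, a := range actions {
			a.Execute(ssn) // logged as "Enter <Action> ..." / "Leaving <Action> ..."
		}
		closeFn(ssn)
	}
}
```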
I1015 06:48:13.257835   19554 cache.go:466] The scheduling spec of Job <91755d6c-d045-11e8-b5c7-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:13.257871   19554 cache.go:466] The scheduling spec of Job <1482ab8e-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:13.257879   19554 cache.go:466] The scheduling spec of Job <1358abaa-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:13.257889   19554 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:13.257899   19554 session.go:86] Open Session 49670875-d046-11e8-82dc-42010a140024 with <0> Job and <3> Queues
I1015 06:48:13.257920   19554 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:48:13.257949   19554 scheduler.go:87] Session 49670875-d046-11e8-82dc-42010a140024: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1bda3e09-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-pff87): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (cb5efd88-d045-11e8-9725-e6c65ec6592f:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (d4e84bde-d045-11e8-9725-e6c65ec6592f:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (c8ecd7b5-d045-11e8-9725-e6c65ec6592f:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a69464b-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-7wdxd): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (45747840-d046-11e8-af5b-e6c65ec6592f:kube-system/kube-dns-57f756cc64-65wbl): job 1482ab8e-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
	 1: Task (1a249d00-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-lv8wj): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1c8a9430-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-xd56g): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (45761302-d046-11e8-af5b-e6c65ec6592f:kube-system/kubernetes-dashboard-54f47d4878-5vqws): job 1358abaa-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 06:48:13.258176   19554 reclaim.go:42] Enter Reclaim ...
I1015 06:48:13.258212   19554 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:13.258227   19554 reclaim.go:189] Leaving Reclaim ...
I1015 06:48:13.258234   19554 allocate.go:42] Enter Allocate ...
I1015 06:48:13.258241   19554 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:48:13.258250   19554 allocate.go:155] Leaving Allocate ...
I1015 06:48:13.258256   19554 preempt.go:44] Enter Preempt ...
I1015 06:48:13.258264   19554 preempt.go:145] Leaving Preempt ...
I1015 06:48:13.258270   19554 session.go:103] Close Session 49670875-d046-11e8-82dc-42010a140024
I1015 06:48:14.258545   19554 cache.go:466] The scheduling spec of Job <91755d6c-d045-11e8-b5c7-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:14.258578   19554 cache.go:466] The scheduling spec of Job <1482ab8e-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:14.258593   19554 cache.go:466] The scheduling spec of Job <1358abaa-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:14.258602   19554 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:14.258612   19554 session.go:86] Open Session 49ffb981-d046-11e8-82dc-42010a140024 with <0> Job and <3> Queues
I1015 06:48:14.258636   19554 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:48:14.258691   19554 scheduler.go:87] Session 49ffb981-d046-11e8-82dc-42010a140024: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (d4e84bde-d045-11e8-9725-e6c65ec6592f:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 1: Task (c8ecd7b5-d045-11e8-9725-e6c65ec6592f:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 2: Task (1bda3e09-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-pff87): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 3: Task (cb5efd88-d045-11e8-9725-e6c65ec6592f:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a69464b-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-7wdxd): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (45747840-d046-11e8-af5b-e6c65ec6592f:kube-system/kube-dns-57f756cc64-65wbl): job 1482ab8e-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
	 1: Task (1a249d00-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-lv8wj): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1c8a9430-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-xd56g): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (45761302-d046-11e8-af5b-e6c65ec6592f:kube-system/kubernetes-dashboard-54f47d4878-5vqws): job 1358abaa-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 06:48:14.258852   19554 reclaim.go:42] Enter Reclaim ...
I1015 06:48:14.258860   19554 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:14.258880   19554 reclaim.go:189] Leaving Reclaim ...
I1015 06:48:14.258887   19554 allocate.go:42] Enter Allocate ...
I1015 06:48:14.258894   19554 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:48:14.258970   19554 allocate.go:155] Leaving Allocate ...
I1015 06:48:14.258977   19554 preempt.go:44] Enter Preempt ...
I1015 06:48:14.258985   19554 preempt.go:145] Leaving Preempt ...
I1015 06:48:14.258994   19554 session.go:103] Close Session 49ffb981-d046-11e8-82dc-42010a140024
I1015 06:48:15.259210   19554 cache.go:466] The scheduling spec of Job <91755d6c-d045-11e8-b5c7-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:15.259257   19554 cache.go:466] The scheduling spec of Job <1482ab8e-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:15.259276   19554 cache.go:466] The scheduling spec of Job <1358abaa-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:15.259285   19554 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:15.259296   19554 session.go:86] Open Session 4a986b2a-d046-11e8-82dc-42010a140024 with <0> Job and <3> Queues
I1015 06:48:15.259318   19554 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:48:15.259337   19554 scheduler.go:87] Session 4a986b2a-d046-11e8-82dc-42010a140024: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1bda3e09-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-pff87): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (cb5efd88-d045-11e8-9725-e6c65ec6592f:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (d4e84bde-d045-11e8-9725-e6c65ec6592f:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (c8ecd7b5-d045-11e8-9725-e6c65ec6592f:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a69464b-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-7wdxd): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a249d00-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-lv8wj): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (45747840-d046-11e8-af5b-e6c65ec6592f:kube-system/kube-dns-57f756cc64-65wbl): job 1482ab8e-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1c8a9430-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-xd56g): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (45761302-d046-11e8-af5b-e6c65ec6592f:kube-system/kubernetes-dashboard-54f47d4878-5vqws): job 1358abaa-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 06:48:15.259497   19554 reclaim.go:42] Enter Reclaim ...
I1015 06:48:15.259511   19554 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:15.259524   19554 reclaim.go:189] Leaving Reclaim ...
I1015 06:48:15.259530   19554 allocate.go:42] Enter Allocate ...
I1015 06:48:15.259537   19554 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:48:15.259546   19554 allocate.go:155] Leaving Allocate ...
I1015 06:48:15.259552   19554 preempt.go:44] Enter Preempt ...
I1015 06:48:15.259561   19554 preempt.go:145] Leaving Preempt ...
I1015 06:48:15.259567   19554 session.go:103] Close Session 4a986b2a-d046-11e8-82dc-42010a140024
I1015 06:48:16.259799   19554 cache.go:466] The scheduling spec of Job <91755d6c-d045-11e8-b5c7-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:16.259850   19554 cache.go:466] The scheduling spec of Job <1482ab8e-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:16.259859   19554 cache.go:466] The scheduling spec of Job <1358abaa-d046-11e8-9725-e6c65ec6592f:/> is nil, ignore it.
I1015 06:48:16.259868   19554 cache.go:485] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:16.259878   19554 session.go:86] Open Session 4b3118f5-d046-11e8-82dc-42010a140024 with <0> Job and <3> Queues
I1015 06:48:16.259901   19554 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1015 06:48:16.259920   19554 scheduler.go:87] Session 4b3118f5-d046-11e8-82dc-42010a140024: 
Node (kube-node-2): idle <cpu 1740.00, memory 7622995968.00, GPU 0.00>, used <cpu 260.00, memory 115343360.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (45747840-d046-11e8-af5b-e6c65ec6592f:kube-system/kube-dns-57f756cc64-65wbl): job 1482ab8e-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 260.00, memory 115343360.00, GPU 0.00
	 1: Task (1a249d00-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-lv8wj): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-3): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (45761302-d046-11e8-af5b-e6c65ec6592f:kube-system/kubernetes-dashboard-54f47d4878-5vqws): job 1358abaa-d046-11e8-9725-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (1c8a9430-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-xd56g): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1bda3e09-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-pff87): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (cb5efd88-d045-11e8-9725-e6c65ec6592f:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (d4e84bde-d045-11e8-9725-e6c65ec6592f:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (c8ecd7b5-d045-11e8-9725-e6c65ec6592f:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a69464b-d046-11e8-9725-e6c65ec6592f:kube-system/kube-proxy-7wdxd): job 91755d6c-d045-11e8-b5c7-e6c65ec6592f, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1015 06:48:16.260044   19554 reclaim.go:42] Enter Reclaim ...
I1015 06:48:16.260053   19554 reclaim.go:50] There are <0> Jobs and <3> Queues in total for scheduling.
I1015 06:48:16.260064   19554 reclaim.go:189] Leaving Reclaim ...
I1015 06:48:16.260071   19554 allocate.go:42] Enter Allocate ...
I1015 06:48:16.260078   19554 allocate.go:61] Try to allocate resource to 0 Queues
I1015 06:48:16.260087   19554 allocate.go:155] Leaving Allocate ...
I1015 06:48:16.260094   19554 preempt.go:44] Enter Preempt ...
I1015 06:48:16.260101   19554 preempt.go:145] Leaving Preempt ...
I1015 06:48:16.260108   19554 session.go:103] Close Session 4b3118f5-d046-11e8-82dc-42010a140024
make: *** [e2e] Error 2
TravisBuddy Request Identifier: 4cb55340-d046-11e8-8e29-6bd38fa4cac4

@TravisBuddy

Travis tests have failed

Hey @k82cn,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

View build log

make e2e
mkdir -p _output/bin
go build -o _output/bin/kube-batch ./cmd/kube-batch/
hack/run-e2e.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 73763    0 73763    0     0   557k      0 --:--:-- --:--:-- --:--:--  554k
* Making sure DIND image is up to date 
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:ee87eb24cab4a596f31ba83bd651df10750ca5ac7c5ce9834467c87fa7f6564b
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/home/travis/.kubeadm-dind-cluster/kubectl-v1.11.0: OK
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base

real	0m9.272s
user	0m0.568s
sys	0m0.504s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 0dbd7b23560e
Removing intermediate container 60a158aec549
Successfully built 0dbd7b23560e
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I1016 05:08:40.275605     521 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1016 05:08:40.329917     521 kernel_validator.go:81] Validating kernel version
I1016 05:08:40.330409     521 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [10.192.0.2 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001882 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kube-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kube-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[bootstraptoken] using token: k7u14z.n1x8l8pclrj0hfb2
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.192.0.2:6443 --token k7u14z.n1x8l8pclrj0hfb2 --discovery-token-ca-cert-hash sha256:49784eaba7a74d03194d327e9cc5941c45486b2f45471eff7dd4758a038075c7


real	0m59.485s
user	0m6.880s
sys	0m0.168s
1ad34a1b9529
f38dd846f94e
a5c49971af29
ac78c0459bcf
ff8c941d0757
7aaf6cf5289a
e6c97bfc4633
1fa9d4a8791f
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
* Setting cluster config 
Cluster "dind" set.
Context "dind" created.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Starting node container: 3
* Starting DIND container: kube-node-3
* Node container started: 3
* Joining node: 1
* Joining node: 2
* Joining node: 3
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token k7u14z.n1x8l8pclrj0hfb2 --discovery-token-ca-cert-hash sha256:49784eaba7a74d03194d327e9cc5941c45486b2f45471eff7dd4758a038075c7
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token k7u14z.n1x8l8pclrj0hfb2 --discovery-token-ca-cert-hash sha256:49784eaba7a74d03194d327e9cc5941c45486b2f45471eff7dd4758a038075c7
Initializing machine ID from random generator.
* Running kubeadm: join --ignore-preflight-errors=all 10.192.0.2:6443 --token k7u14z.n1x8l8pclrj0hfb2 --discovery-token-ca-cert-hash sha256:49784eaba7a74d03194d327e9cc5941c45486b2f45471eff7dd4758a038075c7
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base
Loaded image: mirantis/hypokube:base

real	0m29.111s
user	0m0.656s
sys	0m0.572s

real	0m28.892s
user	0m0.740s
sys	0m0.496s

real	0m28.717s
user	0m0.724s
sys	0m0.488s

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube

Step 1/2 : FROM mirantis/hypokube:base
 ---> bfb7cd25465c
Step 2/2 : COPY hyperkube /hyperkube
 ---> 340b27a2da1d
 ---> f6d1e0670fe2
 ---> 2628a41dfcd6
Removing intermediate container b95517d02f84
Successfully built f6d1e0670fe2
Removing intermediate container 615a4da66693
Removing intermediate container a874d53f0747
Successfully built 340b27a2da1d
Successfully built 2628a41dfcd6
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1016 05:12:10.083335     496 kernel_validator.go:81] Validating kernel version
I1016 05:12:10.084113     496 kernel_validator.go:96] Validating kernel config
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I1016 05:12:10.127203     514 kernel_validator.go:81] Validating kernel version
I1016 05:12:10.128938     514 kernel_validator.go:96] Validating kernel config
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING FileExisting-crictl]: crictl not found in system path
I1016 05:12:10.151460     508 kernel_validator.go:81] Validating kernel version
I1016 05:12:10.151608     508 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Failed to connect to API Server "10.192.0.2:6443": token id "k7u14z" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[preflight] Activating the kubelet service
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-3" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-2" as an annotation
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-node-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m15.083s
user	0m0.620s
sys	0m0.080s
* Node joined: 3

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m15.014s
user	0m0.600s
sys	0m0.092s
* Node joined: 2

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

real	0m15.316s
user	0m0.656s
sys	0m0.080s
* Node joined: 1
Creating static routes for bridge/PTP plugin
* Deploying k8s dashboard 
deployment.extensions/kubernetes-dashboard created
service/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
* Patching kube-dns deployment to make it start faster 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/kube-dns configured
* Cluster Info 
Network Mode: ipv4
Cluster context: dind
Cluster ID: 0
Management CIDR(s): 10.192.0.0/24
Service CIDR/mode: 10.96.0.0/12/ipv4
Pod CIDR(s): 10.244.0.0/16
* Taking snapshot of the cluster 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
pod "kube-proxy-2rknc" deleted
pod "kube-proxy-4gxzr" deleted
pod "kube-proxy-5f2t8" deleted
pod "kube-proxy-xpzq2" deleted
NAME                        READY     STATUS        RESTARTS   AGE
etcd-kube-master            1/1       Running       0          1m
kube-dns-86c47599bd-6fllm   3/3       Terminating   0          28s
kube-proxy-5lb8c            1/1       Running       0          2s
kube-proxy-bk98p            1/1       Running       0          2s
kube-proxy-m488f            1/1       Running       0          2s
kube-proxy-q284v            1/1       Running       0          7s
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
tar: var/lib/kubelet/device-plugins/kubelet.sock: socket ignored
* Waiting for kube-proxy and the nodes 
..........[done]
* Bringing up kube-dns and kubernetes-dashboard 
deployment.extensions/kube-dns scaled
deployment.extensions/kubernetes-dashboard scaled
..............[done]
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    4m        v1.11.0
kube-node-1   Ready     <none>    1m        v1.11.0
kube-node-2   Ready     <none>    1m        v1.11.0
kube-node-3   Ready     <none>    1m        v1.11.0
* Access dashboard at: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy
customresourcedefinition.apiextensions.k8s.io/podgroups.scheduling.incubator.k8s.io created
customresourcedefinition.apiextensions.k8s.io/queues.scheduling.incubator.k8s.io created
8e726fcb7f08
* Removing container: 9a3020a997e6
11e8-a597-9a8f3442e8dd:kube-system/kubernetes-dashboard-54f47d4878-97zxn): job 13abafdf-d102-11e8-b84e-9a8f3442e8dd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1a1e3c3a-d102-11e8-b84e-9a8f3442e8dd:kube-system/kube-proxy-q284v): job af0c9211-d101-11e8-b9f4-9a8f3442e8dd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
	 1: Task (edbfe26b-d101-11e8-b84e-9a8f3442e8dd:default/kube-scheduler-kube-master): job , status Running, pri 1, resreq cpu 100.00, memory 0.00, GPU 0.00
	 2: Task (ebb4014e-d101-11e8-b84e-9a8f3442e8dd:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 3: Task (f0b88425-d101-11e8-b84e-9a8f3442e8dd:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
Node (kube-node-1): idle <cpu 2000.00, memory 7738339328.00, GPU 0.00>, used <cpu 0.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (1d2ca234-d102-11e8-b84e-9a8f3442e8dd:kube-system/kube-proxy-m488f): job af0c9211-d101-11e8-b9f4-9a8f3442e8dd, status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.00
I1016 05:14:13.216443   19295 reclaim.go:42] Enter Reclaim ...
I1016 05:14:13.216454   19295 reclaim.go:50] There are <0> Jobs and <0> Queues in total for scheduling.
I1016 05:14:13.216467   19295 reclaim.go:189] Leaving Reclaim ...
I1016 05:14:13.216475   19295 allocate.go:42] Enter Allocate ...
I1016 05:14:13.216482   19295 allocate.go:61] Try to allocate resource to 0 Queues
I1016 05:14:13.216492   19295 allocate.go:155] Leaving Allocate ...
I1016 05:14:13.216499   19295 preempt.go:44] Enter Preempt ...
I1016 05:14:13.216507   19295 preempt.go:145] Leaving Preempt ...
I1016 05:14:13.216515   19295 session.go:103] Close Session 5216b796-d102-11e8-9f77-42010a140013
I1016 05:14:14.219060   19295 cache.go:466] The scheduling spec of Job <13abafdf-d102-11e8-b84e-9a8f3442e8dd:/> is nil, ignore it.
I1016 05:14:14.219101   19295 cache.go:466] The scheduling spec of Job <af0c9211-d101-11e8-b9f4-9a8f3442e8dd:/> is nil, ignore it.
I1016 05:14:14.219112   19295 cache.go:466] The scheduling spec of Job <14363e92-d102-11e8-b84e-9a8f3442e8dd:/> is nil, ignore it.
I1016 05:14:14.219122   19295 cache.go:486] There are <0> Jobs and <0> Queues in total for scheduling.
I1016 05:14:14.219134   19295 session.go:86] Open Session 52afbff7-d102-11e8-9f77-42010a140013 with <0> Job and <0> Queues
I1016 05:14:14.219157   19295 proportion.go:63] The total resource is <cpu 7740.00, memory 30838013952.00, GPU 0.00>
I1016 05:14:14.219178   19295 scheduler.go:87] Session 52afbff7-d102-11e8-9f77-42010a140013: 
Node (kube-master): idle <cpu 1650.00, memory 7738339328.00, GPU 0.00>, used <cpu 350.00, memory 0.00, GPU 0.00>, releasing <cpu 0.00, memory 0.00, GPU 0.00>
	 0: Task (ebb4014e-d101-11e8-b84e-9a8f3442e8dd:default/kube-apiserver-kube-master): job , status Running, pri 1, resreq cpu 250.00, memory 0.00, GPU 0.00
	 1: Task (f0b88425-d101-11e8-b84e-9a8f3442e8dd:kube-system/etcd-kube-master): job , status Running, pri 1, resreq cpu 0.00, memory 0.00, GPU 0.0051   19295 session.go:103] Close Session 53e116a4-d102-11e8-9f77-42010a140013
make: *** [e2e] Error 2
TravisBuddy Request Identifier: 559136c0-d102-11e8-84c3-e12343383572

Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
@TravisBuddy

Hey @k82cn,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: b5ac9490-d108-11e8-84c3-e12343383572

@k82cn k82cn added the lgtm Indicates that a PR is ready to be merged. label Oct 16, 2018
@k8s-ci-robot k8s-ci-robot merged commit 3f44056 into kubernetes-retired:master Oct 16, 2018
@k82cn k82cn deleted the kb_425_5 branch March 22, 2019 12:31
kevin-wangzefeng pushed a commit to kevin-wangzefeng/scheduler that referenced this pull request Jun 28, 2019