nftables support #1812
The approach that was discussed is as follows: have K3s prefer the host-level iptables binaries and, in the worst case, use the iptables that K3s itself ships with.
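As a rough illustration of that preference order, here is a minimal Go sketch (the bundled path below is hypothetical, not necessarily where K3s actually places its binary): look for iptables on the host PATH first, and only fall back to a bundled copy if none is found.

```go
package main

import (
	"fmt"
	"os/exec"
)

// resolveIptables prefers a host-level iptables binary; if none is on the
// PATH, it falls back to a bundled copy (illustrative path, not K3s's real one).
func resolveIptables() string {
	if path, err := exec.LookPath("iptables"); err == nil {
		return path // host-level binary wins
	}
	return "/var/lib/rancher/k3s/data/current/bin/iptables" // bundled fallback
}

func main() {
	fmt.Println("using iptables at:", resolveIptables())
}
```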
We are currently planning on leaving servicelb operating in legacy mode.
On both a CentOS 8 and Debian Buster system, the …
If I understand correctly, RHEL and CentOS 8 no longer include the legacy iptables binaries (https://access.redhat.com/solutions/4377321). That would affect this approach, would it not?
On a RHEL 8/CentOS 8 system, you would be unable to view the rules put into place by the service LB from the host, but you would be able to exec into the pod if necessary to view/manipulate the legacy rules. There is currently no simple method to determine whether to use nft or legacy. We are also packaging a statically compiled legacy binary with the new …
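For what it's worth, one heuristic for choosing a backend (the approach popularized by the kubernetes-sigs iptables-wrappers script) is to dump the rules from both backends and pick whichever has more entries. A minimal sketch, assuming both iptables-legacy-save and iptables-nft-save are installed:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// ruleLines counts the lines a *-save binary prints; the backend with more
// saved rules is likely the one actually being programmed on this host.
func ruleLines(saveBin string) int {
	out, err := exec.Command(saveBin).Output()
	if err != nil {
		return 0 // missing binary or no permission: treat as "no rules"
	}
	return bytes.Count(out, []byte("\n"))
}

func main() {
	mode := "legacy"
	if ruleLines("iptables-nft-save") > ruleLines("iptables-legacy-save") {
		mode = "nft"
	}
	fmt.Println("guessed iptables mode:", mode)
}
```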
The following operating systems (and corresponding permutations) were or should be tested with these new changes, as the changes implemented are to prefer the host-level binaries:

- Alpine Linux 3.12, iptables installed
- Ubuntu 18.04, iptables installed
- Ubuntu 20.04, iptables installed
- Debian Buster, iptables installed (nftables)
- RHEL 7, iptables installed
- RHEL 8, iptables installed
- CentOS 7, iptables installed
- CentOS 8, iptables installed

For all operating system tests that do not have iptables installed, you can verify the mode that was chosen via the …
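One possible check, since iptables v1.8+ reports its backend in the version banner (e.g. "iptables v1.8.6 (nf_tables)" or "iptables v1.8.4 (legacy)"), is simply to parse the output of iptables --version; a small sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// iptables v1.8+ prints its backend in the banner, e.g.
	// "iptables v1.8.6 (nf_tables)" or "iptables v1.8.4 (legacy)".
	out, err := exec.Command("iptables", "--version").Output()
	if err != nil {
		fmt.Println("could not run iptables:", err)
		return
	}
	switch banner := string(out); {
	case strings.Contains(banner, "nf_tables"):
		fmt.Println("mode: nft")
	case strings.Contains(banner, "legacy"):
		fmt.Println("mode: legacy")
	default:
		fmt.Println("mode: unknown (pre-1.8 iptables has no banner tag)")
	}
}
```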
Tested via …
- Alpine Linux 3.12, iptables installed: No
- Ubuntu 18.04, iptables installed: Yes
- Ubuntu 20.04, iptables installed: Yes
- Debian Buster 10, iptables installed (nftables): Yes
- RHEL 7.8, iptables installed: Yes
- RHEL 8.0, iptables installed: No
- CentOS 7.8, iptables installed: Yes
- CentOS 8.0, iptables installed: Yes

More info:
Ubuntu 18.04
- Alternatives not set
- After uninstall, iptables is removed successfully

Ubuntu 20.04
- Alternatives is set to legacy
- After uninstall, iptables is removed successfully

RHEL 7.8
- Alternatives not set
- After uninstall, iptables is removed successfully

RHEL 8
- iptables not installed; k3s iptables set

CentOS 7
- Alternatives not set

CentOS 8
- After uninstall, iptables is removed successfully

Debian Buster 10
- Alternatives is set to nft
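The "Alternatives" notes above refer to the Debian-family update-alternatives mechanism; on those systems /etc/alternatives/iptables is a symlink to either iptables-legacy or iptables-nft. A quick sketch for reading it (paths are Debian/Ubuntu assumptions):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// On Debian/Ubuntu, update-alternatives maintains this symlink, pointing
	// at /usr/sbin/iptables-legacy or /usr/sbin/iptables-nft.
	target, err := os.Readlink("/etc/alternatives/iptables")
	if err != nil {
		fmt.Println("alternatives not set:", err)
		return
	}
	fmt.Println("iptables alternative ->", target)
}
```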
This seems like something that should be fixed.
Part of: k3s-io/k3s#1812
Signed-off-by: Chris Kim <oats87g@gmail.com>
k3s -v

- Alpine Linux 3.12, iptables installed: NO
- Ubuntu 18.04, iptables installed: YES
- Ubuntu 20.04, iptables installed: YES
- Debian Buster 10, iptables installed (nftables): YES
- CentOS 7.8, iptables installed: YES
- CentOS 8, iptables installed: YES
- CentOS 8.2
- RHEL 7.8, iptables installed: YES
- RHEL 8
Impossible to run inside Hyperbola v0.4.1 with nftables and OpenRC init. Log:

```
E0928 23:55:49.198804 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'
Try `iptables -h' or 'iptables --help' for more information.
> table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:49.198841 4424 proxier.go:858] "Sync failed" retryingTime="30s"
E0928 23:55:49.201527 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): Couldn't find match `conntrack'
Try `ip6tables -h' or 'ip6tables --help' for more information.
> table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
```

Full log:

```
INFO[0000] Acquiring lock file /var/lib/rancher/k3s/data/.lock
INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2
WARN[0000] no-deploy flag is deprecated, it will be removed in v1.25. Use --skip-deploy instead.
INFO[0000] Starting k3s v1.24.4+k3s1 (c3f830e9)
INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s
INFO[0000] Configuring database table schema and indexes, this may take a moment...
INFO[0000] Database tables and indexes are up to date
INFO[0000] Kine available at unix://kine.sock
INFO[0000] generated self-signed CA certificate CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37.082320724 +0000 UTC notAfter=2032-09-25 15:55:37.082320724 +0000 UTC
INFO[0000] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] generated self-signed CA certificate CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37.085006611 +0000 UTC notAfter=2032-09-25 15:55:37.085006611 +0000 UTC
INFO[0000] certificate CN=kube-apiserver signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] generated self-signed CA certificate CN=k3s-request-header-ca@1664380537: notBefore=2022-09-28 15:55:37.085631413 +0000 UTC notAfter=2032-09-25 15:55:37.085631413 +0000 UTC
INFO[0000] certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC
INFO[0000] generated self-signed CA certificate CN=etcd-server-ca@1664380537:
notBefore=2022-09-28 15:55:37.086211811 +0000 UTC notAfter=2032-09-25 15:55:37.086211811 +0000 UTC INFO[0000] certificate CN=etcd-server signed by CN=etcd-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC INFO[0000] certificate CN=etcd-client signed by CN=etcd-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC INFO[0000] generated self-signed CA certificate CN=etcd-peer-ca@1664380537: notBefore=2022-09-28 15:55:37.087044373 +0000 UTC notAfter=2032-09-25 15:55:37.087044373 +0000 UTC INFO[0000] certificate CN=etcd-peer signed by CN=etcd-peer-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC INFO[0000] certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC WARN[0000] dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request INFO[0000] Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3] INFO[0000] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt 
--tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259 INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --profiling=false INFO[0000] Server node token is available at /var/lib/rancher/k3s/server/token INFO[0000] To join server node to cluster: k3s server -s https://93.95.230.133:6443 -t ${SERVER_NODE_TOKEN} INFO[0000] Agent node token is available at /var/lib/rancher/k3s/server/agent-token INFO[0000] To join agent node to cluster: k3s agent -s https://93.95.230.133:6443 -t ${AGENT_NODE_TOKEN} INFO[0000] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml INFO[0000] Run: k3s kubectl INFO[0000] Tunnel server egress proxy mode: agent INFO[0000] Tunnel server egress proxy waiting for runtime core to become available I0928 23:55:37.252823 4424 server.go:576] external host was not specified, using 93.95.230.133 I0928 23:55:37.252972 4424 server.go:168] Version: v1.24.4+k3s1 I0928 23:55:37.252999 4424 server.go:170] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" INFO[0000] Waiting for API server to become available I0928 23:55:37.842418 4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: 
NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0928 23:55:37.842495 4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0928 23:55:37.843182 4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0928 23:55:37.843225 4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0928 23:55:37.844239 4424 shared_informer.go:255] Waiting for caches to sync for node_authorizer W0928 23:55:37.870767 4424 genericapiserver.go:557] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. I0928 23:55:37.872071 4424 instance.go:274] Using reconciler: lease INFO[0000] certificate CN=k3s-master-01 signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC INFO[0000] certificate CN=system:node:k3s-master-01,O=system:nodes signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC I0928 23:55:38.018546 4424 instance.go:586] API group "internal.apiserver.k8s.io" is not enabled, skipping. W0928 23:55:38.190500 4424 genericapiserver.go:557] Skipping API authentication.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.194052 4424 genericapiserver.go:557] Skipping API authorization.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.224313 4424 genericapiserver.go:557] Skipping API certificates.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.226437 4424 genericapiserver.go:557] Skipping API coordination.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.233947 4424 genericapiserver.go:557] Skipping API networking.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.243739 4424 genericapiserver.go:557] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0928 23:55:38.249573 4424 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.249619 4424 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0928 23:55:38.250786 4424 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. W0928 23:55:38.250826 4424 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0928 23:55:38.254056 4424 genericapiserver.go:557] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0928 23:55:38.257149 4424 genericapiserver.go:557] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W0928 23:55:38.260141 4424 genericapiserver.go:557] Skipping API apps/v1beta2 because it has no resources. 
W0928 23:55:38.260179 4424 genericapiserver.go:557] Skipping API apps/v1beta1 because it has no resources. W0928 23:55:38.261531 4424 genericapiserver.go:557] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I0928 23:55:38.264409 4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0928 23:55:38.264457 4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W0928 23:55:38.274807 4424 genericapiserver.go:557] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. INFO[0001] Module overlay was already loaded WARN[0001] Failed to load kernel module iptable_nat with modprobe INFO[0001] Set sysctl 'net/ipv4/conf/default/forwarding' to 1 INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 INFO[0001] Set sysctl 'net/ipv4/conf/all/forwarding' to 1 INFO[0001] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log INFO[0001] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd INFO[0002] Containerd is now running INFO[0002] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3s-master-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key INFO[0002] Connecting to proxy url="wss://127.0.0.1:6443/v1-k3s/connect" INFO[0002] Handling backend connection request [k3s-master-01] I0928 23:55:40.028321 4424 secure_serving.go:210] Serving securely on 127.0.0.1:6444 I0928 23:55:40.028526 4424 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt" I0928 23:55:40.045968 4424 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" I0928 23:55:40.046152 4424 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0928 
23:55:40.046582 4424 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key" I0928 23:55:40.046790 4424 controller.go:83] Starting OpenAPI AggregationController I0928 23:55:40.046992 4424 autoregister_controller.go:141] Starting autoregister controller I0928 23:55:40.047033 4424 cache.go:32] Waiting for caches to sync for autoregister controller I0928 23:55:40.049556 4424 available_controller.go:491] Starting AvailableConditionController I0928 23:55:40.049600 4424 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0928 23:55:40.049853 4424 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0928 23:55:40.049885 4424 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller I0928 23:55:40.049931 4424 apf_controller.go:317] Starting API Priority and Fairness config controller I0928 23:55:40.050083 4424 apiservice_controller.go:97] Starting APIServiceRegistrationController I0928 23:55:40.050105 4424 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0928 23:55:40.050183 4424 customresource_discovery_controller.go:209] Starting DiscoveryController I0928 23:55:40.050212 4424 controller.go:80] Starting OpenAPI V3 AggregationController I0928 23:55:40.050256 4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I0928 23:55:40.052556 4424 crdregistration_controller.go:111] Starting crd-autoregister controller I0928 23:55:40.052580 4424 shared_informer.go:255] Waiting for caches to sync for crd-autoregister I0928 23:55:40.053923 4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I0928 23:55:40.054130 4424 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt" I0928 23:55:40.055884 4424 controller.go:85] Starting OpenAPI controller I0928 23:55:40.055914 4424 controller.go:85] Starting OpenAPI V3 controller I0928 23:55:40.055950 4424 naming_controller.go:291] Starting NamingConditionController I0928 23:55:40.055975 4424 establishing_controller.go:76] Starting EstablishingController I0928 23:55:40.055996 4424 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0928 23:55:40.056014 4424 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0928 23:55:40.056041 4424 crd_finalizer.go:266] Starting CRDFinalizer INFO[0003] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error I0928 23:55:40.135978 4424 controller.go:611] quota admission added evaluator for: namespaces E0928 23:55:40.139694 4424 controller.go:166] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time I0928 23:55:40.144300 4424 shared_informer.go:262] Caches are synced for node_authorizer I0928 23:55:40.147111 4424 cache.go:39] Caches are synced for autoregister controller I0928 23:55:40.149729 4424 cache.go:39] Caches are synced for AvailableConditionController controller 
I0928 23:55:40.149928 4424 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller I0928 23:55:40.149964 4424 apf_controller.go:322] Running API Priority and Fairness config worker I0928 23:55:40.152088 4424 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0928 23:55:40.152675 4424 shared_informer.go:262] Caches are synced for crd-autoregister I0928 23:55:40.704619 4424 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0928 23:55:41.052773 4424 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0928 23:55:41.055749 4424 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0928 23:55:41.055795 4424 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0928 23:55:41.257309 4424 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0928 23:55:41.275072 4424 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0928 23:55:41.352049 4424 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1] W0928 23:55:41.354485 4424 lease.go:234] Resetting endpoints for master service "kubernetes" to [93.95.230.133] I0928 23:55:41.354980 4424 controller.go:611] quota admission added evaluator for: endpoints I0928 23:55:41.357122 4424 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet. Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. I0928 23:55:41.971298 4424 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" I0928 23:55:41.971568 4424 server.go:395] "Kubelet version" kubeletVersion="v1.24.4+k3s1" I0928 23:55:41.971621 4424 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" W0928 23:55:41.973079 4424 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" I0928 23:55:41.973284 4424 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" I0928 23:55:41.973468 4424 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] I0928 23:55:41.973546 4424 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} I0928 23:55:41.973593 4424 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" I0928 23:55:41.973614 4424 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true I0928 23:55:41.973675 4424 state_mem.go:36] "Initialized new in-memory state store" I0928 23:55:41.973980 4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt" I0928 23:55:41.989696 4424 kubelet.go:376] "Attempting to sync node with API server" I0928 23:55:41.989745 4424 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests" I0928 23:55:41.989774 4424 kubelet.go:278] "Adding apiserver pod source" I0928 23:55:41.989797 4424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" W0928 23:55:42.002352 4424 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" E0928 23:55:42.002399 4424 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" I0928 23:55:42.003037 4424 kuberuntime_manager.go:239] "Container runtime initialized" containerRuntime="containerd" version="v1.6.6-k3s1" apiVersion="v1" W0928 23:55:42.003168 4424 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
I0928 23:55:42.003405 4424 server.go:1177] "Started kubelet" I0928 23:55:42.004551 4424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" I0928 23:55:42.004974 4424 server.go:150] "Starting to listen" address="0.0.0.0" port=10250 I0928 23:55:42.005413 4424 server.go:410] "Adding debug handlers to kubelet server" I0928 23:55:42.006660 4424 volume_manager.go:289] "Starting Kubelet Volume Manager" I0928 23:55:42.006867 4424 desired_state_of_world_populator.go:145] "Desired state populator starts to run" E0928 23:55:42.022842 4424 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs" E0928 23:55:42.022938 4424 kubelet.go:1298] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" E0928 23:55:42.027922 4424 kubelet_network_linux.go:103] "Failed to ensure marking rule for KUBE-MARK-DROP chain" err=< error checking rule: exit status 2: iptables v1.8.6 (nf_tables): unknown option "--or-mark" Try `iptables -h' or 'iptables --help' for more information. > I0928 23:55:42.027957 4424 kubelet_network_linux.go:84] "Failed to initialize protocol iptables rules; some functionality may be missing." protocol=IPv4 E0928 23:55:42.031764 4424 kubelet_network_linux.go:103] "Failed to ensure marking rule for KUBE-MARK-DROP chain" err=< error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): unknown option "--or-mark" Try `ip6tables -h' or 'ip6tables --help' for more information. > I0928 23:55:42.031776 4424 kubelet_network_linux.go:84] "Failed to initialize protocol iptables rules; some functionality may be missing." 
protocol=IPv6 I0928 23:55:42.031781 4424 status_manager.go:161] "Starting to sync pod status with apiserver" I0928 23:55:42.031789 4424 kubelet.go:1986] "Starting kubelet main sync loop" E0928 23:55:42.031811 4424 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" E0928 23:55:42.041250 4424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3s-master-01\" not found" node="k3s-master-01" I0928 23:55:42.068224 4424 cpu_manager.go:213] "Starting CPU manager" policy="none" I0928 23:55:42.068256 4424 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" I0928 23:55:42.068286 4424 state_mem.go:36] "Initialized new in-memory state store" I0928 23:55:42.068987 4424 policy_none.go:49] "None policy: Start" I0928 23:55:42.069290 4424 memory_manager.go:168] "Starting memorymanager" policy="None" I0928 23:55:42.069323 4424 state_mem.go:35] "Initializing new in-memory state store" I0928 23:55:42.072602 4424 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" I0928 23:55:42.072740 4424 plugin_manager.go:114] "Starting Kubelet Plugin Manager" E0928 23:55:42.074972 4424 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k3s-master-01\" not found" E0928 23:55:42.126020 4424 kubelet.go:2424] "Error getting node" err="node \"k3s-master-01\" not found" I0928 23:55:42.126350 4424 kubelet_node_status.go:70] "Attempting to register node" node="k3s-master-01" I0928 23:55:42.133347 4424 kubelet_node_status.go:73] "Successfully registered node" node="k3s-master-01" INFO[0005] Annotations and labels have been set successfully on node: k3s-master-01 INFO[0005] Starting flannel with backend vxlan INFO[0005] Kube API server is now running INFO[0005] ETCD server is now running INFO[0005] k3s is up and running INFO[0005] Waiting for cloud-controller-manager privileges to become available INFO[0005] Tunnel server egress proxy waiting for runtime core to become available I0928 23:55:42.477239 4424 serving.go:355] Generated self-signed cert in-memory INFO[0005] Applying CRD addons.k3s.cattle.io I0928 23:55:42.697756 4424 controllermanager.go:180] Version: v1.24.4+k3s1 I0928 23:55:42.697822 4424 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0928 23:55:42.704028 4424 secure_serving.go:210] Serving securely on 127.0.0.1:10257 I0928 23:55:42.704316 4424 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0928 23:55:42.704355 4424 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController I0928 23:55:42.704394 4424 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0928 23:55:42.704454 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0928 23:55:42.704479 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0928 23:55:42.704515 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0928 23:55:42.704535 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file INFO[0005] 
Applying CRD helmcharts.helm.cattle.io INFO[0005] Applying CRD helmchartconfigs.helm.cattle.io INFO[0005] Waiting for CRD helmchartconfigs.helm.cattle.io to become available I0928 23:55:42.805333 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0928 23:55:42.805391 4424 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController I0928 23:55:42.805460 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0928 23:55:42.866324 4424 shared_informer.go:255] Waiting for caches to sync for tokens I0928 23:55:42.871355 4424 controller.go:611] quota admission added evaluator for: serviceaccounts I0928 23:55:42.872616 4424 controllermanager.go:593] Started "serviceaccount" I0928 23:55:42.872730 4424 serviceaccounts_controller.go:117] Starting service account controller I0928 23:55:42.872765 4424 shared_informer.go:255] Waiting for caches to sync for service account I0928 23:55:42.876641 4424 controllermanager.go:593] Started "csrsigning" I0928 23:55:42.876769 4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving" I0928 23:55:42.876806 4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving I0928 23:55:42.876845 4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client" I0928 23:55:42.876868 4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client I0928 23:55:42.876916 4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client" I0928 23:55:42.876944 4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client I0928 23:55:42.876970 4424 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown" I0928 23:55:42.876983 4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown I0928 23:55:42.877007 4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key" I0928 23:55:42.894893 4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key" I0928 23:55:42.895100 4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key" I0928 23:55:42.895352 4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key" I0928 23:55:42.899857 4424 controllermanager.go:593] Started "persistentvolume-binder" I0928 23:55:42.899974 4424 pv_controller_base.go:311] Starting persistent volume controller I0928 23:55:42.899993 4424 shared_informer.go:255] Waiting for caches to sync for persistent volume I0928 23:55:42.903972 4424 node_lifecycle_controller.go:377] Sending events to api server. I0928 23:55:42.904073 4424 taint_manager.go:163] "Sending events to api server" I0928 23:55:42.904137 4424 node_lifecycle_controller.go:505] Controller will reconcile labels. 
I0928 23:55:42.904191 4424 controllermanager.go:593] Started "nodelifecycle" I0928 23:55:42.904310 4424 node_lifecycle_controller.go:539] Starting node controller I0928 23:55:42.904340 4424 shared_informer.go:255] Waiting for caches to sync for taint I0928 23:55:42.908419 4424 controllermanager.go:593] Started "endpointslicemirroring" I0928 23:55:42.908522 4424 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller I0928 23:55:42.908543 4424 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring I0928 23:55:42.912504 4424 controllermanager.go:593] Started "job" I0928 23:55:42.912630 4424 job_controller.go:184] Starting job controller I0928 23:55:42.912660 4424 shared_informer.go:255] Waiting for caches to sync for job I0928 23:55:42.916631 4424 controllermanager.go:593] Started "replicaset" W0928 23:55:42.916668 4424 controllermanager.go:558] "tokencleaner" is disabled I0928 23:55:42.916789 4424 replica_set.go:205] Starting replicaset controller I0928 23:55:42.916826 4424 shared_informer.go:255] Waiting for caches to sync for ReplicaSet I0928 23:55:42.920782 4424 controllermanager.go:593] Started "persistentvolume-expander" I0928 23:55:42.920880 4424 expand_controller.go:341] Starting expand controller I0928 23:55:42.920900 4424 shared_informer.go:255] Waiting for caches to sync for expand I0928 23:55:42.924775 4424 controllermanager.go:593] Started "podgc" I0928 23:55:42.924909 4424 gc_controller.go:92] Starting GC controller I0928 23:55:42.924929 4424 shared_informer.go:255] Waiting for caches to sync for GC I0928 23:55:42.967243 4424 shared_informer.go:262] Caches are synced for tokens W0928 23:55:42.971710 4424 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" E0928 23:55:42.971770 4424 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" I0928 23:55:43.002406 4424 apiserver.go:52] "Watching apiserver" I0928 23:55:43.027707 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch I0928 23:55:43.027769 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0928 23:55:43.027812 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I0928 23:55:43.027863 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps I0928 23:55:43.027902 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io I0928 23:55:43.027948 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps I0928 23:55:43.027992 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0928 23:55:43.028033 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for addons.k3s.cattle.io I0928 23:55:43.028083 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges I0928 23:55:43.028139 4424 resource_quota_monitor.go:233] 
QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0928 23:55:43.028176 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch I0928 23:55:43.028219 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts I0928 23:55:43.028255 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps I0928 23:55:43.028296 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps I0928 23:55:43.028332 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps I0928 23:55:43.028372 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io I0928 23:55:43.028410 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io I0928 23:55:43.028459 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates I0928 23:55:43.028496 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io I0928 23:55:43.028520 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io I0928 23:55:43.028547 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints I0928 23:55:43.028578 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io I0928 23:55:43.028599 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io I0928 23:55:43.028619 4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io I0928 23:55:43.028635 4424 controllermanager.go:593] Started "resourcequota" I0928 23:55:43.028670 4424 resource_quota_controller.go:273] Starting resource quota controller I0928 23:55:43.028683 4424 shared_informer.go:255] Waiting for caches to sync for resource quota I0928 23:55:43.028774 4424 resource_quota_monitor.go:308] QuotaMonitor running I0928 23:55:43.071879 4424 reconciler.go:159] "Reconciler: start to sync state" I0928 23:55:43.169089 4424 controllermanager.go:593] Started "deployment" I0928 23:55:43.169170 4424 deployment_controller.go:153] "Starting controller" controller="deployment" I0928 23:55:43.169201 4424 shared_informer.go:255] Waiting for caches to sync for deployment INFO[0006] Done waiting for CRD helmchartconfigs.helm.cattle.io to become available INFO[0006] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.19.300.tgz INFO[0006] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.19.300.tgz INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml INFO[0006] Writing manifest: 
/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml INFO[0006] Starting k3s.cattle.io/v1, Kind=Addon controller INFO[0006] Creating deploy event broadcaster I0928 23:55:43.376671 4424 controllermanager.go:593] Started "disruption" I0928 23:55:43.376738 4424 disruption.go:363] Starting disruption controller I0928 23:55:43.376770 4424 shared_informer.go:255] Waiting for caches to sync for disruption INFO[0006] Creating svccontroller event broadcaster INFO[0006] Cluster dns configmap has been set successfully INFO[0006] Labels and annotations have been set successfully on node: k3s-master-01 INFO[0006] Starting helm.cattle.io/v1, Kind=HelmChart controller INFO[0006] Starting helm.cattle.io/v1, Kind=HelmChartConfig controller I0928 23:55:43.522300 4424 controllermanager.go:593] Started "endpoint" I0928 23:55:43.522365 4424 endpoints_controller.go:178] Starting endpoint controller I0928 23:55:43.522402 4424 shared_informer.go:255] Waiting for caches to sync for endpoint INFO[0006] Starting apps/v1, Kind=DaemonSet controller INFO[0006] Starting apps/v1, Kind=Deployment controller INFO[0006] Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller INFO[0006] Starting batch/v1, Kind=Job controller INFO[0006] Starting /v1, Kind=Node controller WARN[0006] Unable to fetch coredns config map: configmaps "coredns" not found INFO[0006] Starting /v1, Kind=ConfigMap controller INFO[0006] Starting /v1, Kind=ServiceAccount controller INFO[0006] Starting /v1, Kind=Pod controller INFO[0006] Starting /v1, Kind=Service controller INFO[0006] Starting /v1, Kind=Endpoints controller WARN[0006] Unable to fetch coredns config map: configmaps "coredns" not found I0928 23:55:43.668497 4424 controllermanager.go:593] Started "replicationcontroller" W0928 23:55:43.668543 4424 controllermanager.go:558] "bootstrapsigner" is disabled I0928 23:55:43.668597 4424 replica_set.go:205] Starting replicationcontroller controller I0928 23:55:43.668624 4424 shared_informer.go:255] Waiting for caches to sync for ReplicationController I0928 23:55:43.825359 4424 controllermanager.go:593] Started "pv-protection" I0928 23:55:43.825458 4424 pv_protection_controller.go:79] Starting PV protection controller I0928 23:55:43.825495 4424 shared_informer.go:255] Waiting for caches to sync for PV protection I0928 23:55:43.988129 4424 controllermanager.go:593] Started "ttl-after-finished" I0928 23:55:43.988228 4424 ttlafterfinished_controller.go:109] Starting TTL after finished controller I0928 23:55:43.988266 4424 shared_informer.go:255] Waiting for caches to sync for TTL after finished I0928 23:55:44.126922 4424 controllermanager.go:593] Started "endpointslice" I0928 23:55:44.127026 4424 endpointslice_controller.go:257] Starting endpoint slice controller I0928 23:55:44.127057 4424 shared_informer.go:255] Waiting for caches to sync for endpoint_slice I0928 23:55:44.268331 4424 controllermanager.go:593] Started "cronjob" I0928 23:55:44.268405 4424 cronjob_controllerv2.go:135] "Starting cronjob controller v2" I0928 23:55:44.268436 4424 shared_informer.go:255] Waiting for caches to sync for cronjob I0928 23:55:44.317792 4424 controllermanager.go:593] Started "csrcleaner" I0928 23:55:44.317857 
4424 cleaner.go:82] Starting CSR cleaner controller INFO[0007] Starting /v1, Kind=Secret controller I0928 23:55:44.469348 4424 controllermanager.go:593] Started "clusterrole-aggregation" W0928 23:55:44.469392 4424 controllermanager.go:558] "route" is disabled I0928 23:55:44.469475 4424 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator I0928 23:55:44.469516 4424 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator INFO[0007] Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3] I0928 23:55:44.718172 4424 controllermanager.go:593] Started "garbagecollector" I0928 23:55:44.718315 4424 garbagecollector.go:149] Starting garbage collector controller I0928 23:55:44.718351 4424 shared_informer.go:255] Waiting for caches to sync for garbage collector I0928 23:55:44.718388 4424 graph_builder.go:289] GraphBuilder running INFO[0007] Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3s-master-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables I0928 23:55:44.822591 4424 server.go:231] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP" I0928 23:55:44.851226 4424 node.go:163] Successfully retrieved node IP: 93.95.230.133 I0928 23:55:44.851280 4424 server_others.go:138] "Detected node IP" address="93.95.230.133" I0928 23:55:44.852683 4424 server_others.go:206] "Using iptables Proxier" I0928 23:55:44.852724 4424 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4 I0928 23:55:44.852768 4424 server_others.go:214] "Creating dualStackProxier for iptables" I0928 23:55:44.852799 4424 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6" I0928 23:55:44.852834 4424 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259" I0928 23:55:44.852914 4424 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259" I0928 23:55:44.853061 4424 server.go:661] "Version info" version="v1.24.4+k3s1" I0928 23:55:44.853100 4424 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0928 23:55:44.854043 4424 config.go:317] "Starting service config controller" I0928 23:55:44.854092 4424 shared_informer.go:255] Waiting for caches to sync for service config I0928 23:55:44.854133 4424 config.go:226] "Starting endpoint slice config controller" I0928 23:55:44.854164 4424 shared_informer.go:255] Waiting for caches to sync for endpoint slice config I0928 23:55:44.854334 4424 config.go:444] "Starting node config controller" 
I0928 23:55:44.854376 4424 shared_informer.go:255] Waiting for caches to sync for node config
I0928 23:55:44.857798 4424 controller.go:611] quota admission added evaluator for: events.events.k8s.io
INFO[0007] Active TLS secret kube-system/k3s-serving (ver=247) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3]
I0928 23:55:44.954919 4424 shared_informer.go:262] Caches are synced for node config
I0928 23:55:44.954975 4424 shared_informer.go:262] Caches are synced for service config
I0928 23:55:44.955021 4424 shared_informer.go:262] Caches are synced for endpoint slice config
E0928 23:55:44.958687 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'
```
Your kernel appears to be missing the nf_conntrack module. See https://conntrack-tools.netfilter.org/manual.html#requirements. You can also run …
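One quick way to check for the module, sketched in Go under the assumption that nf_conntrack is built as a loadable module (if it were compiled into the kernel it would not appear in /proc/modules):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Loadable kernel modules are listed in /proc/modules; a built-in
	// nf_conntrack would not appear here, so absence is only a hint.
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		fmt.Println("cannot read /proc/modules:", err)
		return
	}
	if strings.Contains(string(data), "nf_conntrack") {
		fmt.Println("nf_conntrack is loaded")
	} else {
		fmt.Println("nf_conntrack not loaded; try: modprobe nf_conntrack")
	}
}
```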
K3s does not currently support nftables-backed distributions, such as RHEL 8, CentOS 8, Debian Buster, Ubuntu 20.04, and so on.
K3s should be able to determine which iptables version to use (nft or legacy) and program the packet filters accordingly.