
nftables support #1812

Closed
Oats87 opened this issue May 19, 2020 · 13 comments
@Oats87
Member

Oats87 commented May 19, 2020

K3s does not currently support nftables-backed distributions, such as RHEL 8, CentOS 8, Debian Buster, Ubuntu 20.04, and so on.

K3s should be able to determine which iptables version to use (nft or legacy) and program the packet filters accordingly.

@davidnuzik davidnuzik added this to the v1.19 - September milestone May 19, 2020
@davidnuzik davidnuzik added [zube]: Working kind/task Work not related to bug fixes or new functionality labels May 19, 2020
@Oats87
Member Author

Oats87 commented May 20, 2020

The approach that was discussed is as follows:

  1. Compile a 1.8.x version of iptables in rancher/k3s-root
  2. Move the iptables links to bin/aux, and perform this action in rancher/k3s-root
  3. Create and maintain the iptables-detect scripts within the rancher/k3s-root repository
  4. Create a second set of tarballs that only contain the iptables links, scripts, and xtables binaries

This approach has K3s prefer the host-level iptables binaries and, in the worst case, fall back to the iptables that k3s-root provides.
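
For reference, here is a minimal sketch of the kind of rule-counting detection such a script could perform. This illustrates the common approach only; the actual iptables-detect.sh in rancher/k3s-root may differ, and it assumes xtables-legacy-multi and xtables-nft-multi are on PATH:

#!/bin/sh
# Sketch: prefer whichever backend already has rules programmed; fall back to legacy.
num_legacy=$(xtables-legacy-multi iptables-save 2>/dev/null | grep -c '^-')
num_nft=$(xtables-nft-multi iptables-save 2>/dev/null | grep -c '^-')
if [ "$num_nft" -gt "$num_legacy" ]; then
  mode=nft
else
  mode=legacy
fi
echo "mode is $mode detected via rules"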

@Oats87
Member Author

Oats87 commented Jun 22, 2020

We are currently planning on leaving servicelb operating in legacy mode, which should work as long as the underlying host has the legacy iptables modules loaded. If not, we may run into problems; more investigation on this to follow.

@Oats87
Member Author

Oats87 commented Jun 22, 2020

On both a CentOS 8 and Debian Buster system, the ip_tables module is loaded on startup. If the module is not loaded, it will cause svclb to fail, as expected.
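
A quick way to check this on the host (standard commands, shown here only as an illustration):

lsmod | grep -w ip_tables   # is the legacy ip_tables module loaded?
modprobe ip_tables          # load it manually if the kernel ships it as a module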

@lelandsindttouchnet

If I understand correctly, RHEL and CentOS 8 no longer include the -legacy binaries. Leaving servicelb operating in legacy mode may work, but it would leave the system administrator unable to view the iptables config.

https://access.redhat.com/solutions/4377321

rpm -q --changelog iptables
...snip...
* Wed Jul 11 2018 Phil Sutter - 1.8.0-1
- New upstream version 1.8.0
- Drop compat sub-package
- Use nft tool versions, drop legacy ones
...snip...

would it not?

@Oats87
Member Author

Oats87 commented Jun 22, 2020

> If I understand correctly, RHEL and CentOS 8 no longer include the -legacy binaries. Leaving servicelb operating in legacy mode may work, but it would leave the system administrator unable to view the iptables config. [...] would it not?

On a RHEL 8/CentOS 8 system, you would be unable to view the rules put in place by the service LB from the host, but you would be able to exec into the pod if necessary to view/manipulate the legacy rules. There is currently no simple method to determine whether to use ip_tables or nf_tables from within a pod that is not in the host network namespace without making further modifications to k3s and servicelb, which is not preferred.

We are also packaging a statically compiled legacy binary with the new k3s-root builds that can be called from the host if absolutely necessary, although accessing the binary is slightly convoluted.
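
For example, something along these lines should work from the host to inspect the legacy rules, assuming the k3s-root build places xtables-legacy-multi under bin/aux and there is a single data directory (otherwise substitute the actual <data-hash>):

/var/lib/rancher/k3s/data/*/bin/aux/xtables-legacy-multi iptables-save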

@Oats87
Member Author

Oats87 commented Jun 24, 2020

The following operating systems (and their respective permutations) were tested with these new changes, as the implemented changes prefer the host-level binaries.

Alpine Linux 3.12

iptables installed
iptables not installed

Ubuntu 18.04

iptables installed
iptables installed, alternatives set for nft
iptables not installed

Ubuntu 20.04

iptables installed
iptables installed, alternatives set for nft
iptables not installed

Debian Buster

iptables installed (nftables)
iptables installed, alternatives set to legacy
iptables not installed

RHEL 7

iptables installed
iptables not installed

RHEL 8

iptables installed
iptables not installed

CentOS 7

iptables installed
iptables not installed

CentOS 8

iptables installed
iptables not installed

For all operating system tests where iptables is not installed, you can verify the mode that was chosen via the /var/lib/rancher/k3s/data/<data-hash>/bin/aux/iptables link, which should point to either xtables-nft-multi or xtables-legacy-multi. In addition, you can simply invoke the /var/lib/rancher/k3s/data/<data-hash>/bin/aux/iptables-detect.sh script, which will tell you what was chosen and by which method.
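
For example (assuming a single data directory; otherwise substitute the actual <data-hash>):

readlink /var/lib/rancher/k3s/data/*/bin/aux/iptables    # -> xtables-nft-multi or xtables-legacy-multi
/var/lib/rancher/k3s/data/*/bin/aux/iptables-detect.sh   # prints the chosen mode and how it was detected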

@Oats87
Member Author

Oats87 commented Jun 24, 2020

Tested via k3d, which will use the aux-located iptables-detect. As it runs within a container, it falls back to legacy automatically.

@ShylajaDevadiga
Contributor

Alpine Linux 3.12

iptables installed No
nftables installed No
iptables not installed Not Applicable
k3s iptables No

Ubuntu 18.04

iptables installed Yes
iptables installed, alternatives set for nft No
iptables uninstalled Yes

Ubuntu 20.04

iptables installed Yes
iptables installed, alternatives set for nft No
iptables uninstalled Yes

Debian Buster 10

iptables installed (nftables) Yes
iptables installed, alternatives set to legacy No
iptables installed, alternatives set to nft Yes
iptables uninstalled Yes

RHEL 7.8

iptables installed Yes
iptables uninstalled Yes

RHEL 8.0

iptables installed No
nftables installed No
iptables not installed Not Applicable
k3s iptables Yes

CentOS 7.8

iptables installed Yes
iptables uninstalled Yes

CentOS 8.0

iptables installed Yes
iptables uninstalled Yes

More info:
Alpine Linux 3.12 (iptables-detect.sh not found)

cat /etc/alpine-release 
3.12.0
iptables --version
-ash: iptables: not found

ls ./bin/aux
mount      wg-add.sh

Ubuntu 18.04

iptables --version 
iptables v1.6.1

iptables-detect.sh 
mode is legacy detected via os and containerized is false

Alternatives not set

ls -l /etc/alternatives/iptables*
ls: cannot access '/etc/alternatives/iptables*': No such file or directory

After uninstall iptables is removed successfully

Ubuntu 20.04

iptables --version
iptables v1.8.4 (legacy)

iptables-detect.sh 
mode is legacy detected via rules and containerized is false

Alternatives is set to legacy

ls -l /etc/alternatives/iptables*
lrwxrwxrwx 1 root root 25 Apr 23 08:14 /etc/alternatives/iptables -> /usr/sbin/iptables-legacy

After uninstall, iptables is removed successfully

RHEL 7.8

iptables --version
iptables v1.4.21

iptables-detect.sh 
./iptables-detect.sh: line 197: [: 7.8: integer expression expected
mode is legacy detected via os and containerized is false

Alternatives not set

ls -l /etc/alternatives/iptables*
ls: cannot access '/etc/alternatives/iptables*': No such file or directory

After uninstall iptables is removed successfully

RHEL 8

iptables --version
-bash: iptables: command not found

./iptables-detect.sh 
./iptables-detect.sh: line 197: [: 8.0: integer expression expected
mode is legacy detected via os and containerized is false

iptables not installed
Alternatives not set
nftables not installed

k3s iptables set

aux]# ./iptables --version
iptables v1.8.3 (legacy)

CentOS 7

iptables --version
iptables v1.4.21

./iptables-detect.sh 
mode is legacy detected via os and containerized is false

Alternatives not set
nftables not installed

CentOS 8

cat /etc/redhat-release 
CentOS Linux release 8.0.1905 (Core) 

iptables --version
iptables v1.8.2 (nf_tables)

iptables-detect.sh 
mode is nft detected via os and containerized is false

After uninstall iptables is removed successfully

Debian Buster 10

iptables --version
iptables v1.8.2 (nf_tables)

iptables-detect.sh 
mode is nft detected via rules and containerized is false

Alternatives is set to nft
After uninstall iptables is removed successfully

@brandond
Member

brandond commented Jul 21, 2020

./iptables-detect.sh: line 197: [: 7.8: integer expression expected
./iptables-detect.sh: line 197: [: 8.0: integer expression expected

This seems like something that should be fixed
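
For context, that message is what the shell's integer test prints when it is handed a dotted version string such as "7.8". A common fix pattern (a sketch only, not necessarily the change that was made to iptables-detect.sh) is to strip the string down to the major version before comparing:

. /etc/os-release
major="${VERSION_ID%%.*}"   # "7.8" -> "7", "8.0" -> "8"
if [ "$major" -ge 8 ]; then
  echo "RHEL/CentOS 8 or newer"
fi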

@ShylajaDevadiga
Contributor

@brandond Yes, @Oats87 is working on it.

Oats87 added a commit to Oats87/k3s that referenced this issue Jul 24, 2020
Signed-off-by: Chris Kim <oats87g@gmail.com>
Oats87 added a commit to Oats87/k3s-root that referenced this issue Jul 24, 2020
Part of: k3s-io/k3s#1812

Signed-off-by: Chris Kim <oats87g@gmail.com>
Oats87 added a commit to Oats87/k3s that referenced this issue Jul 24, 2020
Signed-off-by: Chris Kim <oats87g@gmail.com>
@ShylajaDevadiga
Contributor

k3s -v
k3s version v1.18.6+k3s-169ee639 (169ee63)

Alpine Linux 3.12

iptables installed NO
nftables installed NO

./bin/aux/iptables-detect.sh 
mode is legacy detected via os and containerized is false

./bin/aux/iptables --version 
iptables v1.8.3 (nf_tables)

Ubuntu 18.04

iptables installed YES
iptables removed YES

iptables --version
iptables v1.6.1

./bin/aux/iptables-detect.sh 
mode is legacy detected via os and containerized is false

Ubuntu 20.04

iptables installed YES
iptables installed, alternatives set for nft NO
iptables removed YES

iptables --version
iptables v1.8.4 (legacy)

ls -l /etc/alternatives/iptables*
lrwxrwxrwx 1 root root 25 Jul 16 17:13 /etc/alternatives/iptables -> /usr/sbin/iptables-legacy

./bin/aux/iptables-detect.sh 
mode is legacy detected via rules and containerized is false

Debian Buster 10

iptables installed (nftables) YES
iptables installed, alternatives set to nft YES
iptables uninstalled YES

iptables --version
iptables v1.8.2 (nf_tables)

 ls -l /etc/alternatives/iptables*
lrwxrwxrwx 1 root root 22 Jun 10 17:41 /etc/alternatives/iptables -> /usr/sbin/iptables-nft

./bin/aux/iptables-detect.sh 
mode is nft detected via rules and containerized is false

CentOS 7.8

iptables installed YES
iptables uninstalled YES

iptables --version
iptables v1.4.21

./bin/aux/iptables-detect.sh 
mode is legacy detected via os and containerized is false

CentOS 8

iptables installed YES
iptables uninstalled YES

iptables --version
iptables v1.8.2 (nf_tables)

iptables-detect.sh 
mode is nft detected via os and containerized is false

CentOS 8.2
iptables installed NO

iptables --version
-bash: iptables: command not found

./bin/aux/iptables --version
iptables v1.8.3 (nf_tables)

./bin/aux/iptables-detect.sh 
mode is nft detected via os and containerized is false

RHEL 7.8

iptables installed YES
iptables installed, alternatives set to nft NO
iptables uninstalled YES

iptables --version
iptables v1.4.21

./bin/aux/iptables-detect.sh 
mode is legacy detected via os and containerized is false

RHEL 8
iptables installed NO
nftables installed NO

./bin/aux/iptables-detect.sh
mode is nft detected via os and containerized is false
./bin/aux/iptables --version 
iptables v1.8.3 (nf_tables)

@arankaren

arankaren commented Sep 28, 2022

Impossible to run inside Hyperbola v0.4.1 with nftables and OpenRC init

run:
/usr/local/bin/k3s server --no-deploy traefik --write-kubeconfig-mode 644 --node-name k3s-master-01

log:

E0928 23:55:49.198804    4424 proxier.go:874] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'
	
	Try `iptables -h' or 'iptables --help' for more information.
 > table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:49.198841    4424 proxier.go:858] "Sync failed" retryingTime="30s"
E0928 23:55:49.201527    4424 proxier.go:874] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): Couldn't find match `conntrack'
	
	Try `ip6tables -h' or 'ip6tables --help' for more information.
 > table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
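
That error usually means the conntrack match extension is not usable on the host, for example because the corresponding netfilter modules are missing. A couple of standard commands to narrow it down (a diagnostic sketch, not a confirmed fix for Hyperbola):

lsmod | grep -E 'nf_conntrack|xt_conntrack'   # are the conntrack modules loaded?
modprobe xt_conntrack                         # try loading the match module
iptables -m conntrack --help                  # only lists the match options if the extension is available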
Full k3s log:
INFO[0000] Acquiring lock file /var/lib/rancher/k3s/data/.lock 
INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2 
WARN[0000] no-deploy flag is deprecated, it will be removed in v1.25. Use --skip-deploy instead. 
INFO[0000] Starting k3s v1.24.4+k3s1 (c3f830e9)         
INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s 
INFO[0000] Configuring database table schema and indexes, this may take a moment... 
INFO[0000] Database tables and indexes are up to date   
INFO[0000] Kine available at unix://kine.sock           
INFO[0000] generated self-signed CA certificate CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37.082320724 +0000 UTC notAfter=2032-09-25 15:55:37.082320724 +0000 UTC 
INFO[0000] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] generated self-signed CA certificate CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37.085006611 +0000 UTC notAfter=2032-09-25 15:55:37.085006611 +0000 UTC 
INFO[0000] certificate CN=kube-apiserver signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] generated self-signed CA certificate CN=k3s-request-header-ca@1664380537: notBefore=2022-09-28 15:55:37.085631413 +0000 UTC notAfter=2032-09-25 15:55:37.085631413 +0000 UTC 
INFO[0000] certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] generated self-signed CA certificate CN=etcd-server-ca@1664380537: notBefore=2022-09-28 15:55:37.086211811 +0000 UTC notAfter=2032-09-25 15:55:37.086211811 +0000 UTC 
INFO[0000] certificate CN=etcd-server signed by CN=etcd-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=etcd-client signed by CN=etcd-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] generated self-signed CA certificate CN=etcd-peer-ca@1664380537: notBefore=2022-09-28 15:55:37.087044373 +0000 UTC notAfter=2032-09-25 15:55:37.087044373 +0000 UTC 
INFO[0000] certificate CN=etcd-peer signed by CN=etcd-peer-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
WARN[0000] dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request 
INFO[0000] Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3] 
INFO[0000] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key 
INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259 
INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true 
INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --profiling=false 
INFO[0000] Server node token is available at /var/lib/rancher/k3s/server/token 
INFO[0000] To join server node to cluster: k3s server -s https://93.95.230.133:6443 -t ${SERVER_NODE_TOKEN} 
INFO[0000] Agent node token is available at /var/lib/rancher/k3s/server/agent-token 
INFO[0000] To join agent node to cluster: k3s agent -s https://93.95.230.133:6443 -t ${AGENT_NODE_TOKEN} 
INFO[0000] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml   
INFO[0000] Run: k3s kubectl                             
INFO[0000] Tunnel server egress proxy mode: agent       
INFO[0000] Tunnel server egress proxy waiting for runtime core to become available 
I0928 23:55:37.252823    4424 server.go:576] external host was not specified, using 93.95.230.133
I0928 23:55:37.252972    4424 server.go:168] Version: v1.24.4+k3s1
I0928 23:55:37.252999    4424 server.go:170] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
INFO[0000] Waiting for API server to become available   
I0928 23:55:37.842418    4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0928 23:55:37.842495    4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0928 23:55:37.843182    4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0928 23:55:37.843225    4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0928 23:55:37.844239    4424 shared_informer.go:255] Waiting for caches to sync for node_authorizer
W0928 23:55:37.870767    4424 genericapiserver.go:557] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0928 23:55:37.872071    4424 instance.go:274] Using reconciler: lease
INFO[0000] certificate CN=k3s-master-01 signed by CN=k3s-server-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
INFO[0000] certificate CN=system:node:k3s-master-01,O=system:nodes signed by CN=k3s-client-ca@1664380537: notBefore=2022-09-28 15:55:37 +0000 UTC notAfter=2023-09-28 15:55:37 +0000 UTC 
I0928 23:55:38.018546    4424 instance.go:586] API group "internal.apiserver.k8s.io" is not enabled, skipping.
W0928 23:55:38.190500    4424 genericapiserver.go:557] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.194052    4424 genericapiserver.go:557] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.224313    4424 genericapiserver.go:557] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.226437    4424 genericapiserver.go:557] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.233947    4424 genericapiserver.go:557] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.243739    4424 genericapiserver.go:557] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0928 23:55:38.249573    4424 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.249619    4424 genericapiserver.go:557] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0928 23:55:38.250786    4424 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0928 23:55:38.250826    4424 genericapiserver.go:557] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0928 23:55:38.254056    4424 genericapiserver.go:557] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0928 23:55:38.257149    4424 genericapiserver.go:557] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0928 23:55:38.260141    4424 genericapiserver.go:557] Skipping API apps/v1beta2 because it has no resources.
W0928 23:55:38.260179    4424 genericapiserver.go:557] Skipping API apps/v1beta1 because it has no resources.
W0928 23:55:38.261531    4424 genericapiserver.go:557] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0928 23:55:38.264409    4424 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0928 23:55:38.264457    4424 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0928 23:55:38.274807    4424 genericapiserver.go:557] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
INFO[0001] Module overlay was already loaded            
WARN[0001] Failed to load kernel module iptable_nat with modprobe 
INFO[0001] Set sysctl 'net/ipv4/conf/default/forwarding' to 1 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 
INFO[0001] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 
INFO[0001] Set sysctl 'net/ipv4/conf/all/forwarding' to 1 
INFO[0001] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[0001] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[0002] Containerd is now running                    
INFO[0002] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3s-master-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key 
INFO[0002] Connecting to proxy                           url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0002] Handling backend connection request [k3s-master-01] 
I0928 23:55:40.028321    4424 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I0928 23:55:40.028526    4424 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0928 23:55:40.045968    4424 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0928 23:55:40.046152    4424 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0928 23:55:40.046582    4424 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0928 23:55:40.046790    4424 controller.go:83] Starting OpenAPI AggregationController
I0928 23:55:40.046992    4424 autoregister_controller.go:141] Starting autoregister controller
I0928 23:55:40.047033    4424 cache.go:32] Waiting for caches to sync for autoregister controller
I0928 23:55:40.049556    4424 available_controller.go:491] Starting AvailableConditionController
I0928 23:55:40.049600    4424 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0928 23:55:40.049853    4424 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0928 23:55:40.049885    4424 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0928 23:55:40.049931    4424 apf_controller.go:317] Starting API Priority and Fairness config controller
I0928 23:55:40.050083    4424 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0928 23:55:40.050105    4424 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0928 23:55:40.050183    4424 customresource_discovery_controller.go:209] Starting DiscoveryController
I0928 23:55:40.050212    4424 controller.go:80] Starting OpenAPI V3 AggregationController
I0928 23:55:40.050256    4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0928 23:55:40.052556    4424 crdregistration_controller.go:111] Starting crd-autoregister controller
I0928 23:55:40.052580    4424 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0928 23:55:40.053923    4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0928 23:55:40.054130    4424 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0928 23:55:40.055884    4424 controller.go:85] Starting OpenAPI controller
I0928 23:55:40.055914    4424 controller.go:85] Starting OpenAPI V3 controller
I0928 23:55:40.055950    4424 naming_controller.go:291] Starting NamingConditionController
I0928 23:55:40.055975    4424 establishing_controller.go:76] Starting EstablishingController
I0928 23:55:40.055996    4424 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0928 23:55:40.056014    4424 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0928 23:55:40.056041    4424 crd_finalizer.go:266] Starting CRDFinalizer
INFO[0003] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error 
I0928 23:55:40.135978    4424 controller.go:611] quota admission added evaluator for: namespaces
E0928 23:55:40.139694    4424 controller.go:166] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
I0928 23:55:40.144300    4424 shared_informer.go:262] Caches are synced for node_authorizer
I0928 23:55:40.147111    4424 cache.go:39] Caches are synced for autoregister controller
I0928 23:55:40.149729    4424 cache.go:39] Caches are synced for AvailableConditionController controller
I0928 23:55:40.149928    4424 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0928 23:55:40.149964    4424 apf_controller.go:322] Running API Priority and Fairness config worker
I0928 23:55:40.152088    4424 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0928 23:55:40.152675    4424 shared_informer.go:262] Caches are synced for crd-autoregister
I0928 23:55:40.704619    4424 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0928 23:55:41.052773    4424 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0928 23:55:41.055749    4424 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0928 23:55:41.055795    4424 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0928 23:55:41.257309    4424 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0928 23:55:41.275072    4424 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0928 23:55:41.352049    4424 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
W0928 23:55:41.354485    4424 lease.go:234] Resetting endpoints for master service "kubernetes" to [93.95.230.133]
I0928 23:55:41.354980    4424 controller.go:611] quota admission added evaluator for: endpoints
I0928 23:55:41.357122    4424 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I0928 23:55:41.971298    4424 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0928 23:55:41.971568    4424 server.go:395] "Kubelet version" kubeletVersion="v1.24.4+k3s1"
I0928 23:55:41.971621    4424 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
W0928 23:55:41.973079    4424 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0928 23:55:41.973284    4424 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0928 23:55:41.973468    4424 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0928 23:55:41.973546    4424 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0928 23:55:41.973593    4424 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0928 23:55:41.973614    4424 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
I0928 23:55:41.973675    4424 state_mem.go:36] "Initialized new in-memory state store"
I0928 23:55:41.973980    4424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
I0928 23:55:41.989696    4424 kubelet.go:376] "Attempting to sync node with API server"
I0928 23:55:41.989745    4424 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0928 23:55:41.989774    4424 kubelet.go:278] "Adding apiserver pod source"
I0928 23:55:41.989797    4424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
W0928 23:55:42.002352    4424 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
E0928 23:55:42.002399    4424 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
I0928 23:55:42.003037    4424 kuberuntime_manager.go:239] "Container runtime initialized" containerRuntime="containerd" version="v1.6.6-k3s1" apiVersion="v1"
W0928 23:55:42.003168    4424 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0928 23:55:42.003405    4424 server.go:1177] "Started kubelet"
I0928 23:55:42.004551    4424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0928 23:55:42.004974    4424 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
I0928 23:55:42.005413    4424 server.go:410] "Adding debug handlers to kubelet server"
I0928 23:55:42.006660    4424 volume_manager.go:289] "Starting Kubelet Volume Manager"
I0928 23:55:42.006867    4424 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
E0928 23:55:42.022842    4424 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0928 23:55:42.022938    4424 kubelet.go:1298] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
E0928 23:55:42.027922    4424 kubelet_network_linux.go:103] "Failed to ensure marking rule for KUBE-MARK-DROP chain" err=<
	error checking rule: exit status 2: iptables v1.8.6 (nf_tables): unknown option "--or-mark"
	Try `iptables -h' or 'iptables --help' for more information.
 >
I0928 23:55:42.027957    4424 kubelet_network_linux.go:84] "Failed to initialize protocol iptables rules; some functionality may be missing." protocol=IPv4
E0928 23:55:42.031764    4424 kubelet_network_linux.go:103] "Failed to ensure marking rule for KUBE-MARK-DROP chain" err=<
	error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): unknown option "--or-mark"
	Try `ip6tables -h' or 'ip6tables --help' for more information.
 >
I0928 23:55:42.031776    4424 kubelet_network_linux.go:84] "Failed to initialize protocol iptables rules; some functionality may be missing." protocol=IPv6
I0928 23:55:42.031781    4424 status_manager.go:161] "Starting to sync pod status with apiserver"
I0928 23:55:42.031789    4424 kubelet.go:1986] "Starting kubelet main sync loop"
E0928 23:55:42.031811    4424 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
E0928 23:55:42.041250    4424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k3s-master-01\" not found" node="k3s-master-01"
I0928 23:55:42.068224    4424 cpu_manager.go:213] "Starting CPU manager" policy="none"
I0928 23:55:42.068256    4424 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
I0928 23:55:42.068286    4424 state_mem.go:36] "Initialized new in-memory state store"
I0928 23:55:42.068987    4424 policy_none.go:49] "None policy: Start"
I0928 23:55:42.069290    4424 memory_manager.go:168] "Starting memorymanager" policy="None"
I0928 23:55:42.069323    4424 state_mem.go:35] "Initializing new in-memory state store"
I0928 23:55:42.072602    4424 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I0928 23:55:42.072740    4424 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
E0928 23:55:42.074972    4424 eviction_manager.go:254] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k3s-master-01\" not found"
E0928 23:55:42.126020    4424 kubelet.go:2424] "Error getting node" err="node \"k3s-master-01\" not found"
I0928 23:55:42.126350    4424 kubelet_node_status.go:70] "Attempting to register node" node="k3s-master-01"
I0928 23:55:42.133347    4424 kubelet_node_status.go:73] "Successfully registered node" node="k3s-master-01"
INFO[0005] Annotations and labels have been set successfully on node: k3s-master-01 
INFO[0005] Starting flannel with backend vxlan          
INFO[0005] Kube API server is now running               
INFO[0005] ETCD server is now running                   
INFO[0005] k3s is up and running                        
INFO[0005] Waiting for cloud-controller-manager privileges to become available 
INFO[0005] Tunnel server egress proxy waiting for runtime core to become available 
I0928 23:55:42.477239    4424 serving.go:355] Generated self-signed cert in-memory
INFO[0005] Applying CRD addons.k3s.cattle.io            
I0928 23:55:42.697756    4424 controllermanager.go:180] Version: v1.24.4+k3s1
I0928 23:55:42.697822    4424 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0928 23:55:42.704028    4424 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0928 23:55:42.704316    4424 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0928 23:55:42.704355    4424 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0928 23:55:42.704394    4424 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0928 23:55:42.704454    4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0928 23:55:42.704479    4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0928 23:55:42.704515    4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0928 23:55:42.704535    4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
INFO[0005] Applying CRD helmcharts.helm.cattle.io       
INFO[0005] Applying CRD helmchartconfigs.helm.cattle.io 
INFO[0005] Waiting for CRD helmchartconfigs.helm.cattle.io to become available 
I0928 23:55:42.805333    4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0928 23:55:42.805391    4424 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0928 23:55:42.805460    4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0928 23:55:42.866324    4424 shared_informer.go:255] Waiting for caches to sync for tokens
I0928 23:55:42.871355    4424 controller.go:611] quota admission added evaluator for: serviceaccounts
I0928 23:55:42.872616    4424 controllermanager.go:593] Started "serviceaccount"
I0928 23:55:42.872730    4424 serviceaccounts_controller.go:117] Starting service account controller
I0928 23:55:42.872765    4424 shared_informer.go:255] Waiting for caches to sync for service account
I0928 23:55:42.876641    4424 controllermanager.go:593] Started "csrsigning"
I0928 23:55:42.876769    4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving"
I0928 23:55:42.876806    4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0928 23:55:42.876845    4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client"
I0928 23:55:42.876868    4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0928 23:55:42.876916    4424 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client"
I0928 23:55:42.876944    4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0928 23:55:42.876970    4424 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown"
I0928 23:55:42.876983    4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0928 23:55:42.877007    4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
I0928 23:55:42.894893    4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0928 23:55:42.895100    4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0928 23:55:42.895352    4424 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
I0928 23:55:42.899857    4424 controllermanager.go:593] Started "persistentvolume-binder"
I0928 23:55:42.899974    4424 pv_controller_base.go:311] Starting persistent volume controller
I0928 23:55:42.899993    4424 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0928 23:55:42.903972    4424 node_lifecycle_controller.go:377] Sending events to api server.
I0928 23:55:42.904073    4424 taint_manager.go:163] "Sending events to api server"
I0928 23:55:42.904137    4424 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0928 23:55:42.904191    4424 controllermanager.go:593] Started "nodelifecycle"
I0928 23:55:42.904310    4424 node_lifecycle_controller.go:539] Starting node controller
I0928 23:55:42.904340    4424 shared_informer.go:255] Waiting for caches to sync for taint
I0928 23:55:42.908419    4424 controllermanager.go:593] Started "endpointslicemirroring"
I0928 23:55:42.908522    4424 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
I0928 23:55:42.908543    4424 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
I0928 23:55:42.912504    4424 controllermanager.go:593] Started "job"
I0928 23:55:42.912630    4424 job_controller.go:184] Starting job controller
I0928 23:55:42.912660    4424 shared_informer.go:255] Waiting for caches to sync for job
I0928 23:55:42.916631    4424 controllermanager.go:593] Started "replicaset"
W0928 23:55:42.916668    4424 controllermanager.go:558] "tokencleaner" is disabled
I0928 23:55:42.916789    4424 replica_set.go:205] Starting replicaset controller
I0928 23:55:42.916826    4424 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
I0928 23:55:42.920782    4424 controllermanager.go:593] Started "persistentvolume-expander"
I0928 23:55:42.920880    4424 expand_controller.go:341] Starting expand controller
I0928 23:55:42.920900    4424 shared_informer.go:255] Waiting for caches to sync for expand
I0928 23:55:42.924775    4424 controllermanager.go:593] Started "podgc"
I0928 23:55:42.924909    4424 gc_controller.go:92] Starting GC controller
I0928 23:55:42.924929    4424 shared_informer.go:255] Waiting for caches to sync for GC
I0928 23:55:42.967243    4424 shared_informer.go:262] Caches are synced for tokens
W0928 23:55:42.971710    4424 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
E0928 23:55:42.971770    4424 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
I0928 23:55:43.002406    4424 apiserver.go:52] "Watching apiserver"
I0928 23:55:43.027707    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
I0928 23:55:43.027769    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0928 23:55:43.027812    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0928 23:55:43.027863    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
I0928 23:55:43.027902    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0928 23:55:43.027948    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0928 23:55:43.027992    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0928 23:55:43.028033    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0928 23:55:43.028083    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
I0928 23:55:43.028139    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0928 23:55:43.028176    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
I0928 23:55:43.028219    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
I0928 23:55:43.028255    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
I0928 23:55:43.028296    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
I0928 23:55:43.028332    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
I0928 23:55:43.028372    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0928 23:55:43.028410    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0928 23:55:43.028459    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
I0928 23:55:43.028496    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
I0928 23:55:43.028520    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0928 23:55:43.028547    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
I0928 23:55:43.028578    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
I0928 23:55:43.028599    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0928 23:55:43.028619    4424 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0928 23:55:43.028635    4424 controllermanager.go:593] Started "resourcequota"
I0928 23:55:43.028670    4424 resource_quota_controller.go:273] Starting resource quota controller
I0928 23:55:43.028683    4424 shared_informer.go:255] Waiting for caches to sync for resource quota
I0928 23:55:43.028774    4424 resource_quota_monitor.go:308] QuotaMonitor running
I0928 23:55:43.071879    4424 reconciler.go:159] "Reconciler: start to sync state"
I0928 23:55:43.169089    4424 controllermanager.go:593] Started "deployment"
I0928 23:55:43.169170    4424 deployment_controller.go:153] "Starting controller" controller="deployment"
I0928 23:55:43.169201    4424 shared_informer.go:255] Waiting for caches to sync for deployment
INFO[0006] Done waiting for CRD helmchartconfigs.helm.cattle.io to become available 
INFO[0006] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.19.300.tgz 
INFO[0006] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.19.300.tgz 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml 
INFO[0006] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml 
INFO[0006] Starting k3s.cattle.io/v1, Kind=Addon controller 
INFO[0006] Creating deploy event broadcaster            
I0928 23:55:43.376671    4424 controllermanager.go:593] Started "disruption"
I0928 23:55:43.376738    4424 disruption.go:363] Starting disruption controller
I0928 23:55:43.376770    4424 shared_informer.go:255] Waiting for caches to sync for disruption
INFO[0006] Creating svccontroller event broadcaster     
INFO[0006] Cluster dns configmap has been set successfully 
INFO[0006] Labels and annotations have been set successfully on node: k3s-master-01 
INFO[0006] Starting helm.cattle.io/v1, Kind=HelmChart controller 
INFO[0006] Starting helm.cattle.io/v1, Kind=HelmChartConfig controller 
I0928 23:55:43.522300    4424 controllermanager.go:593] Started "endpoint"
I0928 23:55:43.522365    4424 endpoints_controller.go:178] Starting endpoint controller
I0928 23:55:43.522402    4424 shared_informer.go:255] Waiting for caches to sync for endpoint
INFO[0006] Starting apps/v1, Kind=DaemonSet controller  
INFO[0006] Starting apps/v1, Kind=Deployment controller 
INFO[0006] Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller 
INFO[0006] Starting batch/v1, Kind=Job controller       
INFO[0006] Starting /v1, Kind=Node controller           
WARN[0006] Unable to fetch coredns config map: configmaps "coredns" not found 
INFO[0006] Starting /v1, Kind=ConfigMap controller      
INFO[0006] Starting /v1, Kind=ServiceAccount controller 
INFO[0006] Starting /v1, Kind=Pod controller            
INFO[0006] Starting /v1, Kind=Service controller        
INFO[0006] Starting /v1, Kind=Endpoints controller      
WARN[0006] Unable to fetch coredns config map: configmaps "coredns" not found 
I0928 23:55:43.668497    4424 controllermanager.go:593] Started "replicationcontroller"
W0928 23:55:43.668543    4424 controllermanager.go:558] "bootstrapsigner" is disabled
I0928 23:55:43.668597    4424 replica_set.go:205] Starting replicationcontroller controller
I0928 23:55:43.668624    4424 shared_informer.go:255] Waiting for caches to sync for ReplicationController
I0928 23:55:43.825359    4424 controllermanager.go:593] Started "pv-protection"
I0928 23:55:43.825458    4424 pv_protection_controller.go:79] Starting PV protection controller
I0928 23:55:43.825495    4424 shared_informer.go:255] Waiting for caches to sync for PV protection
I0928 23:55:43.988129    4424 controllermanager.go:593] Started "ttl-after-finished"
I0928 23:55:43.988228    4424 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0928 23:55:43.988266    4424 shared_informer.go:255] Waiting for caches to sync for TTL after finished
I0928 23:55:44.126922    4424 controllermanager.go:593] Started "endpointslice"
I0928 23:55:44.127026    4424 endpointslice_controller.go:257] Starting endpoint slice controller
I0928 23:55:44.127057    4424 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0928 23:55:44.268331    4424 controllermanager.go:593] Started "cronjob"
I0928 23:55:44.268405    4424 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
I0928 23:55:44.268436    4424 shared_informer.go:255] Waiting for caches to sync for cronjob
I0928 23:55:44.317792    4424 controllermanager.go:593] Started "csrcleaner"
I0928 23:55:44.317857    4424 cleaner.go:82] Starting CSR cleaner controller
INFO[0007] Starting /v1, Kind=Secret controller         
I0928 23:55:44.469348    4424 controllermanager.go:593] Started "clusterrole-aggregation"
W0928 23:55:44.469392    4424 controllermanager.go:558] "route" is disabled
I0928 23:55:44.469475    4424 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0928 23:55:44.469516    4424 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
INFO[0007] Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3] 
I0928 23:55:44.718172    4424 controllermanager.go:593] Started "garbagecollector"
I0928 23:55:44.718315    4424 garbagecollector.go:149] Starting garbage collector controller
I0928 23:55:44.718351    4424 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0928 23:55:44.718388    4424 graph_builder.go:289] GraphBuilder running
INFO[0007] Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3s-master-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables 
I0928 23:55:44.822591    4424 server.go:231] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
I0928 23:55:44.851226    4424 node.go:163] Successfully retrieved node IP: 93.95.230.133
I0928 23:55:44.851280    4424 server_others.go:138] "Detected node IP" address="93.95.230.133"
I0928 23:55:44.852683    4424 server_others.go:206] "Using iptables Proxier"
I0928 23:55:44.852724    4424 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0928 23:55:44.852768    4424 server_others.go:214] "Creating dualStackProxier for iptables"
I0928 23:55:44.852799    4424 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0928 23:55:44.852834    4424 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0928 23:55:44.852914    4424 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0928 23:55:44.853061    4424 server.go:661] "Version info" version="v1.24.4+k3s1"
I0928 23:55:44.853100    4424 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0928 23:55:44.854043    4424 config.go:317] "Starting service config controller"
I0928 23:55:44.854092    4424 shared_informer.go:255] Waiting for caches to sync for service config
I0928 23:55:44.854133    4424 config.go:226] "Starting endpoint slice config controller"
I0928 23:55:44.854164    4424 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0928 23:55:44.854334    4424 config.go:444] "Starting node config controller"
I0928 23:55:44.854376    4424 shared_informer.go:255] Waiting for caches to sync for node config
I0928 23:55:44.857798    4424 controller.go:611] quota admission added evaluator for: events.events.k8s.io
INFO[0007] Active TLS secret kube-system/k3s-serving (ver=247) (count 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-93.95.230.133:93.95.230.133 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3s-master-01:k3s-master-01 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=0F6E54A74F339E94FA0DF4A6780336AA840FB9C3] 
I0928 23:55:44.954919    4424 shared_informer.go:262] Caches are synced for node config
I0928 23:55:44.954975    4424 shared_informer.go:262] Caches are synced for service config
I0928 23:55:44.955021    4424 shared_informer.go:262] Caches are synced for endpoint slice config
E0928 23:55:44.958687    4424 proxier.go:874] "Failed to ensure chain jumps" err=<
	error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'
Try `iptables -h' or 'iptables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:44.958697 4424 proxier.go:858] "Sync failed" retryingTime="30s"
E0928 23:55:44.961265 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): Couldn't find match `conntrack'

Try `ip6tables -h' or 'ip6tables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:44.961276 4424 proxier.go:858] "Sync failed" retryingTime="30s"
I0928 23:55:44.968449 4424 controllermanager.go:593] Started "daemonset"
I0928 23:55:44.968549 4424 daemon_controller.go:284] Starting daemon sets controller
I0928 23:55:44.968581 4424 shared_informer.go:255] Waiting for caches to sync for daemon sets
I0928 23:55:45.118866 4424 controllermanager.go:593] Started "statefulset"
W0928 23:55:45.118927 4424 controllermanager.go:558] "service" is disabled
I0928 23:55:45.118996 4424 stateful_set.go:147] Starting stateful set controller
I0928 23:55:45.119033 4424 shared_informer.go:255] Waiting for caches to sync for stateful set
I0928 23:55:45.269175 4424 controllermanager.go:593] Started "ephemeral-volume"
I0928 23:55:45.269267 4424 controller.go:170] Starting ephemeral volume controller
I0928 23:55:45.269297 4424 shared_informer.go:255] Waiting for caches to sync for ephemeral
I0928 23:55:45.376747 4424 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"ce149125-e3a7-4568-8b60-accc0b07b3eb", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"255", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"ce149125-e3a7-4568-8b60-accc0b07b3eb", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"255", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"a62ca600-b716-4ca4-bda7-bef6e1b970c9", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"263", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
I0928 23:55:45.424716 4424 controller.go:611] quota admission added evaluator for: deployments.apps
I0928 23:55:45.431678 4424 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.43.0.10]
E0928 23:55:45.436625 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'

Try `iptables -h' or 'iptables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:45.436636 4424 proxier.go:858] "Sync failed" retryingTime="30s"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"a62ca600-b716-4ca4-bda7-bef6e1b970c9", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"263", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"718d68da-92f0-4c99-9110-a926e4ba6599", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"275", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"718d68da-92f0-4c99-9110-a926e4ba6599", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"275", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"aggregated-metrics-reader", UID:"00978fdd-654e-47c3-aac3-41189997511b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"285", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"aggregated-metrics-reader", UID:"00978fdd-654e-47c3-aac3-41189997511b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"285", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-delegator", UID:"52e266bc-d133-4192-9955-b1c73a8f85b0", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"290", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-delegator", UID:"52e266bc-d133-4192-9955-b1c73a8f85b0", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"290", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-reader", UID:"0e400664-8f92-4e40-88f6-5ccdd6d84428", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"295", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"auth-reader", UID:"0e400664-8f92-4e40-88f6-5ccdd6d84428", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"295", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
I0928 23:55:45.568426 4424 controllermanager.go:593] Started "horizontalpodautoscaling"
I0928 23:55:45.568500 4424 horizontal.go:168] Starting HPA controller
I0928 23:55:45.568530 4424 shared_informer.go:255] Waiting for caches to sync for HPA
I0928 23:55:45.618339 4424 controllermanager.go:593] Started "csrapproving"
I0928 23:55:45.618408 4424 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0928 23:55:45.618437 4424 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
I0928 23:55:45.769095 4424 controllermanager.go:593] Started "ttl"
I0928 23:55:45.769199 4424 ttl_controller.go:121] Starting TTL controller
I0928 23:55:45.769230 4424 shared_informer.go:255] Waiting for caches to sync for TTL
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-apiservice", UID:"143a93de-9684-4a66-8a1a-6b531f911c7b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"301", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
INFO[0008] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-apiservice", UID:"143a93de-9684-4a66-8a1a-6b531f911c7b", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"301", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
W0928 23:55:45.902852 4424 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
E0928 23:55:45.902917 4424 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
I0928 23:55:45.918551 4424 controllermanager.go:593] Started "root-ca-cert-publisher"
I0928 23:55:45.918617 4424 publisher.go:107] Starting root CA certificate configmap publisher
I0928 23:55:45.918645 4424 shared_informer.go:255] Waiting for caches to sync for crt configmap
I0928 23:55:46.068898 4424 controllermanager.go:593] Started "pvc-protection"
I0928 23:55:46.068973 4424 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0928 23:55:46.069003 4424 shared_informer.go:255] Waiting for caches to sync for PVC protection
INFO[0009] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-deployment", UID:"331399b4-3bd4-4ebc-a159-f2cbc54511b0", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"309", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
INFO[0009] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-deployment", UID:"331399b4-3bd4-4ebc-a159-f2cbc54511b0", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"309", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
I0928 23:55:46.566497 4424 serving.go:355] Generated self-signed cert in-memory
E0928 23:55:46.759030 4424 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0928 23:55:46.759154 4424 controllermanager.go:593] Started "namespace"
I0928 23:55:46.778340 4424 namespace_controller.go:200] Starting namespace controller
I0928 23:55:46.778378 4424 shared_informer.go:255] Waiting for caches to sync for namespace
W0928 23:55:46.818742 4424 handler_proxy.go:102] no RequestInfo found in the context
E0928 23:55:46.818802 4424 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0928 23:55:46.818843 4424 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0928 23:55:46.818898 4424 handler_proxy.go:102] no RequestInfo found in the context
E0928 23:55:46.818939 4424 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0928 23:55:46.839093 4424 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
INFO[0009] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-service", UID:"a3539cfe-36c8-464b-a555-93c3dea21183", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
I0928 23:55:46.965488 4424 node_ipam_controller.go:91] Sending events to api server.
I0928 23:55:47.031793 4424 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.43.124.231]
I0928 23:55:47.034931 4424 controllermanager.go:143] Version: v1.24.4+k3s1
E0928 23:55:47.039505 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'

Try `iptables -h' or 'iptables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:47.039519 4424 proxier.go:858] "Sync failed" retryingTime="30s"
INFO[0010] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"metrics-server-service", UID:"a3539cfe-36c8-464b-a555-93c3dea21183", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
I0928 23:55:47.061299 4424 secure_serving.go:210] Serving securely on 127.0.0.1:10258
I0928 23:55:47.061681 4424 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0928 23:55:47.061725 4424 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0928 23:55:47.061780 4424 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0928 23:55:47.062030 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0928 23:55:47.062069 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0928 23:55:47.062121 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0928 23:55:47.062152 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
INFO[0010] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"resource-reader", UID:"bc546812-edac-4985-9eb0-8ba9db68a843", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
INFO[0010] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"resource-reader", UID:"bc546812-edac-4985-9eb0-8ba9db68a843", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
I0928 23:55:47.162475 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0928 23:55:47.162531 4424 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0928 23:55:47.162611 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
INFO[0010] Updated coredns node hosts entry [93.95.230.133 k3s-master-01]
INFO[0010] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"rolebindings", UID:"2fe61457-78cb-4d8d-a3a4-fb39e5564ce9", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
INFO[0010] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"rolebindings", UID:"2fe61457-78cb-4d8d-a3a4-fb39e5564ce9", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
W0928 23:55:48.062545 4424 handler_proxy.go:102] no RequestInfo found in the context
E0928 23:55:48.062621 4424 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0928 23:55:48.062662 4424 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0928 23:55:48.062707 4424 handler_proxy.go:102] no RequestInfo found in the context
E0928 23:55:48.062740 4424 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0928 23:55:48.063829 4424 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0928 23:55:48.475271 4424 request.go:601] Waited for 1.04916275s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/storage.k8s.io/v1beta1
E0928 23:55:49.075890 4424 controllermanager.go:463] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0928 23:55:49.076134 4424 node_controller.go:118] Sending events to api server.
I0928 23:55:49.076221 4424 controllermanager.go:291] Started "cloud-node"
I0928 23:55:49.076354 4424 node_lifecycle_controller.go:77] Sending events to api server
I0928 23:55:49.076400 4424 controllermanager.go:291] Started "cloud-node-lifecycle"
I0928 23:55:49.076713 4424 node_controller.go:157] Waiting for informer caches to sync
I0928 23:55:49.176877 4424 node_controller.go:406] Initializing node k3s-master-01 with cloud provider
I0928 23:55:49.184027 4424 node_controller.go:470] Successfully initialized node k3s-master-01 with cloud provider
I0928 23:55:49.198734 4424 event.go:294] "Event occurred" object="k3s-master-01" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
E0928 23:55:49.198804 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `conntrack'

Try `iptables -h' or 'iptables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:49.198841 4424 proxier.go:858] "Sync failed" retryingTime="30s"
E0928 23:55:49.201527 4424 proxier.go:874] "Failed to ensure chain jumps" err=<
error checking rule: exit status 2: ip6tables v1.8.6 (nf_tables): Couldn't find match `conntrack'

Try `ip6tables -h' or 'ip6tables --help' for more information.

table=filter srcChain=INPUT dstChain=KUBE-EXTERNAL-SERVICES
I0928 23:55:49.201537 4424 proxier.go:858] "Sync failed" retryingTime="30s"
I0928 23:55:49.659579 4424 serving.go:355] Generated self-signed cert in-memory
I0928 23:55:50.179013 4424 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4+k3s1"
I0928 23:55:50.179075 4424 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0928 23:55:50.180787 4424 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0928 23:55:50.180863 4424 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0928 23:55:50.180900 4424 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0928 23:55:50.180939 4424 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0928 23:55:50.181919 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0928 23:55:50.181953 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0928 23:55:50.181997 4424 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0928 23:55:50.182019 4424 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0928 23:55:50.281716 4424 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0928 23:55:50.282878 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0928 23:55:50.282909 4424 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
INFO[0014] Stopped tunnel to 127.0.0.1:6443
INFO[0014] Connecting to proxy url="wss://93.95.230.133:6443/v1-k3s/connect"
INFO[0014] Proxy done err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0014] error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF
INFO[0014] Handling backend connection request [k3s-master-01]
I0928 23:55:52.143282 4424 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0928 23:55:56.986634 4424 range_allocator.go:83] Sending events to api server.
I0928 23:55:56.986790 4424 range_allocator.go:117] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0928 23:55:56.986836 4424 controllermanager.go:593] Started "nodeipam"
W0928 23:55:56.986870 4424 controllermanager.go:558] "cloud-node-lifecycle" is disabled
I0928 23:55:56.986971 4424 node_ipam_controller.go:154] Starting ipam controller
I0928 23:55:56.987004 4424 shared_informer.go:255] Waiting for caches to sync for node
I0928 23:55:56.990774 4424 controllermanager.go:593] Started "attachdetach"
I0928 23:55:56.993072 4424 shared_informer.go:255] Waiting for caches to sync for resource quota
I0928 23:55:57.011094 4424 attach_detach_controller.go:328] Starting attach detach controller
I0928 23:55:57.011127 4424 shared_informer.go:255] Waiting for caches to sync for attach detach
W0928 23:55:57.031578 4424 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3s-master-01" does not exist
E0928 23:55:57.060090 4424 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0928 23:55:57.062086 4424 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0928 23:55:57.063358 4424 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0928 23:55:57.068567 4424 shared_informer.go:262] Caches are synced for HPA
I0928 23:55:57.068610 4424 shared_informer.go:262] Caches are synced for cronjob
I0928 23:55:57.068658 4424 shared_informer.go:262] Caches are synced for ReplicationController
I0928 23:55:57.069751 4424 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0928 23:55:57.070081 4424 shared_informer.go:262] Caches are synced for PVC protection
I0928 23:55:57.070115 4424 shared_informer.go:262] Caches are synced for deployment
I0928 23:55:57.070418 4424 shared_informer.go:262] Caches are synced for TTL
I0928 23:55:57.070855 4424 shared_informer.go:262] Caches are synced for ephemeral
I0928 23:55:57.072869 4424 shared_informer.go:262] Caches are synced for service account
I0928 23:55:57.076880 4424 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0928 23:55:57.076914 4424 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0928 23:55:57.076994 4424 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0928 23:55:57.077020 4424 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0928 23:55:57.077831 4424 shared_informer.go:262] Caches are synced for disruption
I0928 23:55:57.077857 4424 disruption.go:371] Sending events to api server.
I0928 23:55:57.079901 4424 shared_informer.go:262] Caches are synced for namespace
I0928 23:55:57.087095 4424 shared_informer.go:262] Caches are synced for node
I0928 23:55:57.087128 4424 range_allocator.go:173] Starting range CIDR allocator
I0928 23:55:57.087143 4424 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0928 23:55:57.087157 4424 shared_informer.go:262] Caches are synced for cidrallocator
I0928 23:55:57.088354 4424 shared_informer.go:262] Caches are synced for TTL after finished
I0928 23:55:57.090603 4424 controller.go:611] quota admission added evaluator for: replicasets.apps
I0928 23:55:57.100022 4424 shared_informer.go:262] Caches are synced for persistent volume
I0928 23:55:57.105425 4424 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-668d979685 to 1"
I0928 23:55:57.105459 4424 event.go:294] "Event occurred" object="kube-system/local-path-provisioner" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7b7dc8d6f5 to 1"
I0928 23:55:57.105479 4424 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-b96499967 to 1"
I0928 23:55:57.113429 4424 shared_informer.go:262] Caches are synced for job
I0928 23:55:57.118958 4424 shared_informer.go:262] Caches are synced for crt configmap
I0928 23:55:57.120520 4424 shared_informer.go:262] Caches are synced for ReplicaSet
I0928 23:55:57.120712 4424 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0928 23:55:57.120770 4424 shared_informer.go:262] Caches are synced for stateful set
I0928 23:55:57.120975 4424 shared_informer.go:262] Caches are synced for expand
INFO[0020] Flannel found PodCIDR assigned for node k3s-master-01
I0928 23:55:57.124957 4424 shared_informer.go:262] Caches are synced for GC
I0928 23:55:57.125082 4424 range_allocator.go:374] Set node k3s-master-01 PodCIDR to [10.42.0.0/24]
INFO[0020] The interface eth0 with ipv4 address 93.95.230.133 will be used by flannel
I0928 23:55:57.126122 4424 kube.go:121] Waiting 10m0s for node controller to sync
I0928 23:55:57.126157 4424 shared_informer.go:262] Caches are synced for PV protection
I0928 23:55:57.126978 4424 kube.go:402] Starting kube subnet manager
I0928 23:55:57.146932 4424 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
I0928 23:55:57.153027 4424 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
I0928 23:55:57.211087 4424 event.go:294] "Event occurred" object="kube-system/metrics-server-668d979685" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-668d979685-xkxx6"
I0928 23:55:57.211139 4424 event.go:294] "Event occurred" object="kube-system/local-path-provisioner-7b7dc8d6f5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7b7dc8d6f5-xmj7v"
I0928 23:55:57.211160 4424 event.go:294] "Event occurred" object="kube-system/coredns-b96499967" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-b96499967-qm5pq"
I0928 23:55:57.211307 4424 shared_informer.go:262] Caches are synced for attach detach
INFO[0020] Starting the netpol controller
I0928 23:55:57.248084 4424 network_policy_controller.go:162] Starting network policy controller
F0928 23:55:57.250360 4424 network_policy_controller.go:335] Failed to verify rule exists in INPUT chain due to running [/var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/aux/iptables -t filter -C INPUT -m comment --comment kube-router netpol - 4IA2OSFRMVNDXBVV -j KUBE-ROUTER-INPUT --wait]: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `comment'

Try `iptables -h' or 'iptables --help' for more information.

panic: F0928 23:55:57.250360 4424 network_policy_controller.go:335] Failed to verify rule exists in INPUT chain due to running [/var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/aux/iptables -t filter -C INPUT -m comment --comment kube-router netpol - 4IA2OSFRMVNDXBVV -j KUBE-ROUTER-INPUT --wait]: exit status 2: iptables v1.8.6 (nf_tables): Couldn't find match `comment'

Try `iptables -h' or 'iptables --help' for more information.

goroutine 24310 [running]:
k8s.io/klog/v2.(*loggingT).output(0x7eb5ea0, 0x3, 0x0, 0xc00097ec40, 0x1, {0x6469804?, 0x2?}, 0xc00410e400?, 0x0)
/go/pkg/mod/github.com/k3s-io/klog/v2@v2.60.1-k3s1/klog.go:820 +0x694
k8s.io/klog/v2.(*loggingT).printfDepth(0x7eb5ea0, 0x5?, 0x0, {0x0, 0x0}, 0xc007e11ac0?, {0x4c9a41f, 0x32}, {0xc01039ae20, 0x2, ...})
/go/pkg/mod/github.com/k3s-io/klog/v2@v2.60.1-k3s1/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
/go/pkg/mod/github.com/k3s-io/klog/v2@v2.60.1-k3s1/klog.go:612
k8s.io/klog/v2.Fatalf(...)
/go/pkg/mod/github.com/k3s-io/klog/v2@v2.60.1-k3s1/klog.go:1496
github.com/cloudnativelabs/kube-router/pkg/controllers/netpol.(*NetworkPolicyController).ensureTopLevelChains.func2({0x5688460, 0xc00641f720}, {0x4b94069, 0x5}, {0xc0120f2d20, 0x6, 0x6}, {0xc007e11ac0, 0x10}, 0x1)
/go/pkg/mod/github.com/k3s-io/kube-router@v1.5.1-0.20220630214451-a43bcd8511d2/pkg/controllers/netpol/network_policy_controller.go:335 +0x1b4
github.com/cloudnativelabs/kube-router/pkg/controllers/netpol.(*NetworkPolicyController).ensureTopLevelChains(0xc0101cb200)
/go/pkg/mod/github.com/k3s-io/kube-router@v1.5.1-0.20220630214451-a43bcd8511d2/pkg/controllers/netpol/network_policy_controller.go:395 +0x138d
github.com/cloudnativelabs/kube-router/pkg/controllers/netpol.(*NetworkPolicyController).Run(0xc0101cb200, 0xc00dc238c0, 0xc000c7e540, 0xc0105e9170)
/go/pkg/mod/github.com/k3s-io/kube-router@v1.5.1-0.20220630214451-a43bcd8511d2/pkg/controllers/netpol/network_policy_controller.go:166 +0x16f
created by github.com/k3s-io/k3s/pkg/agent/netpol.Run
/go/src/github.com/k3s-io/k3s/pkg/agent/netpol/netpol.go:135 +0xacc

@brandond
Copy link
Member

brandond commented Sep 28, 2022

Your kernel appears to be missing the nf_conntrack module. See https://conntrack-tools.netfilter.org/manual.html#requirements

You can also run k3s check-config. If you continue to experience problems after making the required kernel modules available, please open a new issue instead of commenting on old closed ones.
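As a rough sketch of the checks described above, assuming a systemd-based host (the module names and the modules-load.d path are typical defaults, not taken from this thread; adjust for your distribution and kernel build):

# Confirm the kernel modules backing the iptables 'conntrack' and 'comment' matches are loaded
lsmod | grep -E 'nf_conntrack|xt_conntrack|xt_comment'

# Load them if they are missing
sudo modprobe -a nf_conntrack xt_conntrack xt_comment

# Persist across reboots (file name is illustrative)
printf 'nf_conntrack\nxt_conntrack\nxt_comment\n' | sudo tee /etc/modules-load.d/k3s.conf

# Re-run the built-in preflight check mentioned above
k3s check-config

If k3s check-config still reports missing conntrack or comment support after the modules are loaded, the kernel was likely built without those options and needs to be rebuilt or replaced.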

@k3s-io k3s-io locked as resolved and limited conversation to collaborators Sep 28, 2022