
Fix running cAdvisor in container on RHEL systems #1476

Merged (2 commits) on Sep 22, 2016

Conversation

@derekwaynecarr (Collaborator) commented Sep 22, 2016

Fixes #1461

  • Update the libcontainer dependency.
  • Look at all cgroup mounts.

@k8s-bot (Collaborator) commented Sep 22, 2016

Jenkins GCE e2e

Build/test failed for commit 63fffe9.

@derekwaynecarr (Collaborator, author)

Well, it appears I did something wrong that stopped the tests from building.

@k8s-bot (Collaborator) commented Sep 22, 2016

Jenkins GCE e2e

Build/test passed for commit 7e255a5.

@derekwaynecarr (Collaborator, author)

@timstclair @pmorie @vishh @ncdc -- PTAL; this was needed to fix running cAdvisor in a container.

@@ -45,7 +45,7 @@ type CgroupSubsystems struct {
 // Get information about the cgroup subsystems.
 func GetCgroupSubsystems() (CgroupSubsystems, error) {
 	// Get all cgroup mounts.
-	allCgroups, err := cgroups.GetCgroupMounts()
+	allCgroups, err := cgroups.GetCgroupMounts(false)
Contributor:

You eventually want to set this to true, right? It's fine to just do that in this PR (we frequently update dependencies while making changes that rely on them).

Collaborator (author):

ok, will do that now.
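
For reference, a minimal sketch of what the call looks like once the flag is flipped to true, assuming the libcontainer API of that era (`cgroups.GetCgroupMounts(all bool)` returning a slice of `cgroups.Mount` values with `Mountpoint` and `Subsystems` fields). This is illustrative only, not the exact cAdvisor code:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/runc/libcontainer/cgroups"
)

func main() {
	// Ask libcontainer for every cgroup mount (all == true), including the
	// duplicate per-subsystem mounts that RHEL/systemd hosts expose, instead
	// of stopping after the first mount found for each subsystem.
	allCgroups, err := cgroups.GetCgroupMounts(true)
	if err != nil {
		panic(err)
	}

	// Record one mount point per subsystem, keeping the first one seen.
	mountPoints := make(map[string]string)
	for _, mount := range allCgroups {
		for _, subsystem := range mount.Subsystems {
			if _, ok := mountPoints[subsystem]; !ok {
				mountPoints[subsystem] = mount.Mountpoint
			}
		}
	}
	fmt.Println(mountPoints)
}
```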

@k8s-bot (Collaborator) commented Sep 22, 2016

Jenkins GCE e2e

Build/test passed for commit d7a936f.

@k8s-bot (Collaborator) commented Sep 22, 2016

Jenkins GCE e2e

Build/test passed for commit b84046f.

@derekwaynecarr (Collaborator, author)

I want to do one last test on this before merging.

@derekwaynecarr derekwaynecarr changed the title Update godeps for libcontainer DO NOT MERGE: Update godeps for libcontainer Sep 22, 2016
@derekwaynecarr derekwaynecarr changed the title DO NOT MERGE: Update godeps for libcontainer Fix running cAdvisor in container on RHEL systems Sep 22, 2016
@derekwaynecarr (Collaborator, author)

OK, tested and confirmed. All is good on RHEL-flavored systems with containers.

@tangjiaxing669

@derekwaynecarr I'm sorry, but how can I solve this problem?

  • Update the libcontainer dependency.
  • Look at all cgroup mounts.

Maybe I am being very stupid; please help. I tried it, but the exception still exists.

@timstclair (Contributor)

xref: http://stackoverflow.com/q/39890410/1837431 with more detail

@tangjiaxing669

@timstclair That is my question.

@fabMrc commented Oct 27, 2016

I also have an error: the Docker image is not reachable even though port 8080 is exposed. When I log into the container I can fetch http://localhost:8080, but not from the host.

getsockopt: connection refused shows up in the docker logs.

@bmouthrob

Hi,

I don't think the above issue is fixed; I have tried both cadvisor and cadvisor-canary and am getting similar symptoms on an Amazon Linux instance...

I0208 22:08:06.528378 1 storagedriver.go:50] Caching stats in memory for 2m0s
I0208 22:08:06.528672 1 manager.go:140] cAdvisor running in container: "/docker/aeca51a748adbe6c12dc08550ed15024194544b4235fcb66144bee3dbac08334"
W0208 22:08:06.542050 1 manager.go:148] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0208 22:08:06.566575 1 fs.go:116] Filesystem partitions: map[/dev/mapper/docker-202:1-395786-98d5d321dc375464f3fb4cdc61dfc09e97c813ce39fe7bde32c1cdef15a7f7b7:{mountpoint:/ major:253 minor:13 fsType:xfs blockSize:0} /dev/xvda1:{mountpoint:/var/lib/docker/devicemapper major:202 minor:1 fsType:ext4 blockSize:0}]
I0208 22:08:06.570021 1 info.go:47] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0208 22:08:06.570107 1 manager.go:195] Machine: {NumCores:2 CpuFrequency:3000000 MemoryCapacity:16037875712 MachineID: SystemUUID:EC2FE94F-E16E-29BC-DC54-897C42293EDC BootID:e22f450d-be25-4472-a042-b290c9e76394 Filesystems:[{Device:/dev/mapper/docker-202:1-395786-98d5d321dc375464f3fb4cdc61dfc09e97c813ce39fe7bde32c1cdef15a7f7b7 Capacity:10725883904 Type:vfs Inodes:10484736 HasInodes:true} {Device:/dev/xvda1 Capacity:8318783488 Type:vfs Inodes:524288 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none} 253:3:{Name:dm-3 Major:253 Minor:3 Size:10737418240 Scheduler:none} 253:7:{Name:dm-7 Major:253 Minor:7 Size:10737418240 Scheduler:none} 253:8:{Name:dm-8 Major:253 Minor:8 Size:10737418240 Scheduler:none} 253:9:{Name:dm-9 Major:253 Minor:9 Size:10737418240 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:8589934592 Scheduler:noop} 253:10:{Name:dm-10 Major:253 Minor:10 Size:10737418240 Scheduler:none} 253:6:{Name:dm-6 Major:253 Minor:6 Size:10737418240 Scheduler:none} 253:12:{Name:dm-12 Major:253 Minor:12 Size:10737418240 Scheduler:none} 253:4:{Name:dm-4 Major:253 Minor:4 Size:10737418240 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:10737418240 Scheduler:none} 253:11:{Name:dm-11 Major:253 Minor:11 Size:10737418240 Scheduler:none} 253:13:{Name:dm-13 Major:253 Minor:13 Size:10737418240 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:10737418240 Scheduler:none} 253:5:{Name:dm-5 Major:253 Minor:5 Size:10737418240 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:12:18:27:50:94:b8 Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:16037875712 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:47185920 Type:Unified Level:3}]}] CloudProvider:AWS InstanceType:r4.large InstanceID:i-0176c03a9ed8c4cb3}
I0208 22:08:06.570741 1 manager.go:201] Version: {KernelVersion:4.4.41-36.55.amzn1.x86_64 ContainerOsVersion:Alpine Linux v3.4 DockerVersion:1.12.6 CadvisorVersion:v0.24.1 CadvisorRevision:ae6934c}
E0208 22:08:06.580310 1 factory.go:291] devicemapper filesystem stats will not be reported: unable to find thin_ls binary
I0208 22:08:06.580328 1 factory.go:295] Registering Docker factory
W0208 22:08:06.580342 1 manager.go:244] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0208 22:08:06.580349 1 factory.go:54] Registering systemd factory
I0208 22:08:06.580781 1 factory.go:86] Registering Raw factory
I0208 22:08:06.581134 1 manager.go:1082] Started watching for new ooms in manager
W0208 22:08:06.581430 1 manager.go:272] Could not configure a source for OOM detection, disabling OOM events: unable to find any kernel log file available from our set: [/var/log/kern.log /var/log/messages /var/log/syslog]
I0208 22:08:06.581769 1 manager.go:285] Starting recovery of all containers
I0208 22:08:06.581837 1 manager.go:290] Recovery completed
F0208 22:08:06.581865 1 cadvisor.go:151] Failed to start container manager: inotify_add_watch /var/lib/docker/devicemapper/mnt/98d5d321dc375464f3fb4cdc61dfc09e97c813ce39fe7bde32c1cdef15a7f7b7/rootfs/sys/fs/cgroup/cpu: no such file or directory

@vishh (Contributor) commented Feb 13, 2017 via email

@blancoh commented Feb 14, 2017

Also seeing this using 10acre-ranch on Core Linux.

2/13/2017 7:54:06 PMFlag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
2/13/2017 7:54:06 PMI0214 00:54:06.018650 32308 feature_gate.go:181] feature gates: map[]
2/13/2017 7:54:06 PMI0214 00:54:06.140527 32308 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
2/13/2017 7:54:06 PMI0214 00:54:06.141512 32308 docker.go:376] Start docker client with request timeout=2m0s
2/13/2017 7:54:06 PMI0214 00:54:06.155548 32308 manager.go:143] cAdvisor running in container: "/docker/09b613c68222a76b216f45f8dbec055f50caa723a136df7c6cd15e616808ca1d"
2/13/2017 7:54:06 PMW0214 00:54:06.182879 32308 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
2/13/2017 7:54:06 PMI0214 00:54:06.199632 32308 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/etc/cni major:8 minor:1 fsType:ext4 blockSize:0} none:{mountpoint:/ major:0 minor:52 fsType:aufs blockSize:0}]
2/13/2017 7:54:06 PMI0214 00:54:06.203559 32308 manager.go:198] Machine: {NumCores:1 CpuFrequency:1098977 MemoryCapacity:1044209664 MachineID:b07a180a2c8547f7956e9a6f93a452a4 SystemUUID:C4D0495A-0000-0000-A526-D0E450717A8D BootID:0ec50f1a-9043-4758-910e-0862a64f7453 Filesystems:[{Device:/dev/sda1 Capacity:19195224064 Type:vfs Inodes:2436448 HasInodes:true} {Device:none Capacity:19195224064 Type:vfs Inodes:2436448 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:deadline} 251:0:{Name:zram0 Major:251 Minor:0 Size:203997184 Scheduler:none}] NetworkDevices:[{Name:dummy0 MacAddress:36:89:29:c2:d3:28 Speed:0 Mtu:1500} {Name:eth0 MacAddress:72:01:e7:8d:13:bd Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:4194304 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
2/13/2017 7:54:06 PMI0214 00:54:06.208938 32308 manager.go:204] Version: {KernelVersion:4.4.27-boot2docker ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:1.12.3 CadvisorVersion: CadvisorRevision:}
2/13/2017 7:54:06 PMW0214 00:54:06.215900 32308 container_manager_linux.go:205] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
2/13/2017 7:54:06 PMI0214 00:54:06.216636 32308 kubelet.go:252] Watching apiserver
2/13/2017 7:54:06 PMW0214 00:54:06.224945 32308 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
2/13/2017 7:54:06 PMI0214 00:54:06.225843 32308 kubelet.go:477] Hairpin mode set to "hairpin-veth"
2/13/2017 7:54:06 PMI0214 00:54:06.272543 32308 docker_manager.go:257] Setting dockerRoot to /mnt/sda1/var/lib/docker
2/13/2017 7:54:06 PMI0214 00:54:06.272879 32308 docker_manager.go:260] Setting cgroupDriver to cgroupfs
2/13/2017 7:54:06 PMI0214 00:54:06.286342 32308 server.go:770] Started kubelet v1.5.0-115+611cbb22703182
2/13/2017 7:54:06 PME0214 00:54:06.311630 32308 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
2/13/2017 7:54:06 PMI0214 00:54:06.312608 32308 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
2/13/2017 7:54:06 PMI0214 00:54:06.312851 32308 rancher.go:641] ExternalID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.313182 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.314155 32308 server.go:123] Starting to listen on 0.0.0.0:10250
2/13/2017 7:54:06 PMI0214 00:54:06.415908 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.440233 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.475077 32308 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=rancher
2/13/2017 7:54:06 PMI0214 00:54:06.476373 32308 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=FailureDomain1
2/13/2017 7:54:06 PMI0214 00:54:06.476708 32308 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=Region1
2/13/2017 7:54:06 PME0214 00:54:06.500240 32308 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
2/13/2017 7:54:06 PME0214 00:54:06.500883 32308 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": error trying to get filesystem Device for dir /var/lib/kubelet: err: could not find device with major: 0, minor: 15 in cached partitions map
2/13/2017 7:54:06 PMI0214 00:54:06.506987 32308 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
2/13/2017 7:54:06 PMI0214 00:54:06.507882 32308 status_manager.go:129] Starting to sync pod status with apiserver
2/13/2017 7:54:06 PMI0214 00:54:06.510982 32308 kubelet.go:1714] Starting kubelet main sync loop.
2/13/2017 7:54:06 PMI0214 00:54:06.511715 32308 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
2/13/2017 7:54:06 PMI0214 00:54:06.508294 32308 volume_manager.go:242] Starting Kubelet Volume Manager
2/13/2017 7:54:06 PMI0214 00:54:06.609259 32308 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
2/13/2017 7:54:06 PMI0214 00:54:06.610455 32308 rancher.go:641] ExternalID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.610775 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.637423 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.658984 32308 rancher.go:648] InstanceID [rs-host2]
2/13/2017 7:54:06 PMI0214 00:54:06.676996 32308 factory.go:295] Registering Docker factory
2/13/2017 7:54:06 PMW0214 00:54:06.677261 32308 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
2/13/2017 7:54:06 PMI0214 00:54:06.677353 32308 factory.go:54] Registering systemd factory
2/13/2017 7:54:06 PMI0214 00:54:06.679758 32308 factory.go:86] Registering Raw factory
2/13/2017 7:54:06 PMI0214 00:54:06.682258 32308 manager.go:1106] Started watching for new ooms in manager
2/13/2017 7:54:06 PMI0214 00:54:06.686403 32308 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=rancher
2/13/2017 7:54:06 PMI0214 00:54:06.686785 32308 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=FailureDomain1
2/13/2017 7:54:06 PMI0214 00:54:06.687101 32308 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=Region1
2/13/2017 7:54:06 PMI0214 00:54:06.688448 32308 oomparser.go:185] oomparser using systemd
2/13/2017 7:54:06 PMI0214 00:54:06.689606 32308 manager.go:288] Starting recovery of all containers
2/13/2017 7:54:06 PMI0214 00:54:06.689919 32308 manager.go:293] Recovery completed
2/13/2017 7:54:06 PMF0214 00:54:06.690084 32308 kubelet.go:1210] Failed to start cAdvisor inotify_add_watch /var/lib/docker/aufs/mnt/fcb38bdb43ffec21de47040b856bfeed3952be694fd9dffc7f4cfe360700eec0/sys/fs/cgroup/cpu: no such file or directory

@jralmaraz

Hi,

I've seen the same kubelet issue when starting a Kubernetes stack on Rancher within an AWS AMI.

Was this fixed?

Cheers!

W0324 03:14:36.151049 24540 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0324 03:14:36.160518 24540 fs.go:117] Filesystem partitions: map[/dev/mapper/docker-202:1-395314-24d5625b839a1f50be37a765b3f443a8c1092cccbbeaaeea7a658081e49bcf68:{mountpoint:/ major:253 minor:20 fsType:xfs blockSize:0} /dev/xvda1:{mountpoint:/var/lib/docker major:202 minor:1 fsType:ext4 blockSize:0}]
I0324 03:14:36.164720 24540 manager.go:198] Machine: {NumCores:1 CpuFrequency:2400072 MemoryCapacity:1043574784 MachineID:efee03ac51c6418889650dfa2a40350d SystemUUID:EC2EFC94-EC2C-5A77-F9C6-17B06C214A5B BootID:281d292b-9308-4d1c-83e0-51d58e7b4f79 Filesystems:[{Device:/dev/xvda1 Capacity:8318783488 Type:vfs Inodes:524288 HasInodes:true} {Device:/dev/mapper/docker-202:1-395314-24d5625b839a1f50be37a765b3f443a8c1092cccbbeaaeea7a658081e49bcf68 Capacity:10725883904 Type:vfs Inodes:10484736 HasInodes:true}] DiskMap:map[253:18:{Name:dm-18 Major:253 Minor:18 Size:10737418240 Scheduler:none} 253:20:{Name:dm-20 Major:253 Minor:20 Size:10737418240 Scheduler:none} 253:7:{Name:dm-7 Major:253 Minor:7 Size:10737418240 Scheduler:none} 253:9:{Name:dm-9 Major:253 Minor:9 Size:10737418240 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:10737418240 Scheduler:none} 253:11:{Name:dm-11 Major:253 Minor:11 Size:10737418240 Scheduler:none} 253:15:{Name:dm-15 Major:253 Minor:15 Size:10737418240 Scheduler:none} 253:16:{Name:dm-16 Major:253 Minor:16 Size:10737418240 Scheduler:none} 253:4:{Name:dm-4 Major:253 Minor:4 Size:10737418240 Scheduler:none} 253:5:{Name:dm-5 Major:253 Minor:5 Size:10737418240 Scheduler:none} 253:8:{Name:dm-8 Major:253 Minor:8 Size:10737418240 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:8589934592 Scheduler:noop} 253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none} 253:10:{Name:dm-10 Major:253 Minor:10 Size:10737418240 Scheduler:none} 253:12:{Name:dm-12 Major:253 Minor:12 Size:10737418240 Scheduler:none} 253:14:{Name:dm-14 Major:253 Minor:14 Size:10737418240 Scheduler:none} 253:17:{Name:dm-17 Major:253 Minor:17 Size:10737418240 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:10737418240 Scheduler:none} 253:6:{Name:dm-6 Major:253 Minor:6 Size:10737418240 Scheduler:none} 253:13:{Name:dm-13 Major:253 Minor:13 Size:10737418240 Scheduler:none} 253:19:{Name:dm-19 Major:253 Minor:19 Size:10737418240 Scheduler:none} 253:3:{Name:dm-3 Major:253 Minor:3 Size:10737418240 Scheduler:none} 202:16:{Name:xvdb Major:202 Minor:16 Size:21474836480 Scheduler:noop}] NetworkDevices:[{Name:eth0 MacAddress:0a:6e:7e:f7:5b:f7 Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:1043574784 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:31457280 Type:Unified Level:3}]}] CloudProvider:AWS InstanceType:t2.micro InstanceID:i-040e5d8fb87e06c91}
I0324 03:14:36.165299 24540 manager.go:204] Version: {KernelVersion:4.4.51-40.58.amzn1.x86_64 ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:1.12.6 CadvisorVersion: CadvisorRevision:}
I0324 03:14:36.166856 24540 kubelet.go:252] Watching apiserver
W0324 03:14:36.169796 24540 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0324 03:14:36.172036 24540 kubelet.go:477] Hairpin mode set to "hairpin-veth"
I0324 03:14:36.197273 24540 docker_manager.go:256] Setting dockerRoot to /var/lib/docker
I0324 03:14:36.197312 24540 docker_manager.go:259] Setting cgroupDriver to cgroupfs
I0324 03:14:36.224984 24540 server.go:770] Started kubelet v1.5.4-rancher1
E0324 03:14:36.226924 24540 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
I0324 03:14:36.227117 24540 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I0324 03:14:36.227140 24540 rancher.go:641] ExternalID [ip-172-31-23-241.ap-southeast-2.compute.internal]
I0324 03:14:36.227151 24540 rancher.go:648] InstanceID [ip-172-31-23-241.ap-southeast-2.compute.internal]
I0324 03:14:36.229489 24540 server.go:123] Starting to listen on 0.0.0.0:10250
I0324 03:14:36.278986 24540 rancher.go:648] InstanceID [ip-172-31-23-241.ap-southeast-2.compute.internal]
I0324 03:14:36.324146 24540 rancher.go:648] InstanceID [ip-172-31-23-241.ap-southeast-2.compute.internal]
I0324 03:14:36.360564 24540 kubelet_node_status.go:246] Adding node label from cloud provider: beta.kubernetes.io/instance-type=rancher
I0324 03:14:36.360618 24540 kubelet_node_status.go:257] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=FailureDomain1
I0324 03:14:36.360637 24540 kubelet_node_status.go:261] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=Region1
E0324 03:14:36.398949 24540 kubelet.go:1634] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0324 03:14:36.398998 24540 kubelet.go:1642] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0324 03:14:36.400057 24540 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0324 03:14:36.400113 24540 status_manager.go:129] Starting to sync pod status with apiserver
I0324 03:14:36.400132 24540 kubelet.go:1714] Starting kubelet main sync loop.
I0324 03:14:36.400154 24540 kubelet.go:1725] skipping pod synchronization - [container runtime is down]
I0324 03:14:36.401554 24540 volume_manager.go:242] Starting Kubelet Volume Manager
E0324 03:14:36.416297 24540 factory.go:291] devicemapper filesystem stats will not be reported: unable to find thin_ls binary
I0324 03:14:36.416338 24540 factory.go:295] Registering Docker factory
W0324 03:14:36.416377 24540 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0324 03:14:36.416415 24540 factory.go:54] Registering systemd factory
I0324 03:14:36.416574 24540 factory.go:86] Registering Raw factory
I0324 03:14:36.416727 24540 manager.go:1106] Started watching for new ooms in manager
I0324 03:14:36.417897 24540 oomparser.go:185] oomparser using systemd
I0324 03:14:36.418256 24540 manager.go:288] Starting recovery of all containers
I0324 03:14:36.418329 24540 manager.go:293] Recovery completed
F0324 03:14:36.418351 24540 kubelet.go:1210] Failed to start cAdvisor inotify_add_watch /sys/fs/cgroup/cpuacct: no such file or directory

kolyshkin added a commit to kolyshkin/runc that referenced this pull request Nov 24, 2020
The `all` argument was introduced by commit f557996 specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 5ee0648 which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
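
For context, here is a compressed, self-contained illustration of the behaviour that commit message describes. The function and variable names below are hypothetical (the real code lives in runc's mountinfo parsing and differs in detail), but it shows why the numFound short-circuit has to be skipped when all is true:

```go
package main

import "fmt"

type mount struct {
	mountpoint string
	subsystems []string
}

// collectCgroupMounts walks candidate cgroup mounts. With all == false it
// stops as soon as every known subsystem has been seen once; with all == true
// it must keep going and return every mount, which is why the early-exit
// check on numFound is only applied when all is false.
func collectCgroupMounts(candidates []mount, known map[string]bool, all bool) []mount {
	var res []mount
	numFound := 0
	seen := map[string]bool{}
	for _, m := range candidates {
		res = append(res, m)
		for _, s := range m.subsystems {
			if known[s] && !seen[s] {
				seen[s] = true
				numFound++
			}
		}
		// The regression: breaking here unconditionally once numFound reached
		// len(known) also truncated the all == true case.
		if !all && numFound >= len(known) {
			break
		}
	}
	return res
}

func main() {
	candidates := []mount{
		{"/sys/fs/cgroup/cpu,cpuacct", []string{"cpu", "cpuacct"}},
		{"/sys/fs/cgroup/memory", []string{"memory"}},
		// Duplicate cpu mount: only returned when all == true.
		{"/extra/cgroup/cpu", []string{"cpu"}},
	}
	known := map[string]bool{"cpu": true, "cpuacct": true, "memory": true}
	fmt.Println(len(collectCgroupMounts(candidates, known, false))) // 2
	fmt.Println(len(collectCgroupMounts(candidates, known, true)))  // 3
}
```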
ctalledo pushed a commit to ctalledo/sysbox-runc that referenced this pull request Jan 6, 2021
The `all` argument was introduced by commit f557996 specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 5ee0648 which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
dqminh pushed a commit to dqminh/runc that referenced this pull request Feb 3, 2021
The `all` argument was introduced by commit f557996 specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 5ee0648 which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
dims pushed a commit to dims/libcontainer that referenced this pull request Oct 19, 2024
The `all` argument was introduced by commit 55bdacb specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 4811d2f which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
dims pushed a commit to dims/libcontainer that referenced this pull request Oct 19, 2024
The `all` argument was introduced by commit 455647e specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 20d0023 which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
dims pushed a commit to dims/libcontainer that referenced this pull request Oct 19, 2024
The `all` argument was introduced by commit c197628 specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by 34b443e which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
kolyshkin added a commit to kolyshkin/containerd-cgroups that referenced this pull request Nov 6, 2024
The `all` argument was introduced by commit e13d6e8 specifically
for use by cAdvisor (see [1]), but there were no test cases added,
so it was later broken by b2ac540 which started incrementing
numFound unconditionally.

Fix this (by not checking numFound in case all is true), and add a
simple test case to avoid future regressions.

[1] google/cadvisor#1476

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
9 participants