
panic: runtime error: invalid memory address or nil pointer dereference #385

Closed
RomanOrlovskiy opened this issue Jul 3, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@RomanOrlovskiy

What happened?

The Crossplane init pods (universal-crossplane-init) are in CrashLoopBackOff with the following error in the logs after upgrading EKS to version 1.27:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13fccf0]

goroutine 1 [running]:
k8s.io/client-go/discovery.convertAPIResource(...)
	k8s.io/client-go@v0.26.3/discovery/aggregated_discovery.go:114
k8s.io/client-go/discovery.convertAPIGroup({{{0x0, 0x0}, {0x0, 0x0}}, {{0xc000208dc8, 0x15}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	k8s.io/client-go@v0.26.3/discovery/aggregated_discovery.go:95 +0x6f0
k8s.io/client-go/discovery.SplitGroupsAndResources({{{0xc00005c0c0, 0x15}, {0xc0004f25a0, 0x1b}}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...})
	k8s.io/client-go@v0.26.3/discovery/aggregated_discovery.go:49 +0x125
k8s.io/client-go/discovery.(*DiscoveryClient).downloadAPIs(0xc000512a98?)
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:328 +0x3de
k8s.io/client-go/discovery.(*DiscoveryClient).GroupsAndMaybeResources(0xc000512ec8?)
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:203 +0x65
k8s.io/client-go/discovery.ServerGroupsAndResources({0x2484fa8, 0xc0001fcc60})
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:413 +0x59
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources.func1()
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:376 +0x25
k8s.io/client-go/discovery.withRetries(0x2, 0xc000206ee0)
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:651 +0x71
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroupsAndResources(0x0?)
	k8s.io/client-go@v0.26.3/discovery/discovery_client.go:375 +0x3a
k8s.io/client-go/restmapper.GetAPIGroupResources({0x2484fa8?, 0xc0001fcc60?})
	k8s.io/client-go@v0.26.3/restmapper/discovery.go:148 +0x42
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper.func1()
	sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:94 +0x25
sigs.k8s.io/controller-runtime/pkg/client/apiutil.(*dynamicRESTMapper).setStaticMapper(...)
	sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:130
sigs.k8s.io/controller-runtime/pkg/client/apiutil.NewDynamicRESTMapper(0xc000215680?, {0x0, 0x0, 0x216d916?})
	sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/apiutil/dynamicrestmapper.go:110 +0x182
sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc000215680?, {0xc00037bc00?, {0x0?, 0x0?}, {0x2b?, 0x93?}})
	sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/client.go:109 +0x1d1
sigs.k8s.io/controller-runtime/pkg/client.New(...)
	sigs.k8s.io/controller-runtime@v0.14.6/pkg/client/client.go:77
github.com/crossplane/crossplane/cmd/crossplane/rbac.(*initCommand).Run(0x2?, 0xc0003fbc08?, {0x247a660, 0xc00044d590})
	github.com/crossplane/crossplane/cmd/crossplane/rbac/init.go:46 +0x10c
reflect.Value.call({0x1e21b00?, 0x3450438?, 0x451356?}, {0x2155013, 0x4}, {0xc0003eb260, 0x2, 0x2?})

Based on a comment in a related issue, the problem seems to be triggered by prometheus-adapter's aggregated discovery response, but it looks like a fix is also possible by updating the client-go dependency version; a simplified sketch of the failure mode follows.
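
For context, here is a minimal, self-contained Go sketch of the failure mode. The types are simplified stand-ins, not the actual client-go source: in v0.26.3, convertAPIResource (aggregated_discovery.go:114 in the trace above) dereferences the ResponseKind field unconditionally, which panics when an aggregated API server such as prometheus-adapter returns an entry where it is nil.

// Illustrative sketch only; simplified stand-in types, not real client-go code.
package main

import "fmt"

// GroupVersionKind is a simplified stand-in for metav1.GroupVersionKind.
type GroupVersionKind struct{ Group, Version, Kind string }

// APIResourceDiscovery is a simplified stand-in for the aggregated discovery
// entry. Some aggregated API servers can return entries with a nil ResponseKind.
type APIResourceDiscovery struct {
	Resource     string
	ResponseKind *GroupVersionKind
}

// convertAPIResource sketches the guarded behavior of newer client-go
// releases; v0.26.3 dereferenced ResponseKind without this nil check.
func convertAPIResource(in APIResourceDiscovery) (GroupVersionKind, error) {
	if in.ResponseKind == nil {
		return GroupVersionKind{}, fmt.Errorf("resource %s has a nil responseKind", in.Resource)
	}
	return *in.ResponseKind, nil
}

func main() {
	// With the guard, a nil ResponseKind yields an error instead of a panic.
	_, err := convertAPIResource(APIResourceDiscovery{Resource: "pods"})
	fmt.Println(err)
}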

How can we reproduce it?

EKS version: 1.27
Crossplane helm chart version: 1.12.2-up.2
prometheus-adapter helm chart version: 4.2.0

Helm values don't contain anything specific:

replicas: 2
affinity: 
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app:
        topologyKey: kubernetes.io/hostname

Expected outcome

The pods should start properly.

Additional context

I faced the same issue with the RabbitMQ operator and the ALB ingress controller; in both cases the solution seems to be updating the client-go dependency version (see the issues linked below, and the go.mod sketch that follows).
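
For projects applying the fix directly, a hedged go.mod excerpt of the kind of bump involved (v0.27.3 matches the maintainer's reply below; the exact set of k8s.io modules to bump depends on the project):

require (
	k8s.io/api v0.27.3
	k8s.io/apimachinery v0.27.3
	k8s.io/client-go v0.27.3
)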

Additional issues:
kubernetes-sigs/aws-load-balancer-controller#3214
kubernetes/kubernetes#116603
elastic/cloud-on-k8s#6848

@RomanOrlovskiy added the bug (Something isn't working) label on Jul 3, 2023
@ezgidemirel
Member

Hi @RomanOrlovskiy, we already bumped the client-go version to v0.27.3 on master. Can you try the RC version and check whether the issue still occurs in your prod environment? We may consider a patch release if you cannot use the RC version.
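orig="Hi @RomanOrlovskiy, we already"

For anyone wanting to try this, a hypothetical upgrade command (the repo alias, release name, and namespace are assumptions for this chart; --devel tells Helm to consider pre-release/RC chart versions):

helm repo update
helm upgrade crossplane upbound-stable/universal-crossplane \
  --namespace upbound-system --devel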

@RomanOrlovskiy
Author

Thanks @ezgidemirel, using the code from the master branch resolved this issue.
