Operator 0.13.0 crashes with go panic #381

Closed
alfsch opened this issue Nov 23, 2022 · 6 comments · Fixed by #385
Labels
bug Something isn't working

Comments

alfsch commented Nov 23, 2022

What version of redis operator are you using?

kubectl logs <_redis-operator_pod_name> -n <namespace>

{"level":"info","ts":1669194760.4806912,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1669194760.4818819,"logger":"setup","msg":"starting manager"}
{"level":"info","ts":1669194760.4829562,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
{"level":"info","ts":1669194760.4829874,"msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
I1123 09:12:40.483057       1 leaderelection.go:248] attempting to acquire leader lease redis-operator/6cab913b.redis.opstreelabs.in...
I1123 09:12:58.319509       1 leaderelection.go:258] successfully acquired lease redis-operator/6cab913b.redis.opstreelabs.in
{"level":"info","ts":1669194778.319815,"logger":"controller.redis","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis","source":"kind source: *v1beta1.Redis"}
{"level":"info","ts":1669194778.3198977,"logger":"controller.redis","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis"}
{"level":"info","ts":1669194778.3198225,"logger":"controller.rediscluster","msg":"Starting EventSource","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster","source":"kind source: *v1beta1.RedisCluster"}
{"level":"info","ts":1669194778.319965,"logger":"controller.rediscluster","msg":"Starting Controller","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster"}
{"level":"info","ts":1669194778.4202948,"logger":"controller.redis","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"Redis","worker count":1}
{"level":"info","ts":1669194778.4217591,"logger":"controller.rediscluster","msg":"Starting workers","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisCluster","worker count":1}
{"level":"info","ts":1669194778.421855,"logger":"controllers.RedisCluster","msg":"Reconciling opstree redis Cluster controller","Request.Namespace":"default","Request.Name":"redis-cluster"}
{"level":"info","ts":1669194778.4259007,"logger":"controller_redis","msg":"Redis statefulset get action was successful","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-cluster-leader"}
{"level":"info","ts":1669194778.4380147,"logger":"controller_redis","msg":"Reconciliation Complete, no Changes required.","Request.StatefulSet.Namespace":"default","Request.StatefulSet.Name":"redis-cluster-leader"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x140bd62]

goroutine 202 [running]:
redis-operator/k8sutils.RedisClusterService.CreateRedisClusterService({{0x172857b, 0x19726d8}}, 0xc000154f00)
	/workspace/k8sutils/redis-cluster.go:180 +0x3a2
redis-operator/k8sutils.CreateRedisLeaderService(...)
	/workspace/k8sutils/redis-cluster.go:132
redis-operator/controllers.(*RedisClusterReconciler).Reconcile(0xc000884090, {0xc000695740, 0x155f2e0}, {{{0xc000613d16, 0x166a200}, {0xc000613d20, 0x30}}})
	/workspace/controllers/rediscluster_controller.go:78 +0x370
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0005a28f0, {0x1944818, 0xc000695740}, {{{0xc000613d16, 0x166a200}, {0xc000613d20, 0x413a34}}})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114 +0x26f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005a28f0, {0x1944770, 0xc000891ec0}, {0x15b4b40, 0xc00071e3c0})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311 +0x33e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005a28f0, {0x1944770, 0xc000891ec0})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:223 +0x357

redis-operator version: 0.13.0

Does this issue reproduce with the latest release?
yes

What operating system and processor architecture are you using (kubectl version)?

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.13", GitCommit:"592eca05be27f7d927d0b25cbb4241d75a9574bf", GitTreeState:"clean", BuildDate:"2022-10-26T15:19:38Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}

What did you do?

I applied the following RedisCluster manifest:

apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  clusterSize: 3
  persistenceEnabled: true
  clusterVersion: v7
  redisLeader:
    replicas: 3
    pdb:
      enabled: true
      maxUnavailable: 1
  redisFollower:
    replicas: 3
    pdb:
      enabled: true
      maxUnavailable: 1
  redisExporter:
    enabled: false
    image: "quay.io/opstree/redis-exporter:v1.44.0"
    imagePullPolicy: "IfNotPresent"
    resources:
      {}
  kubernetesConfig:
    image: "quay.io/opstree/redis:v7.0.5"
    imagePullPolicy: "IfNotPresent"
    resources:
      {}
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: sinapse-standard
  securityContext:
    fsGroup: 1000
    runAsUser: 1000

What did you expect to see?

A running Redis cluster after some time.

What did you see instead?
The operator crashing.

One hint: with 0.12.0 it works.

alfsch added the bug (Something isn't working) label Nov 23, 2022
Juansasa commented Nov 24, 2022

This will fix the error for those who need a quick fix while waiting for the update: just set spec.kubernetesConfig.service.annotations to something, as in the example below.
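For example, on the RedisCluster from this issue the quick fix might look like the sketch below (the annotation key and value are placeholders; any non-empty annotations map should do):

apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  name: redis-cluster
spec:
  kubernetesConfig:
    image: "quay.io/opstree/redis:v7.0.5"
    imagePullPolicy: "IfNotPresent"
    service:
      annotations:
        example.com/placeholder: "anything"   # placeholder annotation, any value works
  # ...rest of the spec unchanged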

Juansasa commented Nov 24, 2022

I tried to make a PR but couldn't, but this is how to fix the problem:
https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/api/v1beta1/common_types.go#L32

It's a bad idea to use nested attributes as pointers for non-required fields.
A possible fix is to declare the field as a value instead of a pointer:
Service ServiceConfig `json:"service,omitempty"`
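For context, here is a minimal self-contained sketch of why a nil Service pointer panics and how either fix (a nil guard, or the value field suggested above) avoids it. The type and field names are illustrative only and are not the operator's actual code:

package main

import "fmt"

// Illustrative stand-ins for the types discussed in api/v1beta1/common_types.go.
type ServiceConfig struct {
	Annotations map[string]string `json:"annotations,omitempty"`
}

type KubernetesConfig struct {
	// With a pointer, an omitted "service:" block in the CR decodes to nil.
	Service *ServiceConfig `json:"service,omitempty"`
}

// Guarding the dereference avoids the SIGSEGV; alternatively, declaring the
// field as a plain value (Service ServiceConfig) makes an omitted block decode
// to a usable zero value instead of nil.
func serviceAnnotations(cfg KubernetesConfig) map[string]string {
	if cfg.Service == nil {
		return nil
	}
	return cfg.Service.Annotations
}

func main() {
	var cfg KubernetesConfig             // simulates a CR with no service block set
	fmt.Println(serviceAnnotations(cfg)) // prints "map[]" instead of panicking
}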

@anish749 (Contributor)

+1 to this issue...

Facing this error for Redis Standalone as well, and unable to set spec.kubernetesConfig.service.annotations for standalone.

Juansasa commented Nov 24, 2022

Yeah, the Redis CustomResourceDefinition seems to be outdated and does not have the service definitions.
One way to temporarily fix it is to remove the existing CRD and manually apply this one to your cluster:

https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/config/crd/bases/redis.redis.opstreelabs.in_redis.yaml

@gialloguitar

The same issue in OKD 4.11 with redis-operator v0.13.0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x140c842]

goroutine 535 [running]:
redis-operator/k8sutils.CreateStandaloneService(0xc000282c00)
	/workspace/k8sutils/redis-standalone.go:21 +0x322
redis-operator/controllers.(*RedisReconciler).Reconcile(0xc00063a4b0, {0xc00057a8d0, 0x155f2e0}, {{{0xc0002a7566, 0x166a200}, {0xc00021e168, 0x30}}})
	/workspace/controllers/redis_controller.go:70 +0x2d7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0005da580, {0x1944818, 0xc00057a8d0}, {{{0xc0002a7566, 0x166a200}, {0xc00021e168, 0x413a34}}})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114 +0x26f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005da580, {0x1944770, 0xc000b64fc0}, {0x15b4b40, 0xc00013d320})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311 +0x33e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005da580, {0x1944770, 0xc000b64fc0})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:223 +0x357

anish749 mentioned this issue Nov 25, 2022
@LuizRamos19

Still facing the same problem here with both Redis and RedisCluster in OpenShift. Is there any way to downgrade the operator version to 0.12 in OpenShift? I only see 0.13 when installing the operator...
