What happened:
After following the instructions in the documentation to set up a GKE cluster, I noticed that GCP was no longer automatically creating load balancers for Ingress resources.
Following the instructions in the GCE ingress repo (https://github.com/kubernetes/ingress-gce/tree/master/docs/deploy/gke), I was able to deploy my own instance of the ingress controller into the cluster rather than relying on the Google-managed version, which meant I could see its logs.
The ingress controller is panicking with the following log:
E0610 16:42:08.098452 1 runtime.go:78] Observed a panic: &errors.errorString{s:"unable to calculate an index entry for key \"nginx-test/nginx-testing\" on index \"by-service-index\": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {\"f:addressType\":{},\"f:endpoints\":{},\"f:metadata\":{\"f:labels\":{\".\":{},\"f:endpointslice.kubernetes.io/managed-by\":{},\"f:lighthouse.submariner.io/sourceNamespace\":{},\"f:multicluster.kubernetes.io/service-name\":{},\"f:multicluster.kubernetes.io/source-cluster\":{}}},\"f:ports\":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}"} (unable to calculate an index entry for key "nginx-test/nginx-testing" on index "by-service-index": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {"f:addressType":{},"f:endpoints":{},"f:metadata":{"f:labels":{".":{},"f:endpointslice.kubernetes.io/managed-by":{},"f:lighthouse.submariner.io/sourceNamespace":{},"f:multicluster.kubernetes.io/service-name":{},"f:multicluster.kubernetes.io/source-cluster":{}}},"f:ports":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,})
goroutine 169 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x2be6aa0, 0xc000534e70)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x2be6aa0, 0xc000534e70)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2be6aa0, 0xc000534e70)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x2be6aa0, 0xc000534e70)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/client-go/tools/cache.(*threadSafeMap).updateIndices(0xc000526a80, 0x0, 0x0, 0x31aa080, 0xc0005e7ab0, 0xc00061d410, 0x18)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/thread_safe_store.go:264 +0x4bd
k8s.io/client-go/tools/cache.(*threadSafeMap).Add(0xc000526a80, 0xc00061d410, 0x18, 0x31aa080, 0xc0005e7ab0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/thread_safe_store.go:78 +0x145
k8s.io/client-go/tools/cache.(*cache).Add(0xc00059d5a8, 0x31aa080, 0xc0005e7ab0, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/store.go:155 +0x105
k8s.io/client-go/tools/cache.(*sharedIndexInformer).HandleDeltas(0xc0005dcf00, 0x2cbb840, 0xc00000c990, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/shared_informer.go:557 +0x1ed
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0005dd680, 0xc00051e2c0, 0x0, 0x0, 0x0, 0x0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:539 +0x322
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000312240)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/controller.go:183 +0x42
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005d0e70)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007f9e70, 0x3552d60, 0xc0004aec30, 0xc00080dc01, 0xc0005211a0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005d0e70, 0x3b9aca00, 0x0, 0xc0001f3801, 0xc0005211a0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc000312240, 0xc0005211a0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/controller.go:154 +0x2e5
k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0005dcf00, 0xc0005211a0)
/go/src/k8s.io/ingress-gce/vendor/k8s.io/client-go/tools/cache/shared_informer.go:410 +0x42a
created by k8s.io/ingress-gce/pkg/context.(*ControllerContext).Start
/go/src/k8s.io/ingress-gce/pkg/context/context.go:395 +0x40b
panic: unable to calculate an index entry for key "nginx-test/nginx-testing" on index "by-service-index": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {"f:addressType":{},"f:endpoints":{},"f:metadata":{"f:labels":{".":{},"f:endpointslice.kubernetes.io/managed-by":{},"f:lighthouse.submariner.io/sourceNamespace":{},"f:multicluster.kubernetes.io/service-name":{},"f:multicluster.kubernetes.io/source-cluster":{}}},"f:ports":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,} [recovered]
panic: unable to calculate an index entry for key "nginx-test/nginx-testing" on index "by-service-index": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {"f:addressType":{},"f:endpoints":{},"f:metadata":{"f:labels":{".":{},"f:endpointslice.kubernetes.io/managed-by":{},"f:lighthouse.submariner.io/sourceNamespace":{},"f:multicluster.kubernetes.io/service-name":{},"f:multicluster.kubernetes.io/source-cluster":{}}},"f:ports":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,} [recovered]
panic: unable to calculate an index entry for key "nginx-test/nginx-testing" on index "by-service-index": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {"f:addressType":{},"f:endpoints":{},"f:metadata":{"f:labels":{".":{},"f:endpointslice.kubernetes.io/managed-by":{},"f:lighthouse.submariner.io/sourceNamespace":{},"f:multicluster.kubernetes.io/service-name":{},"f:multicluster.kubernetes.io/source-cluster":{}}},"f:ports":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,} [recovered]
panic: unable to calculate an index entry for key "nginx-test/nginx-testing" on index "by-service-index": Failed to find a service label inside endpoint slice &EndpointSlice{ObjectMeta:{nginx-testing nginx-test 40c24838-d6bc-4e13-bb21-6f431f3843a1 4590 1 2022-06-10 16:22:26 +0000 UTC <nil> <nil> map[endpointslice.kubernetes.io/managed-by:lighthouse-agent.submariner.io lighthouse.submariner.io/sourceNamespace:nginx-test multicluster.kubernetes.io/service-name:nginx multicluster.kubernetes.io/source-cluster:testing] map[] [] [] [{lighthouse-agent Update discovery.k8s.io/v1 2022-06-10 16:22:26 +0000 UTC FieldsV1 {"f:addressType":{},"f:endpoints":{},"f:metadata":{"f:labels":{".":{},"f:endpointslice.kubernetes.io/managed-by":{},"f:lighthouse.submariner.io/sourceNamespace":{},"f:multicluster.kubernetes.io/service-name":{},"f:multicluster.kubernetes.io/source-cluster":{}}},"f:ports":{}} }]},Endpoints:[]Endpoint{Endpoint{Addresses:[10.100.1.7],Conditions:EndpointConditions{Ready:*true,Serving:nil,Terminating:nil,},Hostname:*nginx-6fdb7ffd5b-z2qxb,TargetRef:nil,Topology:map[string]string{kubernetes.io/hostname: gke-miketest-default-pool-2d65dcae-xgxg,},NodeName:*gke-miketest-default-pool-2d65dcae-xgxg,Hints:nil,},},Ports:[]EndpointPort{EndpointPort{Name:*http,Protocol:*TCP,Port:*8080,AppProtocol:nil,},},AddressType:IPv4,}
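For context on why the indexer blows up: the EndpointSlice in the panic was created by Submariner's lighthouse-agent and only carries multicluster.kubernetes.io/service-name, not the standard kubernetes.io/service-name label that the in-tree EndpointSlice controller sets on slices it manages. The snippet below is a minimal, self-contained sketch (not the actual ingress-gce source; byServiceIndex and its error text are illustrative) of how a "by-service-index" IndexFunc that requires that label ends up panicking, since client-go's thread-safe store panics whenever an index function returns an error:

package main

import (
    "fmt"

    discovery "k8s.io/api/discovery/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/cache"
)

// byServiceIndex is an illustrative IndexFunc modelled on the behaviour in the
// stack trace: it indexes EndpointSlices by their owning Service using the
// standard kubernetes.io/service-name label and errors when that label is missing.
func byServiceIndex(obj interface{}) ([]string, error) {
    slice, ok := obj.(*discovery.EndpointSlice)
    if !ok {
        return nil, fmt.Errorf("expected *EndpointSlice, got %T", obj)
    }
    svc, ok := slice.Labels[discovery.LabelServiceName] // "kubernetes.io/service-name"
    if !ok {
        // Slices written by lighthouse-agent only carry
        // multicluster.kubernetes.io/service-name, so they take this path.
        return nil, fmt.Errorf("failed to find a service label inside endpoint slice %s/%s",
            slice.Namespace, slice.Name)
    }
    return []string{slice.Namespace + "/" + svc}, nil
}

func main() {
    indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
        "by-service-index": byServiceIndex,
    })

    // An EndpointSlice shaped like the one in the log: Submariner/lighthouse
    // labels only, with no kubernetes.io/service-name.
    slice := &discovery.EndpointSlice{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "nginx-testing",
            Namespace: "nginx-test",
            Labels: map[string]string{
                "endpointslice.kubernetes.io/managed-by":  "lighthouse-agent.submariner.io",
                "multicluster.kubernetes.io/service-name": "nginx",
            },
        },
    }

    // client-go's threadSafeMap.updateIndices panics when an IndexFunc returns an
    // error, which produces the "unable to calculate an index entry" panic above.
    _ = indexer.Add(slice)
}

In the sketch, adding a kubernetes.io/service-name label to the slice before calling Add makes the index function succeed, so the crash only happens for slices that were not created by the standard EndpointSlice controller.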
Environment:
GKE Version: 1.22.8-gke.201
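The labels on the EndpointSlice named in the panic can be confirmed directly (namespace and name taken from the log above); in this case only the Submariner/lighthouse labels are present, with no kubernetes.io/service-name:

kubectl get endpointslice nginx-testing -n nginx-test --show-labels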