Fleet server pod not created when Kibana is set with SERVER_BASEPATH #7909

Closed
msapitree opened this issue Jun 18, 2024 · 1 comment · Fixed by #8053
Labels
>bug Something isn't working v2.15.0

Comments


msapitree commented Jun 18, 2024

Bug Report

What did you do?

Following the quickstart:

  1. Deployed ECK
  2. Deployed Elastic resource
  3. Deployed Kibana resource
  4. Redeployed the Kibana resource with a config modification to set up public access to Kibana via a k8s ingress at https://xxx.mydomain.com/monitoring/kibana (see the Kibana resource definition under Environment below)
  5. Deployed Fleet Server Agent (together with ClusterRole, ClusterRoleBinding and ServiceAccount)

What did you expect to see?

  • pod is created for the fleet-server agent
  • no error in operator logs
  • agent is reported as green

What did you see instead? Under which circumstances?

  • error in the operator log: failed to request http://monitoring-kb-http.monitoring-at.svc:5601/api/fleet/setup, status is 404)

    Because Kibana now serves under a base path, the URL path /api/fleet/setup is no longer valid; it should have been /monitoring/kibana/api/fleet/setup to account for the server.basePath config parameter (or its environment-variable equivalent).

  • no health information on elastic resource

$ kubectl -n monitoring-at get elastic
NAME                                      HEALTH   AVAILABLE   EXPECTED   VERSION   AGE
agent.agent.k8s.elastic.co/fleet-server                                             109m

NAME                                      HEALTH   NODES   VERSION   AGE
apmserver.apm.k8s.elastic.co/monitoring   green    1       8.14.0    5d21h

NAME                                                    HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/monitoring   green    3       8.14.0    Ready   5d22h

NAME                                      HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/monitoring   green    1       8.14.0    114m

Environment

  • ECK version: 2.13.0

  • Kubernetes information: AKS

Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9
  • Resource definition:
  • Kibana with SERVER_BASEPATH
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitoring
  namespace: monitoring-at
spec:
  version: 8.14.0
  count: 1
  elasticsearchRef:
    name: monitoring
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  podTemplate:
    spec:
      containers:
      - env:
        - name: SERVER_BASEPATH
          value: /monitoring/kibana
        - name: SERVER_REWRITEBASEPATH
          value: "true"
        name: kibana
  • Fleet Server
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
spec:
  version: 8.14.0
  kibanaRef:
    name: monitoring
  elasticsearchRefs:
    - name: monitoring
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server
  deployment:
    replicas: 1
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
        serviceAccountName: fleet-server
        automountServiceAccountToken: true
  • Logs:
{
  "log.level": "error",
  "@timestamp": "2024-06-18T20:35:39.770Z",
  "log.logger": "manager.eck-operator",
  "message": "Reconciler error",
  "service.version": "2.13.0+8896afe1",
  "service.type": "eck",
  "ecs.version": "1.4.0",
  "controller": "agent-controller",
  "object": {
    "name": "fleet-server",
    "namespace": "monitoring-at"
  },
  "namespace": "monitoring-at",
  "name": "fleet-server",
  "reconcileID": "958131c8-738f-4062-b63e-0fcc831f297e",
  "error": "failed to request http://monitoring-kb-http.monitoring-at.svc:5601/api/fleet/setup, status is 404)",
  "errorCauses": [
    {
      "error": "failed to request http://monitoring-kb-http.monitoring-at.svc:5601/api/fleet/setup, status is 404)"
    }
  ],
  "error.stack_trace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/internal/controller/controller.go:227"
}
@botelastic botelastic bot added the triage label Jun 18, 2024
@pebrc pebrc added the >bug Something isn't working label Jul 3, 2024
@botelastic botelastic bot removed the triage label Jul 3, 2024

pebrc commented Jul 3, 2024

What you are trying to do is currently not supported. The operator, as you have found out, is not aware of the server.basePath setting. The only workaround I can think of is to do the rewrite on the proxy side, with an ingress-implementation-specific path rewrite (e.g. for nginx), as sketched below.
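
For illustration, a rough sketch of what that could look like with the ingress-nginx controller (a hypothetical manifest; the host, service name and path are taken from this issue, and the exact annotation and path syntax depend on your ingress controller and version). The idea would be to keep server.basePath: /monitoring/kibana in the Kibana config but leave rewriteBasePath unset, so Kibana (and the operator) keep serving and calling the API at the root path while the ingress strips the prefix:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: monitoring-at
  annotations:
    # strip the /monitoring/kibana prefix before forwarding to the Kibana service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: xxx.mydomain.com
    http:
      paths:
      - path: /monitoring/kibana(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: monitoring-kb-http
            port:
              number: 5601

With that shape the operator's internal call to /api/fleet/setup would still reach Kibana at the root path, because the base path handling lives entirely in the ingress.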

We could potentially inspect the Kibana config and adjust the paths. But it is a bit tricky, as your example shows: you are using environment variables, while the more idiomatic way would be to use the config attribute:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.14.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
  config:
    server:
      basePath: /kibana
      rewriteBasePath: true

A potential "fix" for this bug would probably only look at the config element.
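
For reference, the Kibana spec from this issue expressed via the config element rather than environment variables would look roughly like this (a sketch only; on 2.13.0 the operator does not pick the base path up either way, which is what this issue is about):

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitoring
  namespace: monitoring-at
spec:
  version: 8.14.0
  count: 1
  elasticsearchRef:
    name: monitoring
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  config:
    server:
      basePath: /monitoring/kibana
      rewriteBasePath: true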
