eck-beats chart does not support kind=deployment #6330

Closed
brsolomon-deloitte opened this issue Jan 17, 2023 · 8 comments · Fixed by #6621
Labels: >bug Something isn't working · discuss We need to figure this out

Comments

@brsolomon-deloitte

Bug Report

What did you do?

When using eck-beats as an "upstream chart" (a Helm dependency), I'm unable to use the deployment type because daemonSet: {} is specified in the upstream values.yaml. This causes:

admission webhook "elastic-beat-validation-v1beta1.k8s.elastic.co" denied the request: Beat.beat.k8s.elastic.co "threatintel8-eck-beats" is invalid: [spec.daemonSet: Forbidden: Specify either daemonSet or deployment, not both, spec.deployment: Forbidden: Specify either daemonSet or deployment, not both, spec: Invalid value: v1beta1.BeatSpec{Type:"filebeat", Version:"8.5.3", ElasticsearchRef:v1.ObjectSelector{Namespace:"", Name:"elastic8", ServiceName:"", SecretName:""}, KibanaRef:v1.ObjectSelector{Namespace:"", Name:"elastic8", ServiceName:"", SecretName:""}, Image:"", Config:(*v1.Config)(0xc01ad84190), ConfigRef:(*v1.ConfigSource)(nil), SecureSettings:[]v1.SecretSource(nil), ServiceAccountName:"", DaemonSet:(*v1beta1.DaemonSetSpec)(0xc001e34900), Deployment:(*v1beta1.DeploymentSpec)(0xc001e34c00), Monitoring:v1.Monitoring{Metrics:v1.MetricsMonitoring{ElasticsearchRefs:[]v1.ObjectSelector(nil)}, Logs:v1.LogsMonitoring{ElasticsearchRefs:[]v1.ObjectSelector(nil)}}, RevisionHistoryLimit:(*int32)(nil)}: either daemonset or deployment must be specified]

What did you expect to see?

A working Deployment

  • ECK version: 2.5.0 / 8.5.3

$ cat Chart.yaml
apiVersion: v2
appVersion: 8.5.3
dependencies:
  - name: eck-beats
    version: 0.1.0
    repository: https://helm.elastic.co
  - name: common
    version: 2.x.x
    repository: https://charts.bitnami.com/bitnami
description: A Helm chart to deploy Elastic Beats managed by the ECK Operator.
name: eck-beats
type: application
version: 0.1.0
$ cat values-threatintel.yaml
eck-beats:
  version: 8.5.3
  spec:
    type: filebeat
    daemonSet: null
    deployment:
      replicas: 1
      strategy:
        type: Recreate

[abridged]

I have also tried excluding the daemonSet key entirely; none of that works, because there is no way to "remove" the upstream daemonSet key.

What's going on here is obvious from running helm template: both the daemonSet and deployment keys end up in the kind: Beat object, with no way to actually remove the daemonSet key (see the merge sketch after the excerpt below).

  ...
  daemonSet: {}
  deployment:
    podTemplate:
      spec:
     ...
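
For context, my understanding of how the values get coalesced (a sketch only; the exact behavior depends on the Helm version in use):

# Upstream eck-beats values.yaml (abridged) ships an empty daemonSet by default:
spec:
  daemonSet: {}

# The parent chart's values try to remove it under the dependency's key:
eck-beats:
  spec:
    daemonSet: null
    deployment:
      replicas: 1

# With the Helm versions in use at the time, the null is dropped while values are
# coalesced into the subchart, so the effective values passed to eck-beats still
# contain both keys:
spec:
  daemonSet: {}
  deployment:
    replicas: 1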
@botelastic botelastic bot added the triage label Jan 17, 2023
@brsolomon-deloitte
Author

The eck-beats values.yaml should comment out both daemonSet and deployment; it is the only way to make deployment actually work for this very common dependency-pattern case.
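
A sketch of what that change to the upstream values.yaml could look like (illustrative only, not the actual file contents):

spec:
  type: filebeat
  # Exactly one of daemonSet or deployment must be set by the chart consumer;
  # ship both commented out so neither is forced on dependent charts.
  # daemonSet: {}
  # deployment: {}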

@barkbay
Contributor

barkbay commented Jan 18, 2023

When using eck-beats as an "upstream chart", I'm unable to actually use the deployment type since daemonset: {} is specified in the upstream values.yaml

I was able to remove the daemonSet using the null value:

Content of the values-threatintel.yaml file:

version: 8.5.3
spec:
  type: filebeat
  daemonSet: null
  deployment:
    replicas: 1
    strategy:
      type: Recreate

Output of helm template beat-example elastic/eck-beats --values ./values-threatintel.yaml

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: beat-example-eck-beats
  labels:
    helm.sh/chart: eck-beats-0.1.0
    app.kubernetes.io/name: eck-beats
    app.kubernetes.io/instance: beat-example
    app.kubernetes.io/managed-by: Helm
  annotations:
    eck.k8s.elastic.co/license: enterprise
spec:
  version: 8.5.3
  type: filebeat
  config: {}
  deployment:
    replicas: 1
    strategy:
      type: Recreate
  elasticsearchRef:
    name: elasticsearch
  type: filebeat

Output of helm install beat-example elastic/eck-beats --values ./values-threatintel.yaml

NAME: beat-example
LAST DEPLOYED: Wed Jan 18 06:21:51 2023
NAMESPACE: elastic-stack
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Check Beat status
  $ kubectl get beat beat-example-eck-beats -n elastic-stack

2. Check Beat pod status
  $ kubectl get pods --namespace=elastic-stack -l beat.k8s.elastic.co/name=beat-example-eck-beats

I might be missing something in your setup, but using spec.daemonSet: null seems to work.

@brsolomon-deloitte
Author

I might be missing something in your setup, but using spec.daemonSet: null seems to work.

As shown by the Chart.yaml in my original description, this issue is only reproducible if you use a "Helm dependency pattern"; another example of that pattern is at https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency.

In other words, there is no issue if you install directly from the upstream chart as you have done. The issue arises when installing from a local chart that pulls in the upstream chart via Chart.yaml -> .dependencies[]:

helm template . -f values-threatintel.yaml
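
Spelled out, the reproduction looks roughly like this (directory layout is illustrative):

# Local parent chart:
#   ./Chart.yaml               <- declares eck-beats 0.1.0 under .dependencies[]
#   ./values-threatintel.yaml  <- overrides under the eck-beats: key
$ helm dependency update .
$ helm template . -f values-threatintel.yaml
# The rendered Beat contains both daemonSet: {} and deployment, so the ECK
# admission webhook rejects the install.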

@brsolomon-deloitte
Author

@barkbay can you let me know if you are able to reproduce and if there is a plan to correct this? We need to decide if it will be necessary to go back to forking the chart entirely.

@leeleelou

Hello - we have the same issue, but with the eck-agent chart. I'm not sure whether it is a bug, but for a while we were able to set daemonSet: null and run eck-agent as a deployment. Somehow we can no longer get this to work after the last chart reinstall.

We are using the upstream chart. Here is our setting in our values file:

    eck-agent:
      enabled: true
      fullnameOverride: "eck-agent"
      spec:
        elasticsearchRefs: []
        kibanaRef:
          name: kibana
        fleetServerRef:
          name: fleet-server
        mode: fleet
        daemonSet: null
        deployment:
          replicas: 1
          podTemplate:
            spec:
              securityContext:
                runAsUser: 0

This is what is rendered during helm template (helm throws an error about specifying either daemonSet or deployment, not both):

# Source: eck-stack/charts/eck-agent/templates/elastic-agent.yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: eck-agent
  labels:
    helm.sh/chart: eck-agent-0.2.0
    app.kubernetes.io/name: eck-agent
    app.kubernetes.io/instance: eck-stack
    app.kubernetes.io/managed-by: Helm
  annotations:
    eck.k8s.elastic.co/license: enterprise
spec:
  version: 8.5.0
  daemonSet:
    podTemplate:
      spec:
        containers:
        - name: agent
          securityContext:
            runAsUser: 0
  deployment:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
    replicas: 1
  elasticsearchRefs: []
  fleetServerRef:
    name: fleet-server
  kibanaRef:
    name: kibana
  mode: fleet

@brsolomon-deloitte
Author

@barkbay can you please take another look at this? This issue prevents us from using the chart entirely and forces us to fork it from upstream.

@naemono
Contributor

naemono commented Mar 29, 2023

I'm aware of this issue; it's caused by upstream Helm behavior, and the PRs to actually get a fix in place have been open for a long time (see the minimal reproduction after the links below):

helm/helm#9138
helm/helm#11440
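
As I understand it, the Helm behavior those PRs address can be reproduced without ECK at all; a minimal sketch (chart names are hypothetical):

# child/values.yaml (the dependency's defaults):
daemonSet: {}

# parent/values.yaml (attempting to remove the default through the dependency):
child:
  daemonSet: null

# Expected: daemonSet is deleted from the values coalesced into the child chart.
# Actual (with the affected Helm versions): the null is discarded during
# coalescing, and the child still renders daemonSet: {}.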

I'll discuss this with the team and get back to this issue soon to try to get a good fix in place.

@naemono naemono self-assigned this Mar 29, 2023
@naemono naemono added discuss We need to figure this out >bug Something isn't working labels Mar 29, 2023
@botelastic botelastic bot removed the triage label Mar 29, 2023