
attributes processor not working #33343

Open

antonchernyaev-kit-ar opened this issue May 30, 2024 · 5 comments
Labels
bug Something isn't working processor/attributes Attributes processor Stale

Comments

@antonchernyaev-kit-ar

Describe the bug
I use the OpenTelemetry Collector (otelAgent) with SigNoz. I have the following list of fields in SigNoz:

severity_text
severity_number
trace_flags
trace_id
span_id
k8s.event.name
k8s.event.reason
k8s.event.action
k8s.cluster.name
k8s.namespace.name
log.iostream
logtag
k8s.namespace.name
time
k8s.event.count
log.file.path
k8s.event.start_time
k8s.event.uid
k8s.container.restart_count
k8s.object.api_version
k8s.container.name
k8s.pod.name
k8s.object.resource_version
k8s.node.name
k8s.object.kind
k8s.deployment.name
signoz.component
k8s.object.fieldpath
k8s.pod.start_time
k8s.object.uid
k8s.object.name
k8s.pod.uid

I need to add a new field, so I added this part:

          otelAgent:
            image:
              tag: 0.101.0
            config:
              processors:
                memory_limiter:
                  check_interval: 5s
                  limit_mib: 1000
                  spike_limit_mib: 300
                attributes:
                  include:
                    match_type: regexp
                    attributes:
                    - key: "k8s.pod.name"
                      value: .*?-x-.*?-x-.*?
                  actions:
                  - key: kitar.vcluster
                    action: insert
                    from_attribute: "k8s.namespace.name"
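Worth noting: in a logs pipeline, the attributes processor's `include.attributes` filter matches attributes on the individual log records, while `k8s.pod.name` (written by the k8sattributes processor and the filelog operators) lives at the resource level, so this include condition likely never matches. A minimal sketch of an alternative using the transform processor and OTTL, which can read resource attributes from the log context (the key name `kitar.vcluster` is taken from the config above):

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Copy the resource-level namespace onto each log record
          - set(attributes["kitar.vcluster"], resource.attributes["k8s.namespace.name"])
```

This is only a sketch under the assumption that the record-vs-resource mismatch is the cause; the transform processor would also need to be added to the logs pipeline in place of (or alongside) `attributes`.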

but I do not see any changes.

The resulting config is:


    otel-agent-config.yaml: |2-

      exporters:
        otlp:
          endpoint: ${OTEL_EXPORTER_OTLP_ENDPOINT}
          headers:
            signoz-access-token: Bearer ${SIGNOZ_API_KEY}
          tls:
            insecure: ${OTEL_EXPORTER_OTLP_INSECURE}
            insecure_skip_verify: ${OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY}
      extensions:
        health_check:
          endpoint: 0.0.0.0:13133
        pprof:
          endpoint: localhost:1777
        zpages:
          endpoint: localhost:55679
      processors:
        attributes:
          actions:
          - action: insert
            from_attribute: k8s.namespace.name
            key: kitar.vcluster
          include:
            attributes:
            - key: k8s.pod.name
              value: .*?-x-.*?-x-.*?
            match_type: regexp
        batch:
          send_batch_size: 10000
          timeout: 200ms
        k8sattributes:
          extract:
            metadata:
            - k8s.namespace.name
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.pod.start_time
            - k8s.deployment.name
            - k8s.node.name
          filter:
            node_from_env_var: K8S_NODE_NAME
          passthrough: false
          pod_association:
          - sources:
            - from: resource_attribute
              name: k8s.pod.ip
          - sources:
            - from: resource_attribute
              name: k8s.pod.uid
          - sources:
            - from: connection
        memory_limiter:
          check_interval: 5s
          limit_mib: 1000
          spike_limit_mib: 300
        resourcedetection:
          detectors:
          - system
          override: true
          system:
            hostname_sources:
            - dns
            - os
          timeout: 2s
        resourcedetection/internal:
          detectors:
          - env
          override: true
          timeout: 2s
      receivers:
        filelog/k8s:
          exclude:
          - /var/log/pods/sys--mon-signoz_signoz*-signoz-*/*/*.log
          - /var/log/pods/sys--mon-signoz_signoz*-k8s-infra-*/*/*.log
          - /var/log/pods/kube-system_*/*/*.log
          - /var/log/pods/*_hotrod*_*/*/*.log
          - /var/log/pods/*_locust*_*/*/*.log
          include:
          - /var/log/pods/*/*/*.log
          include_file_name: false
          include_file_path: true
          operators:
          - id: get-format
            routes:
            - expr: body matches "^\\{"
              output: parser-docker
            - expr: body matches "^[^ Z]+ "
              output: parser-crio
            - expr: body matches "^[^ Z]+Z"
              output: parser-containerd
            type: router
          - id: parser-crio
            output: extract_metadata_from_filepath
            regex: ^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
            timestamp:
              layout: "2006-01-02T15:04:05.000000000-07:00"
              layout_type: gotime
              parse_from: attributes.time
            type: regex_parser
          - id: parser-containerd
            output: extract_metadata_from_filepath
            regex: ^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$
            timestamp:
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
              parse_from: attributes.time
            type: regex_parser
          - id: parser-docker
            output: extract_metadata_from_filepath
            timestamp:
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
              parse_from: attributes.time
            type: json_parser
          - id: extract_metadata_from_filepath
            output: add_cluster_name
            parse_from: attributes["log.file.path"]
            regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)\/(?P<container_name>[^\._]+)\/(?P<restart_count>\d+)\.log$
            type: regex_parser
          - field: resource["k8s.cluster.name"]
            id: add_cluster_name
            output: move_stream
            type: add
            value: EXPR(env("K8S_CLUSTER_NAME"))
          - from: attributes.stream
            id: move_stream
            output: move_container_name
            to: attributes["log.iostream"]
            type: move
          - from: attributes.container_name
            id: move_container_name
            output: move_namespace
            to: resource["k8s.container.name"]
            type: move
          - from: attributes.namespace
            id: move_namespace
            output: move_pod_name
            to: resource["k8s.namespace.name"]
            type: move
          - from: attributes.pod_name
            id: move_pod_name
            output: move_restart_count
            to: resource["k8s.pod.name"]
            type: move
          - from: attributes.restart_count
            id: move_restart_count
            output: move_uid
            to: resource["k8s.container.restart_count"]
            type: move
          - from: attributes.uid
            id: move_uid
            output: move_log
            to: resource["k8s.pod.uid"]
            type: move
          - from: attributes.log
            id: move_log
            to: body
            type: move
          start_at: beginning
        hostmetrics:
          collection_interval: 30s
          scrapers:
            cpu: {}
            disk: {}
            filesystem: {}
            load: {}
            memory: {}
            network: {}
        kubeletstats:
          auth_type: serviceAccount
          collection_interval: 30s
          endpoint: ${K8S_HOST_IP}:10250
          extra_metadata_labels:
          - container.id
          - k8s.volume.type
          insecure_skip_verify: true
          metric_groups:
          - container
          - pod
          - node
          - volume
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
              max_recv_msg_size_mib: 4
            http:
              endpoint: 0.0.0.0:4318
      service:
        extensions:
        - health_check
        - zpages
        - pprof
        pipelines:
          logs:
            exporters:
            - otlp
            processors:
            - k8sattributes
            - attributes
            - batch
            receivers:
            - otlp
            - filelog/k8s
          metrics:
            exporters:
            - otlp
            processors:
            - k8sattributes
            - batch
            receivers:
            - otlp
          metrics/internal:
            exporters:
            - otlp
            processors:
            - resourcedetection/internal
            - resourcedetection
            - k8sattributes
            - batch
            receivers:
            - hostmetrics
            - kubeletstats
          traces:
            exporters:
            - otlp
            processors:
            - k8sattributes
            - batch
            receivers:
            - otlp
        telemetry:
          logs:
            level: debug
          metrics:
            address: 0.0.0.0:8888

In the logs I only see that the agent loaded this configuration, and that is all:

2024-05-30T16:49:54.196Z debug processor@v0.101.0/processor.go:301 Beta component. May change in the future. {"kind": "processor", "name": "attributes", "pipeline": "logs"}

Steps to reproduce
Just use this config, or install via the SigNoz chart.

What did you expect to see?
The new field in the list.

@antonchernyaev-kit-ar antonchernyaev-kit-ar added the bug Something isn't working label May 30, 2024
@codeboten
Contributor

Not sure if this is an issue with the attributes processor or with SigNoz; I will transfer this to the collector-contrib repo and ping the attributes processor owners on the issue.

@codeboten codeboten transferred this issue from open-telemetry/opentelemetry-collector Jun 3, 2024
@codeboten codeboten added processor/attributes Attributes processor needs triage New item requiring triage labels Jun 3, 2024
Contributor

github-actions bot commented Jun 3, 2024

Pinging code owners for processor/attributes: @boostchicken. See Adding Labels via Comments if you do not have permissions to add labels yourself.

Contributor

github-actions bot commented Aug 5, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@atoulme
Contributor

atoulme commented Oct 2, 2024

You probably need to apply this processor: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor as the k8sattributesprocessor likely adds the k8s.pod.name at the resource level.
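A minimal sketch of that suggestion, assuming the resource processor is what is wanted here (it uses the same action syntax as the attributes processor, but operates on resource attributes, which is where k8sattributes writes `k8s.namespace.name`; the key name `kitar.vcluster` comes from the original config):

```yaml
processors:
  resource:
    attributes:
      # Copy k8s.namespace.name into a new resource attribute
      - key: kitar.vcluster
        from_attribute: k8s.namespace.name
        action: insert
service:
  pipelines:
    logs:
      processors:
        - k8sattributes
        - resource   # replaces the attributes processor for this use case
        - batch
```

Note the resource processor has no `include`/`exclude` filtering; if the `.*?-x-.*?-x-.*?` pod-name condition is still needed, that filtering would have to happen elsewhere (e.g. the filter or transform processor).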

@github-actions github-actions bot removed the Stale label Oct 3, 2024
Contributor

github-actions bot commented Dec 3, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Dec 3, 2024