Prometheus receiver scrapes every 15s (whereas scrape_interval is set to 60s) #34381

Open
genseb13011 opened this issue Jul 29, 2024 · 3 comments
Labels: bug, receiver/prometheus, waiting-for-code-owners


@genseb13011

Hi,

I'm facing an issue with the scrape interval of the Prometheus receiver.

Even though I've set the scrape_interval value to 60s for each of my jobs and in the global section, the scrape period is still 15s.

Technical information:

```yaml
    mode: deployment

    replicaCount: 2
    
    podDisruptionBudget:
      enabled: true
      minAvailable: 1
    
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            topologySpreadLabel: opentelemetry-k8s-metrics
    
    additionalLabels:
      topologySpreadLabel: opentelemetry-k8s-metrics
    
    image:
      repository: "otel/opentelemetry-collector-contrib"

    extraEnvs:
      - name: MY_POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      - name: K8S_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
      - name: K8S_CLUSTER_NAME
        value: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      - name: CORALOGIX_PRIVATE_KEY
        value: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      - name: CORALOGIX_DOMAIN
        value: "eu2.coralogix.com"
      - name: CORALOGIX_APPLICATION_NAME
        value: "opentelemetry-k8s-metrics"
      - name: CORALOGIX_SUBSYSTEM_NAME
        value: "integration"
    
    config:
    
      extensions:
        health_check:
          endpoint: ${env:MY_POD_IP}:13133
    
      receivers:
        prometheus:
          config:
            
            global:
              scrape_interval: 60s
            
            scrape_configs:
              
              - job_name: opentelemetry-infrastructure-collector
                scrape_interval: 60s
                static_configs:
                  - targets:
                      - ${env:MY_POD_IP}:8888
              
              - job_name: cadvisor
                scrape_interval: 60s
                kubernetes_sd_configs:
                  - role: node
                relabel_configs:
                  - replacement: kubernetes.default.svc.cluster.local:443
                    target_label: __address__
                  - regex: (.+)
                    replacement: /api/v1/nodes/$${1}/proxy/metrics/cadvisor
                    source_labels:
                      - __meta_kubernetes_node_name
                    target_label: __metrics_path__
                metric_relabel_configs:
                  - source_labels: [__name__]
                    action: keep
                    regex: 'container_cpu_cfs_periods_total|container_cpu_cfs_throttled_periods_total|container_cpu_usage_seconds_total|container_fs_reads_bytes_total|container_fs_reads_total|container_fs_writes_bytes_total|container_fs_writes_total|container_memory_cache|container_memory_rss|container_memory_swap|container_memory_working_set_bytes|container_network_receive_bytes_total|container_network_receive_packets_dropped_total|container_network_receive_packets_total|container_network_transmit_bytes_total|container_network_transmit_packets_dropped_total|container_network_transmit_packets_total|machine_memory_bytes'
                scheme: https
                bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                tls_config:
                  ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                  insecure_skip_verify: false
                  server_name: kubernetes
              
              - job_name: kube-state-metrics
                scrape_interval: 60s
                kubernetes_sd_configs:
                  - role: pod
                relabel_configs:
                  - action: keep
                    regex: kube-state-metrics
                    source_labels:
                      - __meta_kubernetes_pod_label_app_kubernetes_io_name
                metric_relabel_configs:
                  - source_labels: [__name__]
                    action: keep
                    regex: 'kube_daemonset.*|kube_deployment_metadata_generation|kube_deployment_spec_replicas|kube_deployment_status_observed_generation|kube_deployment_status_replicas_available|kube_deployment_status_replicas_updated|kube_horizontalpodautoscaler_spec_max_replicas|kube_horizontalpodautoscaler_spec_min_replicas|kube_horizontalpodautoscaler_status_current_replicas|kube_horizontalpodautoscaler_status_desired_replicas|kube_job.*|kube_namespace_status_phase|kube_node.*|kube_persistentvolumeclaim_resource_requests_storage_bytes|kube_pod_container_info|kube_pod_container_resource_limits|kube_pod_container_resource_requests|kube_pod_container_status_last_terminated_reason|kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_info|kube_pod_owner|kube_pod_start_time|kube_pod_status_phase|kube_pod_status_reason|kube_replicaset.*|kube_resourcequota|kube_statefulset.*'
  
              - job_name: node_exporter
                scrape_interval: 60s
                kubernetes_sd_configs:
                  - role: pod
                relabel_configs:
                  - action: keep
                    regex: prometheus-node-exporter.*
                    source_labels:
                      - __meta_kubernetes_pod_label_app_kubernetes_io_name
                  - action: replace
                    source_labels:
                      - __meta_kubernetes_pod_node_name
                    target_label: instance
                  - action: replace
                    source_labels:
                      - __meta_kubernetes_namespace
                    target_label: namespace
                metric_relabel_configs:
                  - source_labels: [__name__]
                    action: keep
                    regex: 'node_cpu.*|node_exporter_build_info|node_filesystem.*|node_memory.*|process_cpu_seconds_total|process_resident_memory_bytes'
      
      processors:
        batch: {}
        metricstransform:
          transforms:
            - include: ^(.*)$$
              match_type: regexp
              action: update
              operations:
                - action: add_label
                  new_label: cluster_name
                  new_value: ${env:K8S_CLUSTER_NAME}
  
      exporters:
        coralogix:
          private_key: ${env:CORALOGIX_PRIVATE_KEY}
          domain: ${env:CORALOGIX_DOMAIN}
          application_name: ${env:CORALOGIX_APPLICATION_NAME}
          subsystem_name: ${env:CORALOGIX_SUBSYSTEM_NAME}
  
      service:
        extensions:
          - health_check
          
        pipelines:
          metrics:
            receivers: ["prometheus"]
            processors: ["metricstransform","batch"]
            exporters: ["coralogix"]
    
    clusterRole:
      create: true
      rules:
        - apiGroups: [""]
          resources: ["pods", "namespaces"]
          verbs: ["get", "watch", "list"]
        - apiGroups: ["apps"]
          resources: ["replicasets"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["extensions"]
          resources: ["replicasets"]
          verbs: ["get", "list", "watch"]
        - apiGroups: [""]
          resources: ["events", "namespaces", "namespaces/status", "nodes", "nodes/spec", "nodes/metrics", "nodes/stats", "nodes/proxy", "pods", "pods/status", "replicationcontrollers", "replicationcontrollers/status", "resourcequotas", "services" ]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["apps"]
          resources: ["daemonsets", "deployments", "replicasets", "statefulsets"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["extensions"]
          resources: ["daemonsets", "deployments", "replicasets"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["batch"]
          resources: ["jobs", "cronjobs"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["autoscaling"]
          resources: ["horizontalpodautoscalers"]
          verbs: ["get", "list", "watch"]
        - apiGroups: [""]
          resources: ["pods", "endpoints", "nodes/stats", "nodes/metrics", "nodes", "services"]
          verbs: ["get", "watch", "list"]
        - nonResourceURLs:
          - "/metrics"
          verbs: ["get"]
        - apiGroups: ["events.k8s.io"]
          resources: ["events"]
          verbs: ["watch", "list"]
        - apiGroups: [""]
          resources: ["nodes"]
          verbs: ["get", "list", "watch"]`
@codeboten
Contributor

Thanks for reporting @genseb13011, can you confirm how you're observing the 15s scrape? I'll transfer this issue to the contrib repo, where I'll label it with receiver/prometheus to get the code owners to review it.

@codeboten codeboten transferred this issue from open-telemetry/opentelemetry-collector Aug 1, 2024
@codeboten codeboten added receiver/prometheus Prometheus receiver needs triage New item requiring triage labels Aug 1, 2024
Contributor

github-actions bot commented Aug 1, 2024

Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@genseb13011
Author

> Thanks for reporting @genseb13011, can you confirm how you're observing the 15s scrape? I'll transfer this issue to the contrib repo, where I'll label it with receiver/prometheus to get the code owners to review it.

Hi,

Sorry for the delay.

I noticed this in my Grafana dashboard, which displays a metric point every 15 seconds.

(screenshot of the Grafana dashboard)
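
Note that a Grafana panel's query step can be finer than the actual scrape interval, so points may appear every 15s even when scraping at 60s. If the backend supports PromQL, a sample-count query like the sketch below (using the cadvisor job from the config above) should return roughly 10 for a 60s interval versus about 40 for 15s:

```promql
# Count how many samples of `up` were ingested over the last 10 minutes.
# ~10 samples => 60s scrape interval; ~40 samples => 15s.
count_over_time(up{job="cadvisor"}[10m])
```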

Thanks
