exporting logs is not working #1078

Open
fhuthmacher opened this issue Dec 28, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@fhuthmacher

I am trying to get traces, metrics, and logs working via the AWS Distro for OpenTelemetry Collector (ADOT Collector), using the following Lambda layer: arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-20-0:3 and Python 3.8.

My understanding is that this version includes OpenTelemetry Python v1.20.0 and ADOT Collector v0.35.0, and should support traces, metrics, and (experimentally) logs.
The issue is that no matter how I configure the otlphttp exporter, the collector does not export logs at all.

Steps to reproduce:

  1. Create a Python 3.8 Lambda function.
  2. Add the layer arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-20-0:3.
  3. Create a collector.yml file with the content below in the root directory:
extensions:
  sigv4auth:
    region: "us-east-1"
    service: "osis"

receivers:
  otlp:
    protocols:
      http:
        endpoint:

exporters:
  logging:
    loglevel: debug
  otlphttp:
    traces_endpoint: "https://XXXX/v1/traces"
    metrics_endpoint: "https://YYYY/v1/metrics"
    logs_endpoint: "https://ZZZZ/v1/logs"
    auth:
      authenticator: sigv4auth
    compression: none
    
service:
  extensions: [sigv4auth]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp,logging]

    metrics:
      receivers: [otlp]
      exporters: [otlphttp,logging]
    
    logs:
      receivers: [otlp]
      exporters: [otlphttp,logging]
      
  telemetry:
    metrics:
      address: localhost:8888
  4. Set two environment variables:
    AWS_LAMBDA_EXEC_WRAPPER = /opt/otel-instrument
    OPENTELEMETRY_COLLECTOR_CONFIG_FILE = /var/task/collector.yml

  5. Update lambda_function.py with the code below:

import json
import logging

import boto3
from opentelemetry import metrics, trace

tracer = trace.get_tracer("appl.tracer")

meter = metrics.get_meter("appl.meter")

dir_counter = meter.create_counter(
    "dir.calls",
    description="The number of directory calls",
)

@tracer.start_as_current_span("do_work")
def lambda_handler(event, context):
    # This record should show up in the collector's logs pipeline
    logging.getLogger().error("This is a log message")

    # This adds 1 to the counter
    result = '0'
    dir_counter.add(1, {"dir.value": result})

    # Instrumented AWS SDK calls; each should produce a client span
    client = boto3.client("s3")
    client.list_buckets()

    client = boto3.client("ec2")
    instances = client.describe_instances()

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
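
For reference, here is a minimal sketch of what wiring logs up manually on the Python side would look like, in case the layer's auto-instrumentation is not doing it. This is an assumption-laden example, not the layer's actual mechanism: module paths are taken from opentelemetry-python / opentelemetry-exporter-otlp-proto-http 1.20.0, and the endpoint is a placeholder for the collector's OTLP/HTTP receiver.

import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

# Create and register a logger provider for the logs signal.
logger_provider = LoggerProvider()
set_logger_provider(logger_provider)

# Placeholder endpoint: the collector's OTLP/HTTP receiver (default port 4318).
exporter = OTLPLogExporter(endpoint="http://localhost:4318/v1/logs")
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Route stdlib logging records through the OpenTelemetry log pipeline.
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))

logging.getLogger().error("This is a log message")  # should now reach the collector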

What is the expected behavior?
I would expect to see output from the logging exporter for logs similar to what I get for traces and metrics.

What is the actual behavior?
I don't see any log output, only traces and metrics; see the example below.

Function Logs
0\nScopeSpans SchemaURL: \nInstrumentationScope opentelemetry.instrumentation.botocore 0.41b0\nSpan #0\n Trace ID : b348c171927592ab5c0d60183c9352f4\n Parent ID : 6cdd27675458cb3f\n ID : f7ce677c8611cddd\n Name : S3.ListBuckets\n Kind : Client\n Start time : 2023-12-28 16:59:09.835148036 +0000 UTC\n End time : 2023-12-28 16:59:10.177052834 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> rpc.system: Str(aws-api)\n -> rpc.service: Str(S3)\n -> rpc.method: Str(ListBuckets)\n -> aws.region: Str(us-east-1)\n -> aws.request_id: Str(VP5ZCEZERDWAKQ33)\n -> retry_attempts: Int(0)\n -> http.status_code: Int(200)\nSpan #1\n Trace ID : b348c171927592ab5c0d60183c9352f4\n Parent ID : 6cdd27675458cb3f\n ID : c9f69080451e23c4\n Name : EC2.DescribeInstances\n Kind : Client\n Start time : 2023-12-28 16:59:11.253592039 +0000 UTC\n End time : 2023-12-28 16:59:11.735706355 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> rpc.system: Str(aws-api)\n -> rpc.service: Str(EC2)\n -> rpc.method: Str(DescribeInstances)\n -> aws.region: Str(us-east-1)\n -> aws.request_id: Str(8f664204-f9dd-4e05-9079-1a76caf78c84)\n -> retry_attempts: Int(0)\n -> http.status_code: Int(200)\nScopeSpans #1\nScopeSpans SchemaURL: \nInstrumentationScope appl.tracer \nSpan #0\n Trace ID : b348c171927592ab5c0d60183c9352f4\n Parent ID : 4c77eecbc20517cd\n ID : 6cdd27675458cb3f\n Name : do_work\n Kind : Internal\n Start time : 2023-12-28 16:59:08.037059752 +0000 UTC\n End time : 2023-12-28 16:59:11.793806031 +0000 UTC\n Status code : Unset\n Status message : \nScopeSpans #2\nScopeSpans SchemaURL: \nInstrumentationScope opentelemetry.instrumentation.aws_lambda 0.41b0\nSpan #0\n Trace ID : b348c171927592ab5c0d60183c9352f4\n Parent ID : \n ID : 4c77eecbc20517cd\n Name : lambda_function.lambda_handler\n Kind : Server\n Start time : 2023-12-28 16:59:07.968461942 +0000 UTC\n End time : 2023-12-28 16:59:11.793881653 +0000 UTC\n Status code : Unset\n Status message : \nAttributes:\n -> faas.id: Str(arn:aws:lambda:us-east-1:026459568683:function:AOS_Trace_Demo38)\n -> faas.execution: Str(0058b917-9560-4762-9e51-550b33479b0f)\n","kind":"exporter","data_type":"traces","name":"logging"}

{"level":"info","ts":1703782753.9983406,"msg":"MetricsExporter","kind":"exporter","data_type":"metrics","name":"logging","resource metrics":1,"metrics":1,"data points":1}
{"level":"info","ts":1703782753.9984167,"msg":"ResourceMetrics #0\nResource SchemaURL: \nResource attributes:\n -> telemetry.sdk.language: Str(python)\n -> telemetry.sdk.name: Str(opentelemetry)\n -> telemetry.sdk.version: Str(1.20.0)\n -> cloud.region: Str(us-east-1)\n -> cloud.provider: Str(aws)\n -> faas.name: Str(AOS_Trace_Demo38)\n -> faas.version: Str($LATEST)\n -> faas.instance: Str(2023/12/28/[$LATEST]c2c54d603ad44f4d8ee20f683340f655)\n -> service.name: Str(demo-lambda)\n -> telemetry.auto.version: Str(0.41b0)\nScopeMetrics #0\nScopeMetrics SchemaURL: \nInstrumentationScope appl.meter \nMetric #0\nDescriptor:\n -> Name: dir.calls\n -> Description: The number of directory calls\n -> Unit: \n -> DataType: Sum\n -> IsMonotonic: true\n -> AggregationTemporality: Cumulative\nNumberDataPoints #0\nData point attributes:\n -> dir.value: Str(0)\nStartTimestamp: 2023-12-28 16:59:08.093701828 +0000 UTC\nTimestamp: 2023-12-28 16:59:13.791589871 +0000 UTC\nValue: 1\n","kind":"exporter","data_type":"metrics","name":"logging"}

END RequestId: 0058b917-9560-4762-9e51-550b33479b0f
REPORT RequestId: 0058b917-9560-4762-9e51-550b33479b0f Duration: 6073.88 ms Billed Duration: 6074 ms Memory Size: 128 MB Max Memory Used: 128 MB Init Duration: 1847.31 ms

fhuthmacher added the bug label on Dec 28, 2023
@atabakhafeez

Hi @fhuthmacher, were you able to figure out a solution for this? I am facing the same issue. In my case, I am trying to use the otlphttp exporter to send logs to Grafana Cloud from an AWS Lambda.
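
In case it helps, this is the shape of the exporter config I have been trying. It is only a sketch: the Grafana Cloud OTLP gateway endpoint and the credentials are placeholders for my stack, though the otlphttp exporter's endpoint/headers keys are standard collector configuration.

exporters:
  otlphttp/grafana:
    # Placeholder endpoint for a Grafana Cloud OTLP gateway.
    endpoint: "https://otlp-gateway-<region>.grafana.net/otlp"
    headers:
      # Placeholder: base64-encoded "<instance-id>:<api-token>".
      Authorization: "Basic <credentials>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp/grafana]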

@gshpychka
Contributor

Are logs supported in the collector at all?

@sebastienalbert

Hi everyone,

I have the same problem with the ADOT layer aws-otel-python-amd64-ver-1-25-0:1.
Do you have any solution to this problem? I need to send my Lambda logs to the Grafana Loki service using the OTLP protocol. This is the exporter shape I have in mind, sketched below.
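
The host and path here are assumptions based on Loki's native OTLP ingestion (available in Loki 3.0+), not something I have verified on the ADOT layer:

exporters:
  otlphttp/loki:
    # Placeholder host/port; Loki 3.x exposes native OTLP ingest under /otlp.
    endpoint: "http://<loki-host>:3100/otlp"
    headers:
      # Only needed when Loki multi-tenancy is enabled.
      X-Scope-OrgID: "<tenant-id>"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp/loki]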

Thanks for your help
