Continuous "Exporting failed", "Dropping data", and "Sender failed" messages in aws-otel-collector.log #551
Comments
Could you attach more collector logs?
Hello @mxiamxia - this is all the information in the collector logs; the stack trace ends with go.opentelemetry.io/collector/consumer/consumerhelper.ConsumeMetricsFunc.ConsumeMetrics. Let me know how to fix this.
Can you turn on debug logs please? Run echo "loggingLevel=DEBUG" | sudo tee -a /opt/aws/aws-otel-collector/etc/extracfg.txt and then restart the collector if running on EC2.
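For readers following along, here is a minimal sketch of the debug-logging steps described above, assuming the default EC2 install layout quoted in this thread (the ctl script path comes from a later comment; the log directory is an assumption):

```bash
# Append the debug flag to the collector's extra config file (path from the comment above).
echo "loggingLevel=DEBUG" | sudo tee -a /opt/aws/aws-otel-collector/etc/extracfg.txt

# Restart the collector with the ctl script quoted later in the thread.
sudo /opt/aws/aws-otel-collector/bin/aws-otel-collector-ctl \
  -c /opt/aws/aws-otel-collector/etc/config.yaml -a stop
sudo /opt/aws/aws-otel-collector/bin/aws-otel-collector-ctl \
  -c /opt/aws/aws-otel-collector/etc/config.yaml -a start

# Tail the log; the filename is from the issue title, the directory is assumed.
sudo tail -f /opt/aws/aws-otel-collector/logs/aws-otel-collector.log
```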
Attaching again the zipped log file with DEBUG logging level. Could you escalate and help resolve this, as it has been going on for the last few weeks?
{2021-06-28 12:12:53.950912958 -0400 EDT m=+60.073175088, Level:debug, Caller:github.com/open-telemetry/opentelemetry-collector-contrib/exporter/awsemfexporter@v0.22.0/cwlog_client.go:158, Message:cwlog_client: creating stream fail, Stack:} I saw a similar error when I ran on EC2 but not when running in a Docker container, so I think there might be a problem with the way we are getting credentials. Can you try running this in a Docker container and passing the access key to the container manually with "-e AWS_ACCESS_KEY_ID={your access key here} -e AWS_SECRET_ACCESS_KEY={secret key here}"? A guide on how to run with Docker is https://github.com/aws-observability/aws-otel-collector/blob/main/docs/developers/docker-demo.md
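For illustration, a rough sketch of what running the collector in Docker with explicitly passed credentials could look like; the image name, port, mount path, and region below are assumptions, and the docker-demo guide linked above is the authoritative reference:

```bash
# Image name, port, region, and mount path are assumptions; adjust to match
# the docker-demo guide linked above.
docker run --rm \
  -e AWS_ACCESS_KEY_ID="{your access key here}" \
  -e AWS_SECRET_ACCESS_KEY="{secret key here}" \
  -e AWS_REGION="us-east-1" \
  -p 4317:4317 \
  -v "$(pwd)/config.yaml":/etc/otel-config.yaml \
  public.ecr.aws/aws-observability/aws-otel-collector:latest \
  --config=/etc/otel-config.yaml
```

If exports succeed with explicit keys but fail under the instance role, that would point at the credential chain on the host rather than the exporter itself, which is the hypothesis raised in this comment.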
Hello @sethAmazon - Can you expand on what you are saying above? My app is running in Docker. Are you asking me to run the OTel collector daemon process in a Docker container?
How are you passing in the credentials for the OTel collector?
I am not passing any credentials. I just run sudo /opt/aws/aws-otel-collector/bin/aws-otel-collector-ctl -c /opt/aws/aws-otel-collector/etc/config.yaml -a start on my Linux host; this aws-otel-collector is not running in Docker. Each of my EC2 Linux hosts has an AWS IAM role attached to it. How would you like me to pass the credentials to this aws-otel-collector process running on the Linux hosts?
I am facing a similar problem while exporting spans to AWS X-Ray; metrics work fine for me. {2021-08-27 00:47:35.104061377 +0000 GMT m=+761.062074882, Level:error, Caller:go.opentelemetry.io/collector@v0.29.1-0.20210630003519-14d917479ef3/exporter/exporterhelper/queued_retry.go:245, Message:Exporting failed. Try enabling retry_on_failure config option., Stack:go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
I ran a debug session and found that the exporter sends the request with the trace segment but gets an error response with an empty line. {2021-08-27 19:31:33.259092001 +0000 GMT m=+68199.217105694, Level:debug, Caller:github.com/open-telemetry/opentelemetry-collector-contrib/exporter/awsxrayexporter@v0.29.1-0.20210630203112-81d57601b1bc/awsxray.go:54, Message:TracesExporter, Stack:}
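As a hedged illustration of the retry option the error message above refers to: if the awsxray exporter build in use exposes the standard collector exporterhelper settings (not guaranteed for every version), the config.yaml fragment would look roughly like this:

```yaml
exporters:
  awsxray:
    # Region is an assumption; use the region your collector runs in.
    region: us-east-1
    # Standard exporterhelper retry settings; they only take effect if this
    # exporter version actually exposes them.
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s
```

Retries only mask the symptom, though; the empty error response above still needs a root cause.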
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days.
Hi @georges-git, can we do a live debug session on this sometime?
I had this problem on a Python Lambda function. After tweaking the code, I could reproduce the problem by simply using boto3's download_file.
I eventually stumbled upon aws-observability/aws-otel-lambda#10, so I assume this might be related. I switched from download_file to get_object and the error went away. I also tested with upload_file, and that caused the same error.
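For reference, a small sketch of the kind of change described in this comment, with hypothetical bucket and key names; the point is only that the plain get_object call avoided the error that the managed-transfer download_file (and upload_file) calls triggered in that Lambda:

```python
import boto3

s3 = boto3.client("s3")

# The managed-transfer call that reproduced the error in this comment:
# s3.download_file("example-bucket", "path/to/object", "/tmp/object")

# The replacement that made the error go away:
response = s3.get_object(Bucket="example-bucket", Key="path/to/object")
data = response["Body"].read()  # raw bytes of the object
```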
This issue was closed because it has been marked as stale for 30 days with no activity.
Hello @mxiamxia and AWS group - I keep getting the following messages in the OTel collector log. How do I fix this? (A config sketch follows the log below.)
{2021-06-23 09:52:04.43621453 -0400 EDT m=+120.147512184, Level:error, Caller:go.opentelemetry.io/collector@v0.27.0/exporter/exporterhelper/queued_retry.go:173, Message:Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures., Stack:go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send
go.opentelemetry.io/collector@v0.27.0/exporter/exporterhelper/queued_retry.go:173
go.opentelemetry.io/collector/exporter/exporterhelper.NewMetricsExporter.func2
go.opentelemetry.io/collector@v0.27.0/exporter/exporterhelper/metrics.go:103
go.opentelemetry.io/collector/consumer/consumerhelper.ConsumeMetricsFunc.ConsumeMetrics
go.opentelemetry.io/collector@v0.27.0/consumer/consumerhelper/metrics.go:29
go.opentelemetry.io/collector/service/internal/fanoutconsumer.metricsConsumer.ConsumeMetrics
go.opentelemetry.io/collector@v0.27.0/service/internal/fanoutconsumer/consumer.go:51
go.opentelemetry.io/collector/processor/batchprocessor.(*batchMetrics).export
go.opentelemetry.io/collector@v0.27.0/processor/batchprocessor/batch_processor.go:285
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).sendItems
go.opentelemetry.io/collector@v0.27.0/processor/batchprocessor/batch_processor.go:183
go.opentelemetry.io/collector/processor/batchprocessor.(*batchProcessor).startProcessingCycle
go.opentelemetry.io/collector@v0.27.0/processor/batchprocessor/batch_processor.go:144}
{2021-06-23 09:52:04.436239353 -0400 EDT m=+120.147537026, Level:warn, Caller:go.opentelemetry.io/collector@v0.27.0/processor/batchprocessor/batch_processor.go:184, Message:Sender failed, Stack:}
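For the sending_queue and retry_on_failure options that this error message suggests, a rough config.yaml fragment is sketched below, assuming the awsemf exporter version in use actually exposes these standard exporterhelper settings (that is not guaranteed, and the region value is a placeholder):

```yaml
exporters:
  awsemf:
    # Placeholder region; use your own.
    region: us-east-1
    # Standard exporterhelper queue/retry settings named in the error above;
    # they only help with temporary failures and only if this exporter
    # version exposes them.
    sending_queue:
      enabled: true
      queue_size: 5000
    retry_on_failure:
      enabled: true
      max_elapsed_time: 300s
```

If the underlying cause is the credential problem suggested earlier in the thread, queueing and retrying will not fix it; the "creating stream fail" error from cwlog_client has to be resolved first.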