
Question: What is the expected export rate on trace exporter? #388

Closed
tyrone-anz opened this issue May 11, 2022 · 2 comments
Labels
question Further information is requested

Comments


tyrone-anz commented May 11, 2022

I am looking into configuring my services with an appropriate sampling rate on traces.

I understand that the trace exporter sends to the BatchWriteSpans RPC. With a batch span processor (BSP), only one batch is exported at a time. When too many spans are created, the BSP simply drops spans once its queue is full. To avoid that scenario, I'd like to reduce the number of spans being created, but not so far that the traces stop being useful. Ideally, the creation rate should roughly match the achievable export rate.

From testing, with a batch size of 512, the exporter finishes in 3-5 seconds. With a higher batch size of 1024, it takes around 5-10 seconds. From that, I estimate the export rate at around 120-160 spans per second. Is there documentation that supports this number, or any documentation describing the supported rates? Or is this something that can really vary (go higher or lower) in the future?

@tyrone-anz tyrone-anz changed the title What is the expected export rate on trace exporter? Question: What is the expected export rate on trace exporter? May 11, 2022
@dashpole
Contributor

Reading through the limits for the cloud trace API, there isn't a limit on the number of spans per BatchWriteSpans call. Based on issues I can see, it looks like you can get deadline exceeded errors if you make the batch size too large, but the issues don't mention specific batch sizes. I don't believe there is a fixed export rate behind the API. I would imagine that with larger batch sizes, you should be able to achieve higher throughputs. You may need to increase the timeout on the exporter (e.g. 12s -> 30s) to get really high rates.
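As a sketch of that tuning with the OpenTelemetry Go SDK and this exporter — this assumes the exporter's `WithTimeout` option and the SDK's `WithMaxExportBatchSize`/`WithMaxQueueSize` batcher options; the 30s, 1024, and 4096 values are illustrative, not recommendations:

```go
package main

import (
	"context"
	"log"
	"time"

	texporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/trace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Raise the per-call timeout so larger batches have time to complete
	// before a deadline-exceeded error.
	exporter, err := texporter.New(texporter.WithTimeout(30 * time.Second))
	if err != nil {
		log.Fatal(err)
	}

	// Larger batches tend to yield higher throughput per BatchWriteSpans
	// call; a deeper queue gives the BSP more headroom before dropping.
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter,
			sdktrace.WithMaxExportBatchSize(1024),
			sdktrace.WithMaxQueueSize(4096),
		),
	)
	defer tp.Shutdown(context.Background())
}
```

The queue size matters as much as the batch size here: drops happen when spans arrive faster than batches drain, so the queue is what absorbs bursts.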

I unfortunately haven't done any stress-testing of the API myself, but if you do end up finding values that work it would be great to document them with the exporter.

@tyrone-anz
Author

@dashpole I observed the same: higher batch sizes yield higher throughput. For instance, with a batch size of 512, throughput is around 122 spans per second (each API call takes 3-5 seconds). At a batch size of 1024, it's about 166 spans per second.
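Those throughputs are consistent with simply dividing batch size by per-call latency. A quick back-of-the-envelope sketch (the 4.2s and 6.2s latencies are assumed midpoints of the observed ranges, not measured values):

```go
package main

import "fmt"

// throughput estimates spans per second for a given batch size and
// observed per-BatchWriteSpans-call latency.
func throughput(batchSize int, latencySeconds float64) float64 {
	return float64(batchSize) / latencySeconds
}

func main() {
	fmt.Printf("%.0f\n", throughput(512, 4.2))  // ≈122 spans/s
	fmt.Printf("%.0f\n", throughput(1024, 6.2)) // ≈165 spans/s
}
```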

The spans created by my services contain a lot of attributes and sometimes have events, so the throughput I am getting may be lower due to the size of individual spans.

In the meantime, I'll consider the throughput to be around 120 spans per second.

@jsuereth jsuereth added the question Further information is requested label Aug 22, 2022
@damemi damemi closed this as completed Jan 30, 2023