Unable to set sample_rate for nginx from v1.1.4 onward #148
Hi @tahnik, you've opened the issue on the correct project. A workaround in the meantime is to use sampling rules in your JSON config file. The example below will sample 10% of initiated traces, and will send the […]:

```json
{
  "service": "nginx",
  "operation_name_override": "nginx.handle",
  "agent_host": "dd-agent",
  "agent_port": 8126,
  "sampling_rules": [{"sample_rate": 0.1}]
}
```

Let me know if that works for you, but we can keep the issue open until the bug is fixed.
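(For anyone else landing here: this JSON file is the one nginx-opentracing points at via its `opentracing_load_tracer` directive, e.g. `opentracing_load_tracer /usr/local/lib/libdd_opentracing_plugin.so /etc/nginx/dd-config.json;`. The plugin and config paths there are only illustrative assumptions; adjust them to wherever your build actually places the Datadog plugin and the config file.)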
Thank you for your prompt response @cgilmour. As I mentioned, setting the `sampling_rules` sends the […] (nginx ----> node server). If I set the sampling rules to […]
Right, so what you used to see with priority sampling disabled and a sampling rate set globally was a lower number of traces sent to the agent. The new behavior is that all traces are sent to the agent, even when the […]. However, not all traces are sent from the agent to Datadog. The remainder are dropped, but they get counted via the metrics, so that the service pages can still show total requests, errors, latency, and endpoint information. Does that match what you're seeing for the nginx side of things?
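(For context on how that keep/drop decision travels: in Datadog's priority-sampling model the decision rides along with the trace and is propagated to downstream services in the `x-datadog-sampling-priority` header, roughly 0 = drop and 1 = keep, which is why the agent can still compute request, error, and latency metrics from everything it receives before discarding the unsampled traces. The header name and values are the standard Datadog ones; I haven't re-checked exactly how this library version emits them.)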
I see, that means that nginx is always being sampled at 100%. However, the agent should not be sending all the traces to Datadog. Although I will have to confirm this, wouldn't that hurt nginx performance quite a bit if traces are being collected for every single request?
From one perspective, yes: nginx sends trace data to the agent 100% of the time. In terms of performance, yes, it will have an impact, but the exact amount needs to be measured. There's a plan to do some benchmarking and optimizing at some point, to get updated numbers on those things. The urgency is quite low, though, because nginx performance with tracing enabled has not been highlighted as an issue.
I'm going to add my experiences here as well. In a high-traffic datacenter (40k req/s), a sample rate of 1 (100%) results in about 100 GB per hour of trace data. As we are working on adopting APM, this immediate jump in spend with Datadog isn't something that is maintainable. Looking through all the documentation and information on here, I'm not able to see a way to change the sample rate when the Datadog agent and NGINX are deployed to a Kubernetes cluster via Helm. In the current setup, 100% of the spans are sent to the Datadog agent, and 100% of those spans are then also sent to Datadog rather than being sampled out.
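(For a rough sense of scale from the numbers quoted above: 100 GB/hour at 40k req/s works out to about 100e9 bytes ÷ (40,000 × 3,600) ≈ 0.7 KB of trace data per request, so a sample rate of 0.1 that was actually honored end-to-end would land in the neighborhood of 10 GB/hour. These are back-of-the-envelope figures derived only from the figures in this comment.)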
There is the CPU/memory/IO cost of sending traces to the local Datadog agent, and separately there is the internet bandwidth cost of the agent sending traces to Datadog. Setting […]
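(If it helps to see what finer-grained control could look like, below is a sketch of a config with a per-service rule plus a catch-all rate. It assumes the sampling rules accept an optional `service` matcher the way other Datadog tracers do; I haven't verified that against this library's schema, so treat the rule field names as an assumption, not a documented API.)

```json
{
  "service": "nginx",
  "agent_host": "dd-agent",
  "agent_port": 8126,
  "sampling_rules": [
    {"service": "nginx", "sample_rate": 0.05},
    {"sample_rate": 0.5}
  ]
}
```

The intent of the sketch is that the first rule would keep 5% of traces initiated by the nginx service and the second would act as a 50% default for anything else, if the matching behaves as it does in other Datadog tracers.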
I am using nginx-opentracing with a nodejs backend. At first I tried v1.1.2 of dd-opentracing, which uses the `sample_rate` in `dd-config.json` if `dd.priority.sample` is set to `false`. However, it doesn't set any `x-datadog-` headers, so the sample rate is not correctly propagated to the backend. From v1.1.4 onward, the backend does get the headers and it correctly uses the sample rate based on the `sampling_rules`. However, nginx traces stay at 100%, so nginx is not honoring the `sample_rate` or `sampling_rules`. How do I control the nginx sample rate in that case?

I know it sounds like an nginx-opentracing issue, but it seems to be originating from dd-opentracing, so I am opening the issue here.
Thank you.
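(For anyone trying to reproduce this, a minimal sketch of the kind of config described above, i.e. the v1.1.2-style setup with a global `sample_rate` and priority sampling turned off, might look like the following. The key names are taken verbatim from this report and from the workaround comment above, not re-checked against the library's schema:)

```json
{
  "service": "nginx",
  "operation_name_override": "nginx.handle",
  "agent_host": "dd-agent",
  "agent_port": 8126,
  "sample_rate": 0.1,
  "dd.priority.sample": false
}
```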