issues/271: invoke custom emitters from forwarder, not agent #272

Merged 1 commit into DataDog:master on Jan 29, 2013
Conversation

charles-dyfis-net (Contributor)

Permits statsd output to be funneled through custom emitters, which previously could only handle checks.

Because emitters are typically blocking and the forwarder is asynchronous, each emitter gets its own worker thread that consumes events from a queue with a fixed maximum size.
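
A minimal sketch of the worker-thread-per-emitter pattern described above; the class name, queue size, and error handling are illustrative, not the patch's actual code:

```python
# Illustrative sketch only: class name, queue size, and error handling are
# hypothetical, not the agent's actual implementation.
import threading

try:
    from Queue import Queue, Full   # Python 2, as the agent used at the time
except ImportError:
    from queue import Queue, Full   # Python 3

MAX_QUEUE_SIZE = 100  # the "fixed maximum size" mentioned above


class EmitterThread(threading.Thread):
    """Runs one (potentially blocking) custom emitter off a bounded queue."""

    def __init__(self, emitter, logger, agentConfig):
        threading.Thread.__init__(self)
        self.daemon = True                # don't keep the process alive on exit
        self._emitter = emitter           # callable(message, logger, agentConfig)
        self._logger = logger
        self._agentConfig = agentConfig
        self._queue = Queue(maxsize=MAX_QUEUE_SIZE)

    def run(self):
        while True:
            message = self._queue.get()
            try:
                # Blocking network I/O is fine here: we are off the ioloop thread.
                self._emitter(message, self._logger, self._agentConfig)
            except Exception:
                self._logger.exception("custom emitter failed")

    def enqueue(self, message):
        """Called from the Tornado thread; must never block."""
        try:
            self._queue.put_nowait(message)
        except Full:
            self._logger.warning("emitter queue full; dropping message")
```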

@ghost assigned clutchski on Nov 21, 2012
@clutchski (Contributor)

Hey,

I thought about this a little bit. I totally agree we should have custom emitters in the forwarder. But I don't think we should add threads to a Tornado app (since the whole point of Tornado is to be single-threaded). If we do this, I'd rather see it fit in with the Tornado loop. Also, we already have two emitters (datadoghq & pup) with slightly different patterns. If we add a third, I'd rather we generalize into a single pattern that works for multiple targets, rather than three separate ways, one per target. Does that sound reasonable?

@charles-dyfis-net (Contributor, Author)

The custom emitter API here is patterned after the DataDog emitter used directly from check code: a callable that takes message, logger, and agentConfig parameters. If we wanted to, we could insert the synchronous emitter used for checks in place of the async emitter currently used for submits to DataDog proper, and it should Just Work.
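
For illustration, a custom emitter under that API is just a callable with the (message, logger, agentConfig) shape; the TCP sink and config keys below are hypothetical, not anything in the agent tree:

```python
# Hypothetical custom emitter with the (message, logger, agentConfig)
# callable signature described above; the config keys and the TCP sink
# are made up purely for illustration.
import json
import socket


def tcp_sink_emitter(message, logger, agentConfig):
    """Serialize the payload and push it, synchronously, to a TCP listener."""
    host = agentConfig.get('custom_sink_host', 'localhost')
    port = int(agentConfig.get('custom_sink_port', 2003))
    payload = json.dumps(message) + "\n"
    sock = socket.create_connection((host, port), timeout=5)
    try:
        sock.sendall(payload.encode('utf-8'))
        logger.debug("emitted %d bytes to %s:%d", len(payload), host, port)
    finally:
        sock.close()
```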

That's a somewhat silly thing to do, since we have a debugged and working async emitter -- but the point is to illustrate that I'm not adding a third API, but rather allowing the second one that already exists to be used in a different context.

Given that part of the point of custom emitters is to be able to talk to potentially diverse protocols and systems, adding a requirement that async implementations exist for each of those is unfortunate -- particularly as the target is Tornado, for which a much smaller set of client libraries is available than for Twisted (which I personally find distasteful, but... well, it does have a wealth of clients prewritten). The one escape hatch we have is duplicating what we have for pup: requiring that anyone with a custom emitter run a separate process with an HTTP server that does its own proxying from HTTP->{whatever}. That's doable, to be sure, but unfortunate from a few perspectives -- it's more moving parts, more latency, and more CPU overhead.

Finally, with respect to the whole point of Tornado being single-threaded operation -- yes, Tornado IS a single-threaded async server. This code doesn't modify or change that in any way whatsoever -- all Tornado code continues to operate in only a single thread. What I do change is allowing Tornado to enqueue data items which are processed on different threads -- but that's an entirely one-way trip: we're emitting content which may be handled on other threads, yet code running in Tornado neither knows nor cares about anything beyond the enqueue operation: no callbacks, no status checks, no blocking. As such this is entirely consistent with the intent and behavior behind async IO operation, and is (in fact) the only correct way to call code written to do synchronous IO from within an async IO framework.
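
A sketch of that one-way hand-off on the Tornado side, reusing the hypothetical EmitterThread from the earlier sketch; the handler name and route are illustrative, not the forwarder's actual ones:

```python
# Sketch of the one-way hand-off: the Tornado handler only enqueues and
# returns; the EmitterThread workers sketched earlier do the blocking I/O.
import tornado.web


class IntakeHandler(tornado.web.RequestHandler):
    def initialize(self, emitter_threads):
        self._emitter_threads = emitter_threads   # list of EmitterThread workers

    def post(self):
        payload = self.request.body
        # The only interaction with the workers is the enqueue itself:
        # no callbacks, no status checks, no blocking of the ioloop.
        for worker in self._emitter_threads:
            worker.enqueue(payload)
        self.write("OK")


def make_app(emitter_threads):
    # Pass the worker list to the handler via the URLSpec kwargs.
    return tornado.web.Application([
        (r"/intake", IntakeHandler, dict(emitter_threads=emitter_threads)),
    ])
```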

clutchski merged commit 5a72220 into DataDog:master on Jan 29, 2013
@clutchski (Contributor)

@charles-dyfis-net Thanks for the patch. Much appreciated.
