GAUGE multiprocess_mode single stat #154
Comments
A single file would have locking issues. I'd guess that the min or max mode will do what you need.
The problem with using min or max is that I would not know which one to use. This is a gauge, not a counter, so I do not know whether my value has increased or decreased since the last process. In my case, for example, I have a jobs gauge with two labels: total and done. total is constant while done increases until it reaches total. At that point the "master" job is done, and both labels are 0 again until the next "master" job starts.
If it's always 0 and x, then you want max. In general it sounds like you want the default per-pid behaviour.
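To illustrate what the per-pid modes do, here is a simplified, library-free sketch (the function names are hypothetical, not client_python's actual implementation, which stores values in per-pid mmap files): each process writes only its own slot, and the reader aggregates across slots at scrape time.

```python
# Simplified sketch of per-process gauge storage and mode-based aggregation.
# Names are illustrative; the real library uses per-pid mmap'd files.

def set_gauge(store, pid, name, value):
    """Each process writes only its own slot, so no cross-process lock is needed."""
    store.setdefault(name, {})[pid] = value

def aggregate(store, name, mode="max"):
    """Combine the per-pid values at read time, as the multiprocess collector does."""
    per_pid = store[name].values()
    if mode == "max":
        return max(per_pid)
    if mode == "min":
        return min(per_pid)
    if mode == "sum":
        return sum(per_pid)
    raise ValueError(mode)

store = {}
set_gauge(store, 1001, "jobs_done", 0)  # worker A just reset its gauge
set_gauge(store, 1002, "jobs_done", 5)  # worker B is mid-run
print(aggregate(store, "jobs_done", "max"))  # -> 5
```

In the "always 0 and x" case above, max returns x regardless of which worker wrote last, which is why it was suggested.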
I think maybe I haven't explained myself well enough here.
There's some form of IPC going on here that I don't understand. I'd suggest getting whatever is managing all this work to be the only one setting the gauge.
This is what is happening. The problem is that the manager is actually a uwsgi server (the workers themselves are separate servers). Since this is a multiprocess WSGI server, it is the only one writing the gauge, but still from different pids. Since a gauge is a single numerical value, I think it is important to have a way to know what the latest value is, even across multiple processes.
The latest value doesn't make sense in a multi-process app, as these are nominally independent processes.
I was thinking in the direction of the live modes, yet I didn't find any method in uwsgi to know when a process is dead and calling
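For reference, client_python does expose a cleanup hook for exactly this: `multiprocess.mark_process_dead(pid)` removes a dead worker's live-mode gauge files so `livesum`/`liveall` stop including it. The library documents wiring it to gunicorn's `child_exit` hook; uwsgi would need an equivalent worker-exit mechanism. A gunicorn-style sketch:

```python
# gunicorn_conf.py -- gunicorn example from client_python's docs pattern;
# uwsgi would need an equivalent worker-exit hook to make the same call.
from prometheus_client import multiprocess

def child_exit(server, worker):
    # Deletes the dead worker's live-mode gauge files (gauge_live*_<pid>.db)
    # in the multiprocess directory, removing it from future aggregation.
    multiprocess.mark_process_dead(worker.pid)
```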
I still don't know enough about your use case to understand if this makes sense in the first place.
I think the simplest way I can define it is something like a shared state between processes. |
@brian-brazil We have a metric which goes up and down (e.g. the number of admin users). What we are interested in is the latest value of the gauge, no matter what the other processes know about it. As an example:
What is relevant here is the latest value, which is 2, and which is neither the sum, the max, nor the min. Without knowing the internals of the library, here are a couple of uninformed ideas for how to solve it:
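To make the gap concrete, here is a small library-free sketch (names are illustrative) of why "latest across processes" needs a write timestamp per sample, which sum/min/max aggregation of bare per-pid values cannot recover:

```python
# Per-process samples stored as (timestamp, value); purely illustrative.
samples = {
    1001: (10.0, 5),  # pid 1001 set the gauge to 5 at t=10
    1002: (20.0, 2),  # pid 1002 set it to 2 later, at t=20
}

values = [v for _, v in samples.values()]
# "Latest" picks the value with the newest write timestamp.
latest = max(samples.values(), key=lambda tv: tv[0])[1]

print(sum(values), max(values), min(values), latest)  # -> 7 5 2 2
```

Only `latest` yields 2 here; the three timestamp-free aggregations give 7, 5, and 2 for different, unrelated reasons.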
I was able to work around this issue by using Custom Collectors. Basically, instead of counting a metric that is stored somewhere, the metrics are generated on the fly whenever
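That workaround looks roughly like this with client_python's custom-collector API (`JobsCollector` and the `job_store` dict are hypothetical stand-ins for whatever shared source of truth you have; the point is that `collect()` computes values at scrape time instead of reading per-pid files):

```python
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client.core import GaugeMetricFamily

class JobsCollector:
    """Builds the gauge on the fly at each scrape (hypothetical example)."""

    def __init__(self, job_store):
        # Any process-independent source of truth: a database, a file, etc.
        self.job_store = job_store

    def collect(self):
        g = GaugeMetricFamily('jobs', 'Jobs by state', labels=['state'])
        g.add_metric(['total'], self.job_store['total'])
        g.add_metric(['done'], self.job_store['done'])
        yield g

registry = CollectorRegistry()
registry.register(JobsCollector({'total': 10, 'done': 3}))
output = generate_latest(registry).decode()
```

Because the values are read from shared state at scrape time, it no longer matters which worker pid handles the scrape.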
Any solution for this? Having the same issue here, I want to avoid having to store items in a database just because of sync issues with Prometheus. |
I second @singerjess about this being important. @brian-brazil: I understand that there's some concern about requiring a lock. My thought, though, is that if you don't care so much about ordering always being exactly correct, then using no lock and simply attempting to take the most recently set value would be good enough. Given that with Prometheus we're mainly dealing with large-scale aggregate knowledge, my bet is this would be satisfactory for many people, including myself.
For anyone coming here, #847 is open which should solve the remaining use cases I am seeing in the comments. Since this issue is almost 6 years old let's work over there. |
In some cases, when using a gauge in multiprocess mode, what I really care about is the latest value updated by any of the workers.
For example, say I have a webapp running under uwsgi, and this app updates some gauge on each request. What I really want to know is the latest value updated. In that case, I don't really need a separate db file for each pid; a single file would be sufficient to update and read from (basically mimicking a single process, but using a db file to store the value across processes).