Task thread worker is busy waiting when the task queue is empty #1
Comments
Sorry, I'm not exactly sure what you mean. Is this a bug report or a feature request? Can you please provide more details?
I suppose @wuqiong means that the pool consumes a full core for every thread in the pool, even before it is given any tasks.
I see, thanks for the clarification. Each thread is assigned the worker function, which continuously checks the task queue for new tasks and calls std::this_thread::yield() whenever the queue is empty. The precise behaviour of std::this_thread::yield() is implementation-defined, but in general it offers the OS an opportunity to schedule other threads on the same core.

So let's say you have N cores and you're running a program using a thread pool of N threads, with no other programs running at the same time. Even if your program just sits and waits, without submitting any tasks to the queue, you should still see all N cores being utilized (I get about 25% utilization per core on my system) - that's just each thread's worker function continuously checking the task queue. However, if there are other multi-threaded applications running at the same time, then the OS will let them use those cores while the pool's threads are yielding. So the cores are not being taken over by the thread pool; they are actually shared with other programs.

Therefore I do not think this is a problem; it is actually the intended behaviour - but please let me know if you think otherwise.
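For reference, here is a minimal, self-contained sketch of the kind of yield-based worker loop described above. All names (running, queue_mutex, tasks, worker) are illustrative placeholders, not the library's actual members:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical shared state, not taken from the library itself.
std::atomic<bool> running{true};
std::mutex queue_mutex;
std::queue<std::function<void()>> tasks;

// Worker that polls the queue and yields its time slice when there is nothing to do.
void worker()
{
    while (running)
    {
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (!tasks.empty())
            {
                task = std::move(tasks.front());
                tasks.pop();
            }
        }
        if (task)
            task();                     // Run the task outside the lock.
        else
            std::this_thread::yield();  // Queue empty: offer the CPU to other threads.
    }
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(worker);

    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.push([] { std::cout << "hello from the pool\n"; });
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running = false;
    for (std::thread& t : pool)
        t.join();
}
```

Each idle iteration briefly takes the lock, finds the queue empty, and yields, which is what shows up as the partial per-core utilization described above.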
@wuqiong and @corot: I decided to look into possible alternatives to yielding when the queue is empty. In the new version (v1.2), which I committed today, the worker function by default sleeps for a duration given by a public member variable, which the user can adjust to suit their system.

The interesting thing is that in my benchmarks, sleeping for a short duration not only eliminated the wasted CPU time while idle, it was also slightly faster than yielding. Therefore, the overall performance of the thread pool actually improved thanks to the feedback you provided in this issue - thanks!

I am closing this issue now, since the problem of high CPU usage has been solved in v1.2. However, you are welcome to open a new issue if you find any problems or shortcomings with the new method.
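A sketch of how the idle branch might change in the sleep-based approach. The variable name sleep_duration and the 1000-microsecond value below are placeholders, not necessarily the library's actual member name or default:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Same hypothetical shared state as in the previous sketch.
std::atomic<bool> running{true};
std::mutex queue_mutex;
std::queue<std::function<void()>> tasks;

// Tunable idle sleep; name and default value are illustrative only.
std::chrono::microseconds sleep_duration{1000};

// Worker that sleeps for a configurable duration instead of yielding when idle.
void worker()
{
    while (running)
    {
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (!tasks.empty())
            {
                task = std::move(tasks.front());
                tasks.pop();
            }
        }
        if (task)
            task();
        else if (sleep_duration.count() > 0)
            std::this_thread::sleep_for(sleep_duration);  // Idle: sleep briefly, using almost no CPU.
        else
            std::this_thread::yield();                    // Zero duration: fall back to yielding.
    }
}
```

With a non-zero sleep, the idle workers spend almost all of their time blocked in the kernel, which is why the CPU usage drops to nearly zero.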
I think it is possible to remove the sleep and the yield by using a condition variable and a mutex:

```cpp
while (running)
{
    std::unique_lock<std::mutex> lk(run_mutex_);
    queue_cv_.wait(lk);  // Block until notified (a task was pushed or the pool is stopping).
    if (!running)
    {
        return;
    }
    std::function<void()> task;
    [...]
}
```
Thanks for the suggestion @teopiaz! :) This has previously been suggested in #12. However, the code people suggested there runs into a deadlock, and in any case does not seem to improve performance. Furthermore, I'm not sure there's a clear benefit to making this change in the first place. The implementations of condition variables may well rely on a similar sleep or spin mechanism internally, so I might not actually be gaining anything.

And lastly, I think it's actually a good thing that the sleep duration is a variable that can be changed, because it allows further optimization by tuning it to just the right value for a particular system. Again, if I use a condition variable instead, that tuning option goes away.
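For comparison, here is a hedged sketch of a condition-variable-based worker that avoids the missed-notification and deadlock pitfalls by waiting on a predicate and by updating the shared state under the same mutex before notifying. The names (queue_mutex, queue_cv, tasks, push_task, stop_pool) are illustrative, not the library's API:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// Illustrative shared state; names are placeholders, not the library's members.
std::mutex queue_mutex;
std::condition_variable queue_cv;
std::queue<std::function<void()>> tasks;
bool running = true;  // Guarded by queue_mutex so the predicate sees a consistent value.

// Producer side: push a task, then wake one sleeping worker.
void push_task(std::function<void()> task)
{
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.push(std::move(task));
    }
    queue_cv.notify_one();
}

// Worker side: block until there is a task or the pool is shutting down.
void worker()
{
    for (;;)
    {
        std::function<void()> task;
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            queue_cv.wait(lock, [] { return !tasks.empty() || !running; });
            if (!running && tasks.empty())
                return;  // Shutdown requested and no work left.
            task = std::move(tasks.front());
            tasks.pop();
        }
        task();  // Run the task with the lock released.
    }
}

// Shutdown side: flip the flag under the lock, then wake every worker.
void stop_pool()
{
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        running = false;
    }
    queue_cv.notify_all();
}
```

The predicate form of wait() makes the loop immune to spurious wakeups and to notifications that arrive before a worker starts waiting, which are the usual causes of lost-wakeup deadlocks in this pattern.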
I think that the current system works fine. However, it was my understanding that some operating systems provide condition variables that are not implemented with sleeps or spin locks, but are instead implemented by having the kernel block the waiting thread and move it back to a run state only when it is notified. Are you sure that condition variables are implemented the way you describe?
I'm definitely not sure about that. I looked it up and could not find information on exactly how condition variables are implemented in any operating system, so I made a guess. I did some more research now and I did find this article which shows that with condition variables the author gets 0% CPU utilization vs. 100% CPU utilization with a waiting loop. However, in my thread pool, even though the workers do employ a loop, there is virtually zero CPU utilization while they're waiting, as long as the sleep duration is set to a non-zero value.

That said, I did add to my TODO list a task to try to implement the thread pool using condition variables, so I can compare the two approaches when I find the time.

Also, FYI, I do plan to eventually convert the thread pool to using semaphores, which might work better, but will require C++20. So I will probably post that as a separate header file (or a separate repository altogether). Again, I just need to find the time to work on it... :\
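Since the semaphore-based version is only a plan at this point, the following is purely a speculative sketch of what a C++20 std::counting_semaphore worker could look like; none of these names come from the library:

```cpp
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <semaphore>

// Illustrative names only - the comment above merely mentions a planned C++20 design.
std::mutex queue_mutex;
std::queue<std::function<void()>> tasks;
std::counting_semaphore<> task_count{0};  // One permit per pending task or shutdown signal.

// Producer: push a task, then release one permit so exactly one worker wakes up.
void push_task(std::function<void()> task)
{
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        tasks.push(std::move(task));
    }
    task_count.release();
}

// Worker: block (with no polling) until a permit is available.
void worker()
{
    for (;;)
    {
        task_count.acquire();
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (tasks.empty())
                return;  // A permit with no task behind it is the shutdown signal.
            task = std::move(tasks.front());
            tasks.pop();
        }
        task();  // Run the task with the lock released.
    }
}

// Shutdown: release one extra permit per worker; each worker that finds the queue empty exits.
void stop_pool(std::size_t num_workers)
{
    task_count.release(static_cast<std::ptrdiff_t>(num_workers));
}
```

Each release() grants exactly one acquire(), so a worker sleeps until there is either a task or a shutdown permit for it, with no tunable sleep duration involved.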
If you set the sleep duration to a larger value, the workers use less CPU while idle, but a newly submitted task can sit in the queue for up to that long before a worker wakes up and notices it; a smaller value means the workers wake up more often and use more CPU. What you have works and is pretty simple.

Here is a threadpool that uses condition variables, and here is a source code implementation of condition variables. It turns out that C++11 already has std::condition_variable built in. So what's the advantage to using them? None in your application - that's because you don't care about latency. But another user of your library might.
Thanks for the explanation. I see your point and I will move this task to a higher priority. But still, it will take some time, since I have a lot of stuff to do before the fall term starts. I will use the implementations you linked as a reference when I get to it.
Thanks. There is no hurry; your system works well in its current form for my application.
Great-looking lib! But yes, latency is very important to me (high-performance network server), so I want to avoid busy waits. It would be great to have condition variables as an option in this lib.
@sketch34 thanks! You'll be happy to know that a draft for a version with condition variables has been suggested by another user (see #23). I want to do some tests on my end to make sure it doesn't have any issues, perhaps make some other changes, and then update the documentation. But it might take a few weeks since I'm a bit busy nowadays (the academic year just started). I'm also planning a C++20 version using semaphores, but that will come later. Stay tuned! :)
Update: v3.0.0, to be released in the next few days, will incorporate the changes discussed here.
So the task queue should be blocking.