Add support for per-thread initialization function #105

Closed

Conversation

@v1993 commented Apr 7, 2023

Pull request policy (please read)

Contributions are always welcome. However, I release my projects in cumulative updates after editing and testing them locally on my system, so my policy is not to accept any pull requests. If you open a pull request, and I decide to incorporate your suggestion into the project, I will first modify your code to comply with the project's coding conventions (formatting, syntax, naming, comments, programming practices, etc.), and perform some tests to ensure that the change doesn't break anything. I will then merge it into the next release of the project, possibly together with some other changes. The new release will also include a note in CHANGELOG.md with a link to your pull request, and modifications to the documentation in README.md as needed.

Describe the changes

This adds support for running a user-defined function once per thread upon that thread's initialization, allowing the user to assign custom resources to the threads, and closes #104. Actually storing and passing the resources around is left to the user, which can be accomplished fairly easily with thread_local variables. The function is given the thread's index in the thread pool as an argument, which is useful when the resources are already allocated by the caller. All threads complete initialization before control is returned to the caller, in order to allow capturing references to the caller's local variables.
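
For illustration, here is a minimal sketch of how this could look in user code (not taken from the PR's actual diff: the two-argument constructor is the overload proposed here, worker_index is a made-up thread_local resource, and the task calls assume the library's v3.x API):

    // Minimal usage sketch (illustration only).
    #include <cstdio>
    #include "BS_thread_pool.hpp"

    // Hypothetical per-thread resource, kept in a thread_local variable.
    thread_local unsigned int worker_index = 0;

    int main()
    {
        // The initialization function runs exactly once on each worker thread,
        // receiving that thread's index in the pool, and all threads finish
        // initializing before the constructor returns.
        BS::thread_pool pool(4, [](const unsigned int thread_id) { worker_index = thread_id; });

        // Tasks can rely on the thread_local resource already being set up.
        for (int i = 0; i < 8; ++i)
            pool.push_task([] { std::printf("Running on worker %u\n", worker_index); });
        pool.wait_for_tasks();
    }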

Testing

Have you tested the new code using the provided automated test program and/or performed any other tests to ensure that it works correctly? If so, please provide information about the test system(s):

  • CPU model, architecture, # of cores and threads: Intel(R) Core(TM) i3-10100 CPU @ 3.60GHz, 4 cores, 8 threads
  • Operating system: Ubuntu Linux 22.10
  • Name and version of C++ compiler: g++ (Ubuntu 12.2.0-3ubuntu1) 12.2.0 and Ubuntu clang version 15.0.6
  • Full command used for compiling, including all compiler flags: g++ -std=c++17 -o BS_thread_pool_test -Wall -O3 BS_thread_pool_test.cpp (replace g++ with clang++ for Clang)

Additional information

I've also fixed a minor mistake in the full version's tests, where a few tests would fail on a single-core machine because the half-concurrency value was equal to zero.
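
For reference, one possible shape of such a guard (the actual fix in the test program may look different):

    // Illustration only: ensure the "half concurrency" used by the tests is
    // never rounded down to zero on a single-core machine.
    #include <algorithm>
    #include <cstdio>
    #include <thread>

    int main()
    {
        const unsigned int half_concurrency =
            std::max(std::thread::hardware_concurrency() / 2, 1u);
        std::printf("Half concurrency: %u\n", half_concurrency);
    }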

Would you be interested in a very simple build file for the Meson build system, submitted as a separate PR? It would simplify building and running the tests by a fair bit, and would also make inclusion in https://github.com/mesonbuild/wrapdb seamless.

P.S.: why are .cpp files using CRLF line endings?

@v1993 mentioned this pull request Apr 7, 2023
@bshoshany (Owner)

Thanks for this pull request, @v1993!

I'm not sure I fully understand why this is needed. How does running the initialization function help, given that the tasks themselves have no control over which thread they will run on? It seems to me that this will only be relevant if every single task run in the pool is of the same type, so they all use the same resources, but that is not appropriate for a generic thread pool that runs different types of tasks on the same threads. And in cases where it is needed, I suspect there must be an easier way to do this initialization on the user side, without modifying the thread pool.

Can you give me a concrete code example of how such initialization functions would be used in practice?

@v1993 (Author) commented May 1, 2023

While I had to give up on this particular idea due to driver issues, here is what I had:

  • Create shared OpenGL contexts on the main thread: one for the main thread itself, and one per thread in the pool:
			std::vector<cen::gl_context> worker_thread_contexts;
			worker_thread_contexts.reserve(worker_thread_count);
			for(BS::concurrency_t i = 0; i < worker_thread_count; ++i) {
				worker_thread_contexts.emplace_back(window);
			}
  • Create the thread pool, giving each spawned thread its own OpenGL context via the per-thread initialization function:
			std::mutex sdl_mutex{};
			pool = std::make_unique<BS::thread_pool>(worker_thread_count,
			[this, &worker_thread_contexts, &sdl_mutex, configure_context, gl_symbol_resolver] (unsigned int thread_id) {
				const auto context_handle{WORKER_THREAD_CONTEXT_BASE + thread_id};
				auto& thread_gl_context{worker_thread_contexts[thread_id]};
				{
					// Only this specific call has to lock
					std::unique_lock lock{sdl_mutex};
					if (!thread_gl_context.make_current(window)) {
						std::cerr << fmt::format("Failed to make GL context current: {}!\n", SDL_GetError()) << std::flush;
						std::abort();
					}
				}
				glbinding::Binding::initialize(context_handle, gl_symbol_resolver);
				configure_context();
				// worker_thread_data is thread_local
				worker_thread_data = std::make_unique<WorkerThreadData>(thread_id, std::move(thread_gl_context));
			});

OpenGL contexts are thread-specific and must be initialized (made current) once per thread. There's nothing wrong with a function running in such a thread pool not making any GL calls, but the per-thread context is needed for the functions that do.

From my understanding, there is no clean way to do this without an initialization function. Please correct me if I'm wrong.
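
To make the pattern concrete without the OpenGL/SDL dependencies, here is a stripped-down, self-contained analogue of the code above (illustration only: FakeContext merely stands in for cen::gl_context, the two-argument constructor is the overload proposed in this PR, and the task calls assume the library's v3.x API):

    // A stripped-down, GL-free analogue of the code above (illustration only).
    #include <cstdio>
    #include <memory>
    #include <utility>
    #include <vector>
    #include "BS_thread_pool.hpp"

    // Stand-in for a thread-affine resource such as an OpenGL context.
    struct FakeContext
    {
        int id = 0;
        void make_current() { /* bind the resource to the calling thread */ }
    };

    // Bundle of per-thread resources, owned by a thread_local pointer as in the real code.
    struct WorkerThreadData
    {
        WorkerThreadData(const unsigned int id, FakeContext ctx) : thread_id(id), context(std::move(ctx)) {}
        unsigned int thread_id;
        FakeContext context;
    };

    thread_local std::unique_ptr<WorkerThreadData> worker_thread_data;

    int main()
    {
        const BS::concurrency_t worker_thread_count = 4;

        // Resources are allocated up front by the caller, one per worker thread...
        std::vector<FakeContext> worker_thread_contexts;
        worker_thread_contexts.reserve(worker_thread_count);
        for (BS::concurrency_t i = 0; i < worker_thread_count; ++i)
            worker_thread_contexts.push_back(FakeContext{static_cast<int>(i)});

        // ...and handed over to the workers by the per-thread initialization function,
        // which can capture references to the caller's locals because all threads
        // finish initializing before the constructor returns.
        BS::thread_pool pool(worker_thread_count,
            [&worker_thread_contexts](const unsigned int thread_id)
            {
                worker_thread_contexts[thread_id].make_current();
                worker_thread_data = std::make_unique<WorkerThreadData>(
                    thread_id, std::move(worker_thread_contexts[thread_id]));
            });

        // Tasks that need the resource simply read the thread_local data.
        pool.push_task([]
        {
            std::printf("Task on worker %u using context %d\n",
                        worker_thread_data->thread_id, worker_thread_data->context.id);
        });
        pool.wait_for_tasks();
    }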

@bshoshany (Owner)

Update: Unfortunately, I have not yet had time to incorporate this PR into v3.5.0, which was released today. However, it is now on my TODO list for the next release. I just need to find some time to figure out exactly how this feature can be used (which might require familiarizing myself with OpenGL first, since I don't have any experience with it...), what is the best way to implement it (which may or may not be the way you did it), and how to describe it in the documentation.

Meanwhile, I am closing this PR, since it is on track to be incorporated in the next release - but please keep the branch open. Thanks again!
