Single threaded alternative processor #773
Conversation
@JelleAalbers big thanks for the deep PR! We will review it before the next round of heavy low-level processing, which I think is where this PR will be most beneficial.
Hey @JelleAalbers, can I resolve the conflicts and push to your forked repo?
Thanks @JelleAalbers. This finally motivated me to read through and understand more about the … In particular:
Thanks @JelleAalbers, I tested the PR with no errors or differences in results.
Hi Dacheng, great! Thanks for testing.

In threaded mode, the operating system ultimately controls what strax code runs when (though we guide this with various locks/conditions in lazy mode). I think it's possible a thread gets interrupted for a while between when it does the final operation on some data and when it deletes it.

One dirty optimization definitely helped a little: https://github.com/AxFoundation/strax/pull/773/files#diff-0052f4762c34c9846086909453739399c2c86e4ce7cd5f15a39553f8dab348eeR209. It's explained more in the stackoverflow linked below that line. Basically, if you yield and then del data, the data is still kept around until the next time the iterator is run. So you have to del first, keep the data in an anonymous container, and pop it when you yield. This is nothing thread-specific though, and I think there are other places in strax where we could do this. (But the marginal savings might not be worth it.)

I hope the single threaded processor allows better tracking of where and when the memory and time is used. If so, we might get more savings in the future. When I looked at it, there was extra memory of about a chunk worth of data allocated somewhere outside the python layer -- at least no python tool I tried could find a reference to it. (There was no leak, i.e. it did not grow, so maybe this was just some kind of preallocation?)
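The del-before-yield pattern described above can be sketched as follows. This is a minimal illustration with hypothetical generator names, not the PR's actual code:

```python
def naive_chunks(n):
    # Anti-pattern: after `yield`, the local name `data` still references
    # the chunk, so it stays alive for as long as the generator is suspended.
    for _ in range(n):
        data = list(range(1000))  # stand-in for a chunk of strax data
        yield data
        del data  # only runs once the iterator is resumed

def frugal_chunks(n):
    # Pattern from the comment above: delete the named reference first,
    # park the chunk in an anonymous container, and pop it while yielding,
    # so the suspended generator frame holds no reference to the chunk.
    for _ in range(n):
        data = list(range(1000))
        holder = [data]
        del data
        yield holder.pop()
```

With `naive_chunks`, the consumer's reference plus the generator frame's reference both keep each chunk alive between iterations; with `frugal_chunks`, only the consumer's reference remains.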
@JelleAalbers Yes, the PR helps a lot with tracking the memory usage. Thanks for the explanation. I will merge the PR after the testing is finished.
What is the problem / what does the code in this PR do
This adds a single-threaded alternative processor backend that avoids the mailbox system and uses less memory.
Strax has a custom-built concurrency backend (the 'mailbox system'). It works, but it has problems and we hope to eventually replace it. @jmosbacher has done much work towards this; see also the discussion in #81.
A single-threaded processor won't work for the DAQ, but it could help with reprocessing, analysis, and debugging:
Can you briefly describe how it works?
I started from a commit from Yossi's #410 to allow processors to be selected at runtime by the context. Currently `BaseProcessor` is a bit empty; we can see if there is more we can generalize into it.

The `SingleThreadProcessor` is an alternative to the `ThreadedMailboxProcessor`. To keep the processor classes comparable, I put most of the mailbox-replacing logic into a `PostOffice` class. There is only one `PostOffice` instance per processor, it handles all the data types ('topics'), and it is much simpler than a single mailbox since it needs no threading or locking. We could eventually split this off into a separate package, maybe together with mailboxes.

The trickiest part was dealing with rechunking in savers. The current code assumed it had its own thread available to independently pull on an iterator in `save_from`. In single-threaded processing we instead have to send chunks to `_save_chunk` individually (as we do in multiprocessing), so the rechunking logic is skipped. I therefore factored it out into a separate `Rechunker` class and wrapped that around the Saver when single-threading. It might be useful to have the rechunking logic separate anyway.

For testing, I let some tests in test_core run on both processors, and there are some asserts. Maybe you'd like to see some dedicated unit tests and docs as well; let me know.
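To illustrate the topic-based, lock-free idea behind the `PostOffice`, here is a minimal single-threaded pub/sub sketch. The class and method names are hypothetical and this is not the PR's actual implementation, which also handles subscribers, exhaustion, and backpressure:

```python
from collections import defaultdict, deque

class PostOffice:
    """Minimal single-threaded pub/sub sketch: producers publish messages
    to named topics (data types); consumers poll them in order. Because
    everything runs on one thread, no locks or conditions are needed."""

    def __init__(self):
        self._queues = defaultdict(deque)  # topic name -> pending messages

    def publish(self, topic, message):
        self._queues[topic].append(message)

    def poll(self, topic):
        """Return the next pending message for a topic, or None if empty."""
        queue = self._queues[topic]
        return queue.popleft() if queue else None
```

One instance serves all topics, mirroring the one-`PostOffice`-per-processor design described above.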
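The factored-out rechunking logic can likewise be sketched as a small buffering class. Again, the names and the list-based "chunks" are illustrative assumptions, not the PR's actual `Rechunker`, which works on strax chunks with time ranges:

```python
class Rechunker:
    """Sketch: accumulate incoming chunks (here plain lists) and emit merged
    chunks of at least `target_size` items. receive() returns any chunks
    that are ready to pass to the saver; flush() returns the remainder."""

    def __init__(self, target_size):
        self.target_size = target_size
        self._buffer = []

    def receive(self, chunk):
        self._buffer.extend(chunk)
        if len(self._buffer) >= self.target_size:
            out, self._buffer = self._buffer, []
            return [out]
        return []

    def flush(self):
        out, self._buffer = self._buffer, []
        return [out] if out else []
```

Separating this from the saver lets the same logic serve both a thread pulling on an iterator and a caller pushing chunks one at a time, as in single-threaded mode.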
The default processor is not changed; you have to add `processor='single_thread'` to your get_array/get_df/etc. call to switch to the single-threaded processor. I would propose we test it in some reprocessing jobs first; if it works, we could then make it the default for `max_workers=1`.

Can you give a minimal working example (or illustrate with a figure)?
This shows the mprof output for the full processing of a tiny (~90 second) background run, starting from lz4 raw records on my laptop, with all needed online resources already downloaded. First for the current mailbox processing:
st.make('026195', 'event_basics')
and for single-threaded processing:
st.make('026195', 'event_basics', processor='single_thread')
For reviewing, note the number of lines changed is deceptively large. The stuff in `strax.processor` basically just got moved to `strax.processors.threaded_mailbox`. I kept `strax.mailbox` in place since mailboxes are also used directly in the rechunker script. (And `strax.processor` is now present only to keep an old straxen test running that imports from it directly.)