Feat/pull mplex #1
Conversation
@dryajov make this a PR to libp2p-mplex, not a new module.
This module is implemented standalone so it can be consumed outside of libp2p; libp2p-mplex has been modified accordingly to consume it. The main idea is that other projects can consume it as well.
BTW, the same approach was taken with the go implementation - https://github.com/libp2p/go-mplex
I was able to shave off ~15 mins from the new implementation's run of the mega stress tests; the perf is now about the same as (or a little better than ;) ) the stream-based implementation. The attached zip contains a heap snapshot and a perf log. Here is a screenshot from the perf log graph - GC seems to take ~13% of the total time. (These graphs were generated from the attached logs with the WebStorm perf and heap analyzers.)
I'd love to get some feedback from @Stebalien as well.
This PR got badly horked; please use #2 for further dev/reviews.
UPDATE: This is now ready for review.
TODO:
Below are the accompanying PRs:
The current implementation takes around 15 minutes longer when running the mega stress tests defined in interface-stream-muxer.
Here is the outline:
New:
Old:
The issue is here - https://github.com/libp2p/pull-mplex/blob/850bbc52c33038f2cdeff40e83afbcf4823b3895/src/coder.js#L85...L89. I need to rework this part to use a preallocated buffer instead of allocating a new one every time.
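The preallocation idea can be sketched roughly like this. This is a hypothetical illustration, not the actual coder.js code: it writes an mplex-style header (`(id << 3) | flag`, varint-encoded) into one reused scratch pool and returns a zero-copy slice, instead of calling `Buffer.alloc` on every message. The names `encodeHeader`, `pool`, and `POOL_SIZE` are all assumptions for this sketch.

```javascript
'use strict'

// Hypothetical sketch of header encoding with a preallocated pool.
// A fresh pool is only allocated when the current one fills up,
// so the common path does no per-message allocation.
const POOL_SIZE = 10 * 1024
let pool = Buffer.allocUnsafe(POOL_SIZE)
let used = 0

function encodeHeader (id, flag) {
  // a 32-bit varint needs at most 5 bytes; roll over to a new pool if short
  if (POOL_SIZE - used < 5) {
    pool = Buffer.allocUnsafe(POOL_SIZE)
    used = 0
  }
  let value = (id << 3) | flag
  const start = used
  // standard varint encoding: 7 data bits per byte, high bit = "more follows"
  while (value >= 0x80) {
    pool[used++] = (value & 0x7f) | 0x80
    value >>>= 7
  }
  pool[used++] = value
  // the slice shares memory with the pool: no copy, no new allocation
  return pool.slice(start, used)
}

module.exports = { encodeHeader }
```

The trade-off is that returned slices pin the whole pool until every slice from it is garbage-collected, which is usually fine for short-lived header buffers that are written to the wire immediately.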