Speedup IAVL iterator by removing defers when unneeded. #2143
Conversation
Note that each defer incurs roughly 30 ns of overhead here, which is significant for IAVL iteration. We were previously using around 3-4 defers (2 in Next, one in Valid, one in Key, one in Value). This slows down the entire application quite significantly, as we require fast iteration.
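As a rough illustration of the pattern being described (not the exact diff), here is a minimal Go sketch of a mutex-guarded iterator whose accessor avoids `defer` on the hot path by unlocking explicitly. The type, field, and method names are assumptions for illustration only.

```go
package store

import "sync"

// iavlIterator is a simplified stand-in for the real iterator type;
// the fields and names here are assumptions, not the actual cosmos-sdk code.
type iavlIterator struct {
	mtx   sync.Mutex
	key   []byte
	value []byte
	valid bool
}

// Key returns the current key. Instead of `defer iter.mtx.Unlock()`,
// it unlocks explicitly, avoiding the ~30 ns defer overhead per call.
func (iter *iavlIterator) Key() []byte {
	iter.mtx.Lock()
	iter.assertIsValid() // unlocks before panicking if the iterator is invalid
	key := iter.key
	iter.mtx.Unlock()
	return key
}

// assertIsValid panics if the iterator is exhausted or closed. It releases
// the mutex first so callers that recover are not left holding the lock.
func (iter *iavlIterator) assertIsValid() {
	if !iter.valid {
		iter.mtx.Unlock()
		panic("iavlIterator is invalid")
	}
}
```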
Codecov Report
@@            Coverage Diff             @@
##           develop    #2143     +/-   ##
===========================================
- Coverage    63.91%   63.85%    -0.07%
===========================================
  Files          134      134
  Lines         8194     8180       -14
===========================================
- Hits          5237     5223       -14
- Misses        2604     2607        +3
+ Partials       353      350        -3
utACK -- thanks!
@@ -59,6 +59,7 @@ IMPROVEMENTS
* SDK
* [tools] Make get_vendor_deps deletes `.vendor-new` directories, in case scratch files are present.
* [cli] \#1632 Add integration tests to ensure `basecoind init && basecoind` start sequences run successfully for both `democoin` and `basecoin` examples.
* [store] Speedup IAVL iteration, and consequently everything that requires IAVL iteration. [#2143](https://github.com/cosmos/cosmos-sdk/issues/2143)
Mhmmm, I think it's finally worthwhile committing to a changelog format standard.
Can we not block this PR on that, though, and instead decide it in a separate PR and format accordingly there?
I'm not suggesting we block (I approved actually). I'm just stating it here for reference
Thanks! I misunderstood; totally agree with standardizing this.
Codecov Report
@@            Coverage Diff             @@
##           develop    #2143     +/-   ##
===========================================
- Coverage    63.91%   63.91%    -0.01%
===========================================
  Files          134      134
  Lines         8194     8199        +5
===========================================
+ Hits          5237     5240        +3
- Misses        2604     2606        +2
  Partials       353      353
Are we sure all of these removals are safe (are the underlying IAVL functions guaranteed not to panic)? Keep in mind that many IAVL store calls (e.g. when running transactions) may be recovered at a higher level, so the daemon would continue to run even if the IAVL read failed (although arguably we should crash in that case...)
The only place where that's a concern is receiveNext. However, the point of panicking there is IAVL stating that the error is irrecoverable; otherwise it should be returning an error.
To be more clear: the mutex remains locked on that iterator. If the Next method panicked, we should not keep using that iterator, even if we recover (hence the mutex remaining locked in that condition is fine). For every other scenario, we unlock in all situations where we may panic (the unlock happens in assertIsValid).
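Continuing the sketch above, here is a hedged illustration of that locking discipline in Next; `receiveNext` is an assumed helper standing in for whatever actually pulls the next entry from IAVL. The lock is only released after the receive succeeds, so a panic from IAVL leaves the iterator locked and unusable even if a caller recovers.

```go
// Next advances the iterator. Names are assumptions for illustration.
func (iter *iavlIterator) Next() {
	iter.mtx.Lock()
	iter.assertIsValid() // unlocks before panicking if invalid

	// receiveNext may panic if IAVL reports an irrecoverable error.
	// In that case the mutex is intentionally left locked, so the
	// iterator cannot be reused even if a higher level recovers.
	iter.receiveNext()

	iter.mtx.Unlock() // explicit unlock on the happy path, no defer
}

// receiveNext is an assumed helper that pulls the next key/value pair
// from the underlying IAVL tree and updates the iterator's state.
func (iter *iavlIterator) receiveNext() {
	// ... fetch the next entry, set iter.key, iter.value, iter.valid ...
}
```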
👍
This benchmark covers only the Next operation; we should see about 50% of this performance gain in the Valid and Value functions as well, so the improvement in practice may be around 2x this.
Before / After: [benchmark output not preserved in this copy]
This means this change could give us around 140 ns of improvement per iteration -- quite significant.
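Since the raw benchmark output is not reproduced above, here is a hedged sketch of a micro-benchmark of roughly this shape that could measure the per-iteration Next cost. `newPopulatedIAVLStore` is an assumed test helper, not an actual cosmos-sdk function.

```go
package store

import "testing"

// BenchmarkIAVLIteratorNext is an illustrative sketch of how the ~140 ns
// per-iteration difference could be measured; the store constructor is an
// assumed helper, not part of the real cosmos-sdk test suite.
func BenchmarkIAVLIteratorNext(b *testing.B) {
	st := newPopulatedIAVLStore(b) // assumed helper: builds a store with many keys
	iter := st.Iterator(nil, nil)  // iterate over the whole key range
	defer iter.Close()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if !iter.Valid() {
			// Restart the iterator when exhausted, without timing the restart.
			b.StopTimer()
			iter.Close()
			iter = st.Iterator(nil, nil)
			b.StartTimer()
		}
		iter.Next()
	}
}
```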
- Targeted PR against correct branch (see CONTRIBUTING.md)
- Wrote tests
- Updated relevant documentation (docs/)
- Added entries in PENDING.md with issue #
- Re-reviewed "Files changed" in the GitHub PR explorer