Tweak overheads of Regex cache access #53449
Conversation
Tagging subscribers to this area: @eerhardt, @pgovind

Issue Details

Small improvement, but then the cited regression was also small. I tried a few other things (e.g. passing the Key by `in`, storing an int milliseconds timeout instead of the TimeSpan, etc.), but they didn't help, and I'm not seeing a ton more that can be done. Closes #50051. This is on 64-bit:
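To make the change concrete, here is a rough sketch (assumed shape and names, not the actual RegexCache source) of the kind of hot path this PR is tweaking: the cache remembers the most recently used entry so that repeated use of the same pattern can skip the dictionary and the lock entirely.

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Sketch only: type, field, and method names here are illustrative assumptions,
// not the real System.Text.RegularExpressions cache implementation.
internal static class RegexCacheSketch
{
    private sealed class Node
    {
        public Node(string key, Regex regex) { Key = key; Regex = regex; }
        public readonly string Key;   // stands in for the real (pattern, options, culture, timeout) key
        public readonly Regex Regex;
    }

    private static readonly Dictionary<string, Node> s_cache = new();
    private static volatile Node? s_lastAccessed;

    public static Regex GetOrAdd(string key, string pattern)
    {
        // Fast path: a single volatile read plus one key comparison when the
        // same pattern is used repeatedly; this is the per-call overhead the
        // PR is shaving.
        Node? last = s_lastAccessed;
        if (last is not null && key.Equals(last.Key))
        {
            return last.Regex;
        }

        // Slow path: consult the shared cache under a lock.
        lock (s_cache)
        {
            if (!s_cache.TryGetValue(key, out Node? node))
            {
                node = new Node(key, new Regex(pattern));
                s_cache.Add(key, node);
            }
            s_lastAccessed = node;
            return node.Regex;
        }
    }
}
```

The diff discussed below touches exactly that one key comparison on the fast path.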
@CertifiedRice, you "approved" about 25 PRs in rapid succession. Did you actually review them? Can you help me understand what action you took that led you to "approve" them? Thanks.
Looks good to me.
```diff
 {
-    if (lastAccessed.Key.Equals(key))
+    if (key.Equals(lastAccessed.Key))
```
For my knowledge - did this line make a perf difference? Or does it just read better this way?
It did for an intermediate stage where I was experimenting with passing key as `in`. I ended up reverting that, but left this as it was because I thought it still read slightly better.
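For context on why the call direction could matter at all during that `in` experiment, here is a general illustration (hypothetical `CacheKey` type, not the PR's actual Key) of the copies involved when a struct is passed by `in`: calling a non-readonly instance method on an `in` parameter makes the compiler take a defensive copy, while passing it as a by-value argument copies it at the call site.

```csharp
// Hypothetical struct for illustration; the real cache Key is not shown here.
public struct CacheKey
{
    public string Pattern;

    // Not marked 'readonly', so calling it on an 'in' parameter triggers a
    // defensive copy of the receiver.
    public bool Equals(CacheKey other) => Pattern == other.Pattern;
}

public static class KeyComparisonSketch
{
    public static bool FieldFirst(in CacheKey key, CacheKey cached)
        => cached.Equals(key);   // 'key' is copied to pass it by value

    public static bool ParameterFirst(in CacheKey key, CacheKey cached)
        => key.Equals(cached);   // defensive copy of 'key' (Equals isn't readonly)
}
```

Marking the struct (or its Equals method) `readonly` removes the defensive copy.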
The code change LGTM. But I don't really get why this is performing slightly better :/
I'll take a stab at answering, and @stephentoub can correct me 😄. It basically comes down to 2 micro-optimizations:
Yup, I noticed that part. We're reducing the amount of computation here, so this makes sense.
This is the part that I'm surprised by. The returned bool was on the stack, so I can't think of any reason why this would make a measurable change. Maybe when there's some down time I'll look at the IL generated here and/or measure how much this change contributed to the overall speedup. I'm guessing all of the speedup here is from point 1?
Doing a really small example:
There's a difference between handing back an object via a return and handing back an object via a ref... the latter requires a write barrier:

```asm
; Program.ByReturn()
mov rax,[rcx+8]
ret
; Total bytes of code 5

; Program.ByOut(System.Object ByRef)
mov rax,rdx
mov rdx,[rcx+8]
mov rcx,rax
call CORINFO_HELP_CHECKED_ASSIGN_REF
nop
ret
; Total bytes of code 17
```

Our Try pattern runs afoul of this for usability reasons, but you can see, for example, as a minor optimization our internal abstraction in LINQ switches the bool/T positions so that when T is a reference type it's returned rather than passed out by ref:

runtime/src/libraries/System.Linq/src/System/Linq/IPartition.cs, lines 33 to 47 at commit 7e65185
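For reference, here is a minimal C# sketch of the two shapes behind the disassembly above (the method names come from the listing; the field is an assumption):

```csharp
class Program
{
    private object _obj = new object();

    // Returning the reference: the JIT just loads the field into the return
    // register (mov rax,[rcx+8]); no write barrier is needed.
    public object ByReturn() => _obj;

    // Writing the reference through an out/ref parameter: the destination may
    // be a location on the GC heap, so the JIT routes the store through
    // CORINFO_HELP_CHECKED_ASSIGN_REF, i.e. a GC write barrier.
    public void ByOut(out object result) => result = _obj;
}
```

The IPartition shape referenced above flips the usual Try pattern for the same reason: members along the lines of `TElement TryGetFirst(out bool found)` return the (possibly reference-typed) element in the return register and report success through the out bool, instead of writing the element out by ref.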