reflect: optimize v.Set(reflect.Zero(v.Type())) to not allocate #33136
Comments
I would also like to see this added and want to give some context where this can have a big effect with a use case:
Heh. Ironically, I filed this issue while trying to optimize the internal protobuf implementation itself and couldn't figure out an efficient way to do this.
Given that a reflect.Value carries a pointer to its data, I think we could change the representation to define that if that pointer is nil, it is interpreted as pointing to an appropriate number of zeroed bytes. Then the idiom in question starts being more efficient without any changes to client code, no new API to learn, and so on. The change seems like it could be done completely invisibly. The Value containing a nil pointer could not be addressable, but the result of reflect.Zero is not addressable anyway, so that wouldn't be a problem. Should we think about doing that invisible optimization instead of new API?
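For concreteness, a minimal, self-contained sketch of the client-side idiom in question is below; the record type and field values are made up for illustration. Under the representation change described above, this code would stop allocating without being modified.

// Zeroing an addressable value in place via the v.Set(reflect.Zero(v.Type())) idiom.
package main

import (
	"fmt"
	"reflect"
)

type record struct {
	Name string
	Tags []string
}

func main() {
	r := record{Name: "x", Tags: []string{"a"}}
	v := reflect.ValueOf(&r).Elem() // addressable Value for r

	// The idiom under discussion: build a zero Value and assign it.
	v.Set(reflect.Zero(v.Type()))

	fmt.Printf("%+v\n", r) // {Name: Tags:[]}
}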
I tried a simple version of that optimization and it did save 75% of the time on this benchmark. So the idea does seem worth pursuing. @dsnet Is this a plausible benchmark for the kinds of code you are concerned about? (It's not a tiny change to the reflect package; diffstat reports 159 insertions, 35 deletions.)

func BenchmarkZero(b *testing.B) {
	t := TypeOf(struct{ a, b, c, d *byte }{})
	v := New(t)
	for i := 0; i < b.N; i++ {
		v.Elem().Set(Zero(t))
	}
}
Do you have a CL for this experiment that I can take for a spin?
Change https://golang.org/cl/191199 mentions this issue:
The runtime already has a 1K region of zeros that we can use as the backing store for the result of reflect.Zero.
We could also detect this particular buffer in Value.Set. It would only require allocation in the >1K case, which I think is rare.
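A rough sketch of that shape follows; it is purely illustrative, assumes a fixed 1K shared region, and the names zerosketch, zeroBufSize, zeroBuf, and zeroPtr are invented here rather than taken from the runtime or either CL. A complete version would also need Value.Set to recognize the shared buffer and take a clear-only fast path.

// Illustrative only: a shared all-zero region that reflect.Zero could point
// small results at, falling back to allocation only for values bigger than
// the region.
package zerosketch

import "unsafe"

const zeroBufSize = 1024 // assumed size of the shared zero region

var zeroBuf [zeroBufSize]byte // stand-in for the runtime's zero buffer

// zeroPtr returns a pointer to all-zero memory for a value of the given size.
// Small values share the fixed buffer; only the rare >1K case allocates.
func zeroPtr(size uintptr) unsafe.Pointer {
	if size <= zeroBufSize {
		return unsafe.Pointer(&zeroBuf[0])
	}
	p := make([]byte, size)
	return unsafe.Pointer(&p[0])
}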
This is also a big inefficiency in packages like encoding/json. Please let me know when a CL is ready and I'll be happy to help review and test. If this can be done without new API, all the better :)
Change https://golang.org/cl/192331 mentions this issue:
@dsnet If you want one of these changes to go into 1.14, please do some benchmarking with real code. Thanks.
It seems like this proposal is stalled on deciding whether it is possible to make the existing idiom efficient enough without new API. @dsnet, does either of Ian's or Keith's patches (both above) work well enough for you?
The use case I originally needed this for is no longer relevant. Perhaps one of the other 👍's could comment?
My CL is simpler but has a performance discontinuity depending on size.
I briefly mentioned
I'm travelling at the moment, so I can't spend the time to test both of the CLs against the current set of decode/unmarshal benchmarks. I can have a look next week, if no one beats me to it. From a quick look, they seem to be, respectively:
My hunch is that JSON often involves smaller types. If one is decoding huge types, plenty of other pieces in the decoder would start getting slow, so I don't think extra allocations there would matter. So I'd default to Keith's simpler CL, and perhaps archive Ian's in case anyone has a strong need for it in the future.
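To make the size concern concrete, here is a hedged benchmark sketch (not taken from either CL) that contrasts a small struct with one larger than 1K; with a fixed-size zero buffer, only the large case would still need an allocation inside reflect.Zero.

package reflectzero_test

import (
	"reflect"
	"testing"
)

type small struct{ a, b, c, d *byte }

type large struct{ buf [2048]byte } // deliberately bigger than a 1K zero region

// benchZero times the zeroing idiom from this issue for an arbitrary type.
func benchZero(b *testing.B, t reflect.Type) {
	v := reflect.New(t).Elem()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		v.Set(reflect.Zero(t))
	}
}

func BenchmarkZeroSmall(b *testing.B) { benchZero(b, reflect.TypeOf(small{})) }
func BenchmarkZeroLarge(b *testing.B) { benchZero(b, reflect.TypeOf(large{})) }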
It sounds like Ian's change is the more complete one and does not have significant performance problems, so we should probably go with that. Based on the discussion above and the lack of API change, this now seems like a likely accept.
I think it'd be good to fill out Ian's change before choosing it over Keith's, to confirm that there are no meaningful performance side effects from the as-yet-unimplemented runtime changes. That shouldn't change the prognosis for this proposal, just possibly which implementation we should choose.
No change in consensus, so accepted. (For clarity, there's no API change here, so if the idea had started as just an optimization, it would not have needed to go through the proposal process.)
Change https://golang.org/cl/300992 mentions this issue:
Change https://golang.org/cl/302269 mentions this issue:
… reflect.Value comparison. Update golang/go#33136. Update golang/go#43993. Reviewed-on: https://go-review.googlesource.com/c/website/+/302269
The Value API lacks an efficient way to zero out a value. Currently, one can do v.Set(reflect.Zero(v.Type())). However, this is not efficient since reflect.Zero may need to allocate a large object, and also reflect.Value.Set will use runtime.typedmemmove under the hood instead of the more efficient runtime.typedmemclr.

I propose adding a reflect.Value.SetZero method as a more efficient way to set the value to the zero value.
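For reference, a minimal sketch comparing the existing idiom with the proposed method; it assumes a toolchain where Value.SetZero is available (it shipped in Go 1.20).

package main

import (
	"fmt"
	"reflect"
)

func main() {
	x := []int{1, 2, 3}
	v := reflect.ValueOf(&x).Elem() // addressable Value for x

	// Existing idiom: build a zero Value and copy it in.
	v.Set(reflect.Zero(v.Type()))
	fmt.Println(x == nil) // true

	x = []int{4, 5}

	// Proposed method (since added to the reflect package): clear in place.
	v.SetZero()
	fmt.Println(x == nil) // true
}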