Add ValueTask to corefx #15809
Comments
This is something we plan to do; it's just a matter of when and how. It's in the Channels library right now as a placeholder. You can see there's also a copy of it used internally in corefx, at least in one place. |
We're going to do some measurements and get some concrete numbers. This could be huge for us (asp.net) in certain scenarios. |
Oh hell yes. Would use that. Would be great (although I imagine that ship has sailed) if libs like ADO.NET could use that for scenarios where the data is probably already available.
And of course sockets etc. (it reminds me of the cheap return option of the non-Task async IO API), and streams. I can't see any nice way of retrofitting this onto all the existing code without making the APIs ugly, but: yes. So much yes. I spend my life hunting allocations. |
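A hedged sketch of that "data is probably already available" idea (BufferedRowReader and everything in it is hypothetical, not a real ADO.NET type): a ValueTask-returning read hands back the buffered result without allocating a Task, and only falls back to a real Task when it genuinely has to go asynchronous.

```csharp
using System.Threading.Tasks;

// Hypothetical reader: most of the time the next row is already in memory.
public sealed class BufferedRowReader
{
    private int _bufferedRows = 3; // rows already fetched into the buffer

    public ValueTask<bool> ReadAsync()
    {
        if (_bufferedRows > 0)
        {
            _bufferedRows--;
            // Fast path: result is known synchronously, no Task<bool> allocated.
            return new ValueTask<bool>(true);
        }

        // Slow path: genuinely asynchronous, so pay for the Task once.
        return new ValueTask<bool>(FetchNextPageAsync());
    }

    private async Task<bool> FetchNextPageAsync()
    {
        await Task.Yield(); // stand-in for real I/O
        return false;       // no more rows in this sketch
    }
}
```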
Might lose some benefits with the Stream async methods needing |
Glancing through this code is like when you open a present and only then realise you'd been wanting, for some time, this thing you hadn't known existed. |
Glad you like it :) |
👍 |
Are there plans to let async methods return ValueTask<T>? |
@JonHanna I think you can generalize that to just "completed synchronously". That includes not hitting any incomplete awaits. |
No concrete plans, but it's being discussed (cc: @jaredpar, @MadsTorgersen). There is value in doing so. However, even without compiler support, a pattern like the following already avoids the allocation on the synchronous fast path:

public ValueTask<Something> BarAsync(string arg)
{
    if (arg == null) throw new ArgumentNullException("arg");
    return CanUseFastPath(arg) ?
        new ValueTask<Something>(ComputeSync(arg)) :
        new ValueTask<Something>(BarAsyncCore(arg));
}

private async Task<Something> BarAsyncCore(string arg) { ... }

That's not to say there isn't value in being able to write:

public ValueTask<Something> BarAsync(string arg)
{
    if (arg == null) throw new ArgumentNullException("arg");
    return BarAsyncCore(arg);
}

private async ValueTask<Something> BarAsyncCore(string arg) { ... }

There obviously is. I'm simply highlighting it to point out that |
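Either way, the call site is the same; a minimal consumption sketch, reusing the hypothetical BarAsync/Something names from the snippet above:

```csharp
// Awaiting a ValueTask<Something> looks exactly like awaiting a Task<Something>;
// when BarAsync took its synchronous fast path, no Task was ever allocated.
public async Task<Something> ConsumeAsync(string arg)
{
    Something result = await BarAsync(arg);
    return result;
}
```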
In terms of desire, certainly. In terms of implementation I could see one being simpler than the other.
If it's ever attempted and abandoned I'd still love to know how close it came to having the second example you give perform like the first. (If it's ever attempted and adopted I'll likely be doing my own experiments on the results to decide when I should and shouldn't use it anyway.) |
I would like to add this to corefx as part of a new library: System.Threading.Tasks.Extensions.dll. Here's a commit containing the library; I'll submit it as a PR once approved. Here's the API surface area being added:

The motivation for adding ValueTask is performance. We still encourage folks to use Task and Task<TResult> by default; ValueTask<TResult> is aimed at hot paths where the Task allocation actually shows up in measurements. Existing examples:
|
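A short usage sketch of the proposed type (illustrative only; ValueTaskDemo and its members are made up, but the ValueTask<TResult> constructors, awaiting, and AsTask() shown here match the type as it shipped in System.Threading.Tasks.Extensions):

```csharp
using System;
using System.Threading.Tasks;

public static class ValueTaskDemo
{
    // Result already in hand: wrap it directly, no Task<int> allocation.
    public static ValueTask<int> FromCache(int value) => new ValueTask<int>(value);

    // Genuinely asynchronous work: wrap the underlying Task<int>.
    public static ValueTask<int> FromIo(Task<int> pending) => new ValueTask<int>(pending);

    public static async Task Main()
    {
        int a = await FromCache(42);                // awaited like any Task
        int b = await FromIo(Task.FromResult(7));
        Task<int> bridged = FromCache(1).AsTask();  // convert when a real Task is required
        Console.WriteLine(a + b + await bridged);
    }
}
```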
If the ValueTask return is always the last statement in each method in the call chain, will this type play well with JIT tail-call optimizations? The best I can find is from 2009, which isn't RyuJIT.
Which would suggest |
I'd trade a tail-call to avoid an unnecessary allocation in a heartbeat... The bigger problem (for me, at least) is that a lot of the obvious places
|
Yes. This is a niche thing, very valuable in a few cases and unnecessary or not applicable in the majority. For non-generic Tasks, Task.CompletedTask is perfectly good. For generic Tasks, it's often the case that a cached task of some shape or form is sufficient to cover the majority of cases, e.g. MemoryStream/FileStream/etc.'s ReadAsync frequently returning a cached Task<int> when a read produces the same byte count as the previous call.

(I also really do want folks to think carefully about using this, as it not only has the potential to add overhead, it also complicates the programming model when it's used, at least a bit. But for the more niche cases where it adds value, it often adds a lot of value.) |
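A minimal sketch of that cached-task trick (hypothetical helper, not the actual MemoryStream/FileStream source): keep the last Task<int> around and hand it back whenever the new read produces the same count.

```csharp
using System.Threading.Tasks;

internal sealed class CachedReadTaskExample
{
    private Task<int> _lastReadTask;

    // Reuse the previously returned Task<int> when the byte count repeats,
    // so steady-state reads of the same size allocate nothing new.
    public Task<int> FromResultCached(int bytesRead)
    {
        Task<int> last = _lastReadTask;
        return (last != null && last.Result == bytesRead)
            ? last
            : (_lastReadTask = Task.FromResult(bytesRead));
    }
}
```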
A weird side-effect of this would be that, in certain circumstances, reading with a smaller buffer and then copying to a larger one may be more performant, as it would allocate less. Though that would probably end up with far stranger code than using a |
I didn't quite follow that, but it's often the case that an optimization applies to a particular common pattern, and stepping away from that pattern causes the optimization to not apply. That would be the case here as well. It's extremely common to always read the same number of bytes from a given stream each time, and often to get back the number you requested (for a stream like MemoryStream, that only doesn't happen when you're done reading, and even a stream like FileStream tries to give you data from its buffer and then reads the remainder you asked for from the file), in which case the optimization would apply. If you frequently change the number of bytes you request, then yeah, this particular optimization wouldn't be relevant. There are certainly patterns of access that do that; they're just not the most common in my experience. |
For example, you may experience perf gains by specifying a much smaller buffer size, as it could then return a cached Task<int> more often. Assuming the following was true for the read stream (else you'd just be increasing call count for no gain):
|
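To make the trade-off concrete, a hedged sketch (hypothetical helper; 4096 is an arbitrary size, not a recommendation): reading in a fixed-size chunk that the stream can usually satisfy in full means most calls observe the same byte count, which is exactly the pattern the cached-task optimization rewards.

```csharp
using System.IO;
using System.Threading.Tasks;

public static class FixedSizeReadExample
{
    public static async Task<long> DrainAsync(Stream stream)
    {
        var buffer = new byte[4096];
        long total = 0;
        int read;
        // While the stream keeps filling the buffer completely, 'read' is the
        // same value every iteration, so a cached Task<int> can be returned.
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            total += read;
        }
        return total;
    }
}
```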
No major concerns. We should agree, though, on how we name types that provide a struct-based alternative to a class. Should this be a prefix or a suffix?
The best location seems to be |
Another place where this can be very useful is TPL Dataflow, as calls into it frequently complete synchronously. Also, since TDF is used for concurrency to begin with, the probability of performance being an issue is higher than normal. |
But that's also a case where it can and does return a cached task. |
Well then... another nice surprise from TPL Dataflow :) I should have looked at the source first. |
ValueTask being developed for System.Threading.Tasks.Channels in corefxlab is very valuable on its own and should be included independently of Channels (and earlier).