attempt to refine return type when it could be improved via PartialTuple #30385

Merged · 1 commit merged into master on Dec 18, 2018

Conversation

@jrevels (Member) commented Dec 14, 2018

thanks to @vtjnash
fixes #30380
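
For context, here is a rough, hypothetical illustration (not code from this PR) of the kind of pattern this targets: when inference carries per-element information about a tuple (a PartialTuple), a caller that immediately destructures that tuple can get a sharper inferred return type.

```julia
# Hypothetical example, not code from this PR: with element-wise tuple info
# (PartialTuple), inference can see that f(x)[2] is the constant 1, which
# lets it refine g's inferred return type beyond what Tuple{Float64,Int}
# alone would allow.
f(x) = (x, 1)
g(x) = f(x)[2] + 1

code_typed(g, Tuple{Float64})  # inspect the inferred return type for g(::Float64)
```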

@jrevels (Member, Author) commented Dec 14, 2018

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier (Collaborator) commented:

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@jrevels (Member, Author) commented Dec 17, 2018

Looks like some decent improvements across the board, along with a few regressions. Here are the potential regressions that seem worth looking into:

| ID | time ratio | memory ratio |
|----|------------|--------------|
| ["array", "comprehension", "(\"collect\", \"Array{Float64,1}\")"] | 1.44 (5%) ❌ | 1.00 (1%) |
| ["broadcast", "mix_scalar_tuple", "(10, \"scal_tup_x3\")"] | 1.14 (5%) ❌ | 1.00 (1%) |
| ["io", "array_limit", "(\"display\", \"Array{Float64,2}(10000, 10000)\")"] | 1.07 (5%) ❌ | 1.03 (1%) ❌ |
| ["sparse", "index", "(\"spmat\", \"row\", \"array\", 1000)"] | 1.56 (30%) ❌ | 1.00 (1%) |
| ["sparse", "index", "(\"spmat\", \"row\", \"logical\", 1000)"] | 2.05 (30%) ❌ | 1.00 (1%) |
| ["union", "array", "(\"map\", abs, Float32, true)"] | 1.24 (5%) ❌ | 1.00 (1%) |
| ["union", "array", "(\"skipmissing\", sum, Union{Missing, Bool}, true)"] | 1.25 (5%) ❌ | 1.00 (1%) |

I'll do some digging on these tomorrow or so when I get a chance.

In the meantime, it would be great to get some review, especially since merge conflicts are already cropping up. I'd like to get this in ASAP, pending investigation of the above regressions.

@jrevels (Member, Author) commented Dec 18, 2018

Huh, the sparse index benchmarks are pretty noisy on my local machine. I have to kick samples up to 600,000 (from the default 10,000) to start to see some convergence in the minimum. At that point, master and this PR seem mostly the same (this PR is actually slightly faster, but I assume that could be noise as well).

Same for the array comprehension benchmark: it's unfortunately a bit longer-running, so I don't have time to get convergence in the minimum, but the medians seemed to converge around the 500-sample mark on my machine, once again pointing to essentially no change from this PR.

We should consider reconfiguring the time/sample budget on Nanosoldier for those benchmarks...
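
For reference, a minimal sketch of how I'm rerunning this kind of benchmark locally with a larger sample budget. This is not the exact BaseBenchmarks.jl definition; the matrix size and indexing pattern are stand-ins.

```julia
# Rough local reproduction sketch; the real benchmark lives in
# BaseBenchmarks.jl, so the matrix size and indexing pattern here are guesses.
using BenchmarkTools, SparseArrays

S = sprand(10_000, 10_000, 0.01)
cols = collect(1:1000)

# Bump samples well past BenchmarkTools' 10_000 default, and give the run
# enough wall-clock time so the minimum has a chance to converge.
b = @benchmarkable $S[500, $cols]
t = run(b; samples = 600_000, seconds = 600)
minimum(t)
```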

The ["io", "array_limit", "(\"display\", \"Array{Float64,2}(10000, 10000)\")"] benchmark seems like the only actual regression here, and IMO it's minor compared to the improvements this PR provides. My guess is that it's a result of overspecializing on something dumb, but I'm not sure yet - @code_typed is exactly the same between this PR and master a few invocations down the call chain and I haven't gotten all the way to the bottom yet (Cthulhu makes this so much easier). I'll keep poking to see if I can find a code_typed difference somewhere that yields anything interesting, but this doesn't seem like a merge blocker.

@jrevels jrevels merged commit 92ac90e into master Dec 18, 2018
@jrevels jrevels deleted the jr/moreptuple branch December 18, 2018 20:55
@KristofferC KristofferC mentioned this pull request Dec 20, 2018
52 tasks
@KristofferC KristofferC mentioned this pull request Dec 30, 2018
53 tasks
@StefanKarpinski added and removed the triage (This should be discussed on a triage call) label on Jan 31, 2019
@JeffBezanson removed the triage and backport pending 1.0 labels on Jan 31, 2019
Labels: compiler:inference (Type inference)
Projects: None yet
Development

Successfully merging this pull request may close these issues:

more tuple/apply inference fun
6 participants