Make promote_op rely on Core.Inference.return_type #17389

Merged: 1 commit merged into JuliaLang:master on Aug 3, 2016

Conversation

@pabloferz (Contributor) commented Jul 12, 2016

This should fix #17314 and the problem found in #17394

EDIT: This also removes all specialized promote_op methods, since the new ones should handle all previous cases.

CC @JeffBezanson, @nalimilan, @tkelman

@pabloferz (Contributor, Author)

@martinholters After this, you should be able to use broadcast in #17313 as you originally intended.

        for (iF, iB) in zip(eachindex(F), eachindex(B))
            @inbounds F[iF] = ($f)(A, B[iB])
        end
        return F
    end
    function ($f){T}(A::AbstractArray{T}, B::Number)
    -   F = similar(A, promote_array_type($f,typeof(A),typeof(B)))
    +   F = similar(A, promote_op($f, T, typeof(B)))
Review comment (Member):
promote_array_type is responsible for e.g. typeof(Float32[1,2,3]*1.0) == Vector{Float32}, even though Base.promote_op(*, Float32, typeof(1.0)) == Float64 and typeof(Float32(1)*1.0) == Float64. If you look at the above machinery more closely, you'll note that the result of promote_op is only used in a few cases, and that's for a reason.
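
To make the distinction concrete, here is the behavior described above (illustrative only, under Julia 0.5-era semantics):

    A = Float32[1, 2, 3]
    typeof(A * 1.0)                       # Vector{Float32}: promote_array_type keeps the narrower array eltype
    typeof(Float32(1) * 1.0)              # Float64: scalar promotion widens
    Base.promote_op(*, Float32, Float64)  # Float64, matching the scalar rule, not the array one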

@martinholters (Member)

After this, you should be able to use broadcast in #17313 as you originally intended.

How is that? The reason for not using broadcast all the time in #17313 was that it incurs some constant overhead even if promote_op returns a perfectly valid result, ruining performance for small arrays. Actually, if this PR "fixes" promote_op to more often do the right thing (and makes it into 0.5), #17313 would be obsolete.

@pabloferz (Contributor, Author) commented Jul 12, 2016

The reason for not using broadcast all the time in #17313 was that it incurs some constant overhead even if promote_op returns a perfectly valid result, ruining performance for small arrays.

Sorry, I overlooked that in #17313.

Actually, if this PR "fixes" promote_op to more often do the right thing (and makes it into 0.5), #17313 would be obsolete.

Well, this should make promote_op do the right thing more often. Though I didn't clean up the methods in broadcast.jl here.

@pabloferz force-pushed the pz/pobc branch 5 times, most recently from f1b0edf to 2464ebc, on July 12, 2016 at 23:07
@pabloferz changed the title from "Make promote_op rely on Core.Inference.return_type" to "Make broadcast rely on Core.Inference.return_type" on Jul 12, 2016
@tkelman (Contributor) commented Jul 13, 2016

There's been resistance in the past to making program behavior (as opposed to just performance) depend on the results of inference.

@TotalVerb (Contributor)

I understand that sentiment, but something needs to be done about promote_op. What about the compromise of using it only in the empty case?

@JeffBezanson (Member)

The ideal situation is to use the inference result when it is concrete, and otherwise use the types of the actual values. Then there is only a problem in the empty case. Using the inferred type there is deemed acceptable for now.
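
A minimal sketch of that decision rule (hypothetical helper, not this PR's actual code; Julia 0.5-era API):

    # Trust inference only when it returns a concrete (leaf) type; otherwise
    # use the types of the values actually computed. Only the empty case is
    # left to rely on the inferred type alone.
    function pick_eltype(f, A)
        T = Core.Inference.return_type(f, Tuple{eltype(A)})
        isleaftype(T) && return T       # inference gave a concrete answer
        isempty(A)    && return T       # empty case: inferred type, deemed acceptable for now
        return typeof(f(first(A)))      # seed from an actual value; widen later on mismatch
    end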

@pabloferz force-pushed the pz/pobc branch 3 times, most recently from 4a48215 to 3803ee4, on July 13, 2016 at 14:44
@pabloferz (Contributor, Author) commented Jul 13, 2016

@JeffBezanson Does the current proposal address your concerns (at least provisionally and until we have a better mechanism)?

Tests are now passing on AV (at the moment Travis has a very long queue) and I don't think this should affect performance in general. Of course, that should be checked.

@pabloferz pabloferz mentioned this pull request Jul 13, 2016
    @@ -164,8 +164,6 @@ m = [1:2;]'
     @test @inferred([0,1.2].+reshape([0,-2],1,1,2)) == reshape([0 -2; 1.2 -0.8],2,1,2)
     rt = Base.return_types(.+, Tuple{Array{Float64, 3}, Array{Int, 1}})
     @test length(rt) == 1 && rt[1] == Array{Float64, 3}
    -rt = Base.return_types(broadcast, Tuple{typeof(.+), Array{Float64, 3}, Array{Int, 3}})
    -@test length(rt) == 1 && rt[1] == Array{Float64, 3}
Review comment (Contributor):
why is this deleted?

@pabloferz (Contributor, Author) replied Jul 13, 2016:
With these changes broadcast wouldn't be inferable (just like map), but it will return the correct type nevertheless. I guess I should change the test to at least verify that. <- Done
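
A sketch of such a check, mirroring the deleted test above (illustrative; Julia 0.5-era syntax):

    using Base.Test
    # Inference may no longer prove a single concrete return type for broadcast,
    # but the value returned at runtime still has the expected type:
    @test typeof(broadcast(+, zeros(2,2,2), ones(Int,2,2,2))) == Array{Float64,3}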

@pabloferz (Contributor, Author)

The Travis failure seems to be due to #16537 (see #17409)

@tkelman (Contributor) commented Jul 16, 2016

otherwise use the types of the actual values

This isn't doing the latter; it's returning Any when inference doesn't return a leaf type?
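
In code, the objection is roughly this fallback (a sketch, not the PR's exact source):

    # Per this comment: return Any when the inferred type is not concrete,
    # rather than falling back to the types of the actual values.
    R = Core.Inference.return_type(op, Tuple{T})
    isleaftype(R) ? R : Any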

@pabloferz (Contributor, Author)

@tkelman You're right, but I believe the cases where the type returned by return_type is not concrete won't be the ones where we care about performance, so we can use the fallback broadcast method.

Actually, this is more in analogy with the _default_eltype used by map (through the collect methods). So in a way broadcast and map would behave alike with these changes (this was what @stevengj was advocating, IIUC).

If the first statement above is not accurate, I can change the PR to accommodate Jeff's initial comment.
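
For context, this is roughly how map/collect pick a destination eltype: by asking inference what first of a generator would return (simplified illustration of 0.5-era internals):

    g = Base.Generator(x -> x + 1.0, Int[])
    Core.Inference.return_type(first, Tuple{typeof(g)})  # Float64
    collect(g)  # 0-element Array{Float64,1}: the empty case uses the inferred type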

@yuyichao (Contributor) commented Jul 16, 2016

It'll be funny if we introduce another type-inference-dependent behavior for non-empty cases right after we worked really hard to fix one....

@pabloferz (Contributor, Author)

I don't think so. What was the comment about?

@tkelman (Contributor) commented Aug 2, 2016

I could have sworn that a few days ago I said I didn't think we should merge this until after we branch, though that was before seeing the issues raised in the last few days, and I need to look at the pkgeval results.

@pabloferz (Contributor, Author)

This #16986 (comment)? I guess you misplaced it.

@tkelman (Contributor) commented Aug 2, 2016

I was thinking of #16986, nevermind, sorry.

@stevengj stevengj added the broadcast Applying a function over a collection label Aug 2, 2016
@tkelman (Contributor) commented Aug 2, 2016

Sorry for the delay, took me a while to get in front of a proper keyboard.

This turns out to be hugely breaking in packages: https://gist.github.com/8f19fefaf5021dbc076cc47b3ff8abd1

Could use a hand digging through those JSON logs (jq and/or JSON.jl are useful for this) to see if they're all the same failure, though. Maybe a re-run now that DataArrays has been tagged would fix them all?

@tkelman (Contributor) commented Aug 3, 2016

With DataArrays tagged it's a much smaller number of failures now. https://gist.github.com/5049d028f493ef2a9163a269f9a634fc
Just BandedMatrices (which you have a fix for), GaussianMixtures, and NearestNeighbors. Could you take a look at the other 2?

(Reactive has flaky tests, and apparently this PR actually fixes SampledSignals http://pkg.julialang.org/detail/SampledSignals.html)

@tkelman (Contributor) commented Aug 3, 2016

    signal (15): Terminated
    while loading no file, in expression starting on line 0

    signal (15): Terminated
    while loading /home/vagrant/.julia/v0.5/NearestNeighbors/test/test_monkey.jl, in expression starting on line 6
    uv__epoll_wait

I think that means NearestNeighbors' tests got slower with this change and they timed out.

@KristofferC (Member) commented Aug 3, 2016

Deprecation warnings? I've fixed the deprecations now, so maybe try again (with the newest commit).

@tkelman (Contributor) commented Aug 3, 2016

PkgEval runs on tagged package versions.

@KristofferC (Member)

Ok, I'll tag then.

@pabloferz (Contributor, Author) commented Aug 3, 2016

The NearestNeighbors one is fixed (locally) for me by fixing the deprecations as @KristofferC pointed out.

On the other hand, I'm not sure how this is affecting GaussianMixtures: it passed the first time but errored the second (on a function that has nothing to do with DataArrays). It also passes the tests on my machine. I suspect the memory used fluctuates around the limits of the testing machine, but I'm not sure.

@tkelman (Contributor) commented Aug 3, 2016

Ah right, it's an OutOfMemoryError(), so possibly unrelated. Do you see a noticeable difference in peak memory use on this branch vs master?

@tkelman tkelman merged commit 121aa36 into JuliaLang:master Aug 3, 2016
@pabloferz (Contributor, Author)

Do you see a noticeable difference in peak memory use on this branch vs master?

I don't. But you already merged it, so I guess there is no problem.

@pabloferz pabloferz deleted the pz/pobc branch August 3, 2016 10:59
@tkelman (Contributor) commented Aug 3, 2016

There might be a problem, but I'll take one fixed package (plus 2 about-to-be-fixed packages) in exchange for one broken one. cc @davidavdav

@timholy (Member) commented Aug 3, 2016

Reactive has flaky tests

That might have been fixed by JuliaGizmos/Reactive.jl@d7a2bd3, which was tagged just a couple of days ago.

@tkelman (Contributor) commented Aug 3, 2016

    if isdefined(Core, :Inference)
        function _promote_op(op, T::Type)
            G = Tuple{Generator{Tuple{T},typeof(op)}}
            R = Core.Inference.return_type(first, G)
Review comment (Member):
Does return_type(op, Tuple{T}) not work? This seems roundabout.

@pabloferz (Contributor, Author) replied Aug 3, 2016:

That was the first thing I tried; the problem is that Test.@inferred does not work on _promote_op that way, but it works with this.
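
For comparison, a hypothetical side-by-side of the two formulations (both assume Julia 0.5's Core.Inference; names here are illustrative):

    direct_promote_op(op, T::Type)   = Core.Inference.return_type(op, Tuple{T})
    indirect_promote_op(op, T::Type) =
        Core.Inference.return_type(first, Tuple{Base.Generator{Tuple{T},typeof(op)}})

    direct_promote_op(+, Int)    # Int64, but Test.@inferred could not see through it
    indirect_promote_op(+, Int)  # Int64, via the Generator trick the PR adopts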

tkelman added a commit that referenced this pull request Aug 4, 2016
tkelman added a commit that referenced this pull request Aug 4, 2016
@tkelman tkelman mentioned this pull request Aug 10, 2016
tkelman added a commit that referenced this pull request Aug 20, 2016
tkelman added a commit that referenced this pull request Aug 20, 2016
ajkeller34 referenced this pull request in PainterQubits/Unitful.jl Feb 21, 2017
Labels
broadcast (Applying a function over a collection)
types and dispatch (Types, subtyping and method dispatch)
Development

Successfully merging this pull request may close these issues:

promote_op/broadcast should not call the function for one(T)