Performance regressions from v0.18 #1403
On master: gave up here.

---
It would be interesting to see the results against

---
It might also be interesting to see the impact of removing the copy here: https://github.com/JuliaOpt/MathOptInterface.jl/blob/253e16c23882fbdcbcd48ed5381b5be216db3657/src/Utilities/model.jl#L272

---
I'm not sure what's happening in these tests, but one thing that will definitely be needed at some point is a custom

---
Updated first post with 0.7/1.0 numbers - definitely better, but still way off.

---
I just made a quick benchmark on Julia 1.0. I get the same timings as in the first post for the JuMP.Model, and with a JuMPExtension.MyModel I get:

```
PMEDIAN BUILD MIN=2.334072162 MED=4.033828604
PMEDIAN WRITE MIN=0.0 MED=0.0
CONT5 BUILD MIN=0.711627309 MED=0.932109656
CONT5 WRITE MIN=0.0 MED=0.0
```

Note that the main difference between a JuMPExtension.MyModel and a JuMP.Model is that the former stores the constraints in

---
Running these again at JuMP commit

So roughly a 4-4.5x slowdown versus JuMP 0.18 on Julia 0.6.3.

---
Reducing down:

```julia
using JuMP

function minipmed()
    I, J = 5000, 100
    model = Model()
    x = @variable(model, [1:I, 1:J], Bin)
    @show I * J
    @objective(model, Min, sum(abs(j - i) * x[i, j] for i in 1:I, j in 1:J))
end

@time minipmed()
@time minipmed()
@time minipmed()
@time minipmed()
@time minipmed()
```

On Julia 1.0.0, JuMP master vs 0.18:

master

0.18

---
Worth noting that if you comment out the objective then it's 3M allocations.
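A minimal sketch of how that could be checked, reusing the model above; the `minipmed_noobj` name is mine, not from the thread:

```julia
using JuMP

# Same model as above but with the objective removed, to isolate the cost
# of creating the 500,000 binary variables.
function minipmed_noobj()
    I, J = 5000, 100
    model = Model()
    x = @variable(model, [1:I, 1:J], Bin)
    return model
end

minipmed_noobj()        # warm-up call so compilation is not counted
@time minipmed_noobj()  # the printed allocation count covers the build alone
```

---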
This variant:

```julia
function minipmed()
    I, J = 5000, 100
    model = Model()
    x = @variable(model, [1:I, 1:J], Bin)
    @show I * J
    y = 0 * x[1, 1]
    for i in 1:I, j in 1:J
        JuMP.add_to_expression!(y, 1.0 * abs(j - i), x[i, j])
    end
    @objective(model, Min, y)
    # @objective(model, Min, sum(abs(j - i) * x[i, j] for i in 1:I, j in 1:J))
end
```

also has 5M allocations, suggesting add_to_expression! is allocating 4 times?
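One hedged way to measure that per-call cost directly, using a toy two-variable model of my own rather than the 500,000-variable one:

```julia
using JuMP, BenchmarkTools

model = Model()
@variable(model, x[1:2])
y = 0 * x[1]  # empty affine expression to grow

# The memory/allocs fields of the output show what a single call costs.
@btime JuMP.add_to_expression!($y, 1.0, $(x[2]));
```

---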
To drill down into just the OrderedDict time: stubbing out the OrderedDict with an old-style double-vector AffExpr restores performance.
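For context, a rough sketch of the two storage layouts being compared here; the type names are illustrative, not JuMP's actual definitions:

```julia
using OrderedCollections

# Master at the time: affine terms in an insertion-ordered hash map.
struct DictAffExpr{V}
    terms::OrderedDict{V,Float64}  # variable => coefficient
    constant::Float64
end

# Old JuMP 0.18 style: two parallel vectors; duplicate variables are
# allowed and merged only when needed.
struct VecAffExpr{V}
    vars::Vector{V}
    coeffs::Vector{Float64}
    constant::Float64
end
```

Appending a term to the vector form is a cheap pair of push! calls, while the dict form must hash and probe on every insertion.

---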
... and actually running that code back with the original OrderedDict is free (!). It's all in setobjective!

---
I tracked the issue down to:

```julia
using BenchmarkTools
using JuMP

I, J = 5000, 100
@show I * J
model = Model()
x = @variable(model, [1:I, 1:J], Bin)
y = 0 * x[1, 1]
for i in 1:I, j in 1:J
    JuMP.add_to_expression!(y, 1.0 * abs(j - i), x[i, j])
end

@btime JuMP.moi_function($y)
```

yields
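For context, `JuMP.moi_function` converts the JuMP expression into an `MOI.ScalarAffineFunction`. A rough sketch of what that conversion has to build, with illustrative names rather than JuMP's actual internals:

```julia
using MathOptInterface
const MOI = MathOptInterface

# Hypothetical conversion: one ScalarAffineTerm per stored coefficient,
# collected into a fresh vector, plus the constant.
function to_moi(terms::Dict{MOI.VariableIndex,Float64}, constant::Float64)
    moi_terms = [MOI.ScalarAffineTerm(c, v) for (v, c) in terms]
    return MOI.ScalarAffineFunction(moi_terms, constant)
end
```

If that fresh vector and the terms inside it are rebuilt on every call, the conversion alone can account for a large share of the allocations.

---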
Before (#1403 (comment)):

After jump-dev/MathOptInterface.jl#567, jump-dev/MathOptInterface.jl#568, and #1604:

---
The most egregious performance regressions have been addressed as far as I'm aware, so I'm removing the 0.19 milestone. The remaining planned performance improvements are discussed at:

We'll want to revisit the comparison with 0.18 after that.

---
Note that there are still some pretty severe linear algebra regressions:

```julia
using JuMP, Random

Random.seed!(999)
const N = 100
randmat() = Int[rand() > 0.8 ? 1 : 0 for i ∈ 1:N, j ∈ 1:N]
m = Model()
@variable(m, x[1:N, 1:N])
A = randmat()
```

on 0.18:

```
julia> @btime x*A;
  133.674 ms (3050021 allocations: 231.10 MiB)

julia> @btime A*x*A;
  2.653 s (10120042 allocations: 5.82 GiB)
```

on current master:

```
julia> @btime x*A;
  397.424 ms (8210029 allocations: 666.43 MiB)

julia> @btime A*x*A;
  24.011 s (14600058 allocations: 9.22 GiB)
```
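A hedged diagnostic for narrowing this down, reusing the setup above: time the two products separately to see whether the blowup comes from the variable-matrix product or from multiplying the resulting AffExpr matrix (splitting it this way is a suggestion, not something from the thread):

```julia
using BenchmarkTools

@btime $x * $A;   # Matrix{VariableRef} * Matrix{Int}
xA = x * A
@btime $A * $xA;  # Matrix{Int} * the resulting AffExpr matrix
```

---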
The issue is

---
At this point I'm going to close this issue. No one has linked from another issue since October 2019, and no one has posted a follow-up comment complaining. We can use #2735 and #42 (a 2-digit issue!) to track ongoing performance. Ideally, we'll set up something that makes it easy to benchmark JuMP, and then keep attacking that. We can also open issues if specific performance problems are identified.
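A minimal sketch of what such a benchmark harness could look like, using BenchmarkTools; the suite layout and names are assumptions, not an existing JuMP tool:

```julia
using BenchmarkTools, JuMP

const SUITE = BenchmarkGroup()

# Hypothetical example benchmark: a pmedian-style model build.
function build_pmedian(I, J)
    model = Model()
    x = @variable(model, [1:I, 1:J], Bin)
    @objective(model, Min, sum(abs(j - i) * x[i, j] for i in 1:I, j in 1:J))
    return model
end

SUITE["build"] = BenchmarkGroup()
SUITE["build"]["pmedian"] = @benchmarkable build_pmedian(100, 10)

# Run locally, or wire into PkgBenchmark.jl / CI for regression tracking.
results = run(SUITE; verbose = true)
```

---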
First post (benchmarks from test/perf/speed.jl):

On Julia 0.6.3, JuMP release-0.18 branch:

On JuMP master, with the write branch no-oped because writeLP/writeMPS are gone:

So definitely an improvement in 0.7/1.0 over 0.6.

(Edited 2018/8/19 to add 0.7 and 1.0.)