Currently, we get nasty errors if we have generative supports that don't need to generate anything.
Consider the case with `FEGaussLobatto` and `num_nodes = 2`:
```julia
using InfiniteOpt
model = InfiniteModel()
@infinite_parameter(model, t in [0, 1], num_supports = 3)
@variable(model, y, Infinite(t))
@objective(model, Min, integral(y, eval_method = FEGaussLobatto(), num_nodes = 2, t))
```
```
ERROR: ArgumentError: reducing over an empty collection is not allowed
Stacktrace:
  [1] _empty_reduce_error()
    @ Base .\reduce.jl:301
  [2] reduce_empty(op::Function, #unused#::Type{Float64})
    @ Base .\reduce.jl:311
  [3] mapreduce_empty(#unused#::typeof(identity), op::Function, T::Type)
    @ Base .\reduce.jl:345
  [4] reduce_empty(op::Base.MappingRF{typeof(identity), typeof(min)}, #unused#::Type{Float64})
    @ Base .\reduce.jl:331
  [5] reduce_empty_iter
    @ .\reduce.jl:357 [inlined]
  [6] mapreduce_empty_iter(f::Function, op::Function, itr::Vector{Float64}, ItrEltype::Base.HasEltype)
    @ Base .\reduce.jl:353
  [7] _mapreduce(f::typeof(identity), op::typeof(min), #unused#::IndexLinear, A::Vector{Float64})
    @ Base .\reduce.jl:402
  [8] _mapreduce_dim
    @ .\reducedim.jl:330 [inlined]
  [9] #mapreduce#725
    @ .\reducedim.jl:322 [inlined]
 [10] mapreduce
    @ .\reducedim.jl:322 [inlined]
 [11] #_minimum#747
    @ .\reducedim.jl:894 [inlined]
 [12] _minimum
    @ .\reducedim.jl:894 [inlined]
 [13] #_minimum#746
    @ .\reducedim.jl:893 [inlined]
 [14] _minimum
    @ .\reducedim.jl:893 [inlined]
 [15] #minimum#744
    @ .\reducedim.jl:889 [inlined]
 [16] minimum
    @ .\reducedim.jl:889 [inlined]
 [17] UniformGenerativeInfo(basis::Vector{Float64}, label::DataType, lb::Int64, ub::Int64)
    @ InfiniteOpt C:\Users\bbgui\Documents\InfiniteOpt.jl\src\datatypes.jl:337
 [18] generate_integral_data(pref::GeneralVariableRef, lower_bound::Float64, upper_bound::Float64, method::FEGaussLobatto; num_nodes::Int64, weight_func::Function)
    @ InfiniteOpt.MeasureToolbox C:\Users\bbgui\Documents\InfiniteOpt.jl\src\MeasureToolbox\integrals.jl:399
 [19] integral(expr::GeneralVariableRef, pref::GeneralVariableRef, lower_bound::Float64, upper_bound::Float64; kwargs::Base.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:eval_method, :num_nodes), Tuple{FEGaussLobatto, Int64}}})
    @ InfiniteOpt.MeasureToolbox C:\Users\bbgui\Documents\InfiniteOpt.jl\src\MeasureToolbox\integrals.jl:828
 [20] macro expansion
    @ C:\Users\bbgui\.julia\packages\MutableArithmetics\geMUn\src\rewrite.jl:322 [inlined]
 [21] macro expansion
    @ C:\Users\bbgui\.julia\packages\JuMP\9CBpS\src\macros.jl:1297 [inlined]
 [22] top-level scope
    @ REPL[174]:1
```
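For context: per the stacktrace, the error comes from a `minimum` call inside the `UniformGenerativeInfo` constructor (`datatypes.jl:337`). Gauss-Lobatto quadrature with two nodes places both nodes at the interval endpoints, so the set of *interior* (generative) nodes is empty, and reducing over it throws. A minimal sketch of the failure mode in plain Julia (the variable name below is illustrative, not the actual one in `datatypes.jl`):

```julia
# With 2 Gauss-Lobatto nodes, the quadrature points are just the endpoints
# of the reference interval, so there are no interior nodes to generate
# as new supports.
interior_nodes = Float64[]   # illustrative stand-in for the `basis` argument

# The constructor appears to normalize the basis via minimum/maximum,
# and both throw on an empty collection:
minimum(interior_nodes)      # ERROR: ArgumentError: reducing over an empty collection
```

An `isempty` guard before the reduction (falling back to the existing supports, which for two Lobatto nodes amounts to the trapezoid rule on each subinterval) would presumably avoid the error; in the meantime, `num_nodes >= 3` should sidestep it, since at least one interior node is then generated.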
pulsipher