Question on the use of the Update! method and is_same_except() #212
Not sure what the problem might be. Can you provide an MWE demonstrating that? Using MLJModelInterface:

```julia
import MLJModelInterface as MMI

mutable struct Classifier <: MMI.Probabilistic
    x::Int
    y::Int
end

model = Classifier(1, 2)
model2 = deepcopy(model)
model2.y = 7

@assert MMI.is_same_except(model, model2, :y)
```

Or, if you suspect some other problem, a more self-contained MWE would be helpful.
For example, using this, `is_same_except` gives `false`, but I have only changed `epochs`.

For a simpler example that does not need LaplaceRedux, consider this:

It's due to the fact that one of the fields has a Flux chain in it. If I remove it, I get `true`.
Thanks, this helps me see the problem:

```julia
julia> c = Flux.Chain(Dense(2,3))

julia> c == deepcopy(c)
false
```

Unfortunately, MLJ was not designed with this kind of behaviour in mind for hyperparameter values. This has occurred once before and a hack was introduced, the trait …

Another possible resolution is for you to explicitly add an overloading …

In any case, make sure neither …
Couldn't something like this replace the default `is_same_except` function?

With a helper function:

It should work for every MLJ model that wraps a Flux model.
Great progress. I think your test for equality of Chains is not correct, for it will not behave as expected for nested chains, like …

I suggest you just overload locally, and that we not add complexity to MLJModelInterface for this one corner case. There is probably a more generic way to handle this, maybe by fixing …
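To make "overload locally" concrete, here is a rough sketch of what that could look like. This is not code from the thread: the `LaplaceClassifier` type, its `flux_model` field, and the `_flux_equal` helper are placeholders, and it assumes a Flux version that provides `Flux.state`.

```julia
using Flux
import MLJModelInterface as MMI

# Placeholder model type with a Flux chain as a hyperparameter.
mutable struct LaplaceClassifier <: MMI.Probabilistic
    flux_model          # e.g. a Flux.Chain
    epochs::Int
end

# `==` on a Chain falls back to object identity, so a deepcopy is never
# "equal"; comparing the nested arrays returned by `Flux.state` gives value
# equality and also works for nested chains.
_flux_equal(a, b) = Flux.state(a) == Flux.state(b)

# Local overload: same spirit as the default `is_same_except`, except that
# the Flux field is compared by value.
function MMI.is_same_except(m1::LaplaceClassifier, m2::LaplaceClassifier,
                            exceptions::Symbol...)
    for name in fieldnames(LaplaceClassifier)
        name in exceptions && continue
        a, b = getfield(m1, name), getfield(m2, name)
        same = name === :flux_model ? _flux_equal(a, b) : (a == b)
        same || return false
    end
    return true
end

model  = LaplaceClassifier(Chain(Dense(2, 3)), 10)
model2 = deepcopy(model); model2.epochs = 20
MMI.is_same_except(model, model2, :epochs)   # true
```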
Indeed, I just found out that the models don't pass the test if `optimiser = Adam()` is included in the struct. How should I handle this case? Should I always add it to the exceptions?
Can you please provide some more detail? I don't see any problem at my end:

```julia
julia> using Optimisers, Flux

julia> using MLJFlux   # for NeuralNetworkClassifier

julia> import MLJModelInterface as MMI

julia> model = NeuralNetworkClassifier();

julia> model2 = deepcopy(model);

julia> MMI.is_same_except(model, model2)
true

julia> model2.optimiser = Adam(42)
Adam(42.0, (0.9, 0.999), 1.0e-8)

julia> MMI.is_same_except(model, model2)
false

julia> model.optimiser = Adam(42)
Adam(42.0, (0.9, 0.999), 1.0e-8)

julia> MMI.is_same_except(model, model2)
true
```

Are you perhaps using Flux.jl optimisers instead of Optimisers.jl optimisers?
Yes, I think this is the issue, because this gives me `false`. It looks like the tutorial I read to write the training loop is outdated, and Flux now prefers optimisers from the Optimisers.jl package, but the documentation available online is a confusing mix of old and new rules...
Well, MLJFlux now definitely requires only Optimisers.jl optimisers. If any of the MLJ/MLJFlux docs are out-of-date in this respect, please point them out.
Ah, but it was not the official documentation; it was, I think, a Medium page or something like that. Anyway, I think I have fixed the update loop. If you don't mind, I would like to keep this issue open for a bit longer, just in case I encounter another problem; otherwise I will close it myself. OK? Thank you.
Happy to support your work on an MLJ interface, and thanks for your persistence.
@ablaom Hi Anthony, I think it's done now: https://github.com/JuliaTrustworthyAI/LaplaceRedux.jl/blob/direct_mlj_interface/src/direct_mlj.jl

In addition to the mandatory `fit` and `predict` methods, I have also implemented the `training_loss`, `fitted_params`, and `reformat` functions. However, regarding this last one, there is still a minor inefficiency that I was unable to solve. The `reformat` functions that I have implemented are

They simply transform the input `X` data into a matrix and permute the dims, and reshape `y`.

which is kind of ugly and inefficient. It seems that I cannot move this part into the `reformat` function (even if I specialize it for `LaplaceClassifier`), because if I do it, I lose access to the labels that the `predict` method needs. Is there a better way, or is this how it has to be done? Thank you.
An option is to put the labels into the output …

Does this address your query?
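A rough sketch of one reading of that suggestion, returning the labels as an extra element of the `reformat` output, as discussed in the next two comments. It reuses the placeholder `LaplaceClassifier` type from the earlier sketch; the matrix layout and the use of `MMI.int` are assumptions, not the actual LaplaceRedux code.

```julia
import MLJModelInterface as MMI
using CategoricalArrays   # for `levels`

# Hypothetical data front-end: convert the table to a matrix once, encode the
# target as integer codes, and carry the class labels along as a third element.
function MMI.reformat(::LaplaceClassifier, X, y)
    X_matrix = permutedims(MMI.matrix(X))   # observations as columns
    labels   = levels(y)                    # the labels `predict` needs
    return (X_matrix, MMI.int.(y), labels)
end

# Resampling then only needs to index into the pre-processed data:
MMI.selectrows(::LaplaceClassifier, I, X_matrix, y_int, labels) =
    (view(X_matrix, :, I), view(y_int, I), labels)
```

With such a front-end, `fit` receives `(X_matrix, y_int, labels)` instead of the raw table, so the label handling no longer has to live inside `fit`; a separate one-argument method `MMI.reformat(::LaplaceClassifier, X)` would still be needed for the data passed to `predict`.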
@ablaom Do you mean return the labels as a third argument in `reformat`? If that's what you meant, then I guess I will leave it as it is now, because I would have to change `fit` and `predict` as well, together with the …
Yes, I'm indifferent as to how you proceed. If you'd like a new review of the interface, please (re-)open the issue at MLJModels.jl and ping me, thanks.
@ablaom OK, thank you. I will first ask Patrick to give the code I have written a check, then I will open the issue.
@ablaom Hi Anthony, sorry for asking again: we have almost completed the interface, but we are facing one last issue.

It seems to me that `LogLoss` is the appropriate measure according to https://juliaai.github.io/StatisticalMeasures.jl/dev/examples_of_usage/#Probabilistic-regression, but `evaluate!` gives errors. Should I overload `evaluate!` too?
I think StatisticalMeasures' …

In any case, I probably need more context to be of any help. One does not overload `evaluate!`.
Mmm, so I guess the problem is in the format of the output.

But I still got this error message:
And what is at …?
I can't see anything wrong with `LogLoss`:

```julia
using MLJ

model = ConstantRegressor()
data = make_regression()

evaluate(model, data...; measure = LogLoss())
# Evaluating over 6 folds: 100%[=========================] Time: 0:00:02
# PerformanceEvaluation object with these fields:
#   model, measure, operation,
#   measurement, per_fold, per_observation,
#   fitted_params_per_fold, report_per_fold,
#   train_test_rows, resampling, repeats
# Extract:
# ┌──────────────────────┬───────────┬─────────────┐
# │ measure              │ operation │ measurement │
# ├──────────────────────┼───────────┼─────────────┤
# │ LogLoss(             │ predict   │ 2.02        │
# │   tol = 2.22045e-16) │           │             │
# └──────────────────────┴───────────┴─────────────┘
# ┌─────────────────────────────────────┬─────────┐
# │ per_fold                            │ 1.96*SE │
# ├─────────────────────────────────────┼─────────┤
# │ [2.27, 2.23, 1.82, 1.89, 1.81, 2.1] │ 0.182   │
# └─────────────────────────────────────┴─────────┘

X, y = data
mach = machine(model, X, y) |> fit!
yhat = predict(mach, X);
yhat[1:3]
# 3-element Vector{Distributions.Normal{Float64}}:
#  Distributions.Normal{Float64}(μ=-0.8146301697352831, σ=1.759433773949422)
#  Distributions.Normal{Float64}(μ=-0.8146301697352831, σ=1.759433773949422)
#  Distributions.Normal{Float64}(μ=-0.8146301697352831, σ=1.759433773949422)

LogLoss()(yhat, y)
# 1.9839305711450421
```
It's just a bunch of tests to placate the authoritarian codebot that Patrick has unleashed in every pull request. Everything works except the last line.

If I run this, I get:
OK, solved. I removed a …
@ablaom Hi, I was reading "The document string standard" …
No, not public API.
Hi, sorry for the delay, I had an exam. I think (I hope) the interface is ready for a review; Patrick also gave it a look. I will reopen the issue on the official MLJ page. @ablaom
Hi, I was trying to implement the `update` method for LaplaceRedux but I am having a problem.

This is the model:

This is the `fit` function that I have written:

And now follows the incomplete `update` function that I was trying. I have removed the loop part, since it's not important.

The issue is that if I try to rerun the model by changing only the number of epochs, `is_same_except` still gives me `false`, even though `:epochs` is listed as an exception.

So what is the correct way to implement `is_same_except`? Thank you.
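A rough sketch of the usual pattern the question is after, i.e. an `update` that retrains from scratch unless only `epochs` has changed. This is hypothetical, not the LaplaceRedux code: the cache layout and the `_continue_training` helper are placeholders, and `LaplaceClassifier` is the placeholder type from the earlier sketch.

```julia
import MLJModelInterface as MMI

# `old_cache` is assumed to hold a deep copy of the hyperparameters used in
# the previous call; `_continue_training` is a hypothetical helper that runs
# the extra epochs on the existing fitresult.
function MMI.update(model::LaplaceClassifier, verbosity,
                    old_fitresult, old_cache, X, y)
    old_model = old_cache.model
    if MMI.is_same_except(model, old_model, :epochs) &&
            model.epochs >= old_model.epochs
        # only the number of epochs grew: warm-restart the existing chain
        fitresult = _continue_training(old_fitresult,
                                       model.epochs - old_model.epochs, X, y)
        cache  = (model = deepcopy(model),)
        report = (;)
        return fitresult, cache, report
    end
    # anything else changed: retrain from scratch
    return MMI.fit(model, verbosity, X, y)
end
```

For this to work, `fit` must also return a cache of the same form, e.g. `(model = deepcopy(model),)`.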