automatically convert input matrix to Float32 #272
Conversation
Thanks indeed for this PR. Can you please look at the CI failures? For example, we have:

@tiemvanderdeure If you can fix the failures here, I can sort out the conflict.

Works now!

Okay @tiemvanderdeure, thanks. Turns out conflict resolution is a bit more complicated than anticipated. So I'm waiting on resolution of the following before trying again:

@tiemvanderdeure Your PR has now been merged and is part of 0.6, just released. Thanks for your patience as we sorted out the conflicts.

Awesome, thanks for seeing this one through!
This follows up on JuliaAI/MLJModels.jl#565 and just adds a tiny step that automatically converts input to `Float32` before passing it to Flux.

I can't see why anyone would ever not want this: the eltype of any neural net generated through MLJFlux will always be `Float32`, so any other input type would be converted to `Float32` anyway, but at a much bigger computational cost. So I didn't build in an option to disable this behaviour.

The reason to have this in MLJFlux in particular is that other machines, such as `MLJModels.OneHotEncoder`, output `Float64` types.

I didn't look into #267 in detail; it might make this redundant in some cases, but maybe not in all.