Hi @simonpcouch,
I love the {stacks} package, and over the last several months I have thought quite a bit about whether there is room to broaden the API to be more flexible. It seems to me that the current API is opinionated about a few things:
The stacking model must be Ridge/LASSO/ElasticNet
The resampling is done via bootstrapping
The training routine is tune::tune_grid()
I am wondering if there is interest in a function one level lower than blend_predictions() that is more flexible on the three design considerations described above. Most importantly, a more general API for stacking would let users take advantage of the huge breadth of models available through parsnip et al. for stacking predictions (e.g., random forest, XGBoost, etc.). In theory, any model that supports the relevant mode would be a candidate for the stacking model.
Without actually considering the implementation too much, I imagine some function, let's call it stack_predictions() because I don't have a better name off the top of my head, that looks something like:
stack_predictions(
  data_stack,
  model = parsnip::linear_reg(engine = "glmnet", penalty = tune(), mixture = tune()),
  fn = tune::tune_grid,
  resamples = rsample::bootstraps(times = 25),
  control = tune::control_grid(),
  ... # passed on to `fn` (metric, grid, param_info)
)
What do you think? This way the user can control the stacking more finely, and blend_predictions() would become a special case of stack_predictions() that could potentially call this function internally. Then, if you wanted to stack with a random forest, tune with {finetune}, and use 100 Monte Carlo resamples, you could do something like the sketch below:
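Roughly, and purely as a sketch against the hypothetical stack_predictions() signature above (the exact arguments would need more thought):

stack_predictions(
  data_stack,
  model = parsnip::rand_forest(engine = "ranger", mode = "regression", mtry = tune(), min_n = tune()),
  fn = finetune::tune_race_anova,
  resamples = rsample::mc_cv(times = 100),
  control = finetune::control_race()
)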
I have thought about this a few times and figured it was worth going full stream of consciousness and laying it all out for you to think about. Happy to chat more and think about this more thoroughly. As always, happy to contribute and not just request features.
While I'm thinking about it, in order to support more stacking models, there needs to be a way to define what it means to have a "non-zero stacking coefficient" for models in which coefficients don't really exist (e.g., random forests). Perhaps for tree-based models, a member is "non-zero" if its predictions are used for a split in any tree - this requires some more thinking, but a rough sketch of the idea follows.
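For instance, with a ranger-based stacking model, one possible heuristic (just an assumption on my part; retained_members() below is a hypothetical helper, not anything in {stacks}) would be to treat a member as "non-zero" when its permutation importance is positive:

library(ranger)

# Hypothetical helper: a member counts as "retained" by the stacking model
# if its permutation importance is greater than zero.
retained_members <- function(ranger_fit) {
  imp <- ranger::importance(ranger_fit)
  names(imp)[imp > 0]
}

# Toy data standing in for a data stack of member predictions
stack_data <- data.frame(
  outcome  = mtcars$mpg,
  member_1 = mtcars$mpg + rnorm(32),
  member_2 = rnorm(32)
)

fit <- ranger::ranger(
  outcome ~ ., data = stack_data,
  importance = "permutation"
)

retained_members(fit) # likely just "member_1", since member_2 is noise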
Just wanted to drop a note that I've seen this and appreciate the thorough issue description! Related to #54. We've been benchmarking some variants on this generalization and still have some work to do before we'd feel confident moving forward with an implementation.
Seconding this feature request! {stacks} is beautifully fast, but I'd love a native way to build a stacked ensemble from a trained {workflowsets} object that uses finetune::tune_race_anova() and finetune::control_race(). 🙏