
further model testing #8

Merged

Conversation

promatty
Contributor

Trained models using KNN, Decision Trees, Random Forests, and Gradient Boosting, each with and without cross-validation.
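
A minimal sketch of the kind of comparison described above, assuming a scikit-learn regression setup with features X and target y already loaded; the model settings, split sizes, and 5-fold CV shown here are illustrative rather than the exact notebook code:

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

models = {
    "KNN": KNeighborsRegressor(),
    "Decision Tree": DecisionTreeRegressor(random_state=42),
    "Random Forest": RandomForestRegressor(random_state=42),
    "Gradient Boosting": GradientBoostingRegressor(random_state=42),
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in models.items():
    # Single train/test split (no cross-validation)
    model.fit(X_train, y_train)
    holdout_mae = mean_absolute_error(y_test, model.predict(X_test))

    # 5-fold cross-validation on the training data
    cv_mae = -cross_val_score(model, X_train, y_train,
                              scoring="neg_mean_absolute_error", cv=5).mean()

    print(f"{name}: holdout MAE={holdout_mae:.3f}, CV MAE={cv_mae:.3f}")
```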

@promatty promatty requested a review from justin-phxm November 30, 2024 18:42
@justin-phxm
Contributor

What are the differences between the two metrics, MAE and RMSE? How do they relate to precision and accuracy? How can we explain this in layman's terms, apply it directly to Helios, and make real-world decisions from it?
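
For reference, a minimal sketch of how the two metrics are computed with scikit-learn, assuming a fitted model's predictions y_pred and held-out targets y_test (the variable names are illustrative, not taken from the notebook):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# MAE: average absolute error, in the same units as the target.
mae = mean_absolute_error(y_test, y_pred)

# RMSE: square root of the mean squared error; penalizes large errors more heavily.
rmse = np.sqrt(mean_squared_error(y_test, y_pred))

print(f"MAE={mae:.3f}, RMSE={rmse:.3f}")
```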

@promatty promatty force-pushed the ML-11-Experiment-with-various-machine-learning-algorithms branch from 74327db to e02a582 on December 7, 2024 17:42
@promatty promatty requested a review from justin-phxm December 7, 2024 17:44

@justin-phxm justin-phxm left a comment


A few small changes.

Great conclusion.

analysis/furtherModelTesting.ipynb (excerpt):

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=global_test_size, random_state=global_random_state)
    # Define the parameter grid for Random Forest
    param_grid_rf = {
Contributor


Tell me how many times this model will be trained with this size of param_grid.

Contributor


How would this change if we also wanted to use cross-validation with, say, 5 folds?

Contributor Author


The model is trained once for each combination, so it's 3 × 4 × 3 × 3 = 108 fits. Cross-validation with 5 folds means we would train 5 times per combination, so 5 × 108 = 540 fits. Is this too much?
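
A minimal sketch of what that fit count looks like in code, assuming scikit-learn's GridSearchCV over a Random Forest; the grid values below are illustrative placeholders with the same 3 × 4 × 3 × 3 shape, not the notebook's actual grid:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Illustrative 3 x 4 x 3 x 3 grid: 108 parameter combinations in total.
param_grid_rf = {
    "n_estimators": [100, 200, 300],
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

# With cv=5, GridSearchCV fits 5 folds per combination: 108 * 5 = 540 fits,
# plus one final refit of the best combination on the full training set.
grid_search = GridSearchCV(RandomForestRegressor(random_state=42),
                           param_grid_rf, cv=5,
                           scoring="neg_mean_absolute_error", n_jobs=-1)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
```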

@justin-phxm
Contributor

Also, I was hoping you would do the hyperparameter tuning optimizations in ML-12-optimize-model-performance-through-hyperparameter-tuning, but this is fine, I guess.

I feel more comfortable reviewing smaller PRs

@promatty promatty force-pushed the ML-11-Experiment-with-various-machine-learning-algorithms branch from 1a04102 to 499c367 on December 7, 2024 22:02
@promatty promatty requested a review from justin-phxm December 7, 2024 22:04
@promatty promatty force-pushed the ML-11-Experiment-with-various-machine-learning-algorithms branch from 499c367 to 697deae on December 7, 2024 22:18
@justin-phxm justin-phxm merged commit 86f326d into master Dec 7, 2024
1 check passed
@justin-phxm justin-phxm deleted the ML-11-Experiment-with-various-machine-learning-algorithms branch December 7, 2024 22:37