Statistical tests on a test set #23
Certainly. Make something assuming the ideal version you need, and we will work backwards to have it in neuropredict.
Hi Dinga, did you get a chance to work on this yet?
Hi Richard, did you get a chance to think about this? Take a look at the related discussion here: maximtrp/scikit-posthocs#8
Sorry for the delay, I was still in vacation mode, and before that I had to finish other papers. I am working on this now: I was looking at the theory for the tests and also at how sklearn does things, so we could be consistent; many useful things are already implemented there and in statsmodels. What kind of tests are you looking for in scikit-posthocs?
I don't think sklearn has anything in this regard - let me know if you see something. I am particularly interested in the Friedman test and the Nemenyi post-hoc test, but am open to learning, trying and testing all the others too.
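For reference, a minimal sketch of the Friedman test mentioned above, using `scipy.stats.friedmanchisquare` on made-up per-dataset accuracies (the data and variable names are purely illustrative; the Nemenyi post-hoc test itself lives in the separate scikit-posthocs package):

```python
# Sketch only: Friedman test across 3 hypothetical classifiers
# evaluated on the same 10 datasets (one accuracy per dataset).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
acc_a = rng.uniform(0.70, 0.80, size=10)            # classifier A
acc_b = acc_a + rng.normal(0.05, 0.01, size=10)     # B: consistently better
acc_c = rng.uniform(0.70, 0.80, size=10)            # classifier C

# Null hypothesis: all classifiers have the same median rank across datasets.
stat, p = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

If the Friedman test rejects, a post-hoc test (e.g. Nemenyi) would then identify which pairs of classifiers differ.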
They have a permutation test. This might be of interest to you: https://arxiv.org/abs/1606.04316 together with the code at https://github.com/BayesianTestsML/tutorial/. Comparing multiple models on multiple datasets is not that important to me at the moment; also, I think it is quite a niche feature in general. I will focus now on getting valid external validation and some reporting for one model, and add something more complex later, probably for comparing competing models on the same test set. What do you say?
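A rough sketch of the kind of label-permutation test being discussed, giving a p-value for test-set accuracy against chance (the function name and the toy data are invented for illustration, not taken from neuropredict):

```python
# Sketch: permutation test for classifier accuracy on an independent test set.
import numpy as np

def permutation_pvalue(y_true, y_pred, n_perm=5000, seed=0):
    """P-value for observed accuracy under a label-permutation null."""
    rng = np.random.default_rng(seed)
    observed = np.mean(y_true == y_pred)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffling the true labels breaks any real association with predictions.
        null[i] = np.mean(rng.permutation(y_true) == y_pred)
    # Add-one correction so the reported p-value is never exactly 0.
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1] * 5)
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 0] * 5)  # 75% accurate
p = permutation_pvalue(y_true, y_pred)
print(f"accuracy = {np.mean(y_true == y_pred):.2f}, p = {p:.4f}")
```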
I am doing lots of power comparisons and model comparisons now, so I will try to make what I do usable and put it here.
Sure, we can start with something small. Yeah, do it only if it helps your research and is something you will use in the short to medium term.
Any hints on how to write tests?
Funny you ask, I was just informing folks about this: https://twitter.com/raamana_/status/1039150311842164737
Sounds good, but which one are you using here? (Sorry for a noob question.)
NP, I use pytest. It's easy to learn.
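For what it's worth, a tiny pytest-style sketch (the metric function and file name are hypothetical; pytest discovers functions named `test_*` in files named `test_*.py` and runs them with plain `assert` statements):

```python
# Hypothetical file test_metrics.py -- run with `pytest test_metrics.py`.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def test_accuracy_perfect():
    assert accuracy([0, 1, 1], [0, 1, 1]) == 1.0

def test_accuracy_half():
    assert accuracy([0, 1], [0, 0]) == 0.5
```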
So this is a little demo of what I have now:
Out:
validate_out_of_sample_predictions takes (probabilistic) predictions as scikit-learn outputs them and computes accuracy, AUC, log score and Brier score with their p-values and CIs. At the moment I am using a permutation test to get p-values for the log score and Brier score, and I don't have a way to compute CIs for those, but I think I will do it with bootstrap. I have these measures there because that's what I am using in my paper at the moment, but I would like to add different ones that are interpretable and follow best practices. Is this functionality something you would like here?
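A sketch of the bootstrap-CI idea mentioned above, here for the Brier score (the helper names and simulated data are invented for illustration; this is not the neuropredict implementation):

```python
# Sketch: percentile bootstrap CI for a metric on an independent test set.
import numpy as np

def brier_score(y_true, prob):
    """Mean squared difference between predicted probability and 0/1 label."""
    return float(np.mean((prob - y_true) ** 2))

def bootstrap_ci(y_true, prob, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample test cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [metric(y_true[idx], prob[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)                               # simulated labels
prob = np.clip(y * 0.7 + rng.normal(0, 0.15, 200), 0.01, 0.99)
lo, hi = bootstrap_ci(y, prob, brier_score)
print(f"Brier score 95% CI: [{lo:.3f}, {hi:.3f}]")
```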
Can you push the code? Also, please do take a look at the scikit-posthocs repo and play with some examples. I think you and I are on slightly different pages.
This is what I have now: dinga92@8e7a445. It's more in a script stage to run my own stuff and not really in a merging stage. Now I need to compare models against the null; later I will also compare 2 models against each other. As far as I understand, the post-hoc tests you are referring to are for comparing multiple models against each other, am I right?
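For the later step of comparing 2 models on the same test set, one standard option is McNemar's test on the two models' per-case correctness; a sketch using `scipy.stats.binomtest` (data and function names are illustrative, and the exact binomial form shown here requires scipy >= 1.7):

```python
# Sketch: exact McNemar test comparing two classifiers on one test set.
import numpy as np
from scipy.stats import binomtest

def mcnemar_exact(y_true, pred_a, pred_b):
    """Exact McNemar p-value based on the discordant pairs only."""
    correct_a = pred_a == y_true
    correct_b = pred_b == y_true
    n01 = int(np.sum(correct_a & ~correct_b))   # A right, B wrong
    n10 = int(np.sum(~correct_a & correct_b))   # A wrong, B right
    n_discordant = n01 + n10
    if n_discordant == 0:
        return 1.0  # models never disagree on correctness
    # Under H0, each discordant case is equally likely to favor A or B.
    return binomtest(n01, n_discordant, 0.5).pvalue

y = np.array([0, 1] * 20)
a = y.copy(); a[:4] = 1 - a[:4]     # model A: 4 errors
b = y.copy(); b[:12] = 1 - b[:12]   # model B: 12 errors
p = mcnemar_exact(y, a, b)
print(f"McNemar p = {p:.4f}")
```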
Yes. Also, will you be at OHBM next month?
Most probably I will.
I would like to add a functionality to easily run statistical tests (against null, against other classifiers) on an independent test set. Since the test set is independent, this should be easy to do (no need to deal with dependencies between folds).
IMHO the main task will be to design a usable API.