This repository has been archived by the owner on Sep 10, 2020. It is now read-only.
We want to know in a realistic scenario - i.e. one that incorporates the effect of the class imbalance - how effective these attacks are in terms of true and false positives. A really nice plot that would show this (right now the machine learning pipeline generates only an ROC curve) is a graph of precision and recall as a function of k, the percent of the ranked list flagged. Let's add this to evaluate.py.
Also: see Figure 5 in this paper to see a nice comparison between ROC curves and precision/recall graphs in the presence of different base rates.
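A minimal sketch of what the requested computation could look like. The helper name `precision_recall_at_k` and its interface are hypothetical, not existing `evaluate.py` functions; it assumes 0/1 labels, per-example attack scores, and `ks` given as percents of the ranked list:

```python
def precision_recall_at_k(y_true, scores, ks):
    """Precision and recall when the top k percent of the ranked list is flagged.

    y_true: 0/1 labels; scores: per-example attack scores; ks: percents (0-100].
    (Hypothetical helper -- the pipeline's actual interface may differ.)
    """
    # Sort labels by descending score: the ranked list an attacker flags from.
    ranked = [y for _, y in sorted(zip(scores, y_true), key=lambda t: -t[0])]
    n, total_pos = len(ranked), sum(ranked)
    precisions, recalls = [], []
    for k in ks:
        cutoff = max(1, round(n * k / 100))  # flag the top k percent
        tp = sum(ranked[:cutoff])            # true positives among the flagged
        precisions.append(tp / cutoff)
        recalls.append(tp / total_pos)
    return precisions, recalls
```

The two returned lists can then be plotted against `ks` (e.g. with matplotlib) to get the precision/recall-vs-k graph alongside the existing ROC curve.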
> We want to know in a realistic scenario - i.e. one that incorporates the effect of the class imbalance - how effective these attacks are in terms of true and false positives.
TPR and FPR are metrics independent of class balance. I know you know this, but it appears to be improperly or at least confusingly phrased.
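One way to reconcile the two comments: TPR and FPR fix the classifier's per-class behavior, and the precision at any base rate follows directly from them, which is why a precision/recall graph exposes the imbalance an ROC curve hides. A small sketch of that relationship (the TPR/FPR and prevalence values are illustrative, not from the pipeline):

```python
def precision_from_rates(tpr, fpr, prevalence):
    """Precision implied by a fixed TPR/FPR at a given positive-class base rate."""
    tp = tpr * prevalence          # expected fraction of the data that is a true positive
    fp = fpr * (1.0 - prevalence)  # expected fraction that is a false positive
    return tp / (tp + fp)

# Same classifier (TPR=0.8, FPR=0.1), two different base rates:
balanced = precision_from_rates(0.8, 0.1, 0.5)     # ~0.89
imbalanced = precision_from_rates(0.8, 0.1, 0.01)  # ~0.07
```

So the ROC curve is identical in both scenarios, while precision collapses under heavy class imbalance, which is the effect Figure 5 of the cited paper illustrates.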