If a post-processing parameter is marked for tuning, we should check which outcomes it changes and cross-check those against the metrics being measured.

For example, if someone is optimizing the probability cutoff for a binary classification model but is only measuring ROC AUC (or some other probability-based metric), we should issue a warning, since changing the cutoff will not change the metric.
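To make the point concrete, here is a minimal pure-Python sketch (hypothetical names, not part of any tuning package) showing why this warning is warranted: ROC AUC depends only on the ranking of the predicted probabilities, so it is identical for every cutoff, while a class-prediction metric like accuracy does respond to the cutoff.

```python
def roc_auc(y_true, y_prob):
    """Probability that a random positive is ranked above a random
    negative (ties count as 0.5); the Mann-Whitney U formulation.
    Note: no cutoff appears anywhere in this computation."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(y_true, y_prob, cutoff):
    """A thresholded metric: the cutoff directly changes the result."""
    preds = [1 if p >= cutoff else 0 for p in y_prob]
    return sum(p == y for p, y in zip(preds, y_true)) / len(y_true)

y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.65, 0.7, 0.2, 0.9]

# ROC AUC never consults a cutoff, so "tuning" the cutoff
# against it would sweep over identical metric values...
print(roc_auc(y_true, y_prob))

# ...whereas a hard-class metric like accuracy shifts with the cutoff.
for cutoff in (0.5, 0.7):
    print(cutoff, accuracy(y_true, y_prob, cutoff))
```

A tuning front end could use exactly this distinction for the check: if every requested metric consumes probabilities rather than hard class predictions, warn that the cutoff parameter cannot affect any of them.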