Visualization: Compare best,median,worse cases for all benchmarked methods #21
Comments
pandas has a weird API:

```python
print(df.nlargest(1, ["DSC"], "first")["subject-id"].values[0])
print(df.nsmallest(1, ["DSC"], "first")["subject-id"].values[0])
print(df[df.DSC == df.median(numeric_only=True)["DSC"]]["subject-id"].values[0])
```
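One pitfall with the median line above: with an even number of rows, the median is interpolated and may not equal any actual `DSC` value, so the equality filter returns nothing. A minimal sketch of a more robust selection, sorting once and indexing into the ranking (the toy data and values here are made up for illustration):

```python
import pandas as pd

# Toy metrics table standing in for the real evaluation results;
# column names follow the snippet above, values are invented.
df = pd.DataFrame({
    "subject-id": ["s1", "s2", "s3", "s4", "s5"],
    "DSC": [0.91, 0.75, 0.83, 0.88, 0.60],
})

# Sorting avoids the exact-equality median lookup, which fails whenever
# the interpolated median falls between two values (even row count).
ranked = df.sort_values("DSC").reset_index(drop=True)

best = ranked["subject-id"].iloc[-1]                   # highest DSC
worst = ranked["subject-id"].iloc[0]                   # lowest DSC
median = ranked["subject-id"].iloc[len(ranked) // 2]   # middle of the ranking

print(best, median, worst)  # → s1 s3 s5
```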
Without resampling, and with metadata copied from the ground truth, both of these are wrong.
Fixed visualization issues: see #28
- `fury` and `xvfb`: try this instead of `vedo` because of aesthetics
- `filter_runs_from_wandb` to find the `run-id` for each anatomy and method under consideration
- `evaluation/metrics.csv`: find the best, median and worst `sample-id`
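The last step could be sketched as a per-method `groupby` over the metrics table. The column names (`method`, `sample-id`, `DSC`) and the toy values here are assumptions; the real `evaluation/metrics.csv` layout may differ:

```python
import pandas as pd

# Hypothetical slice of evaluation/metrics.csv: one row per (method, sample).
df = pd.DataFrame({
    "method": ["A", "A", "A", "B", "B", "B"],
    "sample-id": ["s1", "s2", "s3", "s1", "s2", "s3"],
    "DSC": [0.90, 0.70, 0.80, 0.85, 0.95, 0.60],
})

def pick_cases(group: pd.DataFrame) -> pd.Series:
    """Return the best, median and worst sample-id by DSC for one method."""
    ranked = group.sort_values("DSC").reset_index(drop=True)
    return pd.Series({
        "best": ranked["sample-id"].iloc[-1],
        "median": ranked["sample-id"].iloc[len(ranked) // 2],
        "worst": ranked["sample-id"].iloc[0],
    })

# One row per method, with the three cases to visualize.
cases = df.groupby("method").apply(pick_cases)
print(cases)
```

The resulting frame gives, per method, the three `sample-id`s to feed into the rendering step.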