
Evaluation gaps in ML Practice: table #14

Open · 3 tasks
msrepo opened this issue May 6, 2023 · 1 comment
msrepo commented May 6, 2023

Description

[image: evaluation-gaps table from the referenced paper]

A table of evaluation gaps in biplanar X-ray to 3D segmentation, similar to the table above, will help make the argument for the paper and provide evidence of evaluation gaps.

Tasks

List the specific tasks in the order they need to be done. Include links to the specific lines of code where each task applies.

  • List relevant papers
  • Define rows and columns for the table
  • Fill in the table
msrepo self-assigned this May 6, 2023

msrepo commented May 7, 2023

Adapted from Evaluation Gaps in Machine Learning Practice.
We selected papers dealing with X-ray to 3D segmentation using deep learning, published from ... onwards. We coded each paper along two dimensions.

  1. Metrics: which evaluation metrics were reported?
  2. Analysis: were error bars and/or confidence intervals reported? Was the statistical significance of differences reported? Were examples of model performance provided to complement the measurements with qualitative information? Was error analysis performed?
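The two coding dimensions above could be captured in a small schema before filling in the table. A minimal sketch in Python (all class, field, and function names here are illustrative assumptions, not part of the issue):

```python
from dataclasses import dataclass
from typing import List


# Hypothetical record for coding one paper along the two dimensions
# described above; the exact fields are an assumption for illustration.
@dataclass
class PaperCoding:
    title: str
    metrics: List[str]               # dimension 1: reported evaluation metrics
    error_bars_or_ci: bool = False   # dimension 2: analysis practices
    significance_tested: bool = False
    qualitative_examples: bool = False
    error_analysis: bool = False


def to_table(papers: List[PaperCoding]) -> List[List[str]]:
    """Flatten the codings into a header row plus one row per paper."""
    yn = lambda flag: "yes" if flag else "no"
    header = ["Paper", "Metrics", "Error bars/CI", "Significance",
              "Qualitative examples", "Error analysis"]
    rows = [[p.title,
             ", ".join(p.metrics),
             yn(p.error_bars_or_ci),
             yn(p.significance_tested),
             yn(p.qualitative_examples),
             yn(p.error_analysis)] for p in papers]
    return [header] + rows
```

A schema like this makes the "define rows and columns" and "fill in the table" tasks mechanical: each coded paper becomes one row, and the analysis flags become yes/no columns.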
