# MedicOOD

Repository accompanying the paper "Multi-layer Aggregation as a key to feature-based OOD detection" (arXiv).

This repository demonstrates 8 feature-based out-of-distribution (OOD) detectors on a toy 3D image segmentation task.

In-distribution (ID) data: a 64×64×64 volume containing spheres to segment, generated using TorchIO's `RandomLabelsToImage` transform.

Out-of-distribution (OOD) data: an ID sample is made OOD by adding TorchIO's `RandomMotion` artefact.

- Step 2: Train a simple segmentation DynUnet using `train.py`. The training can be launched with:

  ```
  python MedicOOD/MedicOOD/model/train.py MedicOOD/MedicOOD/model/config.yaml
  ```

  Don't forget to modify the paths of `--output-folder` and `--data-csv` in the YAML file.

- Step 3: Launch evaluation using `test.py`. This script trains an instance of each feature-based OOD detector from the features of the trained DynUnet, then runs inference on the ID and OOD test datasets. By comparing the resulting scores, an AUROC is computed to estimate OOD detection performance. You can use a command such as:

  ```
  python MedicOOD/MedicOOD/model/test.py --run-folder path/to/trained/model
  ```
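The AUROC comparison in Step 3 boils down to ranking ID scores against OOD scores. A minimal pure-NumPy sketch (assuming higher score means more OOD; ties are not rank-averaged for brevity):

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    OOD sample scores higher than a random ID sample."""
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.zeros(len(id_scores)), np.ones(len(ood_scores))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(ood_scores), len(id_scores)
    # Rank sum of the OOD class, minus its minimum possible rank sum.
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

With perfectly separated scores this returns 1.0, matching the best rows of the results table below.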

| OOD detector      | AUROC |
|-------------------|-------|
| Spectrum Single   | 0.979 |
| Spectrum Multi    | 1.000 |
| MD Pool Single    | 0.931 |
| MD Pool Multi     | 0.979 |
| Prototypes Single | 0.823 |
| Prototypes Multi  | 0.850 |
| FRODO             | 1.000 |
| OCSVM             | 0.987 |
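As an illustration of what a feature-based detector does, here is a generic Mahalanobis-distance sketch: fit a Gaussian to pooled ID features, then score new samples by their distance to it. This is a simplified stand-in, not the repository's MD Pool implementation:

```python
import numpy as np

class MahalanobisDetector:
    """Fits a Gaussian to ID pooled features; OOD score = squared Mahalanobis
    distance to the ID mean (higher = more OOD)."""

    def fit(self, feats):
        # feats: (n_samples, n_features) array of pooled ID features.
        self.mean = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False)
        self.prec = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
        return self

    def score(self, feats):
        d = feats - self.mean
        return np.einsum("ij,jk,ik->i", d, self.prec, d)
```

In the repository, such a detector would be fitted per layer (or on multi-layer aggregates), which is the comparison the table above reports.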
- Resources:

  - FRODO: paper link
  - MD Pool: paper link
  - Prototypes: paper link
  - Spectrum: paper link
  - OCSVM: paper link

- Citation: If you use this repository in your research, please cite us! (arXiv)