Update readme and add requirements for installation
Signed-off-by: heyufan1995 <[email protected]>
heyufan1995 committed Aug 18, 2024
1 parent eb63644 commit 9bd6dcd
Showing 3 changed files with 43 additions and 10 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -12,4 +12,4 @@ limitations under the License.
-->

# MONAI VISTA Repository
-This is the repository for VISTA3D and VISTA2D For the older VISTA2.5d code, please checkout the vista2.5d branch
+This is the repository for VISTA3D and VISTA2D. For the older VISTA2.5d code, please check out the `vista2.5d` branch.
34 changes: 25 additions & 9 deletions vista3d/README.md
@@ -12,7 +12,7 @@ limitations under the License.
-->

# MONAI **V**ersatile **I**maging **S**egmen**T**ation and **A**nnotation
-[[`Paper`](https://arxiv.org/pdf/2406.05285)] [[`Demo`](https://build.nvidia.com/nvidia/vista-3d)] [[`Container`](https://docs.nvidia.com/ai-enterprise/nim-medical-imaging/latest/vista-3d.html)] [[`MONAI bundle`](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d)] [[`Checkpoint`](https://drive.google.com/file/d/1eLIxQwnxGsjggxiVjdcAyNvJ5DYtqmdc/view?usp=sharing)]
+[[`Paper`](https://arxiv.org/pdf/2406.05285)] [[`Demo`](https://build.nvidia.com/nvidia/vista-3d)] [[`Checkpoint`](https://drive.google.com/file/d/1eLIxQwnxGsjggxiVjdcAyNvJ5DYtqmdc/view?usp=sharing)]
## Overview

**VISTA3D** is a foundation model trained systematically on 11,454 volumes encompassing 127 types of human anatomical structures and various lesions. It provides accurate out-of-the-box segmentation that matches state-of-the-art supervised models trained on each dataset. The model also achieves state-of-the-art zero-shot interactive segmentation in 3D, a promising step toward a versatile medical image foundation model.
@@ -68,22 +68,38 @@ VISTA3D checkpoint showed improvements when finetuning in few-shot settings. Onc
## Usage

### Installation
-The code requires `monai>=1.3`. Download the [model checkpoint](https://drive.google.com/file/d/1eLIxQwnxGsjggxiVjdcAyNvJ5DYtqmdc/view?usp=sharing) and save it at ./models/model.pt.
To perform inference locally with a debugger GUI, simply install
```
docker pull projectmonai/monai:1.3.2
git clone https://github.com/Project-MONAI/VISTA.git
cd ./VISTA/vista3d
pip install -r requirements.txt
```

Download the [model checkpoint](https://drive.google.com/file/d/1eLIxQwnxGsjggxiVjdcAyNvJ5DYtqmdc/view?usp=sharing) and save it at ./models/model.pt.

### Inference
-We provide two ways to use the model for inference. The label definition can be found at [label_dict](./data/jsons/label_dict.json).
-1. We recommend users to use the optimized and standardized [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) model. The bundle provides a unified API for inference.
-The [VISTA3D NVIDIA Inference Microservices (NIM)](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end.
-2. For quick debugging and model development purposes, we also provide the `infer.py` script and its light-weight front-end `debugger.py`. `python -m scripts.debugger run`. Note we will prioritize [NIM](https://build.nvidia.com/nvidia/vista-3d) and [monai bundle]() developments and those functions will be deprecated in the future.
The [NIM Demo (VISTA3D NVIDIA Inference Microservices)](https://build.nvidia.com/nvidia/vista-3d) does not support medical data upload due to legal concerns.
We provide scripts for inference locally. The automatic segmentation label definition can be found at [label_dict](./data/jsons/label_dict.json).
1. We provide the `infer.py` script and its light-weight front-end `debugger.py`. Users can directly launch a local interface for both automatic and interactive segmentation.
```
python -m scripts.debugger run
```
or call `infer.py` directly to generate automatic segmentation. To segment a liver (label_prompt=1, as defined in [label_dict](./data/jsons/label_dict.json)), run
```
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer --image_file 'example-1.nii.gz' --label_prompt "[1]" --save_mask true
```
To segment everything, run
```
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file 'example-1.nii.gz'
```
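Since `--label_prompt` takes global class indices, it can help to build the prompt list from class names. A minimal sketch (the helper is illustrative, not part of the repo; only liver = 1 is confirmed above, and [label_dict](./data/jsons/label_dict.json) holds the full mapping):

```python
import json

def build_label_prompt(names, label_dict):
    """Translate class names into the integer indices passed via --label_prompt."""
    return [label_dict[name] for name in names]

# In the repo this dictionary would come from the label definition file:
# label_dict = json.load(open("data/jsons/label_dict.json"))
label_dict = {"liver": 1}  # illustrative subset; liver = 1 per the README
prompt = build_label_prompt(["liver"], label_dict)
print(prompt)  # -> [1]
```

The resulting list can then be quoted into the command line, e.g. `--label_prompt "[1]"`.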

The output path and other configs can be changed in `configs/infer.yaml`.

2. The [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) wraps VISTA3D and provides a unified API for inference, and the [NIM Demo](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end. Although the NIM Demo cannot run locally, the bundle itself can; the running environment requires the MONAI Docker image. The MONAI bundle is more suitable for automatic segmentation in batch.
```
docker pull projectmonai/monai:1.3.2
```


### Training
#### Dataset and SuperVoxel Curation
All datasets must include a JSON data list file. We provide the JSON lists for all our training data in `data/jsons`. More details can be found [here](./data/README.md). For datasets used in VISTA3D training, we already include the JSON splits, register their dataset-specific label indices to the global index in [label_mapping](./data/jsons/label_mappings.json), and code their data paths in `./data/datasets.py`. The supported global class index is defined in [label_dict](./data/jsons/label_dict.json). To generate supervoxels, refer to the [instruction](./data/README.md).
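The exact schema is defined in `data/README.md`; as a rough illustration only (field names assumed from MONAI's conventional datalist format, not taken from this repo), such a JSON list typically looks like:

```json
{
  "training": [
    {"image": "imagesTr/case_001.nii.gz", "label": "labelsTr/case_001.nii.gz", "fold": 0}
  ],
  "testing": [
    {"image": "imagesTs/case_101.nii.gz"}
  ]
}
```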
17 changes: 17 additions & 0 deletions vista3d/requirements.txt
@@ -0,0 +1,17 @@
fire==0.6.0
matplotlib==3.8.3
monai==1.3.2
nibabel==5.2.1
numpy==1.24.4
Pillow==10.4.0
PyYAML==6.0.2
scipy==1.14.0
scikit-image==0.24.0
torch==2.0.1
tqdm==4.66.2
tensorboard==2.13.0
einops==0.6.1
ml-collections
timm
pytorch-ignite
tensorboardX
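Note the file mixes pinned (`==`) and unpinned entries. For tooling that needs to inspect such a list (a generic sketch, not something this repo ships), a requirement line splits cleanly on `==`:

```python
def parse_pin(line):
    """Split a requirements.txt line into (name, version); unpinned lines get None."""
    name, _, version = line.strip().partition("==")
    return name, (version or None)

print(parse_pin("monai==1.3.2"))  # -> ('monai', '1.3.2')
print(parse_pin("timm"))          # -> ('timm', None)
```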
