VCP tutorials #227
base: main
Conversation
# %%
# Install VisCy with the optional dependencies for this example
# See the [repository](https://github.com/mehta-lab/VisCy) for more details
# !pip install "viscy[metrics,visual]==0.3.0rc2"
Using a release candidate for now. @mattersoflight I think we can aim for a 'stable' release after testing and merging these tutorials, so the version that is deployed in production can point to it.
cc @edyoshikun
I think the tutorial reads well. The Hugging Face demo does allow inference with VSCyto2D, so the differential value of the VCP tutorial is to demonstrate all 3 key models. @ziw-liu let's add them, since we also describe them in the model card.
# ?FullyConvolutionalMAE

# %%
vs_cyto_2d = FcmaeUNet.load_from_checkpoint(
How about we illustrate inference with vs_cyto_3d and VSNeuromast?
It might be best if we just provide and load YAML config files for each case.
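A minimal sketch of what such a per-model config could look like. All keys, paths, and the module path here are hypothetical illustrations, not the actual VisCy config schema:

```yaml
# Hypothetical per-model inference config (illustrative only;
# see the VisCy repository for the real schema)
model:
  class_path: viscy.translation.engine.FcmaeUNet  # assumed import path
  checkpoint: ./checkpoints/vs_cyto_3d.ckpt       # assumed local path
predict:
  batch_size: 8
  num_workers: 4
```

One file per model (VSCyto2D, VSCyto3D, VSNeuromast) would let the tutorial cells stay identical and only swap the config path.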
# Adjust based on available memory (reduce if seeing OOM errors)
batch_size=8,
# Number of workers for data loading
# Set to 0 for Windows and macOS if running in a notebook,
In the past we had set num_workers=0 given the Windows/macOS issue.
Google Colab always runs on Linux, so I'm using multiprocessing to make it faster.
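The platform-dependent choice discussed above can be sketched as a small helper. The function name and the default of 4 workers are assumptions for illustration, not values from the tutorial:

```python
import platform


def pick_num_workers(linux_default: int = 4) -> int:
    """Choose a DataLoader worker count for notebook environments.

    Multiprocessing workers speed up data loading on Linux (e.g. Google
    Colab), but notebook kernels on Windows and macOS can hang when
    num_workers > 0, so we fall back to single-process loading there.
    """
    return linux_default if platform.system() == "Linux" else 0
```

This keeps the Colab speedup without hard-coding a value that breaks notebooks on other operating systems.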
Tutorial notebooks for the Virtual Cell Platform.