How to register multiple .ome.tif at the same time & issue registering .ome.tif #12
Hi @Boehmin thanks for the kind words :) Palom might be able to read h5 files depending on the h5 file structure. If you have a small dataset to share, I'll find time to take a look. For the .ome.tif dataset that gave you the error, it looks to me like the file might have some issues. Would you be able to share that? To register multiple files, our recommended approach is to register everything (cycles 2, 3, 4, and so on) to the first cycle. Please check out the example script and let me know if you have any questions.
Hi @Yu-AnChen Thanks for pointing me to the example script. I'll try this! I have a smaller fused example h5 file (only one timepoint) and the .ome.tif dataset here. The .h5 consists of z-stacks, so I do not know if that would be an issue for palom? If not, that would be even better. Channel 4 is DAPI and channel 3 is a fiducial channel, so in theory I could register over the fiducial channel as well if that helps? Again, thanks for looking into this.
Thanks for sharing the images, I was able to reproduce the error when using your ome-tiff files as input. Will look into a solution there. On the h5 dataset (it seems to be generated with BigDataViewer), I was able to convert it into a pyramidal ome-tiff using palom. Note that `h5py` needs to be installed. Since there are Z-stacks, I did a maximum intensity projection along the Z axis in the following script (the output file size is ~50 MB; it can be opened in QuPath):

```python
import palom
import h5py
import dask.array as da
import pathlib

img_path = pathlib.Path('h5_small/small_fuse_fused.h5')
h5_img = h5py.File(img_path)

# X-Y pixel size at full resolution; unit is µm/pixel
PIXEL_SIZE = 1
CHANNEL_NAMES = ['marker-1', 'marker-2', 'marker-fiducial', 'DNA']

mosaics = [
    # max projection on the Z axis
    da.from_array(h5_img[f"t00000/{channel}/0/cells"]).max(axis=0)
    for channel in h5_img['t00000']
]

palom.pyramid.write_pyramid(
    mosaics=mosaics,
    output_path=img_path.parent / f"{img_path.stem}-pyramid.ome.tif",
    pixel_size=PIXEL_SIZE,
    channel_names=CHANNEL_NAMES,
    downscale_factor=2,
    compression='zlib',
    tile_size=1024,
    save_RAM=True
)
```

With the output ome-tiffs from the above script, I think you should be able to run palom without running into the same error.
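The Z-projection step above can be illustrated with plain NumPy on a toy stack (the array here is made up for illustration; palom is not needed for this part):

```python
import numpy as np

# toy Z-stack: 3 Z-slices of a 2x2 single-channel image
stack = np.array([
    [[1, 2], [3, 4]],
    [[5, 0], [1, 8]],
    [[2, 7], [6, 3]],
])

# maximum intensity projection collapses the Z axis (axis=0),
# keeping the brightest value at each X-Y position
mip = stack.max(axis=0)
print(mip.tolist())  # [[5, 7], [6, 8]]
```

The same `.max(axis=0)` call works on a `dask.array`, which is why the script above can project lazily instead of loading the whole stack into memory.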
As for the issue of the .ome.tif files that gave the error, …
Sorry to hijack this: is palom working on 2D files only, or did you do a projection to speed it up?
@Yu-AnChen, I tried both solutions and they both worked! Thank you.
The current image file reader and writer only work with images that have 2 or 3 dimensions. In our own application, it's …
Hello Yu-AnChen, Sorry to hijack the thread and thanks for this great tool. I discovered your tool after hitting a wall with Ashlar, as my data is already stitched. I used your sample script in this thread and just passed in the location of 3 of my images (CyCIF runs, each DAPI + 3 other channels) as well as an output destination. My files were originally .VSI images, which I converted to OME-TIFF files using QuPath. I get the following error:
Hi @moataz-youssef, no worries and thanks for giving it a try. Apologies, I should have included this detail in my recent updates: VSI files are now supported through … From the above message, I'm guessing that when converting to ome-tiff via QuPath, the tile size wasn't specified and the default is 256x256 pixels. I found that a small tile size breaks how palom currently filters out tiles that failed the alignment. If you start from the VSI files, the tile size will be set to 1024x1024 pixels internally, and the error you are running into will likely be gone. It would be helpful if you could share a thumbnail of your scan. And here's a jupyter notebook that might give you some ideas.
Thanks a lot @Yu-AnChen. I actually specified in QuPath that the tile size is 1024 pixels, but I will double-check what the final tile size in the converted OME-TIFF files was. I can also share the scans/converted files with you privately, if you have time for this. Thanks a lot for this great tool; I will update you on my progress.
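One way to double-check the tile size of a converted file is to read it back with `tifffile`. A minimal sketch, where a toy tiled TIFF is first written as a stand-in for a QuPath-converted file (the file name is hypothetical):

```python
import numpy as np
import tifffile

# write a small tiled TIFF (stand-in for a converted OME-TIFF)
tifffile.imwrite("toy-tiled.tif", np.zeros((512, 512), dtype="uint8"), tile=(256, 256))

# read back the tile dimensions recorded in the first page
with tifffile.TiffFile("toy-tiled.tif") as tif:
    page = tif.pages[0]
    print(page.tilewidth, page.tilelength)  # 256 256
```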
@Yu-AnChen your solution worked! Using the VSI files directly as input did the trick. Thank you.
And just to add, the jupyter notebook also worked great and gave me a composite OME-TIFF with the DAPI channels from the 3 VSI images. How can I add the other channels through this method as well? I am really impressed with the results; they are comparable to, if not better than, manual adjustment and elastic deformation using Warpy in QuPath and Fiji.
Thanks, and glad to hear that the alignment seemed to work well! Would you be able to share the image or images from some test samples? I suspect that the …
Re: the jupyter notebook that only gives you the first channel of all the "cycles": try removing one `[0]` when indexing the pyramid, so that all channels rather than only the first one are passed along.
Thanks for this suggestion. Removing one `[0]` from `reader.pyramid` gave me 7 channels, and I removed one `[0]` from `r1.pyramid` in the same block. The output of `r1.pyramid` is:
So I guess somehow my images are seen as 3-channel images.
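The effect of dropping one `[0]` can be seen with a toy pyramid that mimics the list-of-arrays structure of `reader.pyramid` (the shapes here are invented):

```python
import numpy as np

# a 2-level pyramid of a 4-channel image: a list of (channel, Y, X) arrays
pyramid = [np.zeros((4, 8, 8)), np.zeros((4, 4, 4))]

print(pyramid[0].shape)     # (4, 8, 8) -> full resolution, all channels
print(pyramid[0][0].shape)  # (8, 8)    -> full resolution, first channel only
```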
The registration of the 9 channels is just fantastic. So I have the 3 DAPIs + 6 other channels. Unfortunately, I can't distinguish which is which. Is there a way to add the channel names to the final mosaic OME-TIFF, either by supplying a csv file with the channel names and order or by querying the file name? That would be just awesome! I will send you my images (6 in total from 6 staining cycles) as well as the jupyter notebook with the output of all blocks to the email linked on your profile.
By the way, installing napari in the same palom environment to view the final file breaks the palom packages. In the environment I only had palom (installed according to your instructions on this GitHub) and jupyter-lab. Once I install napari, I get the following errors:
Installing palom in a fresh environment solves the problem.
I see, it's unfortunate that only the first 3 channels are detected. Let me look into it once I get your files. For adding channel names to the output ome-tiff, you'll need to specify them in the `channel_names` argument of `palom.pyramid.write_pyramid`:

```python
palom.pyramid.write_pyramid(
    mosaics,
    "registered-image.ome.tif",
    r1.pixel_size,
    channel_names=[
        ["DAPI", "Marker A (cycle 1)", "Marker B (cycle 1)"],
        ["DAPI", "Marker C (cycle 2)", "Marker D (cycle 2)"],
        ["DAPI", "Marker E (cycle 3)", "Marker F (cycle 3)"],
    ],
    downscale_factor=2,
    compression="lzw",
    tile_size=1024,
    save_RAM=True,
)
```

If you open the output image in QuPath, the channel names should show up in the interface. For napari, we use something like the following:

```python
import napari
import ome_types
import palom

ome = ome_types.from_tiff("registered-image.ome.tif")
channel_names = [cc.name for cc in ome.images[0].pixels.channels]

v = napari.Viewer()
reader = palom.reader.OmePyramidReader("registered-image.ome.tif")
v.add_image(reader.pyramid, channel_axis=0, name=channel_names)
```
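Since `write_pyramid` takes channel names per mosaic (a list of lists) while napari's `name=` argument expects one name per channel, a flattening step like this sketch may help (the marker names are placeholders):

```python
# per-mosaic channel names, as passed to write_pyramid
channel_names = [
    ["DAPI", "Marker A (cycle 1)", "Marker B (cycle 1)"],
    ["DAPI", "Marker C (cycle 2)", "Marker D (cycle 2)"],
    ["DAPI", "Marker E (cycle 3)", "Marker F (cycle 3)"],
]

# one flat name per channel, matching the channel order of the merged image
flat_names = [name for group in channel_names for name in group]
print(len(flat_names))  # 9
```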
The error seems to be due to the scikit-image version; you could try re-installing palom (via …).
Thanks. I am uploading the images now. These datasets are trial ones; for the actual experiments, we have a choice between saving in OME-TIFF or in VSI, but it will take some time till we have them running. Is there an option to skip the 2nd and 3rd DAPI channels when writing the mosaic OME-TIFF? This would save time and disk space once the workflow is standardized. Another question: the output mosaic OME-TIFF file from the jupyter notebook has the same pixel size as the original images, although the …
After running this, I get a different error:
Woof, I see, will try to figure out the right sequence to set it up; it's annoying to have separate envs.
Yes, you can slice out the channels you want and append them to the `mosaics` list:

```python
mosaics = []
aligned_img = palom.align.block_affine_transformed_moving_img(
    aligner.ref_img, reader.pyramid[0], aligner.block_affine_matrices_da
)
# keep only channels 1 and 2 (0-based) of the aligned image
mosaics.append(aligned_img[1:3])
```
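What the `[1:3]` slice does can be shown on a toy (channel, Y, X) array (the shape is invented; palom's aligned image is a dask array, but slicing behaves the same way):

```python
import numpy as np

# hypothetical aligned image: 4 channels of 2x2 pixels, DAPI at index 0
aligned_img = np.arange(16).reshape(4, 2, 2)

# [1:3] keeps channels 1 and 2 (0-based), dropping DAPI and channel 3
subset = aligned_img[1:3]
print(subset.shape)  # (2, 2, 2)
```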
Thanks. I think it is the same in QuPath as well (4). Another question, this time regarding brightfield images: the examples listed in the readme of this GitHub under SVS are either pure brightfield images (Example 1) or a brightfield and an IF image (Example 2). The question is, what is the output? How can you make a composite OME-TIFF of that? As far as I know, other WSI alignment tools, e.g. VALIS or CODA, can't do that. Can you share an example?
Here's a brief doc for aligning a "same section" H&E image to an Orion (IF) image (example output here). Note that the default channels that work for me might not work for your images; the key settings are …

As for the combined view, QuPath experts would probably know how to load multiple images and overlay them; if you figure out a way to do so, please share it with us here :) I personally use napari, as you can add the IF channels and the R/G/B channels from the BF image to the same viewer and toggle them on/off, set colormaps, etc. If you really want to generate one single file by merging the IF and BF images, you can use the `palom-pyramid merge` command:

```
$ palom-pyramid merge "[r'C:\Users\me\Desktop\palom-align-he-test\merge-test-1.ome.tif', r'C:\Users\me\Desktop\palom-align-he-test\merge-test-2.ome.tif']"
2024-12-17 16:32:11.742 | INFO | palom.reader:pixel_size:147 - Detected pixel size: 0.3250 µm
2024-12-17 16:32:11.750 | INFO | palom.cli.pyramid_tools:merge_channels:54 -
Processing:
    merge-test-1.ome.tif
    merge-test-2.ome.tif
2024-12-17 16:32:11.756 | INFO | palom.pyramid:write_pyramid:166 - Writing to C:\Users\me\Desktop\palom-align-he-test\merged-merge-test-1-zlib.ome.tif
```
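The merge command receives its file list as a single Python-literal string. The way such a string turns into a list of paths can be sketched with the standard library's `ast.literal_eval` (palom's CLI uses `fire`, which parses arguments in a similar spirit; this is only an illustration, and the paths are examples):

```python
import ast

# the quoted argument roughly as the shell delivers it
arg = "['merge-test-1.ome.tif', 'merge-test-2.ome.tif']"

paths = ast.literal_eval(arg)
print(paths)  # ['merge-test-1.ome.tif', 'merge-test-2.ome.tif']
```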
I decided to do some further troubleshooting:
So I checked the first file with QuPath, and the OME-TIFF had all series in it. Using the NGFF-Converter, I did a second conversion, but this time I extracted only the important series (Series 2, counting from 0); the rest were ignored. I ran the output this time with the jupyter notebook and voila! The 4 channels per image were detected:
And running the cells further resulted in getting all 12 channels :-D
Naming the channels worked in the jupyter notebook with:
This is an awesome tool! Thanks a lot for the development and your support. Looking forward to what you will find out from the VSI files. I will try it out now by scripting instead of the jupyter notebook. |
A small update, as I communicated this not very clearly. Running the script gives me the output of just the first image (3 channels when using VSI and all 4 when using properly converted OME-TIFFs). The jupyter notebook is the one that first gave me the 3 DAPI channels from all 3 images, then all channels from the moving images plus DAPI from the reference image, and finally all detected channels in one OME-TIFF (9 in total when using VSI, out of a maximum of 12). Converting the VSI images to OME-TIFF and running the notebook gave me all 12 channels, with labels, according to your suggestion. So the question is, what exactly does the jupyter notebook do differently from the python script?
I did align an IF image and an H&E image together, but it was a virtual 32-bit image in QuPath using Warpy. However, I did not save it as an OME-TIFF. I will try your suggestions and update.
Last question for the day :-) Any advice on how to get only certain channels from one of the moving images? For the rest, I would like to take all channels. |
Thanks for all the updates and testing! I was able to fix the 3-channels-instead-of-4 bug (I had hardcoded that, thinking all the VSI files are RGB bright-field images). Try v2024.12.1. And here's the script with some comments; hopefully it's helpful to you. I noticed that there are 2 scenes (ROIs) in the scan; not sure if you can configure that during the imaging. We should discuss your bright-field-to-IF registration in a separate issue, so we don't spam others' inboxes in this one.
You can slice the channels you want before appending them to `mosaics`, as in the earlier snippet.
Thanks a lot. It works fantastically out of the box.
The whole thing took around 3 hours (registration + writing the image to disk). It didn't stress out my system at all. Is there room for parallelization? I do have access to a HPC as well and can gladly test. I will start a separate issue with brightfield images, once I have some nice scans to play with. Thanks again. |
I was able to install both palom and napari in the same conda environment (using micromamba; if you have a recent version of miniconda, you probably will not need mamba/micromamba (info)). I used these commands to set up the env (replace `micromamba` with your conda command if needed):

```
micromamba create -y -n test-palom python=3.10 "scikit-image<0.20" scikit-learn "zarr<2.15" tifffile=2024.7.2 imagecodecs matplotlib tqdm scipy dask numpy "loguru=0.5.3" ome-types pydantic pint "yamale<5" fire termcolor openslide napari pyqt -c conda-forge -c sdvillal
micromamba activate test-palom
python -m pip install palom
# for viewing the ome-tiff output in napari
# drag-and-drop file in napari window and select "WSI Reader"
pip install git+https://github.com/yu-anchen/napari-wsi-reader.git
```

And here's my conda env lockfile (info) on my Windows machine.
Hi @Yu-AnChen !
I just tried palom on two cycles of FISH and the registration so far seems really really impressive (in particular since one of the cycles is half out of focus). First of all, thank you so much for making this tool available!
I was wondering how it would be possible to register multiple .ome.tif images simultaneously. Additionally, the registration worked fine with one of my datasets (converted from .h5 to .ome.tif, which was a big pain already); however, I have issues with a smaller .ome.tif dataset. I get the error message below:
Thanks ever so much for your help!