
Preprocessor - How does it work? #459

Open
whishei opened this issue Aug 13, 2024 · 1 comment

whishei commented Aug 13, 2024

I have been exploring the source code of different preprocessors for ViT models, and I am struggling to understand the exact preprocessing steps an image goes through in the ImageProcessor / CLIPProcessor. How is the image resized to 224x224? Is it similar to skimage.transform.resize(), which performs either up-sampling or down-sampling based on the image size? Thanks in advance.

Looking into image_processing_clip.py on Hugging Face, I found the following explanation:
"Resize an image. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio."

99991 commented Jan 23, 2025

CLIP was trained on images of size 224x224 and therefore works best at that size. For best results, your images should also be converted to this size.

https://github.com/openai/CLIP/blob/main/clip/clip.py#L79-L86

First, the image is resized such that the size of the smaller side becomes 224. For example, if the image is of size (448, 2240), the smaller side is 448, so the image has to be resized by a factor of 2 down to (224, 1120). The next step is to crop the center from this image, so the top and bottom are discarded, leaving only a (224, 224) image.
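The arithmetic for the (448, 2240) example can be worked through in a few lines (sizes are (width, height); the crop box follows Pillow's (left, top, right, bottom) convention):

```python
# Resize so the smaller side becomes 224, then center-crop to 224x224.
width, height = 448, 2240
target = 224

scale = target / min(width, height)                        # 224 / 448 = 0.5
new_width, new_height = round(width * scale), round(height * scale)
assert (new_width, new_height) == (224, 1120)

# Center crop: keep a 224x224 window in the middle,
# discarding the top and bottom of the resized image.
left = (new_width - target) // 2                           # 0
top = (new_height - target) // 2                           # 448
crop_box = (left, top, left + target, top + target)
assert crop_box == (0, 448, 224, 672)
```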

Next, the range of the pixels is converted from [0, 255] (data type uint8) down to [0.0, 1.0] (float32).

Lastly, the image is normalized, which is cargo cult from back in the days when people hadn't figured out weight initialization yet.
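These last two steps (rescaling and normalization) can be sketched in NumPy. The per-channel mean and standard deviation below are the values used in the OpenAI CLIP repository; the random input image is just a stand-in:

```python
import numpy as np

# Per-channel mean/std from OpenAI's CLIP preprocessing.
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
image = image.astype(np.float32) / 255.0   # [0, 255] uint8 -> [0.0, 1.0] float32
image = (image - CLIP_MEAN) / CLIP_STD     # normalize each channel
```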

Here is a preprocessing implementation without TorchVision, using only the Pillow library and NumPy, which might be a bit easier to understand.
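A minimal sketch of such an implementation, combining the steps above (resize, center crop, rescale, normalize), might look like this; the function name and the final CHW transpose are my choices, not taken from the original comment:

```python
import numpy as np
from PIL import Image

# Per-channel mean/std from OpenAI's CLIP preprocessing.
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(image: Image.Image, size: int = 224) -> np.ndarray:
    """Resize, center-crop, rescale, and normalize an image for CLIP."""
    image = image.convert("RGB")
    # Resize so the smaller side becomes `size`, preserving the aspect ratio.
    scale = size / min(image.size)
    new_size = (round(image.width * scale), round(image.height * scale))
    image = image.resize(new_size, Image.BICUBIC)
    # Center-crop to size x size.
    left = (image.width - size) // 2
    top = (image.height - size) // 2
    image = image.crop((left, top, left + size, top + size))
    # uint8 [0, 255] -> float32 [0.0, 1.0], then normalize per channel.
    array = np.asarray(image, dtype=np.float32) / 255.0
    array = (array - CLIP_MEAN) / CLIP_STD
    # Channels-first (CHW) layout, as PyTorch models expect.
    return array.transpose(2, 0, 1)
```

Calling `preprocess` on a (448, 2240) image yields a float32 array of shape (3, 224, 224).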
