I have been exploring the source code of different preprocessors for ViT models, and I am struggling to understand the exact preprocessing steps an image goes through in the ImageProcessor / CLIPProcessor. How is the image resized to 224x224? Is it similar to skimage.transform.resize(), which performs either up-sampling or down-sampling depending on the image size? Thanks in advance.
Looking into image_processing_clip.py in the huggingface/transformers repository, I found the following explanation:
"Resize an image. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio."
CLIP was trained on images of size 224x224 and therefore works best at this size. For best results, your images should be converted to this size as well. The preprocessing consists of four steps: resize, center crop, rescale, and normalize.
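You can check these settings directly on the processor object. A minimal sketch, assuming the openai/clip-vit-base-patch32 checkpoint (the printed values reflect recent versions of transformers):

```python
from transformers import CLIPImageProcessor

# Load the image processor configuration shipped with the CLIP checkpoint.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

print(processor.size)        # {'shortest_edge': 224}
print(processor.crop_size)   # {'height': 224, 'width': 224}
print(processor.image_mean)  # per-channel mean used for normalization
print(processor.image_std)   # per-channel std used for normalization
```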
First, the image is resized such that the size of the smaller side becomes 224, preserving the aspect ratio. For example, if the image has size (448, 2240) in (width, height) order, the smaller side is 448, so the image is scaled down by a factor of 2 to (224, 1120). The next step is to crop a 224x224 square from the center of this image, so the top and bottom are discarded, leaving only a (224, 224) image.
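A rough sketch of those two steps using only Pillow (the transformers implementation defaults to bicubic resampling; the exact rounding of the resized dimensions may differ by a pixel):

```python
from PIL import Image

def resize_shortest_edge(image: Image.Image, target: int = 224) -> Image.Image:
    """Resize so the shorter side equals `target`, keeping the aspect ratio."""
    w, h = image.size
    scale = target / min(w, h)
    return image.resize((round(w * scale), round(h * scale)), Image.BICUBIC)

def center_crop(image: Image.Image, size: int = 224) -> Image.Image:
    """Crop a size x size square from the center of the image."""
    w, h = image.size
    left, top = (w - size) // 2, (h - size) // 2
    return image.crop((left, top, left + size, top + size))

image = Image.new("RGB", (448, 2240))  # the (width, height) example from above
print(center_crop(resize_shortest_edge(image)).size)  # (224, 224)
```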
Next, the pixel values are rescaled from the range [0, 255] (data type uint8) to [0.0, 1.0] (float32) by dividing by 255.
Lastly, the image is normalized per channel using CLIP's dataset mean and standard deviation, which is cargo cult from back in the days when people hadn't figured out weight initialization yet.
Here is a preprocessing implementation without TorchVision, using only the Pillow library and NumPy, which might be a bit easier to understand.
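A minimal sketch along those lines; the normalization constants are CLIP's published per-channel values (OPENAI_CLIP_MEAN / OPENAI_CLIP_STD in transformers), and bicubic resampling is assumed:

```python
import numpy as np
from PIL import Image

# CLIP's published per-channel normalization constants.
MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(image: Image.Image, size: int = 224) -> np.ndarray:
    """Replicate CLIP preprocessing: resize, center crop, rescale, normalize."""
    image = image.convert("RGB")

    # 1. Resize so the shorter side equals `size`, preserving the aspect ratio.
    w, h = image.size
    scale = size / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.BICUBIC)

    # 2. Center-crop a size x size square.
    w, h = image.size
    left, top = (w - size) // 2, (h - size) // 2
    image = image.crop((left, top, left + size, top + size))

    # 3. Rescale uint8 [0, 255] to float32 [0.0, 1.0].
    pixels = np.asarray(image, dtype=np.float32) / 255.0

    # 4. Normalize per channel, then move channels first: (3, size, size).
    pixels = (pixels - MEAN) / STD
    return pixels.transpose(2, 0, 1)
```

Note that you still need to stack the results into a batch (and convert to a torch tensor) before feeding them to the model, and rounding in the resize step may cause tiny numerical differences from the transformers implementation.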