-
Yes. Note that most CLIP models require all config files to be available. The model you linked is no longer maintained and is missing some of those.

```
infinity_emb v2 --model-id zer0int/CLIP-GmP-ViT-L-14 --model-id jinaai/jina-clip-v1 --model-id patrickjohncyh/fashion-clip
```

If you want to run another model with some config files missing (like the one above), you are on your own. Good luck adding them via a PR to the model on Hugging Face. :)
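Once the server is up, each model can be queried over infinity's OpenAI-compatible embeddings route. A minimal sketch, assuming the default port 7997 and the `/embeddings` path (adjust the base URL to however you started the server):

```python
# Minimal sketch: query a running infinity_emb server for text embeddings.
# Assumes the default port 7997 and the OpenAI-compatible /embeddings route;
# "model" must match one of the --model-id values passed at startup.
import requests

resp = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "jinaai/jina-clip-v1",
        "input": ["a photo of a red dress"],
    },
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # embedding dimension of the chosen model
```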
-
I have a CLIP model that I load and use like this:
```python
import torch
import torch_tensorrt
from PIL import Image
import open_clip
import numpy as np

# Load the CLIP model and tokenizer
model, preprocess, _ = open_clip.create_model_and_transforms(
    "hf-hub:timm/ViT-L-16-SigLIP-256"
)
checkpoint = torch.load("checkpoint.pt", map_location="cuda")
model.load_state_dict(checkpoint, strict=False)
model = model.eval().to("cuda")

# Test image for the input
img = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))

# Preprocess the image for model input
input_tensor = preprocess(img).unsqueeze(0).to("cuda")

# Ensure input_tensor is valid
print(f"Input Tensor Shape: {input_tensor.shape}")    ## torch.Size([1, 3, 256, 256])
print(f"Input Tensor Device: {input_tensor.device}")  ## cuda:0

# Verify model outputs before compiling
with torch.no_grad():
    output = model(input_tensor)
    embeddings = output[0]  # Extract the first element (embeddings tensor)
    print(f"Embeddings Shape: {embeddings.shape}")     ## torch.Size([1, 1024])
```
Is there any way I can deploy this model using infinity?