Processing in image encoding for Florence 2 #1170
Comments
This might be due to image-reading differences in JavaScript vs. Python. Could you try passing the exact same data (e.g., an all-zero tensor) to see if the difference shows up there too? Also, remember to load the full-precision model in Transformers.js, as this could be another source of differences.
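To make that suggestion concrete, here is a minimal sketch of building a deterministic all-zero input on the Python side, so that any remaining feature difference cannot come from image decoding. The shape `(1, 3, 768, 768)` is an assumption about Florence-2's default processor size, not taken from this thread:

```python
import numpy as np

# All-zero pixel-value tensor with shape (batch, channels, height, width).
# 768x768 is assumed here; check the processor config for the actual size.
pixel_values = np.zeros((1, 3, 768, 768), dtype=np.float32)

# Feed the byte-identical tensor to both the Python and the JS pipeline;
# any residual difference then comes from the model graphs themselves.
print(pixel_values.shape, float(pixel_values.sum()))
```

The same zero tensor can be reproduced exactly in JavaScript, which removes PNG/JPEG decoding from the comparison entirely.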
I've modified the minimal example by creating a blank image as follows:
However, the results are still different:
To clear up any misunderstanding, the model I used was converted in full precision. Unfortunately, using the model in
Hey, is the issue resolved? Can you show how to run inference using the ONNX model?
How do you decode the embeddings?
The issue has been resolved, and the conversion script has been updated. However, I'm not sure whether the models on the Hub have been updated, as I used the script directly. Unfortunately, I cannot share the inference code, as I worked on this at my current company. You will have to do some trial and error, but I can confirm that you can get it working in Python. For decoding, you will have to replicate some kind of decoding strategy, such as greedy decoding or beam search.
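As an illustration of the simplest such strategy, here is a minimal greedy-decoding loop over raw decoder logits. `decoder_step` is a hypothetical stand-in for a call into the exported ONNX decoder; the toy version below just makes the loop runnable:

```python
import numpy as np

def greedy_decode(decoder_step, bos_id, eos_id, max_len=20):
    """Repeatedly pick the highest-probability next token until EOS or max_len."""
    tokens = [bos_id]
    for _ in range(max_len):
        logits = decoder_step(tokens)   # shape: (vocab_size,)
        next_id = int(np.argmax(logits))
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy decoder: always prefers token 5, then EOS (id 2) once 5 appeared twice.
def toy_step(tokens):
    logits = np.zeros(10)
    logits[2 if tokens.count(5) >= 2 else 5] = 1.0
    return logits

print(greedy_decode(toy_step, bos_id=0, eos_id=2))  # -> [0, 5, 5, 2]
```

Beam search follows the same shape but keeps the top-k partial sequences per step instead of a single argmax.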
@ir2718 Thank you for the update. Can these converted models be used for the object detection task?
Since this issue has been resolved, I'll close the issue, but feel free to continue discussion here.
Yes, the models are capable of this - you just need to specify the correct prompts. See the original model card for more details.
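For orientation, Florence-2 selects the task through a special prompt token in the text input. The sketch below lists a few task tags as I recall them from the model card; verify them against the card before relying on them:

```python
# Task tags assumed from the Florence-2 model card (verify before use).
TASK_PROMPTS = {
    "object_detection": "<OD>",
    "caption": "<CAPTION>",
    "detailed_caption": "<DETAILED_CAPTION>",
    "ocr": "<OCR>",
}

def build_prompt(task: str) -> str:
    """Return the prompt string that steers Florence-2 toward `task`."""
    return TASK_PROMPTS[task]

print(build_prompt("object_detection"))  # -> <OD>
```

The prompt is passed as the text input alongside the image; the model's generated output then has to be post-processed per task (e.g., parsing box coordinates for `<OD>`).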
Question
Hi,
While looking at the generation code for the Florence 2 model, I noticed something odd. The original inference code uses the _encode_image method to create image features. However, looking at the encode_image used in transformers.js, I noticed that the postprocessing after the model forward pass is missing. Here's a minimal reproducible example:

The feature differences are pretty big:
Am I missing something here or is this a potential bug?
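For anyone reproducing this, a quick way to quantify such feature disagreement, assuming both encoders' outputs are available as same-shape float arrays (the function name is my own, not from the thread):

```python
import numpy as np

def feature_diff(a, b):
    """Return (max absolute difference, cosine similarity) between two feature arrays."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    max_abs = float(np.max(np.abs(a - b)))
    cos = float(np.dot(a.ravel(), b.ravel()) /
                (np.linalg.norm(a) * np.linalg.norm(b)))
    return max_abs, cos

max_abs, cos = feature_diff([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
print(max_abs, cos)  # -> 0.0 1.0
```

Identical features give a max absolute difference near 0 and cosine similarity near 1; a genuinely missing postprocessing step typically shows up as a large max difference with noticeably degraded cosine similarity.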