This repo is for fine-tuning CLIP in the command line. It does not add custom no…
### 👇 Scroll all the way down for step-by-step instructions with ComfyUI! 👇
### ‼️ Don't want to fine-tune? You can download the model here: [https://huggingface.co/zer0int](https://huggingface.co/zer0int)
-------
## Changes 09-MAR-2025:
⚠️ A new way to fine-tune CLIP: 🌟 [github.com/zer0int/CLIP-fine-tune-registers-gated](https://github.com/zer0int/CLIP-fine-tune-registers-gated) 🌟
- But: Is it for you? 🤔
- You want a Text Encoder for T2I / T2V / Gen-AI, or you want the best zero-shot accuracy: No / not necessarily. ❌
- You want a CLIP that is compatible with everything (no architecture change): No / stick with this repo. ❌
- You are frustrated by the modality gap and want a retrieval CLIP? Absolutely yes! [CLICK ME](https://github.com/zer0int/CLIP-fine-tune-registers-gated) ✅
- In a nutshell: New CLIP has +20M params, register tokens, Gated MLP / Fusion.
- Modality Gap (OpenAI pre-trained): 0.8276 --> (NEW CLIP): 0.4740 👈🤯
- Attention heatmaps are finally meaningful, not "burnt-in artifacts".
- Check out the models on my HuggingFace: [huggingface.co/zer0int/CLIP-Registers-Gated_MLP-ViT-L-14](https://huggingface.co/zer0int/CLIP-Registers-Gated_MLP-ViT-L-14)
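For reference, a modality gap like the numbers above is commonly measured as the Euclidean distance between the centroids of L2-normalized image and text embeddings. This is a minimal NumPy sketch under that assumed definition; the exact metric used for the figures above may differ:

```python
import numpy as np

def modality_gap(img_emb: np.ndarray, txt_emb: np.ndarray) -> float:
    """Euclidean distance between the centroids of the two modalities,
    after L2-normalizing each embedding (one common definition)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return float(np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0)))

# Toy demo with random vectors; in real use, pass CLIP image/text features.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 768))
txt = rng.normal(size=(64, 768)) + 2.0  # offset cluster -> visible gap
print(f"gap: {modality_gap(img, txt):.4f}")
```

Because each embedding is unit-normalized first, the gap is bounded in [0, 2]: 0 means the two modality clusters share a centroid, 2 means they sit at opposite poles of the hypersphere.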
-----
## Changes 11/NOV/2024:
- Added a new model saver: Saves either as GmP + full model object (default, legacy behavior)
- Optional conversion to .weight (converting back with extra script no longer needed)
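For context, GmP (Geometric Parametrization) stores each weight row as a magnitude plus a direction, so converting back to a plain `.weight` just recomposes the two. The sketch below is a hedged illustration of that idea — the names `r`/`theta` and the exact recomposition rule are assumptions, not this repo's actual conversion code:

```python
import numpy as np

def gmp_to_weight(r: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Recompose a standard weight matrix from GmP-style components:
    per-row magnitude r times the unit direction of theta.
    (Assumed rule -- check the repo's conversion script for the real one.)"""
    return r * theta / np.linalg.norm(theta, axis=1, keepdims=True)

theta = np.random.default_rng(1).normal(size=(3, 4))  # row directions
r = np.array([[0.5], [1.0], [2.0]])                   # per-row magnitudes
weight = gmp_to_weight(r, theta)  # shape (3, 4); row norms equal r
```

The two saver modes then roughly correspond to `torch.save(model, path)` (pickling the full model object, the legacy default) versus saving a state_dict whose entries are recomposed `.weight` tensors that standard loaders accept directly.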