QLIP

[📂 GitHub] [📃 QLIP Tech Report] [🔗 Project Page] [🤗 HF Collections]

QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation
Yue Zhao1,*, Fuzhao Xue2,†, Scott Reed2, Linxi Fan2, Yuke Zhu2, Jan Kautz2, Zhiding Yu2, Philipp Krähenbühl1, De-An Huang2
1UT Austin, 2NVIDIA
*The work was done during an internship at NVIDIA Research.
†Now at Google DeepMind.
arxiv | bibtex

Introduction

We introduce Quantized Language-Image Pretraining (QLIP), a visual tokenization method that combines state-of-the-art reconstruction quality with state-of-the-art zero-shot image understanding. QLIP trains a binary-spherical-quantization-based autoencoder with reconstruction and language-image alignment objectives. We are the first to show that the two objectives do not need to be at odds. We balance the two loss terms dynamically during training and show that a two-stage training pipeline effectively mixes the large-batch requirements of image-language pre-training with the memory bottleneck imposed by the reconstruction objective. We validate the effectiveness of QLIP for multimodal understanding and text-conditioned image generation with a single model. Specifically, QLIP serves as a drop-in replacement for the visual encoder for LLaVA and the image tokenizer for LlamaGen with comparable or even better performance. Finally, we demonstrate that QLIP enables a unified mixed-modality auto-regressive model for understanding and generation.
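
The loss-balancing idea can be illustrated with a minimal sketch. The inverse-magnitude weighting heuristic below, and all names in it, are assumptions for illustration only, not the paper's exact scheme; see the tech report for the actual objective.

```python
# A minimal sketch of balancing a reconstruction loss and a language-image
# alignment loss during training. The inverse-magnitude weighting is an
# illustrative heuristic, not necessarily the exact scheme used by QLIP.
import torch


class DynamicLossBalancer:
    """Keeps an EMA of each loss magnitude and scales terms to a similar range."""

    def __init__(self, momentum: float = 0.99, eps: float = 1e-8):
        self.momentum = momentum
        self.eps = eps
        self.ema = {}

    def __call__(self, losses: dict) -> torch.Tensor:
        total = None
        for name, loss in losses.items():
            mag = loss.detach().abs().item()
            self.ema[name] = self.momentum * self.ema.get(name, mag) + (1.0 - self.momentum) * mag
            term = loss / (self.ema[name] + self.eps)  # keep both objectives comparable
            total = term if total is None else total + term
        return total


# Example: combine a pixel-reconstruction loss with a CLIP-style alignment loss.
# `recon_loss` and `align_loss` stand in for tensors computed elsewhere.
balancer = DynamicLossBalancer()
recon_loss = torch.tensor(0.5, requires_grad=True)
align_loss = torch.tensor(2.0, requires_grad=True)
loss = balancer({"recon": recon_loss, "align": align_loss})
loss.backward()
```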

Model Zoo

We provide the following models:

| model name | #bits | CR | 0-shot | rFID | HF Link |
|---|---|---|---|---|---|
| QLIP-B-16-256 | 28 | 219.4 | 74.3 | 3.21 | 🤗 link |
| QLIP-B-8-256 | 28 | 54.8 | 75.6 | 0.70 | 🤗 link |
| QLIP-L-14-392 | 28 | 168 | 79.1 | 1.46 | 🤗 link |

Note:

  • CR: compression ratio, computed as (24 × patch_size^2) / #bits;
  • 0-shot: zero-shot classification accuracy on IN-1k-val;
  • rFID: reconstruction FID on IN-1k-val.
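
As a sanity check, the CR column can be reproduced directly from the formula above (24 bits per RGB pixel, one #bits-wide token per patch):

```python
# Quick check of the CR column: each patch covers patch_size x patch_size RGB
# pixels at 24 bits per pixel and is replaced by a single #bits-wide token.
def compression_ratio(patch_size: int, n_bits: int) -> float:
    return 24 * patch_size ** 2 / n_bits

print(compression_ratio(16, 28))  # QLIP-B-16-256 -> ~219.4
print(compression_ratio(8, 28))   # QLIP-B-8-256  -> ~54.9 (listed as 54.8)
print(compression_ratio(14, 28))  # QLIP-L-14-392 -> 168.0
```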

Usage

Please refer to the notebook for example usage.
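
As a starting point, the sketch below downloads one of the released checkpoints from the Hugging Face Hub. The repo id is an assumption based on the model names in the Model Zoo; the supported loading and inference path is in the notebook.

```python
# Minimal sketch: download a QLIP checkpoint from the Hugging Face Hub.
# The repo id below is hypothetical (derived from the Model Zoo names);
# see the notebook in this repository for the supported loading code.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="NVlabs/QLIP-B-16-256")  # hypothetical repo id
print("Checkpoint files downloaded to:", local_dir)
```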

Citing QLIP

@article{zhao2025qlip,
  title={QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation},
  author={Zhao, Yue and Xue, Fuzhao and Reed, Scott and Fan, Linxi and Zhu, Yuke and Kautz, Jan and Yu, Zhiding and Krähenbühl, Philipp and Huang, De-An},
  journal={arXiv preprint arXiv:2502.05178},
  year={2025}
}

Acknowledgement

The project builds upon the following open-source efforts:

  • EVA-CLIP: We use EVA-CLIP as initialization which significantly speeds up the training convergence.

  • LLaVA: We use LLaVA to evaluate the multimodal understanding performance.

  • LlamaGen: We build the text-to-image generation evaluation on top of LlamaGen.

  • Lingua: We build the unified multimodal model on top of Lingua.
