The official implementation of the paper "DDP: Diffusion Model for Dense Visual Prediction".
This repository contains the official PyTorch training & evaluation code and pretrained models for DDP, covering:
- Semantic Segmentation
- Depth Estimation
- BEV Map Segmentation
- Mask Conditioned ControlNet
We use MMSegmentation, Monocular-Depth-Estimation-Toolbox, BEVFusion, and ControlNet as the corresponding codebases, and we would like to express our sincere gratitude to their developers. We will be updating these codebases in the coming days.
We propose a simple, efficient, yet powerful framework for dense visual prediction based on the conditional diffusion pipeline. Our approach follows a "noise-to-map" generative paradigm: it produces a prediction by progressively removing noise from a random Gaussian distribution, guided by the image. The method, called DDP, efficiently extends the denoising diffusion process into the modern perception pipeline. Without task-specific design or architecture customization, DDP generalizes easily to most dense prediction tasks, e.g., semantic segmentation and depth estimation. In addition, DDP exhibits attractive properties such as dynamic inference and uncertainty awareness, in contrast to previous single-step discriminative methods. Without tricks, DDP achieves state-of-the-art or competitive performance compared to specialist counterparts on three representative tasks across six diverse benchmarks.
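As a rough illustration of this "noise-to-map" loop, here is a minimal, self-contained PyTorch sketch. Everything in it (`ToyEncoder`, `ToyMapDecoder`, the linear re-noising schedule, the unused timestep) is a simplified stand-in for exposition, not the paper's architecture or sampling schedule; see the task folders for the actual implementations.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for DDP's image encoder and map decoder;
# the real architectures live in the task-specific codebases.
class ToyEncoder(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)

    def forward(self, image):
        return self.conv(image)  # image features used as the condition

class ToyMapDecoder(nn.Module):
    """Predicts a clean map from (noisy map, image features, timestep)."""
    def __init__(self, map_channels=1, feat_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(map_channels + feat_channels, map_channels, 3, padding=1)

    def forward(self, noisy_map, feats, t):
        # A real model would embed the timestep t; this toy ignores it.
        return self.conv(torch.cat([noisy_map, feats], dim=1))

@torch.no_grad()
def noise_to_map(image, encoder, decoder, steps=3):
    """"Noise-to-map" sampling: start from a random Gaussian map and
    progressively denoise it, guided by image features. `steps` is a free
    parameter at inference time (the dynamic inference property)."""
    feats = encoder(image)                  # encode the image once
    b, _, h, w = image.shape
    x = torch.randn(b, 1, h, w)             # random Gaussian map
    for i in reversed(range(steps)):
        t = torch.full((b,), i)
        x0 = decoder(x, feats, t)           # predict the clean map
        alpha = i / steps                   # toy schedule, not the paper's
        x = alpha * torch.randn_like(x) + (1 - alpha) * x0  # re-noise for the next step
    return x

image = torch.randn(1, 3, 32, 32)
pred = noise_to_map(image, ToyEncoder(), ToyMapDecoder())
print(pred.shape)  # torch.Size([1, 1, 32, 32])
```

Because the sampler starts from random noise, running it several times with different seeds yields an ensemble of maps whose per-pixel variance can serve as an uncertainty estimate, which is the uncertainty-awareness property mentioned above.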
Please refer to each task folder for more details.
- Depth Estimation checkpoints
- Depth Estimation code
- BEV Map Segmentation checkpoints
- BEV Map Segmentation code
- Mask Conditioned ControlNet checkpoints
- Mask Conditioned ControlNet code
- Semantic Segmentation checkpoints
- Semantic Segmentation code
- Initialization
If this work is helpful for your research, please consider citing the following BibTeX entry.
@article{ji2023ddp,
  title={DDP: Diffusion Model for Dense Visual Prediction},
  author={Ji, Yuanfeng and Chen, Zhe and Xie, Enze and Hong, Lanqing and Liu, Xihui and Liu, Zhaoqiang and Lu, Tong and Li, Zhenguo and Luo, Ping},
  journal={arXiv preprint arXiv:2303.17559},
  year={2023}
}