Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction
Hyeongjin Nam, Daniel Sungho Jung, Yeonguk Oh, Kyoung Mu Lee
International Conference on Computer Vision (ICCV), 2023
- We recommend using an Anaconda virtual environment. Install Python >= 3.7.0 and PyTorch >= 1.8.0 (a minimal setup sketch follows this list).
- Then, run `pip install -r requirements.txt`. You should slightly change the `torchgeometry` kernel code following here.
- Prepare the `data/base_data` folder following the Directory part below.
- Download the demo files and place them into `data/Demo`.
- To run CycleAdapt on a custom video, please refer here.
- Run `python main/adapt.py --gpu 0 --cfg asset/yaml/demo.yml`.
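As a minimal setup sketch for the installation steps above (the environment name is illustrative; pick the PyTorch build that matches your CUDA version):

```bash
# Create and activate a virtual environment (the name is illustrative)
conda create -n cycleadapt python=3.7 -y
conda activate cycleadapt

# Install PyTorch >= 1.8.0 (choose the build matching your CUDA version)
pip install torch torchvision

# Install the remaining dependencies
pip install -r requirements.txt
```

The `torchgeometry` change mentioned above commonly amounts to replacing the boolean `1 - mask` subtractions in `torchgeometry/core/conversions.py` with `~mask`, which recent PyTorch versions require; follow the linked instructions for the exact edit.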
For the directory structure, refer to here.
In `asset/yaml/*.yml`, you can change which datasets and settings to use.
Run `python main/adapt.py --gpu 0 --cfg asset/yaml/3dpw.yml`.
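For example, to try different settings without editing the provided configs, you might copy one and point `--cfg` at the copy (the copied filename below is purely illustrative):

```bash
# Copy a provided config and edit the copy (illustrative filename)
cp asset/yaml/3dpw.yml asset/yaml/3dpw_custom.yml
# ... change the datasets / settings in 3dpw_custom.yml ...
python main/adapt.py --gpu 0 --cfg asset/yaml/3dpw_custom.yml
```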
To evaluate the adapted models in your experiment folder, run `python main/test.py --gpu 0 --cfg asset/yaml/3dpw.yml --exp-dir {exp_path}`.
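For instance, with a hypothetical experiment folder produced by `adapt.py` (substitute the actual path of your run):

```bash
# The experiment directory below is hypothetical; pass the folder your adaptation run created
python main/test.py --gpu 0 --cfg asset/yaml/3dpw.yml --exp-dir output/3dpw_experiment
```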
Refer to the paper's main manuscript and supplementary material for diverse qualitative results.
@InProceedings{Nam_2023_ICCV_CycleAdapt,
author = {Nam, Hyeongjin and Jung, Daniel Sungho and Oh, Yeonguk and Lee, Kyoung Mu},
title = {Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2023}
}