This repository contains the implementation of our paper. In particular, we release code for training a point cloud autoencoder with different point-set distances, such as the sliced Wasserstein distance (SWD) and the Chamfer distance, and for testing the autoencoder on classification, reconstruction, registration, and generation.
Figure: morphing a sphere into a chair by optimizing two different loss functions, Chamfer (top, red) and SWD (bottom, blue).
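For intuition, the sliced Wasserstein distance projects both point clouds onto random 1D directions, where optimal transport reduces to sorting. Below is a minimal PyTorch sketch of the Monte-Carlo estimator; it is an illustration only, not the repository's implementation, and the number of projections is a placeholder:

import torch

def sliced_wasserstein_distance(x, y, n_projections=100):
    # x, y: (batch, n_points, dim) point clouds with the same number of points
    directions = torch.randn(n_projections, x.size(-1), device=x.device)
    directions = directions / directions.norm(dim=1, keepdim=True)
    # project every point onto every direction: (batch, n_projections, n_points)
    x_proj = torch.matmul(x, directions.t()).transpose(1, 2)
    y_proj = torch.matmul(y, directions.t()).transpose(1, 2)
    # in 1D, Wasserstein-2 between empirical measures compares sorted samples
    x_sorted, _ = torch.sort(x_proj, dim=-1)
    y_sorted, _ = torch.sort(y_proj, dim=-1)
    return ((x_sorted - y_sorted) ** 2).mean()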
Details of the model architecture and experimental results can be found in the following paper:
@InProceedings{Nguyen2021PointSetDistances,
title={Point-set Distances for Learning Representations of 3D Point Clouds},
author={Nguyen, Trung and Pham, Quang-Hieu and Le, Tam and Pham, Tung and Ho, Nhat and Hua, Binh-Son},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
Please cite our paper whenever our model implementation is used to help produce published results or is incorporated into other software.
ShapeNet Core with 55 categories (as used in FoldingNet):
cd dataset
bash download_shapenet_core55_catagories.sh
ModelNet40 (same data as used in PointNet):
cd dataset
bash download_modelnet40_same_with_pointnet.sh
ShapeNet Chair:
cd dataset
bash download_shapenet_chair.sh
3DMatch:
cd dataset
bash download_3dmatch.sh
The code is based on PyTorch. It has been tested with Python 3.6.9, PyTorch 1.2.0, and CUDA 10.0 on Ubuntu 18.04.
Other dependencies:
- TensorBoard 2.3.0
- Open3D 0.7.0
- tqdm 4.46.0
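These can be installed with pip, for example as below; the exact PyTorch wheel depends on your CUDA setup, and package names or versions may need adjustment for your platform, so treat this as a starting point rather than an exact recipe:

pip install torch==1.2.0 tensorboard==2.3.0 open3d==0.7.0 tqdm==4.46.0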
To compile the CUDA kernel for the CD/EMD loss:
cd metrics_from_point_flow/pytorch_structural_losses/
make clean
make
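If compiling the kernel is troublesome, a pure-PyTorch Chamfer distance can serve as a slow sanity check. The sketch below is not the repository's kernel, and normalization conventions differ between codebases:

import torch

def chamfer_distance(x, y):
    # x: (batch, n, 3), y: (batch, m, 3); returns a (batch,) tensor
    # pairwise squared distances via broadcasting: (batch, n, m)
    d = ((x.unsqueeze(2) - y.unsqueeze(1)) ** 2).sum(-1)
    # average nearest-neighbor distance in both directions
    return d.min(dim=2)[0].mean(dim=1) + d.min(dim=1)[0].mean(dim=1)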
Available arguments for training an autoencoder:
train.py [-h] [--config CONFIG] [--logdir LOGDIR]
[--data_path DATA_PATH] [--loss LOSS]
[--autoencoder AUTOENCODER]
optional arguments:
-h, --help show this help message and exit
--config CONFIG path to json config file
--logdir LOGDIR path to the log directory
--data_path DATA_PATH path to data for training
--loss LOSS loss function. One of [swd, emd, chamfer, asw, msw, gsw]
--autoencoder AUTOENCODER model name. One of [pointnet, pcn]
Example:
python train.py --config="config.json" \
--logdir="logs/" \
--data_path="dataset/shapenet_core55/shapenet57448xyzonly.npz" \
--loss="swd" \
--autoencoder="pointnet"
# or in short, you can run
bash train.sh
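For orientation, a condensed, hypothetical training step is sketched below. TinyAE is a toy stand-in for the repository's PointNet/PCN autoencoders (selected by --autoencoder), and the loss is the SWD sketch from above (selected by --loss):

import torch
import torch.nn as nn

class TinyAE(nn.Module):
    # toy autoencoder: per-point MLP encoder with max-pooling, MLP decoder
    def __init__(self, n_points=2048, latent_dim=128):
        super().__init__()
        self.n_points = n_points
        self.encode = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decode = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                    nn.Linear(256, n_points * 3))

    def forward(self, x):                        # x: (batch, n_points, 3)
        code = self.encode(x).max(dim=1)[0]      # symmetric pooling over points
        return self.decode(code).view(-1, self.n_points, 3)

model = TinyAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
points = torch.rand(8, 2048, 3)                  # stand-in for a data batch
recon = model(points)
loss = sliced_wasserstein_distance(recon, points)  # SWD sketch defined earlier
opt.zero_grad()
loss.backward()
opt.step()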
To test reconstruction:
python reconstruction/reconstruction_test.py --config="reconstruction/config.json" \
--logdir="logs/" \
--data_path="dataset/modelnet40_ply_hdf5_2048/"
# or in short, you can run
bash reconstruction/test.sh
To generate latent codes for the ModelNet40 train/test sets and save them to files:
python classification/preprocess_data.py --config='classification/preprocess_train.json' \
--logdir="logs/" \
--data_path="dataset/modelnet40_ply_hdf5_2048/train/"
python classification/preprocess_data.py --config='classification/preprocess_test.json' \
--logdir="logs/" \
--data_path="dataset/modelnet40_ply_hdf5_2048/test/"
# or in short, you can run
bash classification/preprocess.sh
To get classification results:
python classification/classification_train.py --config='classification/class_train_config.json' \
--logdir="logs/"
python classification/classification_test.py --config='classification/class_test_config.json' \
--logdir="logs/"
# or in short, you can run
bash classification/classify_train_test.sh
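A common protocol for this evaluation (following FoldingNet) is to fit a linear SVM on the frozen latent codes. A sketch with hypothetical file names and array keys, not the repository's exact script:

import numpy as np
from sklearn.svm import LinearSVC

train = np.load('logs/latent_train.npz')   # hypothetical output of the preprocess step
test = np.load('logs/latent_test.npz')
clf = LinearSVC(C=0.01).fit(train['codes'], train['labels'])
print('test accuracy:', clf.score(test['codes'], test['labels']))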
To preprocess the 3DMatch dataset:
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/home1'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/home2'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/hotel1'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/hotel2'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/hotel3'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/kitchen'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/lab'
python registration/preprocess_data.py --config='registration/preprocess_config.json' \
--logdir='logs/' \
--data_path='dataset/study'
# or in short, you can run
bash registration/preprocess.sh
To generate transformations and write them to log files:
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/home1/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/home2/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/hotel1/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/hotel2/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/hotel3/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/kitchen/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/lab/'
python registration/registration_test.py --config='registration/registration_config.json' \
--logdir='logs/model/study/'
# or in short, you can run
bash registration/register.sh
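Under the hood, once correspondences between two fragments are established (e.g., by matching local latent codes), the rigid transform can be recovered in closed form with the Kabsch/Umeyama algorithm. A self-contained sketch, not the repository's code:

import numpy as np

def rigid_transform(src, dst):
    # least-squares rotation r and translation t with dst ≈ src @ r.T + t,
    # given two (n, 3) arrays of corresponding points
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)    # SVD of the covariance matrix
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t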
To evaluate the log files, follow the instructions in the Evaluation section on this page.
To generate latent codes for the ShapeNet Chair train/test sets and save them to files:
python generation/preprocess.py --config='generation/preprocess_train.json' \
--logdir="logs/" \
--data_path="dataset/shapenet_chair/train.npz"
python generation/preprocess.py --config='generation/preprocess_test.json' \
--logdir="logs/" \
--data_path="dataset/shapenet_chair/test.npz"
# or in short, you can run
bash generation/preprocess.sh
To train the generator:
python generation/train_latent_generator.py --seed=1 \
--logdir="logs/"
# or in short, you can run
bash generation/train_latent_generator.sh
To test the generator:
python generation/test_generation.py --config='generation/test_generation_config.json' \
--logdir="logs/"
# or in short, you can run
bash generation/test_generation.sh
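Conceptually, sampling follows a two-stage pipeline: draw noise, map it through the trained latent generator, then decode the resulting codes with the autoencoder's decoder. The sketch below only illustrates this flow; all names and sizes are placeholders, not the repository's API:

import torch
import torch.nn as nn

noise_dim, latent_dim, n_samples = 64, 128, 16
latent_generator = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))   # stand-in generator
with torch.no_grad():
    z = torch.randn(n_samples, noise_dim)        # Gaussian noise
    codes = latent_generator(z)                  # (16, 128) generated latent codes
    # point_clouds = model.decode(codes).view(-1, 2048, 3)  # decode with the trained AE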