Commit b64b514
Fix bug of multiple pre-processing during segmentation inference (PyTorch) (#645)
Segmentation inference was very slow (#531, #234). The cause: the dataloader re-applies data preprocessing on every access when `self.cache_convert` is `None`:

https://github.com/isl-org/Open3D-ML/blob/fcf97c07bf7a113a47d0fcf63760b245c2a2784e/ml3d/torch/dataloaders/torch_dataloader.py#L77-L83

When `run_inference` builds its dataloader, `cache_convert` is `None`:

https://github.com/isl-org/Open3D-ML/blob/fcf97c07bf7a113a47d0fcf63760b245c2a2784e/ml3d/torch/pipelines/semantic_segmentation.py#L143-L147

This makes inference extremely slow. This commit adds a `get_cache` function that serves the already-preprocessed data as the cache, avoiding the repeated preprocessing during inference.

Tested with RandLA-Net on the Toronto3D dataset on a GV100 GPU: inference for a single scene now takes only 2 minutes 37 seconds, considerably faster than before.

```bash
After:
test 0/1: 100%|██████████████████████████████████████████████████████| 4990714/4990714 [02:37<00:00, 31769.86it/s]
Before:
test 0/1:   4%|██                                                    | 187127/4990714 [05:12<2:19:39, 573.27it/s]
```
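For context, the dispatch in `TorchDataloader.__getitem__` (the lines linked above) behaves roughly as in the sketch below; this is a paraphrase for illustration, not the verbatim source:

```python
# Paraphrased sketch of TorchDataloader.__getitem__ (see the link above).
def __getitem__(self, index):
    attr = self.dataset.get_attr(index)
    if self.cache_convert:
        # Cached path: preprocessing ran once up front; the result is
        # looked up by the sample's name.
        data = self.cache_convert(attr['name'])
    elif self.preprocess:
        # Uncached path: the full preprocessing pipeline runs again on
        # every access, even when the sampler revisits the same scene.
        data = self.preprocess(self.dataset.get_data(index), attr)
    else:
        data = self.dataset.get_data(index)
    # ... apply transform and return ...
```

During inference the sampler touches the same scene many times, so the uncached branch repeats identical work on every access; that is the slowdown this commit removes.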
Parent: 3754ece · Commit: b64b514

File tree: 2 files changed (+10 -5 lines)
ml3d/torch/dataloaders/torch_dataloader.py (+3 -4)
```diff
@@ -22,6 +22,7 @@ def __init__(self,
                  sampler=None,
                  use_cache=True,
                  steps_per_epoch=None,
+                 cache_convert=None,
                  **kwargs):
         """Initialize.

@@ -38,6 +39,7 @@ def __init__(self,
         self.dataset = dataset
         self.preprocess = preprocess
         self.steps_per_epoch = steps_per_epoch
+        self.cache_convert = cache_convert

         if preprocess is not None and use_cache:
             cache_dir = getattr(dataset.cfg, 'cache_dir')
@@ -59,10 +61,7 @@ def __init__(self,
                     continue
                 data = dataset.get_data(idx)
                 # cache the data
-                self.cache_convert(name, data, attr)
-
-        else:
-            self.cache_convert = None
+                self.cache_convert(name, data, attr)

         self.transform = transform
```
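With the new `cache_convert` parameter, a caller can inject any callable that maps a sample's name to already-preprocessed data. A minimal hypothetical sketch (`precomputed`, `my_cache`, and the surrounding variables are illustrative, not part of the commit):

```python
from ml3d.torch.dataloaders.torch_dataloader import TorchDataloader

# Assume `model` and `infer_dataset` are set up as in run_inference, and
# `processed_scene` holds the output of model.preprocess for one scene.
precomputed = {'scene_001': processed_scene}

def my_cache(name):
    # The loader calls cache_convert with the sample's name and expects
    # the preprocessed data back.
    return precomputed[name]

loader = TorchDataloader(dataset=infer_dataset,
                         preprocess=model.preprocess,
                         transform=model.transform,
                         use_cache=False,
                         cache_convert=my_cache)
```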
ml3d/torch/pipelines/semantic_segmentation.py (+7 -1)
```diff
@@ -136,6 +136,11 @@ def run_inference(self, data):
         model.device = device
         model.eval()

+        preprocess_func = model.preprocess
+        processed_data = preprocess_func(data, {'split': 'test'})
+        def get_cache(attr):
+            return processed_data
+
         batcher = self.get_batcher(device)
         infer_dataset = InferenceDummySplit(data)
         self.dataset_split = infer_dataset
@@ -144,7 +149,8 @@ def run_inference(self, data):
                                      preprocess=model.preprocess,
                                      transform=model.transform,
                                      sampler=infer_sampler,
-                                     use_cache=False)
+                                     use_cache=False,
+                                     cache_convert=get_cache)
         infer_loader = DataLoader(infer_split,
                                   batch_size=cfg.batch_size,
                                   sampler=get_sampler(infer_sampler),
```
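For reference, the timing reported above comes from a standard single-scene inference run. A sketch of such a run (dataset and checkpoint paths are placeholders; adjust to your setup):

```python
import open3d.ml.torch as ml3d

# Placeholder paths; point these at your own data and checkpoint.
dataset = ml3d.datasets.Toronto3D(dataset_path='/path/to/Toronto3D')
model = ml3d.models.RandLANet()
pipeline = ml3d.pipelines.SemanticSegmentation(model=model, dataset=dataset)
pipeline.load_ckpt(ckpt_path='/path/to/randlanet_toronto3d.pth')

# Run inference on a single scene from the test split.
test_split = dataset.get_split('test')
data = test_split.get_data(0)
results = pipeline.run_inference(data)
```

With the cache in place, preprocessing for the scene runs once before the dataloader is built, and every subsequent access during inference is a simple lookup.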
