I am trying to use DPVO with my Zeki dataset, which follows the KITTI format (RGB images + LiDAR point clouds stored as .bin files). However, DPVO was trained on TartanAir, which provides RGB + dense depth maps, so there is a format mismatch.
Questions:
Does DPVO support KITTI-style datasets (RGB + sparse LiDAR)?
If not, what's the best way to convert sparse LiDAR into dense depth maps? Or should I be taking a different approach entirely?
Are there specific depth format requirements for compatibility?
What I Tried:
Projected the LiDAR points into the camera frame to get depth maps, but the result is very sparse.
Considered interpolating the sparse depth, but I'm unsure whether this is the right approach (a rough sketch of what I'm doing is below).
Would appreciate any guidance on preparing KITTI-style data for DPVO. Thanks!
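
For reference, here is a minimal sketch of the projection + interpolation I'm experimenting with. It assumes the standard KITTI devkit calibration conventions (P2 as the 3x4 camera projection matrix, with R0_rect and Tr_velo_to_cam expanded to 4x4 homogeneous matrices); the function names and the nearest-neighbour fill are just my placeholders, not anything from DPVO.

```python
# Sketch: KITTI-style LiDAR -> sparse depth map -> naive densification.
# Assumes KITTI devkit calibration conventions; adapt to your own calib files.
import numpy as np
from scipy.interpolate import griddata


def load_velodyne(bin_path):
    """KITTI .bin scans are float32 rows of (x, y, z, reflectance)."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)[:, :3]


def project_to_depth(points, P2, R0_rect, Tr_velo_to_cam, h, w):
    """Project LiDAR points into the image plane and rasterise a sparse depth map.

    P2: 3x4 camera projection matrix.
    R0_rect, Tr_velo_to_cam: 4x4 homogeneous matrices (KITTI gives 3x3 / 3x4;
    pad them to 4x4 before calling this).
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # Nx4 homogeneous
    cam = R0_rect @ Tr_velo_to_cam @ pts_h.T                    # 4xN in rectified cam frame
    cam = cam[:, cam[2] > 0]                                    # keep points in front of the camera
    pix = P2 @ cam                                              # 3xN pixel coordinates
    z = pix[2]
    u = np.round(pix[0] / z).astype(int)
    v = np.round(pix[1] / z).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[valid], u[valid]] = z[valid]                        # sparse depth in metres
    return depth


def densify(sparse_depth):
    """Fill empty pixels with nearest-neighbour interpolation.

    This is a crude stand-in for a proper depth-completion model, just to get
    a dense map with the same layout as TartanAir-style depth.
    """
    v, u = np.nonzero(sparse_depth)
    grid_v, grid_u = np.mgrid[0:sparse_depth.shape[0], 0:sparse_depth.shape[1]]
    return griddata((v, u), sparse_depth[v, u], (grid_v, grid_u), method="nearest")
```

My main doubt is whether a nearest-neighbour fill like this introduces artifacts that would hurt training/evaluation, or whether DPVO has its own expectations for the depth format.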