Hi, thanks for sharing the code!
I have a question about median scaling. I saw "We propose an adaptive cost volume to overcome the scale ambiguity arising from self-supervised training on monocular sequences" in the paper, and I also saw median scaling in evaluate_depth.py.
Is the problem of unknown scale still unsolved?
Could you explain more about the scale ambiguity that you have overcome in the paper?
Thanks in advance.
Yes, we still require median scaling at test time to evaluate our depth predictions. ManyDepth makes its predictions in an arbitrary scale (the same as MonoDepth2 etc.), so they need to be adjusted with median scaling before we can compare to the ground truth.
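For reference, median scaling is a simple per-image alignment; a minimal sketch of the idea (not the exact code in evaluate_depth.py) looks like this:

```python
import numpy as np

def median_scale(pred_depth, gt_depth):
    """Align a scale-ambiguous depth prediction to ground truth by
    matching medians. This only fixes a single global scale factor;
    the relative depth structure of the prediction is unchanged."""
    ratio = np.median(gt_depth) / np.median(pred_depth)
    return pred_depth * ratio
```

So the metric numbers reported after median scaling measure the quality of the relative depth, not whether the network happened to converge to metric units.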
Our adaptive cost volume overcomes the scale ambiguity at training time. During training, our pose/depth networks will converge to some scale, but this scale is not known ahead of time, so we don't know in advance the bounds for our cost volume (i.e. the minimum and maximum depth planes to use for warping). The adaptive cost volume allows the cost volume to provide useful geometric information regardless of the final scale the network converges to.
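As a hedged illustration of what "adaptive bounds" means (a sketch of the idea, not the repository's actual implementation), the depth hypotheses for the cost volume can be built from bounds that are updated during training rather than fixed constants:

```python
import numpy as np

def build_depth_planes(min_depth, max_depth, num_planes=96):
    """Build the set of depth hypotheses used to warp source features
    into the cost volume. With fixed min_depth/max_depth (as in a
    non-adaptive setup), these planes may not bracket the scale the
    network converges to; adapting the bounds during training keeps
    the hypotheses informative whatever that scale turns out to be."""
    return np.linspace(min_depth, max_depth, num_planes)
```

The key point is that `min_depth` and `max_depth` here are quantities tracked during training, in the network's own (unknown-unit) scale, rather than hand-picked metric values.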
Let me know if that makes sense, or if I can help explain in more detail.
Thanks for the reply, and sorry for the late response.
The following is what I understand from the code and your reply:
Does scale ambiguity refer to opt.min_depth and opt.max_depth? (These two values are fixed in MonoDepth2, but are changed dynamically here.)
The adaptive cost volume uses linspace(min_depth, max_depth) to build the cost volume, where min_depth and max_depth may not be in metres (since we use median scaling at evaluation). The network will converge to a scale with an unknown unit.