Calibration & Setup
This is a manual to guide you through the automatic spatial alignment of a multi-sensor setup comprising 4 Intel RealSense cameras. The proposed method relies on an easily assembled calibration structure and therefore lifts the requirement of placing or gluing markers onto it. In essence, it consists of two independent steps in order to achieve high-quality estimations. First, given a quadruple of depth images (180x320 px) as input, it uses machine learning techniques to produce initial pose estimations. Second, to further refine the initial results, a graph-based optimization scheme maximizes the overlap of the point clouds of neighboring viewpoints.
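The repository's own model and optimization code should be consulted for the actual implementation; purely as a hedged illustration of the two-step idea, the Python sketch below (assuming numpy, OpenCV and Open3D are installed; all function names, intrinsics and thresholds are illustrative and not the project's API) downsamples a depth map to the 180x320 input resolution, back-projects depth to a point cloud, and refines the relative pose of two neighboring viewpoints with a plain pairwise ICP, used here as a simplified stand-in for the graph-based refinement.

```python
# Illustrative sketch only -- not the repository's actual API.
# Assumes: depth maps as float32 numpy arrays in meters, a pinhole intrinsic
# model (fx, fy, cx, cy), and an initial 4x4 pose from the learned estimator.
import numpy as np
import cv2
import open3d as o3d

def to_network_input(depth_m):
    """Downsample a depth map to the 180x320 resolution used as model input."""
    # Nearest-neighbor keeps depth values valid (no blending across edges).
    return cv2.resize(depth_m, (320, 180), interpolation=cv2.INTER_NEAREST)

def depth_to_pointcloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to an Open3D point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    return pcd

def refine_pair(source_pcd, target_pcd, init_pose, max_dist=0.05):
    """Refine the relative pose of two neighboring viewpoints with pairwise ICP
    (a simplified stand-in for the graph-based refinement step)."""
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_dist, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```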
The goal is to deliver to the public an easily assembled reference structure for the calibration procedure. Thus, the proposed structure requires 4 low-cost and commercially available packaging boxes from IKEA, specifically the JÄTTENE boxes. In practice, the calibration structure can be assembled from any 4 boxes sized 56 x 33 x 41 cm each. Optionally, a random noise pattern can be glued onto the calibration boxes in order to improve the quality of the acquired depth measurements.
Since the proposed calibration structure does not require markers, users only need to take care to place the boxes one on top of the other, following a 90° rotational pattern, as depicted in the following pictures. Each box is snapped against the sides of the box beneath it. The first image shows the assembly procedure from a diagonal perspective. For clarity, a color-coded representation is used: the first (bottom-most) box is colored red, the second green, the third blue, while the last (top-most) one is depicted in yellow.
The next image shows the same positioning of the boxes from the front view.
The last image depicts the same arrangement from the top view.
Once assembled and placed, the calibration structure acts as the anchor of the global coordinate system.
- The setup requires all 4 Intel RealSense sensors to be placed around the perimeter of the calibration structure, looking inwards (i.e., towards the structure).
- All sensors are mounted vertically (i.e., flipped 90° clockwise, with the connectors facing downwards) to enable capturing the object at a closer distance by exploiting the sensor's wide horizontal field of view.
- In order to capture the desired object over a full 360°, the sensors should be placed at approximately 90° intervals.
- Importantly, to achieve better performance during the graph-based refinement step, the sensors should be positioned diagonally, meaning that each planar side of the structure looks in-between two adjacent sensors (a numeric example of such a layout is sketched after this list).
- All viewpoints should target the middle of the structure, so that the structure appears at the center of the received images both horizontally and vertically.
- Considering the field of view of the Intel RealSense and with a view to capturing a standing person at the center of the capturing space, all sensors must be positioned at a distance of 1.75 to 2.2 meters from the structure.
- Finally, all sensors must be mounted between 1.1 and 1.5 meters above the ground.
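As a quick numeric illustration of the placement constraints above (the chosen distance and height are example values picked from within the stated ranges, not required settings), the following sketch computes four nominal sensor positions at 90° intervals, offset by 45° so that each planar side of the structure looks in-between two adjacent sensors, with every viewpoint aimed at the center of the structure.

```python
# Example sensor layout satisfying the placement constraints above.
# Values are illustrative picks from the stated ranges, not mandated settings.
# Assumption: the structure's planar sides face the +x, +y, -x, -y directions,
# so azimuths of 45/135/225/315 degrees give the "diagonal" placement.
import numpy as np

DISTANCE_M = 2.0   # distance from the structure (allowed range: 1.75 - 2.2 m)
HEIGHT_M = 1.3     # distance from the ground (allowed range: 1.1 - 1.5 m)

azimuths_deg = [45, 135, 225, 315]   # approximately 90-degree intervals

for i, az in enumerate(azimuths_deg):
    a = np.deg2rad(az)
    x, y = DISTANCE_M * np.cos(a), DISTANCE_M * np.sin(a)
    # Every viewpoint targets the middle of the structure (the origin of the
    # global coordinate system anchored by the calibration structure).
    print(f"sensor {i}: position = ({x:+.2f}, {y:+.2f}, {HEIGHT_M:.2f}) m, "
          f"aimed at the structure center")
```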
An outline of the appropriate positioning of sensors is depicted in the following figure.
Optionally, increasing the laser power to its maximum value while acquiring the images for the calibration process has shown better accuracy in the depth measurements and can improve the pose estimations. Importantly, DO NOT FORGET to revert the laser power to its default value afterwards in order to prevent device failure or damage.
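If you do raise the laser power for the calibration captures, a pyrealsense2 sketch along the following lines can be used (the empty config and the surrounding capture code are placeholders; the point is only the handling of the laser_power option and the restoration of its default value afterwards).

```python
# Sketch: temporarily raise the laser power to its maximum for the calibration
# captures, then restore the default value (do not leave it at maximum).
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())          # placeholder: default streams
depth_sensor = profile.get_device().first_depth_sensor()

if depth_sensor.supports(rs.option.laser_power):
    rng = depth_sensor.get_option_range(rs.option.laser_power)
    try:
        depth_sensor.set_option(rs.option.laser_power, rng.max)
        # ... acquire the depth images used for calibration here ...
    finally:
        # IMPORTANT: revert to the default value after the captures.
        depth_sensor.set_option(rs.option.laser_power, rng.default)

pipeline.stop()
```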