Thursday, 15 April 2021, 12:00–13:00

3D reconstruction with focus on minimally-invasive surgery


Stefan Spiss
Researcher at IGS, University of Innsbruck


In recent years, new endoscopic multi-camera systems that provide omnidirectional views have been developed to improve navigation and orientation and to reduce the number of blind spots not visible with standard endoscopic cameras. Videos of interventions recorded with such camera systems can be used to create new kinds of interactive training media. In particular, more advanced interactions such as free viewpoint selection require 3D data of the surgical scene. Therefore, 3D reconstruction with a 360° multi-camera system was developed in this thesis to lay the basis for such training media. Since multi-camera systems at endoscopic scale are not yet publicly available, a commercial 360° camera system was used instead. Depth data is computed for neighboring pairs of cameras with overlapping fields of view using stereo vision, and the resulting point clouds are merged into one large reconstruction of the whole environment. In a second step, tracking of objects with known geometry in the reconstructed environment was investigated, motivated mainly by the fact that 3D data of organs and anatomical structures from preoperative examinations is usually available. The 3D object is first manually registered in the reconstructed point cloud; frame-wise tracking is then performed by finding the object's updated pose in each new point cloud using the iterative closest point (ICP) algorithm. In the test scene, a small 3D-printed object (∼80 mm) was successfully tracked along a circular motion (radius: ∼109 mm) with a mean error of about 2 mm, even at transitions between different stereo pairs.
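The per-pair depth computation mentioned in the abstract ultimately rests on the standard pinhole stereo relation: depth = focal length × baseline / disparity. The following minimal sketch shows that conversion for a disparity map; the function name and parameters are illustrative, not taken from the thesis.

```python
import numpy as np

def disparity_to_depth(disparity, f_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth.

    f_px       -- focal length in pixels (shared by the rectified pair)
    baseline_m -- distance between the two camera centers in meters
    Pixels with non-positive disparity are marked invalid (infinite depth).
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)        # invalid by default
    valid = d > 0
    depth[valid] = f_px * baseline_m / d[valid]
    return depth
```

For example, with a 700 px focal length and a 60 mm baseline, a disparity of 10 px corresponds to a depth of 4.2 m; repeating this for every neighboring camera pair yields one point cloud per pair, which can then be merged.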
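The frame-wise tracking step described above can be sketched as a plain point-to-point ICP: match each model point to its nearest neighbor in the new frame's point cloud, solve for the rigid transform with the Kabsch/SVD method, and iterate. This NumPy/SciPy sketch is a generic illustration of the technique, not the thesis implementation, and it assumes the initial manual registration has already placed the model near its true pose.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| over paired Nx3 arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(model, scene, iters=50, tol=1e-9):
    """Align model (Nx3) to scene (Mx3); returns accumulated (R, t) and mean residual."""
    tree = cKDTree(scene)                        # nearest-neighbor lookup in the frame
    src = model.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        dist, idx = tree.query(src)              # correspondences for this iteration
        R, t = best_fit_transform(src, scene[idx])
        src = src @ R.T + t
        # compose: x -> R @ (R_total @ x + t_total) + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:            # converged: residual stopped improving
            break
        prev_err = err
    return R_total, t_total, err
```

In a tracking loop, the pose recovered for one frame would serve as the initialization for the next, which keeps the per-frame motion small enough for nearest-neighbor correspondences to be mostly correct.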

