kARbon

Indoor scene with oriented image surfaces

The kARbon project uses interactive virtual scenes generated from photographs to support decision-making in construction and installation work. The photographs are taken from different positions at indoor construction sites. The main purpose of the project is to provide a virtual interaction space in which one or more people can review the recorded situation and plan ahead without having to be on-site.

Overview of the kARbon approach and its components

Spatial reconstruction of camera parameters from photographic images has been done before, e.g. to generate scenes for image-based rendering or for the orientation of self-driving vehicles. The reconstructions in this project are based on the Structure from Motion technique, which was also used in the Photo Tourism project by Snavely et al. Camera parameters are reconstructed by correlating characteristic features across the photographs. Spatially oriented canvases for the images are then combined with 2D and 3D CAD models to create the complete scene.
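
To illustrate how a reconstructed camera pose translates into an oriented image canvas, the following minimal TypeScript sketch places the four corners of a photograph on a plane at a chosen distance in front of its camera. It assumes a simple pinhole model with camera center c, world-to-camera rotation R and focal length f in pixels; names and conventions are illustrative rather than the project's actual code (tools such as Bundler use their own sign conventions).

    type Vec3 = [number, number, number];
    type Mat3 = [Vec3, Vec3, Vec3]; // row-major world-to-camera rotation

    // Rotate a camera-space vector into world space by applying R transposed.
    function camToWorld(R: Mat3, v: Vec3): Vec3 {
      return [
        R[0][0] * v[0] + R[1][0] * v[1] + R[2][0] * v[2],
        R[0][1] * v[0] + R[1][1] * v[1] + R[2][1] * v[2],
        R[0][2] * v[0] + R[1][2] * v[1] + R[2][2] * v[2],
      ];
    }

    // World-space corners of the canvas for an image of w x h pixels, taken
    // by a camera with center c and focal length f, placed at distance d.
    function canvasCorners(
      c: Vec3, R: Mat3, f: number, w: number, h: number, d: number,
    ): Vec3[] {
      const sx = (d * w) / (2 * f); // half-width of the canvas at distance d
      const sy = (d * h) / (2 * f); // half-height
      const corners: Vec3[] = [
        [-sx, -sy, d], [sx, -sy, d], [sx, sy, d], [-sx, sy, d],
      ];
      return corners.map((p) => {
        const r = camToWorld(R, p);
        return [c[0] + r[0], c[1] + r[1], c[2] + r[2]] as Vec3;
      });
    }

Rendering the photograph as a texture on this quad gives the oriented image surface shown in the figure above.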

The generated scenes are then used to create a virtual interaction space as a replacement for the real-world environment. This space is used for navigating the underlying image collection in a way that resembles walking through the real indoor scene, making it easier to understand the combined content of multiple images. Further features include the placement and inspection of markers from different camera positions and the transfer of annotations from the floor plan to the 3D scene, as sketched below.
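
As an example of the annotation transfer, the following hypothetical sketch maps a point marked on the 2D floor plan into world coordinates of the 3D scene. It assumes the plan has been registered to the scene by a 2D similarity transform (scale s, rotation theta, offset tx/ty) and that markers are dropped at a known floor height; all names are illustrative.

    interface PlanRegistration {
      s: number;      // plan-to-world scale
      theta: number;  // rotation of the plan in the ground plane (radians)
      tx: number;     // world offset along x
      ty: number;     // world offset along z
      floorY: number; // height of the floor in a y-up world
    }

    // Map a floor-plan annotation point into 3D scene coordinates.
    function planToScene(
      p: { x: number; y: number }, reg: PlanRegistration,
    ): [number, number, number] {
      const cos = Math.cos(reg.theta);
      const sin = Math.sin(reg.theta);
      const wx = reg.s * (cos * p.x - sin * p.y) + reg.tx;
      const wz = reg.s * (sin * p.x + cos * p.y) + reg.ty;
      return [wx, reg.floorY, wz]; // plan axes land on the ground plane
    }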

The interaction space is then used for distributed conferences that can be attended by multiple people. Meeting in a virtual environment reduces travel time and still allows collaboration between partners with different qualifications from potentially anywhere in the world. The distributed application is implemented with a focus on accessibility: the users can meet inside a scene via a browser from a desktop PC or a mobile device (smartphone or tablet).

The visualization of the 3D scenes is based on WebGL, using the x3dom library from Fraunhofer IGD. The 2D visualization is implemented using the W3C standard for Scalable Vector Graphics (SVG) as well as other HTML5 technologies such as Canvas. The central server that handles the communication between the clients is built as an application running on the Node.js platform, which provides bidirectional asynchronous communication via HTML5 WebSockets. For the spatial reconstruction of the camera parameters, the Bundler software and Microsoft Photosynth were used.
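
A minimal sketch of such a relay server is shown below, written in TypeScript against the widely used "ws" package for Node.js (an assumption; the project's actual server code and message format are not documented here). Every event received from one client, e.g. a marker placement, is broadcast to all other connected clients.

    import { WebSocketServer, WebSocket } from 'ws';

    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', (data) => {
        // Relay the event (e.g. a JSON-encoded marker update) to all peers.
        for (const client of wss.clients) {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(data.toString());
          }
        }
      });
    });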


This work was part of a cooperation between RheinMain University of Applied Sciences and Darmstadt University of Applied Sciences, funded by the program “Forschung für die Praxis” of the Hessian Ministry for Science and Art and supported by Ove Arup & Partners, Hessisches Baumanagement, Fraunhofer IGD, Goethe Universität Frankfurt a.M., and University of London.