Navigating and editing virtual environments can be challenging, especially on mobile devices with small 2D screens and fixed hardware interfaces. Touch interaction has become the de facto standard for smartphones and tablets. However, touch works well for short, ballistic gestures but less well for longer, ongoing interactions such as rotating a camera or placing an object inside a scene. On smaller screens, the fingers cover more of the display and interaction becomes fragmented. Photography-based virtual scenes are especially sensitive to this because of their potentially higher visual inconsistency and the need to inspect the rendering to understand, for example, the orientation of the camera. This project uses the inertial sensors of mobile devices to enable continuous, non-fragmented interaction without covering the screen, allowing users to rely on their real-world orientation to understand and control the orientation of the virtual camera.

Mobile devices offer gyroscopes, accelerometers, and compasses from which the device's real-world orientation can be computed. Applying that orientation to the rotation of the virtual camera turns the mobile device into a magic window into the virtual scene. On top of these orientation sensors, an interaction scheme has been built that supports both navigating scenes and placing objects inside them. As a use case, the rapid prototyping of installations inside a virtual lab room has been implemented.
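The text does not spell out how the fused sensor orientation is mapped onto the camera. A minimal sketch, assuming a y-up world frame and a heading/tilt decomposition (function names and axis conventions are illustrative, not taken from the project), might look like this:

```python
import math

def forward_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert a device heading (yaw, e.g. from the compass) and tilt
    (pitch, e.g. from accelerometer/gyroscope fusion) into a unit
    view-direction vector in a y-up world frame. Roll does not affect
    the look direction and is ignored here."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)   # east component
    y = math.sin(pitch)                   # up component
    z = math.cos(pitch) * math.cos(yaw)   # north component
    return (x, y, z)

# Holding the device level and facing "north" looks straight ahead:
# forward_vector(0.0, 0.0) -> (0.0, 0.0, 1.0)
```

Applying this vector each frame as the virtual camera's look direction is what produces the magic-window effect described above.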

For that use case, touch interaction is heavily reduced so that the screen remains visible at all times. The only touch areas are placed at the sides of the screen and are used for switching between camera positions and for selecting objects to be placed in the scene. The view direction of the virtual camera is used both to aim at navigation targets and to place objects inside the virtual room. To further test the potential of such interaction methods, additional test scenarios were deployed. In the first scenario, touch and rotation interaction were used to aim at and remove randomly placed markers from a simple scene. The test showed a significant speed advantage for the rotation interaction.
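Aiming with the view direction amounts to casting a ray from the camera and intersecting it with scene geometry. As a sketch of the placement case, assuming a flat horizontal floor (the plane and names are assumptions for illustration, not the project's actual scene model):

```python
def aim_on_floor(cam_pos, view_dir, floor_y=0.0):
    """Intersect the camera's view ray with a horizontal floor plane
    (y = floor_y) to find the point the user is aiming at.
    Returns None when the ray is parallel to or points away from
    the floor."""
    px, py, pz = cam_pos
    dx, dy, dz = view_dir
    if abs(dy) < 1e-9:
        return None          # looking parallel to the floor
    t = (floor_y - py) / dy  # ray parameter at the intersection
    if t <= 0:
        return None          # floor is behind the camera
    return (px + t * dx, py + t * dy, pz + t * dz)
```

The same ray test against other targets (markers, teleport anchors) covers the navigation case.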

A second scenario tested whether real-world orientation could support virtual orientation in cases where the visualization is restricted. Such cases are common in scenes reconstructed from photographs. The picture collection might not cover the full scene, forcing users to understand position and orientation without seeing real-world content. But even in a scene with high coverage, the visualization can be inconsistent: spatially oriented image surfaces can only be rendered perspective-correct from certain viewpoints in the scene, so image content may appear and disappear depending on the virtual camera's position. This breaks the user's falsification strategy. Seeing the desired image content means the camera is oriented correctly; not seeing it, however, can result either from incorrect navigation or from a perspective error too large for the images to be rendered.

In the second scenario, the virtual camera is placed at the center of a U-shaped room rendered partly with images and partly with gray walls. A clock widget tells the users to rotate their view horizontally to the designated time on the clock ("Look along the hour hand"). Over multiple repetitions, the visualization is reduced by removing first the images and then the gray walls, making it harder for the users to infer their orientation from the screen. The rotation-based interaction proved significantly more robust against this reduction, achieving smaller deviations from the target and shorter aiming times.
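The clock task reduces to simple angular arithmetic: the hour hand defines a target heading (30° per hour, clockwise from 12 o'clock), and the reported deviation is the signed angular difference wrapped to (-180°, 180°]. A sketch of that measurement, with names chosen here for illustration:

```python
def clock_target_heading(hour: float) -> float:
    """Heading in degrees, clockwise from 12 o'clock, indicated by
    the hour hand; e.g. 3 o'clock -> 90 degrees."""
    return (hour % 12) * 30.0

def heading_error(actual_deg: float, target_deg: float) -> float:
    """Signed deviation of the user's final heading from the target,
    wrapped into (-180, 180] so over- and undershoot are comparable."""
    d = (actual_deg - target_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

Wrapping matters at the 12 o'clock boundary: a user ending at 350° for a 10° target is off by -20°, not 340°.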


This work was part of a cooperation between RheinMain University of Applied Sciences and Darmstadt University of Applied Sciences, funded by the program “Forschung für die Praxis” of the Hessian Ministry for Science and Art and supported by Ove Arup & Partners, Hessisches Baumanagement, Fraunhofer IGD, Goethe Universität Frankfurt a.M., and University of London.