Integrated Spatial Gesture-Based Direct 3D Modeling and Display
Key Words:
Interactive systems, 3D modeling, see-through display, gesture-based modeling, human-computer interaction
In this project we introduce InSpire, an interactive 3D modeling system combining an optical see-through “holo-display” with video-based motion sensing and head tracking to co-locate 3D model display and user gestures. Users can directly create, edit, and manipulate digital geometry, taking a step towards an intuitive gesture-modeling environment that liberates designers’ hands from the limitations of 2D mouse input and monitor output and InSpires designers’ ideas. In this paper, we describe our goals and the concepts and implementation behind the prototype, on both the software and hardware sides. In addition, we present several use-case examples that explore potential applications. Finally, based on initial user responses to the prototype, we discuss some future development directions.
Created by:
Teng Teng,
Brian Johnson
Research Paper:
ACADIA 14: Design Agency [Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), ISBN 9781926724478], Los Angeles, 23-25 October 2014, pp. 445-452