Charlotte Visualization Center
3-Dimensional Interaction


Figure 1: (A) A two-handed interface to a volumetric weather visualization. (B) A variety of icons and symbols demarcating features extracted from the raw volumetric data.
Interactive 3D graphics applications require specialized human-computer interfaces for traveling through the 3D environment and for selecting and manipulating objects within it. These interfaces must go beyond the standard WIMP (Windows, Icons, Menus, Pointing devices) paradigm to let the user interact in three-dimensional space. Travel in 3D environments requires controlling at least six degrees of freedom (position and orientation), and often also requires controlling scale as a seventh degree of freedom. 3D interaction can be accomplished by mapping traditional WIMP interfaces onto each of these dimensions or by using advanced 6DOF devices (Figure 2).
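As a concrete illustration of these seven degrees of freedom, the following C++ sketch represents a viewpoint as a position, an orientation quaternion, and a scale factor, with simple methods that incremental device input could be mapped onto. The class and its methods are hypothetical, written for this page rather than taken from our software.

// A minimal sketch (not the VGIS implementation) of a 7-degree-of-freedom
// viewpoint state: 3 for position, 3 for orientation, 1 for scale.
// The names and the input mapping are illustrative assumptions.
#include <cmath>
#include <cstdio>

struct Quat {                       // unit quaternion representing orientation
    double w, x, y, z;
    Quat operator*(const Quat& q) const {
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w };
    }
};

// Build a rotation of 'angle' radians about a unit axis (ax, ay, az).
Quat axisAngle(double ax, double ay, double az, double angle) {
    double s = std::sin(angle * 0.5);
    return { std::cos(angle * 0.5), ax * s, ay * s, az * s };
}

struct Viewpoint {
    double pos[3]  = {0, 0, 0};     // 3 DOF: position
    Quat   orient  = {1, 0, 0, 0};  // 3 DOF: orientation
    double scale   = 1.0;           // 7th DOF: viewing scale

    // Translate by a device-space step; at small viewing scales the same
    // device motion should cover less ground, so the step is scaled.
    void translate(double dx, double dy, double dz) {
        pos[0] += dx * scale;
        pos[1] += dy * scale;
        pos[2] += dz * scale;
    }
    void rotate(double ax, double ay, double az, double angle) {
        orient = axisAngle(ax, ay, az, angle) * orient;
    }
    void zoom(double factor) { scale *= factor; }
};

int main() {
    Viewpoint vp;
    vp.translate(10, 0, 0);          // e.g. mapped from a mouse drag or 6DOF wand
    vp.rotate(0, 1, 0, 0.25);        // yaw by a quarter radian
    vp.zoom(0.5);                    // halve the viewing scale
    std::printf("pos = (%g, %g, %g), scale = %g\n",
                vp.pos[0], vp.pos[1], vp.pos[2], vp.scale);
    return 0;
}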

We are currently developing two-handed 6DOF interfaces for travel, selection, and manipulation of volumetric Doppler weather radar data embedded in a global, geospatial 3D environment (Figure 1) [Hout05]. This builds upon our work and general interest in travel and interaction techniques for 3D environments that contain geometric detail spanning several orders of magnitude in size [War01][War99b]. Many of these projects focus on geospatial 3D environments, and many are based on our Virtual Geographic Information System (VGIS) software. We also have a general interest in investigating how manipulation techniques originally designed for immersive and non-immersive virtual reality systems can be adapted and extended to display systems such as Desktop VR and the virtual workbench [Hout05][vdPol99] (see also Stereoscopic Displays).
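One common two-handed travel technique of this kind lets the user grab the world with both hands, so that moving the hands together drags the scene and changing their separation rescales it. The C++ sketch below shows the core of that mapping, assuming two position-tracked sensors; it is a generic illustration of the idea, not the interface described in [Hout05], and all of the names are invented for this example.

// A generic sketch of two-handed "grab the world" travel: the midpoint and
// separation of two tracked sensors drive translation and uniform scale of
// the scene. Function and struct names are assumptions for illustration.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3   mid(Vec3 a, Vec3 b) { return { (a.x + b.x) * 0.5, (a.y + b.y) * 0.5, (a.z + b.z) * 0.5 }; }
static double len(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct WorldTransform {
    Vec3   offset = {0, 0, 0};   // translation applied to the scene
    double scale  = 1.0;         // uniform scale applied to the scene
};

// Called each frame while both sensor buttons are held: the scene is scaled
// by the change in hand separation and dragged by the change in the midpoint
// between the hands.
void twoHandedGrab(WorldTransform& w,
                   Vec3 leftPrev, Vec3 rightPrev,
                   Vec3 leftCur,  Vec3 rightCur) {
    double prevSep = len(sub(rightPrev, leftPrev));
    double curSep  = len(sub(rightCur,  leftCur));
    if (prevSep > 1e-6)
        w.scale *= curSep / prevSep;              // stretch or shrink the world

    Vec3 drag = sub(mid(leftCur, rightCur), mid(leftPrev, rightPrev));
    w.offset.x += drag.x;                         // drag the world with both hands
    w.offset.y += drag.y;
    w.offset.z += drag.z;
}

int main() {
    WorldTransform world;
    // The hands move apart and drift to the right between two frames.
    twoHandedGrab(world,
                  {-0.1, 0, 0},    {0.1, 0, 0},    // previous left/right positions
                  {-0.2, 0.05, 0}, {0.3, 0.05, 0}  // current left/right positions
                  );
    std::printf("scale = %g, offset = (%g, %g, %g)\n",
                world.scale, world.offset.x, world.offset.y, world.offset.z);
    return 0;
}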

We are also interested in formal usability evaluation of interaction techniques. In [Krum03], we investigated 3D travel techniques suitable for wearable computers running a 3D geospatial application. In [Krum02a][Krum02b], we evaluated combinations of speech and vision-based gesture recognition for controlling travel through a geospatial environment. In [Seay01], we studied simulator sickness. In [Seay00], we evaluated and compared several 6DOF input devices and one-handed versus two-handed operation for a peg-in-hole task on the virtual workbench.

Another research area is collaboration with computer-vision colleagues to develop wireless 6DOF interfaces for VR. We are currently working with Dr. Min Shin at UNCC on wireless, computer-vision-based approaches to tracking objects in VR. Our past work in [Krum02a][Krum02b] also used vision-based gesture recognition for interaction in a 3D environment. In [Bas00], we combined the virtual workbench with computer vision techniques to track the user's hands and props and to capture geometric shape using shadows.


Figure 2: Each sensor's position and orientation in 3D space is tracked, and each sensor has 3 buttons.

