Figure 1: a) The virtual workbench at Georgia Tech, previous home of several VisCenter members. It is a Fakespace Immersive Workbench using CrystalEyes stereo glasses and a Polhemus Fastrak tracking system.
b) What a user sees while looking at the Earth on the workbench and walking from the left side to the right side of the workbench.
A variety of virtual reality displays are available today. One common type is the stereo head-tracked display (HTD). These displays are stationary, mounted on a desk, table top, or wall. The display generates stereoscopic imagery similar to that used in 3D movies, so the displayed 3D objects appear to exist above and below the physical display surface. The display system also accounts for the current position of the user's head when rendering the 3D images, which allows users to view the virtual objects from different vantage points simply by moving their head and/or body around the display. The virtual workbench is a single-screen stereoscopic HTD, as shown in Figure 1a. Figure 1b illustrates what a user sees on the workbench while walking from the left side to the right side of the bench. To manipulate the virtual objects, the user reaches out and grabs them, using either bare hands or hand-held devices. The video (Video 1) shows a user traveling through a geospatial virtual environment displayed on the virtual workbench.
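Because the screen of an HTD is fixed in space while the viewer moves, each eye must be rendered with an asymmetric ("off-axis") frustum computed from the tracked head position. The sketch below illustrates this idea under stated assumptions: the screen is given by three of its corners in tracker coordinates, the interpupillary distance (`ipd`) defaults to a typical 6.5 cm, and the function names are our own, not from any particular VR toolkit.

```python
import numpy as np

def eye_positions(head_pos, head_right, ipd=0.065):
    """Left/right eye positions, offset from the tracked head position
    along the head's right vector (assumed ipd of 6.5 cm, in metres)."""
    half = 0.5 * ipd * head_right / np.linalg.norm(head_right)
    return head_pos - half, head_pos + half

def offaxis_frustum(eye, screen_ll, screen_lr, screen_ul, near=0.1):
    """Asymmetric frustum extents (left, right, bottom, top at the near
    plane) for a stationary screen, given by its lower-left, lower-right,
    and upper-left corners, as seen from the position `eye`."""
    vr = screen_lr - screen_ll            # screen right axis
    vu = screen_ul - screen_ll            # screen up axis
    vn = np.cross(vr, vu)                 # screen normal, towards viewer
    vr, vu, vn = (v / np.linalg.norm(v) for v in (vr, vu, vn))
    va = screen_ll - eye                  # eye -> lower-left corner
    vb = screen_lr - eye                  # eye -> lower-right corner
    vc = screen_ul - eye                  # eye -> upper-left corner
    d = -np.dot(va, vn)                   # perpendicular eye-screen distance
    scale = near / d                      # project extents onto near plane
    left   = np.dot(vr, va) * scale
    right  = np.dot(vr, vb) * scale
    bottom = np.dot(vu, va) * scale
    top    = np.dot(vu, vc) * scale
    return left, right, bottom, top
```

Rendering one stereo frame then amounts to calling `offaxis_frustum` once per eye; as the head moves off-centre the frustum becomes asymmetric, which is exactly what makes the virtual objects appear anchored to the physical display surface.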
Figure 2: Navigation and Interaction in a Multi-Scale Stereoscopic Environment [Hout05]
At the VisCenter we are currently assembling several stereoscopic HTDs to continue our research on interaction in stereoscopic displays. We study fundamental display issues related to maintaining good stereoscopic viewing conditions. While these issues are fairly easy to address for small scenes in HTDs, they become more challenging for general and extended environments. Our work in [War99a] begins with traditional techniques for traveling through a virtual environment and augments them with automated view adjustments that generate optimal stereoscopic imagery. In [War99b] we mathematically analyze the distortion caused by deliberately using an incorrect eye separation value in the computer graphics viewing model and show how certain aspects of this distortion can be removed. In [War01] we describe a set of geometric characteristics and constraints of hypothetical virtual reality applications and show how these issues influence an application designer's choice of method for addressing stereoscopic viewing issues. In [War02] we mathematically analyze and compare the distortions of the 3D image created by several traditional techniques for adjusting the stereoscopic view. In [Hout05] we describe an approach that combines a two-handed interface for viewing and manipulating volumetric data with automated view adjustments that maintain good stereoscopic effects (Figure 2). This involves automating view adjustments not only during travel through the virtual environment but also during selection and manipulation of regions of interest in the environment.
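The cited papers develop and analyze specific view-adjustment methods; the sketch below is only a minimal illustration of the underlying idea of deliberately altering the modelled eye separation, not a reimplementation of any of them. It computes the on-screen parallax of scene points and scales the eye separation down whenever the extreme parallax across the scene's depth range would exceed a comfort limit (the 3 cm threshold used here is a hypothetical example value).

```python
def screen_parallax(e, z_screen, z_obj):
    """On-screen parallax (metres) of a point at viewing distance z_obj,
    for eye separation e and a screen at distance z_screen from the eyes.
    Positive parallax: the point appears behind the screen; negative:
    in front of it."""
    return e * (z_obj - z_screen) / z_obj

def safe_eye_separation(e, z_screen, z_near, z_far, max_parallax):
    """Scale the modelled eye separation so the most extreme parallax
    over the scene depth range [z_near, z_far] stays within max_parallax
    (a hypothetical comfort limit, e.g. ~0.03 m). The extremes occur at
    the nearest and farthest depths, so only those are checked."""
    worst = max(abs(screen_parallax(e, z_screen, z))
                for z in (z_near, z_far))
    return e if worst <= max_parallax else e * max_parallax / worst
```

Shrinking the eye separation this way flattens the perceived depth of the scene, which is precisely the kind of distortion that the analyses above characterize and partially compensate for.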