
Collaborative Virtual Design Environments

Shared Vision


In a dark room at the North American Design Center of General Motors (GM), a discussion is going on around a vehicle concept. Opposite the reviewers, a detailed, full-size car spins on a turntable on a sun-lit patio. At the press of a button, the patio changes to a winter scene with the bare trees reflected in the car's surface. Next, the car stands alongside the current vehicle brand line, or semitransparently overlays its predecessor to show the design evolution.

Operations that were formerly impractical or expensive (such as reviewing vehicle mock-ups outdoors) are becoming commonplace in the automotive industry through the use of virtual environments. In production use at GM since 1995, these venues have become collaborative meeting rooms and now link global working groups electronically via a shared virtual model. Here, I describe the use of single and networked collaborative virtual environments for design review, including lessons learned in group communication, visual perception, and remote control of models and viewpoints.

Collaboration among the artists, engineers, and others in a cross-functional vehicle development team requires an unambiguous language; today, that lingua franca is the physical model. For exterior aesthetics, designers rely on 2D sketches (electronic or otherwise) for their fluidity and expressiveness. Moving from design intent to product involves transforming 2D artwork into 3D shapes, and negotiating the convergence of aesthetic concepts with engineering and manufacturing realities. Detailed physical models continue to play a central role in both stages because no other representation is good enough to replace them. Yet electronic alternatives are attractive because physical models are slow and expensive to produce and cannot be shared across a global enterprise. Virtual reality (VR) helps make these electronic models appear more real. Our research goal is to create realistic virtual models and advance their use, smoothly paving the way to a future digital enterprise.

The original symbols of VR—the glove and head-mounted display (HMD)—are not usually seen in the automotive industry [1]. Instead, Immersive Projection Technology (IPT) display systems have become popular. An IPT display is composed of a number of stereoscopic projection screens, often configured as a Wall or CAVE [2] as shown in Figure 1. The former is for "outside, looking in," and the latter for "inside, looking out;" in this context, for viewing car exteriors and interiors respectively.

Stereo glasses, a 3D head-tracking device (to determine the correct perspective viewpoints), a 3D pointing device, and audio speakers complete the basic hardware interface. This interface, together with a computer and software to render a stored model, makes up the display system. For our application, a real bucket seat and steering wheel, coordinated spatially with the virtual vehicle interior in the CAVE, help drivers adjust their body positions and feel more like they are in a car rather than in a theater seat.
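To make the head-tracking idea concrete, the following is a minimal sketch, in C++ with illustrative names (not GM's production code), of the standard off-axis projection a head-tracked screen computes each frame: the fixed screen corners and the tracked head position determine an asymmetric view frustum, so the perspective remains correct as the viewer moves.

```cpp
// Minimal sketch of head-tracked (off-axis) perspective for one screen.
// Assumes screen corners and head position are expressed in the same
// coordinate frame; names and structure are illustrative only.
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 cross(const Vec3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    Vec3 normalized() const {
        double len = std::sqrt(x * x + y * y + z * z);
        return {x / len, y / len, z / len};
    }
};

// Frustum extents on the near plane, as used by a glFrustum-style call.
struct Frustum { double left, right, bottom, top, nearP, farP; };

// pa = screen's lower-left corner, pb = lower-right, pc = upper-left;
// eye = tracked head position, updated every frame from the 3D tracker.
Frustum headTrackedFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 eye,
                           double nearP, double farP) {
    Vec3 vr = (pb - pa).normalized();     // screen's right axis
    Vec3 vu = (pc - pa).normalized();     // screen's up axis
    Vec3 vn = vr.cross(vu).normalized();  // screen normal, toward the viewer

    double d = -(pa - eye).dot(vn);       // eye-to-screen distance
    double s = nearP / d;                 // scale extents onto the near plane
    return { (pa - eye).dot(vr) * s,      // left
             (pb - eye).dot(vr) * s,      // right
             (pa - eye).dot(vu) * s,      // bottom
             (pc - eye).dot(vu) * s,      // top
             nearP, farP };
}
```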


Kicking the Tires

Since 1992, GM Research and Development has helped develop IPT systems and, later, disseminate them globally throughout the corporation [9]. While the examples discussed here are necessarily specific to our experience and application, I hope this distillation will prove useful for other immersive applications.

Communicate visually. Surprisingly, many engineers consider visualization merely a check on numbers they can already compute. CAD data, which has to exist prior to many kinds of visualization, can be used to calculate vehicle dimensions, packaging volumes, human reach curves, and sight lines. Why visualize, then? Because visual exploration uncovers new problems as well as validating expectations. Some aspects of a design, such as aesthetics or even a sense of interior roominess, are subjective perceptions and cannot be calculated directly. Communicating the concept and the experience is the key.

Minimize hardware intrusion. It is difficult to communicate with the person standing next to you if you are wearing an HMD. IPT displays are preferred, but even with them, the need to wear stereo glasses is annoying, enough so that people will dispense with stereo viewing altogether if it is of minimal value for a particular task.

Display the model full size. A beautifully drawn car shown on a stereoscopic, head-tracked 19-in. monitor is a novelty that generates little real enthusiasm. The difference between it and the same model shown full size on an 18-ft.-wide Wall was put succinctly by one manager: "That's a model; this is a car!" The psychological impact is enormous. But there are also practical and perceptual reasons that motivate a 1:1 scale.

  • It's easier to have a group meeting in front of a big display than around a monitor.
  • A full-size display promotes natural interaction with a human-scale, familiar object. You can walk up to a virtual car and peer inside, and simulate opening the door by reaching for it with your tracked hand. When sitting inside, reach and visibility issues can be experienced first-person.
  • People estimate vertical model size correctly with respect to their eye height above the ground plane in real scenes and immersive displays, but not in small picture or monitor displays [5]. A Wall screen depicting a car needs to extend to the physical floor, so the vehicle and viewer ground planes flow together.
  • Scale models, even physical ones, don't generate correct shape expectations for the full-size model. Designers often remark that a full-size car does not look the same as the physical scale model they are accustomed to viewing daily. Full-size virtual models can help eliminate such false expectations.

Figure 3. A virtual meeting in a virtual car.





Branching Out

Networked virtual environments [6–8, 12] share data and control. GM has multiple sites connected by the company intranet. Four of these contain a CAVE and a Wall in the same room so that virtual vehicle interiors and exteriors can be viewed together if desired. The principal data structure in these display systems is the scene graph, which holds the parameters and relationships among the objects to be rendered graphically. With a programming interface allowing dynamic modification of the data over a network, the scene graph can be shared in several ways: among applications and the display system; among two or more display systems in the same room; and among geographically distributed display systems.
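As a concrete illustration, here is a minimal sketch, in C++ with invented names, of the kind of node structure and update interface such a shared scene graph might expose; it illustrates the idea rather than GM's actual software.

```cpp
// Sketch of a scene graph whose nodes are addressable by ID, so a small
// update message can mutate one parameter whether it originates from a
// local tool or arrives over the network. Illustrative structure only.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

struct Node {
    std::string name;
    float transform[16];          // placement of the object
    std::string material;         // e.g., paint finish on a body panel
    bool visible = true;
    std::vector<int> children;    // relationships among rendered objects
};

struct UpdateMsg {                // the small message shared among systems
    int nodeId;
    enum Field { SET_TRANSFORM, SET_MATERIAL, SET_VISIBLE } field;
    float matrix[16];
    std::string material;
    bool visible;
};

class SceneGraph {
    std::map<int, Node> nodes_;
public:
    // Applied identically to local edits and remote messages, so every
    // connected display converges on the same rendered state.
    void apply(const UpdateMsg& m) {
        Node& n = nodes_[m.nodeId];
        switch (m.field) {
            case UpdateMsg::SET_TRANSFORM:
                std::copy(m.matrix, m.matrix + 16, n.transform); break;
            case UpdateMsg::SET_MATERIAL: n.material = m.material; break;
            case UpdateMsg::SET_VISIBLE:  n.visible  = m.visible;  break;
        }
    }
};
```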

Live editing, live links. A scene graph shared among applications (Figure 2a) can hold varied data, for instance, aesthetic surfaces and engineering simulation data (thermal flow, airbag deployment). A specific plug-in for each application can link data to corresponding entities in the scene graph over a network, so some parameters can be altered in real time [10] during a design meeting. Even though 3D interactive tools are available in the virtual environment, remote application control of the scene graph has some advantages (a sketch of such a plug-in hook follows the list):

  • People are trained on the application user interface, making it easier to modify particular data.
  • Data editing tools (for example, material editors) requiring fine control or numeric input are easier to use in a traditional 2D mouse/keyboard interface than in an immersive 3D display (in our experience).
  • Changes stay in the native databases of the applications where they belong.
  • Helpers can control or guide the action remotely. The reviewers don't usually take direct control.
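On the application side, the plug-in can be as simple as a hook that forwards each edit to the display system. The sketch below, with an assumed hook name, message layout, and an already connected POSIX socket, shows the idea.

```cpp
// Sketch of an application-side plug-in hook: when a designer edits, say,
// a paint material in the desktop tool, only the small change is forwarded
// to the display system. Hook name, wire format, and transport are
// assumptions for illustration.
#include <string>
#include <sys/socket.h>

// Hypothetical callback the desktop application invokes after an edit.
void onMaterialEdited(int displaySocket, int nodeId,
                      const std::string& materialName) {
    // Assumed one-line wire format: node id, field tag, new value.
    std::string packet =
        std::to_string(nodeId) + " SET_MATERIAL " + materialName + "\n";
    send(displaySocket, packet.data(), packet.size(), 0);
    // The application's native database keeps the authoritative copy;
    // the display system merely mirrors the change.
}
```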

See what I see? Besides being linked to application databases, scene graphs in multiple display systems can be linked to each other, item by item (Figure 2b). If an item is modified, all displays are updated. Each display system's viewpoint of a shared model can be independent or coupled. For example, the view of a vehicle interior seen from inside a CAVE can control the view of the same interior on a Wall, making it simple for the many people in front of the Wall to experience exactly what the driver sees. The displays can be in the same room or globally distributed; in the latter case, verbal communication is handled with speakerphones, although experiments with digital audio have been conducted [4].
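A minimal sketch of that viewpoint coupling, with invented types: each frame, the CAVE broadcasts the driver's tracked head pose, and a coupled Wall renders from the received pose instead of its own head tracker.

```cpp
// Sketch of viewpoint coupling between two display systems. The pose
// message layout and class names are illustrative assumptions.
struct Pose {
    float position[3];     // driver's head position, vehicle coordinates
    float orientation[4];  // orientation as a quaternion
};

class Camera {
    Pose pose_{};
public:
    void setPose(const Pose& p) { pose_ = p; }
};

// Called on the Wall once per frame.
void updateWallView(Camera& wallCamera, bool coupledToCave,
                    const Pose& lastCavePose, const Pose& localTrackerPose) {
    // Coupled: everyone in front of the Wall sees what the driver sees.
    // Uncoupled: the Wall keeps its own independent viewpoint.
    wallCamera.setPose(coupledToCave ? lastCavePose : localTrackerPose);
}
```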

See what I am doing? Full-body avatars are available to represent remote participants. However, GM's current design reviews are prestructured, involve at most two sites at a time today, and focus on the shared model rather than on the participants. In this constrained context, full-body avatars are unnecessary and go unused. Other than voice, the only representation of the remote participant users have requested is a virtual arrow pointer for indicating areas of interest in the model.

We have experimented with less constrained design review scenarios in which awareness [3] of the other participants is more important. In particular, avatars were used to illustrate various actions initiated by the remote participant [10, 11]. For example, reaching for the glove box can be shown to another viewer by an avatar with a predefined animation derived from motion capture or an accurate simulation. The real participant triggers his remote avatar's animation simply by reaching for the glove box with a tracked hand. Since the animation is played back locally, only the triggering command needs to be sent across the network, rather than real-time tracking updates for all of the avatar's joints.
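The sketch below illustrates that trigger-based scheme with invented names: the local site detects the reach and sends a single command byte, and the remote site maps the command to an animation clip it already stores.

```cpp
// Sketch of trigger-based avatar animation. Only the command crosses the
// network; the receiving site plays a predefined clip locally. All names
// and the trigger radius are illustrative assumptions.

enum class AvatarAction : unsigned char { REACH_GLOVE_BOX = 1 };

// True when the tracked hand is within `radius` meters of the target.
bool handNear(const float hand[3], const float target[3], float radius) {
    float dx = hand[0] - target[0];
    float dy = hand[1] - target[1];
    float dz = hand[2] - target[2];
    return dx * dx + dy * dy + dz * dz < radius * radius;
}

// Local site, once per frame. In practice the trigger would be latched so
// the command is sent once per reach, not every frame.
void detectReach(const float hand[3], const float gloveBox[3],
                 void (*sendCommand)(AvatarAction)) {
    if (handNear(hand, gloveBox, 0.15f))   // assumed 15-cm trigger radius
        sendCommand(AvatarAction::REACH_GLOVE_BOX);
}

// Remote site: map the command to a locally stored animation clip
// (derived from motion capture or simulation) and play it back.
void onAvatarCommand(AvatarAction action) {
    if (action == AvatarAction::REACH_GLOVE_BOX) {
        // playClip("reach_glove_box");  // hypothetical animation player
    }
}
```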


Down the Road

The future of collaborative design depends on more than the technological issues described here. For one, database management can be thorny in any enterprise, and it is all the more challenging for globally distributed data, particularly when we envision pieces of the design coming from different worldwide studios, and even different vendors, just in time for a joint effort. But what will really determine success or failure is whether the technology can be made invisible, that is, not only suited to the task but unobtrusive.

Today, latencies in communication networks make distributed face-to-virtual-face meetings feel unnatural and make fine-grained interaction difficult. There are additional problems within a single IPT site. Most IPT displays have only one correct viewpoint, so all but one viewer see the shared model distorted to some degree. Even with the correct viewpoint, individual perceptions can differ considerably for reasons not yet understood. Replacing physical reality with an illusion will require significant additional research, but even limited successes can have a huge impact.


References

1. Bullinger, H.-J., Blach, R., and Breining, R. Projection technology applications in industry—Theses for the design and use of current tools. In Proceedings of the Third International Immersive Projection Technology Workshop. (Stuttgart, Germany, 1999).

2. Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Kenyon, R.V., and Hart, J.C. The CAVE: Audio Visual Experience Automatic Virtual Environment. Commun. ACM 35, 6 (1992), 64–72.

3. Curry, K.M. Supporting collaborative awareness in tele-immersion. In Proceedings of the Third International Immersive Projection Technology Workshop. (Stuttgart, Germany, 1999).

4. Daily, M., Howard, M., Jerald, J., Lee, C., Martin, K., McInnes, D., Tinker, P., and Smith, R.C. Distributed design review in virtual environments. In Proceedings of Collaborative Virtual Environments 2000 (San Francisco, Sept. 10–12, 2000).

5. Dixon, M.W., Wraga, M., Proffitt, D.R., and Williams, G.C. Eye height scaling of absolute size in immersive and non-immersive displays. J. Experimental Psychology: Human Perception and Performance 26, 2 (2000), 582–593.

6. Leigh, J., Johnson, A., and DeFanti, T.A. CAVERN: A distributed architecture for supporting scalable persistence and interoperability in collaborative virtual environments. J. Virtual Reality Res., Dev. and Apps. 2, 2 (1997), 217–237.

7. Macedonia, M.R., Zyda, M.J., et al. NPSNET: A network software architecture for large-scale virtual environments. Presence 3, 4 (1994), 265–287.

8. Singhal, S., and Zyda, M.J. Networked Virtual Environments: Design and Implementation. ACM Press, New York, 1999.

9. Smith, R.C., Peruski, L., Celusnak, T., and McMillan, D.J. Really getting into your work: The use of immersive simulations. In Proceedings of the International Body Engineering Conference (IBEC), Advanced Technology and Processes 25 (1996). Also, Symposium on Virtual Reality in Manufacturing, Research and Education. (Chicago, 1996).

10. Smith, R.C., Pawlicki, R.R., Leigh, J., and Brown, D. Collaborative VisualEyes. In Proceedings of the Fourth International Immersive Projection Technology Workshop. (Ames, Iowa, 2000).

11. Smith, R.C., Pawlicki, R.R., Leigh, J., and Brown, D. Collaborative VisualEyes Video. In Video Proceedings of IEEE VR 2000. (New Brunswick, N.J., Mar. 18–22, 2000).

12. Tramberend, H. Avocado: A distributed virtual reality framework. In Proceedings of IEEE Virtual Reality (Houston, Tex., 1999).


Author

Randall C. Smith ([email protected]) is a staff research scientist at General Motors Research Laboratories in Warren, MI.


Footnotes

The CAVE is a registered trademark of the Board of Trustees of the University of Illinois.


Figures

Figure 1. A Wall and a CAVE display.

Figure 2. Display system connections to applications, and to each other.

Figure 3. A virtual meeting in a virtual car.



©2001 ACM  0002-0782/01/1200  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
