
Communications of the ACM

Interactive immersion in 3D graphics

Toward the Merging of Real and Virtual Spaces


When designing payloads for space missions, NASA engineers face the daunting task of building a complex device capable of surviving sustained vibration and acceleration forces during launch, then operating robustly, with infrequent (at best) maintenance, in the debilitating environment of outer space. These are demanding requirements for a system that must function on a strict schedule and represents a substantial financial outlay. How might engineers use virtual environments, modeling, and simulation to help reduce the potential for errors in designing and building a payload?

Construction of individual payload components is typically subcontracted out, and the integration stage always involves compatibility and layout issues. For example, attaching external cables is a common final integration task, and NASA payload designers have reported several occasions on which spacing problems surfaced during this step—too late to redesign the payload or reschedule the mission. Each of the payload's subsystems might have conformed to specifications, but the physical reality of attaching the cables ultimately left inadequate space for hands, tools, or parts. The potential for such errors went undetected early on, in the specification documents and design verification, and later, in the physical mock-ups of the payloads. Layout errors can result in schedule delays, equipment redesign, or makeshift engineering fixes. In spite of the care invested in the design specification, integration is always problematic.

Given the multiple prototype designs for each space payload, engineers would like to answer three questions: Can the payload be assembled? Can repair technicians readily service it? And can others be trained to assemble and repair it?

Immersive virtual environments (VEs), also known as virtual reality, provide a powerful tool to help answer these questions: Will the payload fit among the other components in, say, a space shuttle cargo bay? Is there enough space between payload components to attach power cables? In what order should payloads be loaded and connected? Detecting assembly and integration issues early in the design process using virtual models would potentially save hundreds of thousands of dollars and weeks of precious development time.

Immersive VEs are broadly defined as systems that allow participants to experience interactive computer-generated worlds from a first-person perspective, as opposed to prerendered movies, videos, or animations. Natural locomotion (such as turning one's head and walking about) changes the participant's viewpoint, as opposed to relying on a mouse, keyboard, or joystick. VEs typically seek to elicit a sense of presence, or being there, by physically immersing participants, approximating the sensory information of their real-world experience and relying on their own direct experience [11]. The ideal payload-assembly VE system would have participants fully convinced they were actually performing the assembly. Parts and tools would have mass, feel real, and handle properly, with appropriate visual and haptic feedback.


Conducting design-evaluation and assembly-verification tasks in VEs enables designers to assess and validate alternative designs more readily and inexpensively than if they had to build mock-ups, and more thoroughly than if they had only drawings to work with. Design review has become a major VE productivity application [2], and substantial research [1], along with a number of commercial packages, offers completely virtual approaches to these tasks.

However, the use of VEs in design is still limited, in part because of users' difficulty interacting with the 3D virtual world, which is substantially more complex than 2D interaction. Computers can generate visually and aurally realistic VEs, yet our ability as users to interact with them is limited by the interface. Is the interaction so constrained that it reduces the applicability of these potentially revolutionary technologies?

Even in today's most advanced VEs, almost every object in the environment is virtual; that is, the object exists only as a computer representation and is not registered with a corresponding real object. But common virtual reality application domains, including engineering assembly and servicing, are hands-on tasks, and the principal drawback of virtual models—that there is nothing there to feel, nothing to give manual affordances, and nothing to constrain motion—is a serious limitation for these applications. Using a six-degree-of-freedom tracked joystick to simulate a wrench, for example, is far from realistic, perhaps too far to be useful.

Most objects in VEs are virtual because of the difficulty of obtaining accurate shape, appearance, and motion information for real objects, including human participants, specialized tools, and parts. Real objects often have many degrees of freedom in terms of both movement and deformation. Current tracking, modeling, and interaction approaches, involving input and output between real objects and the system, are often inadequate for producing high-fidelity representations of the objects. Flight simulators are an example of an effective combination of real and virtual objects; near-field objects (such as the cockpit and flight controls) are real, while the far-field objects are computer-generated.

A hybrid environment system that merges dynamic real objects with virtual objects would improve VE interactivity and effectiveness. It would also allow participants to see themselves, along with tools and parts, incorporated into the VE. Here, I explore an approach to merging real and virtual spaces, illustrating it with a case study in which NASA payload designers used a hybrid environment to explore an abstracted payload design task. My aim is to generate interest in hybrid environments (real and virtual objects) as an alternative to traditional immersive VEs (virtual objects only) for tasks requiring high-fidelity manual interaction, including assembly and training.

Real Objects in VEs

Many approaches help obtain the shape, appearance, and motion information of specific real objects. Taking measurements and then building the object in a modeling package is laborious for static objects and nearly impossible for capturing all the degrees of freedom of dynamic objects. Newer methods use laser-scanning and image-based techniques to generate 3D models of real objects (such as scanning sculptures for visualization and education purposes [7]).

Precise models of real objects are not necessary for some applications. Approximations (such as the visual hull, an object's tightest enclosing volume consistent with its silhouettes as viewed by multiple cameras [6]) are useful substitutes that help a system achieve real-time performance. Image-based rendering approaches are popular in computing visual hulls [10]. Similar in spirit to this work are augmented-reality systems [4] that incorporate a few virtual objects into the real world. My colleagues and I focus on the converse: incorporating real objects into the virtual world.
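
To make the visual-hull idea concrete, here is a minimal Python sketch; the pinhole-camera model, names, and data layout are illustrative assumptions of mine, not details of any particular system. A 3D point belongs to the visual hull exactly when it projects inside the object's silhouette in every camera view:

    import numpy as np

    def point_in_visual_hull(point, cameras, silhouettes):
        """Return True if a 3D point projects inside every camera's silhouette.

        cameras     -- list of 3x4 projection matrices (world -> image, homogeneous)
        silhouettes -- list of 2D boolean arrays, True where the object was segmented
        """
        p_h = np.append(point, 1.0)            # homogeneous coordinates
        for P, sil in zip(cameras, silhouettes):
            x, y, w = P @ p_h
            if w <= 0:                         # point is behind this camera
                return False
            u, v = int(round(x / w)), int(round(y / w))
            if not (0 <= v < sil.shape[0] and 0 <= u < sil.shape[1]):
                return False                   # projects outside the image
            if not sil[v, u]:
                return False                   # outside this silhouette
        return True                            # inside the intersection of all cones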

In physical simulations, detecting collisions among moving virtual objects (composed of polygons, splines, volumes, or surfaces) is an active area of research [3]. Collision detection between real and virtual objects has typically meant first creating geometric models of rigid-body real objects, then applying standard collision-detection approaches.

Ideally, participants would use their hands, bodies, and tools to manipulate objects in the VE. As a step toward this goal, some systems provide tracked, instrumented input devices. Commercial examples include articulated data gloves with gesture recognition or buttons (Immersion's CyberGlove), mice (Ascension Technology's 6D Mouse), and joysticks (Fakespace's NeoWand).

Another approach to enabling more natural interaction is to register a specific real object with its virtual counterpart, allowing the user to both see and feel the object. For example, registering a toy spider with a virtual spider [5] lets users touch a physical object at the precise location where they see its virtual representation. However, such specialized engineering can be time-consuming, and the results are limited to a specific application.

We use a hybrid environment system employing image-based object-reconstruction algorithms to generate real-time virtual representations, or avatars, of real objects [9]. The system has four fixed-position video cameras viewing the scene. At each frame, the system computes the visual hull of a real object by "reprojecting" the object's silhouette, as viewed by each camera, into the working volume and determining the resulting intersection volume. The volume-intersection computation is done in real time by leveraging the massively parallel computation power of graphics-card accelerators to determine whether a point in space is within the intersection [8].
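
As illustration, a simplified CPU version of that per-frame computation might sample the working volume on a voxel grid, reusing the point_in_visual_hull sketch above; the grid bounds and resolution here are my illustrative assumptions. The actual system instead evaluates the same silhouette-intersection test for entire planes of points in parallel on the graphics card [8].

    import numpy as np
    from itertools import product

    def reconstruct_visual_hull(cameras, silhouettes, bounds, resolution=64):
        """Approximate one frame's visual hull on a voxel grid (CPU sketch).

        bounds -- ((xmin, ymin, zmin), (xmax, ymax, zmax)) of the working volume
        Returns a boolean grid: True where all silhouettes agree the object is.
        Uses point_in_visual_hull from the sketch above.
        """
        lo, hi = (np.asarray(b, dtype=float) for b in bounds)
        axes = [np.linspace(lo[d], hi[d], resolution) for d in range(3)]
        hull = np.zeros((resolution,) * 3, dtype=bool)
        for i, j, k in product(range(resolution), repeat=3):
            point = np.array([axes[0][i], axes[1][j], axes[2][k]])
            hull[i, j, k] = point_in_visual_hull(point, cameras, silhouettes)
        return hull

At even modest resolutions this brute-force loop is far too slow for real time, which is precisely why the system offloads the per-point test to graphics hardware.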

By regenerating the virtual representations at each frame, the system handles highly dynamic and deformable objects (such as clothing, tools, and hands). Using head-mounted displays, participants see real-object avatars visually merged into the VE. Moreover, participants handle and feel the real objects while interacting with the virtual objects.

We developed collision-detection and collision-response algorithms so we could use the real-object avatars in virtual lighting and in physically based mechanics simulations. These algorithms enable real-object avatars to naturally affect simulations (such as particle systems, cloth simulations, and rigid-body dynamics). The resulting system supports a more natural VE interface. Figure 1 shows a simple simulation of a virtual ball rolling around on a virtual table while bouncing off real objects (such as the wooden blocks the user extemporaneously added to the scene without prior modeling or tracking).
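
To convey the structure of such a simulation, here is a toy version of the Figure 1 ball, under the same illustrative assumptions as the sketches above: penetration is detected by sampling points on the ball's surface against the visual hull, and the response separates the objects and reflects the ball's velocity. The system's actual collision-detection and collision-response algorithms are more sophisticated; this only shows the general shape of the computation.

    import numpy as np

    def step_ball(pos, vel, radius, dt, in_hull, n_samples=64, restitution=0.6):
        """Advance a virtual ball one frame, bouncing it off real-object avatars.

        in_hull -- predicate such as point_in_visual_hull above.
        """
        pos = pos + vel * dt
        rng = np.random.default_rng()
        dirs = rng.normal(size=(n_samples, 3))        # roughly uniform directions
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        hits = [d for d in dirs if in_hull(pos + radius * d)]
        if hits:
            normal = -np.mean(hits, axis=0)           # approximate contact normal
            n_len = np.linalg.norm(normal)
            if n_len > 0:
                normal /= n_len
                pos = pos + 0.005 * normal            # separate the surfaces slightly
                if np.dot(vel, normal) < 0:           # reflect only if moving inward
                    vel = vel - (1.0 + restitution) * np.dot(vel, normal) * normal
        return pos, vel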

NASA Payload Design Evaluation

How might hybrid environment systems be applied to tasks that would clearly be hampered by traditional VE approaches? In November 2001, I was approached by NASA Langley Research Center (LaRC) engineer Danette Allen, who was interested in applying VE technology to the design of spacecraft payloads. I then toured the LaRC facilities in Hampton, VA, meeting with managers, designers, engineers, and technicians involved in designing and building launch vehicles.

In payload design, designers first develop a design-and-assembly specification for subcontractors, documenting the assembly procedure in a step-by-step instruction list. The subcontractors then provide a computer-aided design (CAD) model of their subpayloads for design verification. Later, simplified physical mock-ups are manufactured for design verification and layout. Finally, the various components are integrated and evaluated by assembly technicians.

My collaborators and I wanted to determine if using hybrid environments would help identify integration issues in payload CAD models. Hybrid environments, as opposed to traditional VEs, would enable the designers to test configurations using the final assembly personnel and real physical tools and parts. Changes in the early project stages are substantially cheaper in terms of money and time than fixes later on.

In an exploratory case study, four NASA LaRC payload design experts came to Chapel Hill, NC, in March 2002 to meet with me and the Effective Virtual Environments Group in the Department of Computer Science at the University of North Carolina. They used our real-time object-reconstruction hybrid environment system to perform a simulated payload assembly task. To assess the applicability of the technology to NASA payload evaluations, we devised a task involving CAD models of the light-imaging unit—the photon multiplier tube (PMT)—of a weather-imaging satellite that was launched the following year.

The PMT model and two other fictional payloads were rendered in a VE. The task was to screw a cylindrical shield (mocked-up as a PVC pipe) into a receptacle, then plug a power connector into an outlet inside the shield (see Figure 2). Additional handheld tools were available for participants in need of further assistance.

Simulating this task would be challenging with traditional immersive virtual environment modeling and tracking technologies. For example, most tracking systems report the position of a few (typically fewer than 10) tracked points. Thus, most VE systems would have difficulty tracking some of the objects here (such as cables, complex metal tools, and participants' hands). A completely virtual approach would mean participants could not handle real tools or parts while interacting with the virtual world, and interaction (and presumably training and fidelity) would be hampered in this predominantly manual task.

The hybrid environment system performed object reconstruction (of participants, tools, and parts) and collision detection between the virtual payloads and the real-object avatars.

How Much Space?

We first provided task information in approximately the same manner as would be available in an actual design evaluation. The primary question was how much space was needed—and how much would actually be allocated—between the PMT and payload A in Figure 2 to complete the task.

Next, each participant performed the procedure in the hybrid environment. After a period of adjustment, participants picked up the pipe and eased it into the center cylindrical assembly while trying to prevent it from colliding with any of the virtual payloads. Finally, they snaked the power cord down the tube, inserting it into the outlet (Figure 3).

If a participant asked to increase or decrease the space between payload A and the PMT, the experimenter could dynamically adjust the spacing, allowing quick evaluation of different payload configurations (see Table 1). The PVC pipe was 14cm long and 4cm in diameter, and the NASA engineers, accustomed to scarce payload space, were stingy with it: before performing the task, most participants allocated only the minimum space they estimated was needed to slide in the cylindrical shield.

Participant 1 completed the task without using any tool, as the power cable was stiff enough to be forced into the outlet. Since one aim of the experiment was to evaluate the possibility of requiring tools in assembly or repair, we used a more flexible power cable for the remaining participants.

While trying to insert the new power cable, participants 2, 3, and 4 reported they could not complete the task. They could not snake the more flexible power cable down the pipe and insert it into the outlet without using some device to help push the connector when it was inside the pipe; the pipe was too narrow for their hands. When asked what they needed, all three requested a tool to assist in plugging in the power cable. Each was given a tool (a set of tongs) and was then able to complete the power-cable insertion task. However, the tool also increased the required spacing between payload A and the PMT from about 15cm to an average of 24cm.

While the cable problems might be apparent in retrospect, none of the original designers anticipated this requirement. The way the assembly information was provided (diagrams and assembly documents) made it difficult for them—even those with substantial payload-development experience—to identify subtle assembly integration issues.

Accommodating tools extemporaneously in a VE session, without additional modeling or development effort, enabled efficient design evaluation. The presence of the pipe threads and cable socket provided important motion constraints that aided in interacting with these objects.

Full physical mock-ups, which are costly and require substantial time to create, are used primarily during the later stages of payload development. We found that even early on, hybrid environments could provide an effective tool for evaluating designs and layouts. The NASA engineers were surprised that both an unplanned-for tool and significant additional space were required. The experiment participants said the financial cost of the spacing error could range from moderate (keeping personnel waiting until a design fix was implemented) to extreme (launch delays) (see Table 2).

Though the virtual PMT model was not very detailed, and the visual contrast between real and virtual objects was rather abrupt, all participants tried to avoid touching the virtual model. Upon being told about their response to the virtual environment, one said: "That was flight hardware. You don't touch flight hardware." The familiarity and relevance of the task made it a vivid experience for the participants.

Conclusion

NASA LaRC payload designers felt that both VEs and hybrid environments would be useful for assembly training, hardware layout, and design evaluation. They said substantial improvements in detecting errors could be realized through virtual models in almost every stage of payload development. NASA is now looking to evaluate the applicability of hybrid environments as an iterative design tool on future NASA projects with more complex designs and multiple participants.

Other VE applications, including training, telepresence, phobia treatment, and assembly verification, might also be improved by enabling participants to interact with real objects in a VE. My work with NASA has shown that such a system can provide a substantial advantage in hardware-layout and assembly-verification tasks. Future work should seek to identify the tasks most likely to benefit from having participants handle dynamic real objects.

References

1. Badler, N., Erignac, C., and Liu, Y. Virtual humans for validating maintenance procedures. Commun. ACM 45, 7 (July 2002), 56–63.

2. Brooks, F. What's real about virtual reality? IEEE Comput. Graph. Applic. 19, 6 (Nov./Dec. 1999), 16–27.

3. Ehmann, S. and Lin, M. Accurate proximity queries between convex polyhedra by multi-level Voronoi marching. In Proceedings of the International Conference on Intelligent Robots and Systems (Takamatsu, Japan, Oct. 31–Nov. 5, 2000).

4. Feiner, S., Macintyre, B., and Seligmann, D. Knowledge-based augmented reality. Commun. ACM 36, 7 (July 1993), 52–62.

5. Garcia-Palacios, A., Hoffman, H., Carlin, A., Furness, T., and Botella-Arbona, C. Virtual reality in the treatment of spider phobia: A controlled study. Behav. Res. Ther. 40, 9 (2002), 983–993.

6. Laurentini, A. The visual hull concept for silhouette-based image understanding. IEEE Transact. Pattern Analys. Machine Intelli. 16, 2 (Feb. 1994), 150–162.

7. Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., and Fulk, D. The Digital Michelangelo Project. In Proceedings of ACM SIGGRAPH 2000 (New Orleans, July 23–28). ACM Press, New York, 2000, 131–144.

8. Lok, B., Naik, S., Whitton, M., and Brooks, F. Incorporating dynamic real objects into immersive virtual environments. In Proceedings of the ACM Symposium on Interactive 3D Graphics (Monterey, CA, Apr. 28–30). ACM Press, New York, 2003, 31–34.

9. Lok, B. Online model reconstruction for interactive virtual environments. In Proceedings of the 2001 ACM Symposium on Interactive 3D Graphics (Chapel Hill, NC, Mar. 19–21). ACM Press, New York, 2001, 69–72.

10. Matusik, W., Buehler, C., Raskar, R., Gortler, S., and McMillan, L. Image-based visual hulls. In Proceedings of ACM SIGGRAPH 2000 (New Orleans, July 23–28). ACM Press, New York, 2000, 369–374.

11. Sheridan, T. Musing on telepresence and virtual presence. Presence: Teleoperators and Virtual Environments 1, 1 (Winter 1992), 120–125.

Author

Benjamin C. Lok ([email protected]) is an assistant professor in the Department of Computer and Information Science and Engineering at the University of Florida in Gainesville.

Footnotes

This work was supported by the Link Simulation and Training Foundation, the Office of Naval Research's Virtual Technologies and Environments Program, the National Institutes of Health's National Institute of Biomedical Imaging and Bioengineering, and the National Science Foundation's Information Technology Research BioGeometry Project. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

Figures

Figure 1. Virtual ball bouncing off real objects, with arrows indicating ball motion between images; left, third-person view; the rest are first-person views.

Figure 2. NASA engineers evaluated the spacing between two virtual payloads (Payload A and the photon multiplier tube, left) while trying to attach a cylindrical shield and power cable (middle). A participant handles the real objects (right).

Figure 3. A participant interacting naturally with physical parts (cable, tool, and tube) and with virtual models (Payload A and the photon multiplier tube) in the hybrid environment.

Tables

Table 1. NASA Langley Research Center participants' responses concerning distance between payload components, and task results (all distances in cm).

Table 2. Participants' responses concerning the time and financial cost of discovering a similar spacing error during the integration stage.

©2004 ACM  0002-0782/04/0800  $5.00
