Microsurgical techniques are increasingly used in almost all types of surgical procedures. But they demand a great deal of the surgeon's manual and mental abilities over periods of several hours, and the operating field is often viewable only through a stereo microscope. Teaching the related techniques is just as difficult. Animal cadavers fall short of providing equivalent human anatomy, and certain pathologies are difficult or even impossible to arrange for training individual surgeons. Simulations, on the other hand, enable surgeons to train on and practice difficult interventions on scientifically accurate virtual organs, providing many advantages over their real-life biological counterparts. Pathological conditions and complications can be selected arbitrarily, and crucial parts of an intervention can be repeated as often as necessary, helping surgeons gain experience and self-confidence, even in rare and especially complicated procedures. Surgeons can also test prototype instruments well before they are formally approved or manufactured. New methods can be developed and tested without risk to living human subjects.
For such a system to emerge as a routine aspect of medical training, a convincing virtual reality simulation has to allow its users to feel as if they are actually experiencing the simulated surgery. The technology's designers would want users to ignore their immediate surroundings and perform intuitively and naturally, despite being part of an artificial environment. In science fiction terms, the following approach might apply: wire the brain directly to the simulation, then generate the necessary electrical nerve stimuli.
Although such an option is not yet possible, a current alternative is to transmit information about the virtual environment through the available information channels, namely the visual, haptic, and acoustic human senses. Unlike the computer-generated illusion of, say, a movie, a virtual reality simulation has to satisfy the additional constraint of real-time interaction. If a surgeon makes an error, real tissue doesn't wait before tearing; virtual tissue shouldn't wait either. All underlying computing operations (reading sensory data to determine what the surgeon is doing, testing instruments for collisions with virtual tissue, computing tissue reactions, generating force feedback, and rendering a stereo image) have to be performed with imperceptible delay.
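To make the real-time constraint concrete, consider how such a simulator's per-frame loop might be organized. The following C++ skeleton is purely illustrative; its structure and all names in it are our assumptions, not EyeSi's actual code. The point is that every stage must complete within a single frame period.

    #include <chrono>

    struct TrackerData {};  // would hold instrument poses and eye orientation

    // Stage stubs standing in for the real subsystems.
    TrackerData readTracker() { return {}; }       // read the sensory data
    void detectCollisions(const TrackerData&) {}   // instruments vs. tissue
    void simulateTissue(double /*dt*/) {}          // compute tissue reactions
    void renderStereoFrame() {}                    // draw both eye images

    void simulationLoop() {
        using clock = std::chrono::steady_clock;
        auto last = clock::now();
        for (;;) {
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - last).count();
            last = now;
            TrackerData input = readTracker();  // what is the surgeon doing?
            detectCollisions(input);            // test for tissue contact
            simulateTissue(dt);                 // advance the tissue model
            renderStereoFrame();                // render the stereo pair
        }
    }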
Computing a realistic virtual world for medical simulations in real time requires the joint effort of a number of research fields: computer graphics and animation, display and interface technologies, image processing, biomechanical modeling, and real-time system design.
Intraocular surgery is one of the most demanding microsurgical procedures in a field noted for demanding procedures and critical decision making. Needle-shaped instruments at most 0.9mm in diameter are inserted into a living eye. The surgeon operates with submillimeter accuracy, avoiding damage to the eye's sensitive structures. Haptic feedback from the extremely soft tissue is barely perceptible, so the surgeon relies almost entirely on the visual information provided by a stereo microscope, whose magnified view makes hand-eye coordination very difficult. An intraocular procedure involves a sequence of such delicate steps.
The EyeSi system [7] we've been developing for the past five years involves a mechanical component that models these delicate geometric conditions while monitoring the surgeon's activities through a high-speed optical tracking system. Tissue behavior is computed via real-time deformation algorithms, then visualized with stereoscopic computer graphics to deliver a realistic view of the procedure.
Real surfaces for a virtual experience. The realism of the geometric properties and haptic impression of the EyeSi mechanical design plays an important role in teaching the manual skills eye surgeons need. EyeSi includes a facial mask covering a mechanical eye model: a metallic hemisphere on a gimbal suspension with the same rotational degrees of freedom as a human eye in its socket. The restoring effect of the eye muscles, which return the eye to its rest position, is modeled by a set of springs of appropriate strength, so the movement characteristics of the mechanical eye correspond to those of a human eye. The mechanical eye has two puncture holes through which standard surgical instruments are inserted (see Figure 1).
Two small LCD displays mounted in the equivalent of a microscope eyepiece show stereoscopic images of the virtual scenario. An optical tracking system situated under the facial mask measures the current position and direction of the instruments, as well as the orientation of the mechanical eye (see Figure 2). The data is transferred to a PC that in turn updates the computer graphics model. Thus, the movements of the instruments and the mechanical eye cause their virtual counterparts to move correspondingly.
Tracking is the process of determining the positions and movements of a given object in space. For tracking in virtual reality simulations, optical tracking systems offer some attractive properties [9]: unlike mechanical encoders, they are contact free, and unlike electromagnetic systems, they provide high accuracy and are undisturbed by metallic objects. Object positions are measured by analyzing the images from a set of video cameras. EyeSi includes three cameras mounted below the mechanical eye; their field of view is adjusted to cover the relevant volume. The equator of the eye and the tips of the surgical instruments are marked with color tags. As the cameras view the scene from different angles, the positions of the color markers in space are reconstructed from the information in two camera images; the third camera is activated automatically when the view of one of the other cameras is blocked.
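The reconstruction from two camera views is, in essence, classical triangulation: each detected marker defines a ray from a camera center, and the marker's 3D position is where the two rays (nearly) meet. The following is a minimal sketch of that general technique, assuming calibrated pinhole cameras; it is our illustration, not EyeSi's code.

    struct Vec3 { double x, y, z; };

    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Ray { Vec3 origin, dir; };  // back-projected from a marker pixel

    // Midpoint of the shortest segment between two rays: solve the 2x2
    // normal equations for s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|^2.
    Vec3 triangulate(const Ray& r1, const Ray& r2) {
        Vec3 w = r1.origin - r2.origin;
        double a = dot(r1.dir, r1.dir), b = dot(r1.dir, r2.dir),
               c = dot(r2.dir, r2.dir), d = dot(r1.dir, w), e = dot(r2.dir, w);
        double denom = a * c - b * b;   // approaches 0 for parallel rays
        double s = (b * e - c * d) / denom;
        double t = (a * e - b * d) / denom;
        return 0.5 * ((r1.origin + s * r1.dir) + (r2.origin + t * r2.dir));
    }

Because measurement noise means the two rays rarely intersect exactly, taking the midpoint of their closest approach is the usual compromise.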
Latency is another major consideration in the effort to create a convincing simulation. A conventional optical tracking system stores camera images in the memory of a dedicated framegrabber card, then transfers the images to the computer's main memory, letting the CPU locate marker or object positions. However, this approach involves two major disadvantages: the tracking process produces a high system load; and the latency associated with the overall process is too great for generating a natural-appearing simulation.
A way around this problem is to process the camera data before it arrives at the PC. EyeSi uses a proprietary hardware device that captures and analyzes the camera images. Its main component is a field-programmable gate array (FPGA), a chip that can be configured in software but executes the specified operations at hardware speed. Its two main advantages are developmental flexibility and operational speed.
As a camera sends the pixels of an image one after another, the FPGA tests each pixel against the unique color of each marker, then averages the sensor coordinates of all pixels belonging to a certain marker. No more than 2ms after the last pixel has arrived at the FPGA, the resulting positions of all color markers are transferred to the PC. Because only marker positions, rather than entire images, cross to the PC, both CPU load and the communication bandwidth between FPGA and PC are minimal.
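In software terms, this streaming logic amounts to a running centroid computation per marker color. The sketch below is a hypothetical software rendition of what the hardware does; matching marker colors by simple per-channel thresholds is our simplification.

    #include <cstdint>
    #include <utility>
    #include <vector>

    struct ColorRange { uint8_t rMin, rMax, gMin, gMax, bMin, bMax; };
    struct Accum { uint64_t sumX = 0, sumY = 0, count = 0; };
    struct Centroid { double x, y; bool valid; };

    class StreamingTracker {
    public:
        explicit StreamingTracker(std::vector<ColorRange> markers)
            : markers_(std::move(markers)), accum_(markers_.size()) {}

        // Called once per pixel, in raster order, as the camera delivers it.
        void onPixel(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
            for (size_t i = 0; i < markers_.size(); ++i) {
                const ColorRange& m = markers_[i];
                if (r >= m.rMin && r <= m.rMax && g >= m.gMin && g <= m.gMax &&
                    b >= m.bMin && b <= m.bMax) {
                    accum_[i].sumX += x;
                    accum_[i].sumY += y;
                    accum_[i].count += 1;
                }
            }
        }

        // Called after a frame's last pixel; each centroid is one division
        // away, which is why results can follow the final pixel so quickly.
        std::vector<Centroid> finishFrame() {
            std::vector<Centroid> out;
            for (Accum& a : accum_) {
                if (a.count > 0)
                    out.push_back({double(a.sumX) / a.count,
                                   double(a.sumY) / a.count, true});
                else
                    out.push_back({0.0, 0.0, false});  // marker occluded
                a = Accum{};  // reset the sums for the next frame
            }
            return out;
        }

    private:
        std::vector<ColorRange> markers_;
        std::vector<Accum> accum_;
    };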
Surgery simulations and video games share a quest for realistic object behavior and high-quality images. What they do not share is the complexity of object behavior; biological tissue behaves in an incomparably more complex way than any car on any simulated computer-game race track. When the positions of the instruments and the orientation of the mechanical eye arrive from the tracking system, the simulation's first step is to calculate collisions between instruments and the structures of the eye; for example, if an instrument touches a pathological membrane, the membrane has to change shape in accordance with the forces exerted by the instruments and by the membrane itself.
If the forces exceed a material-specific limit, the membrane has to tear. Slight collisions with the retina may lead to bleeding that the EyeSi surgeon perceives as a red spot on the retina. Collisions between instruments and the eyeball are detected with simple geometric intersection tests; thanks to the eyeball's spherical shape, a few such calculations suffice and produce no additional latency. When collisions with more complex shapes (such as deformable objects) have to be considered, collision detection becomes more expensive, making advanced approaches necessary. Most algorithms for collision detection handle very complex objects by analyzing their shapes in a time-consuming preprocessing step, then using that information for the real-time calculation. However, this approach is not applicable to simulated tissue that changes shape at runtime. To accelerate collision detection, EyeSi therefore implements an image-based approach that detects collisions with hardware-accelerated graphics operations (see the sidebar "Image-based Collision Detection").
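The cheap sphere test mentioned above reduces to comparing one distance against one radius. As a minimal illustration (our sketch, not the EyeSi implementation), treat the instrument tip as a point and the eye wall as a sphere of given radius around a given center:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Signed penetration of the tip into the eye wall: negative while the
    // tip is safely inside the eyeball, zero or positive once it reaches
    // the wall (such as the retina).
    double wallPenetration(const Vec3& tip, const Vec3& center, double radius) {
        double dx = tip.x - center.x, dy = tip.y - center.y, dz = tip.z - center.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz) - radius;
    }

    bool touchesEyeWall(const Vec3& tip, const Vec3& center, double radius) {
        return wallPenetration(tip, center, radius) >= 0.0;
    }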
When a collision occurs, EyeSi has to determine the displacements and forces acting between the objects. These forces are then used to calculate the reaction of, say, a pathological membrane, as defined by a biomechanical model of tissue behavior. The speed of the calculation depends on the complexity of the membrane's mathematical description, which in turn depends on the approach chosen for the biomechanical model (see the sidebar "Descriptive Modeling"). EyeSi uses the descriptive models Mass-Spring [5] and Enhanced ChainMail [2, 6].
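To give a flavor of the mass-spring approach (a generic sketch under our own simplifications, not the specific model of [5]): the tissue is discretized into point masses connected by springs, and each time step accumulates Hooke's-law forces and integrates the motion explicitly.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Particle { Vec3 pos, vel; double mass; bool fixed; };
    struct Spring { int i, j; double restLen, stiffness; };

    void step(std::vector<Particle>& pts, const std::vector<Spring>& springs,
              double dt, double damping) {  // damping in (0, 1]
        std::vector<Vec3> force(pts.size(), Vec3{0, 0, 0});
        for (const Spring& s : springs) {
            Vec3 d = {pts[s.j].pos.x - pts[s.i].pos.x,
                      pts[s.j].pos.y - pts[s.i].pos.y,
                      pts[s.j].pos.z - pts[s.i].pos.z};
            double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            if (len < 1e-9) continue;                          // degenerate spring
            double f = s.stiffness * (len - s.restLen) / len;  // Hooke's law
            force[s.i].x += f * d.x; force[s.i].y += f * d.y; force[s.i].z += f * d.z;
            force[s.j].x -= f * d.x; force[s.j].y -= f * d.y; force[s.j].z -= f * d.z;
        }
        for (size_t k = 0; k < pts.size(); ++k) {
            if (pts[k].fixed) continue;  // boundary vertices stay put
            Particle& p = pts[k];
            p.vel.x = damping * (p.vel.x + dt * force[k].x / p.mass);
            p.vel.y = damping * (p.vel.y + dt * force[k].y / p.mass);
            p.vel.z = damping * (p.vel.z + dt * force[k].z / p.mass);
            p.pos.x += dt * p.vel.x;
            p.pos.y += dt * p.vel.y;
            p.pos.z += dt * p.vel.z;
        }
    }

Explicit integration like this is fast but stable only for small time steps and moderate stiffness, which is one reason the choice of biomechanical model matters so much for real-time use.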
The moment the tracking information and the results of the tissue simulation are available, EyeSi generates a computer-graphical view. The shape of every object is represented by a mesh of triangles and their surface structure by images projected onto the objects (texture-mapping). For the eye model, EyeSi uses anatomical information and photos of a real retina and iris.
Visualization quality is clearly important for the surgeon navigating the virtual eye. Two visual depth cues help estimate the position of an instrument: the shadow cast by the instrument's tip and the 3D view through the stereo microscope (see Figure 3). EyeSi's graphical model therefore includes light effects and shadows. Stereoscopic images are produced by rendering the same scene from two different points of view and presenting the results on the two micro-LCD panels in the mock microscope's eyepieces.
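Producing the stereo pair comes down to shifting the viewpoint horizontally by half the interocular distance for each eye and rendering the scene twice. A minimal sketch (parameter names are our assumptions):

    struct Vec3 { double x, y, z; };
    struct StereoEyes { Vec3 left, right; };

    // center: the monocular camera position; right: unit vector pointing to
    // the camera's right; iod: interocular distance, scaled to the scene.
    StereoEyes stereoEyes(const Vec3& center, const Vec3& right, double iod) {
        double h = 0.5 * iod;
        return {{center.x - h * right.x, center.y - h * right.y, center.z - h * right.z},
                {center.x + h * right.x, center.y + h * right.y, center.z + h * right.z}};
    }
    // Render the scene once from each position toward the same focal point,
    // then present the two images on the left and right LCD panels.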
In addition to realistic training tasks, a virtual reality simulation also has to present artificial abstract tasks for training on the more arcane aspects of complex procedures. For example, to let users exercise bimanual instrument control, EyeSi implements abstract tasks (such as a formation of spheres distributed over the eyeball). To teach a surgeon how to handle different instrument types, these spheres have to be punctured with the tip of a needle, sorted with forceps, or absorbed by a vitrector (a hollow needle with an oscillating knife in its tip for cutting and removing tissue).
EyeSi is a simulation system for training surgeons in intraocular surgery. It incorporates the essential details of the real-life surgical scenario, merging the surgeon's physical interaction with the facial mask, surgical instruments, and mechanical eye with virtual tissue deformations and stereo computer graphics. The result is a highly convincing level of immersion.
EyeSi solves the problem of latency between the surgeon's physical movement and corresponding reactions on a screen by implementing the image processing of its optical tracking system as hardware and by using high-speed algorithms for tissue simulation and visualization. The system has an overall latency of less than 50ms. Our experience suggests this performance is sufficient to deliver a convincingly real system response.
It offers several training modules, including interaction with abstract objects for learning basic instrument-handling skills and removal of a pathological membrane that has to be peeled off the retina. A module for retina relocation is still being developed, and cataract surgery will follow. All training modules include performance analysis, informing users of their individual training performance in terms of accuracy of motion and the time needed to complete individual tasks. (Development of EyeSi and its key technologies, especially low-latency optical tracking, is being pursued by the authors' startup company VRmagic GmbH in Mannheim, Germany; see www.vrmagic.de.)
EyeSi prototypes are also being tested in clinical practice. Preliminary studies (2001) evaluated EyeSi in clinical education by examining the learning curves of study participants in repeated membrane peeling [4] and abstract navigation [3]. Both studies reported significant learning among participants. One study [4] concluded that EyeSi is useful for "learning and training important and complicated steps in vitreoretinal surgery."
EyeSi development shows that simulation technology can provide versatile and realistic environments for learning, practicing, and teaching complex procedures. Virtual reality simulation as a common tool for education is far more than a vision of some far-off hypothetical, computerized learning environment.
1. Baciu, G., Wong, W., and Sun, H. Rendering in object interference detection on conventional graphics workstations. In Proceedings of Pacific Graphics (Seoul, Korea, Oct. 13–16). IEEE Computer Society Press, Los Alamitos, CA, 1997, 51–58.
2. Gibson, S. 3D ChainMail: A fast algorithm for deforming volumetric objects. In Proceedings of the Symposium on Interactive 3D Graphics (Providence, RI, Apr. 27–30). ACM Press, New York, 1997, 149–154.
3. Harder, B., Bender, H.-J., and Jonas, J. Pars-plana-vitrectomy simulator for learning vitreoretinal surgical steps: First experiences. In Proceedings of Jahrestagung der Deutschen Ophthalmologischen Gesellschaft (Berlin, Sept. 29–Oct. 10). Springer, Berlin, 2001.
4. Rabethge, S., Bender, H.-J., and Jonas, J. In-vitro training of membrane-peeling using a pars-plana-vitrectomy simulator. In Proceedings of Jahrestagung der Deutschen Ophthalmologischen Gesellschaft (Berlin, Sept. 29–Oct. 10). Springer, Berlin, 2001.
5. Schill, M. Biomechanical Soft Tissue Modeling: Techniques, Implementation, and Application. Ph.D. dissertation, University of Mannheim, 2000.
6. Schill, M., Gibson, S., Bender, H.-J., and Männer, R. Biomechanical simulation of the vitreous humor in the eye using an enhanced ChainMail algorithm. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI'98) (Cambridge, MA, Oct. 11–13). Springer, Berlin, 1998, 679–687.
7. Schill, M., Wagner, C., Hennen, M., Bender, H.-J., and Männer, R. EyeSi: A simulator for intra-ocular surgery. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI'99), Lecture Notes in Computer Science, vol. 1679, C. Taylor and A. Colchester, Eds. (Cambridge, U.K., Sept. 19–22). Springer, Berlin, 1166–1174.
8. Shinya, M. and Forgue, M.-C. Interference detection through rasterization. J. Visualiz. Comput. Anim. 2, 4 (Oct.–Dec. 1991), 132–134.
9. Wei, G., Arbter, K., and Hirzinger, G. Automatic tracking of laparoscopic instruments by color coding. In Proceedings of the First Joint Conference on Computer Vision, Virtual Reality, and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery (CVRMed-MRCAS'97) (Grenoble, France, Mar. 19–22). Springer, Berlin, 1997, 357–366.
Figure 1. A surgical procedure in the eye is performed under a stereo microscope (left). The virtual simulation (right) provides the sensory cues eye surgeons are used to. These scenes might appear different at first glance but are quite similar in light of the sensory information available to each surgeon.
Figure 2. View through a stereo microscope of a real surgical procedure (left) and its virtual counterpart (right).