
Communications of the ACM

Medical image modeling tools and applications

Incorporating 3D Virtual Anatomy Into the Medical Curriculum


The introduction of the Visible Human Project by Ackerman in 1995, described in a seminal 1996 paper [9], promised anatomists that these two frozen, milled, and digitized cadavers—the Visible Male and Female—would revolutionize anatomy teaching by providing the most complete and detailed anatomical images ever assembled. The vision could be compared to the proverbial "man on the moon" program for medical education, and it has proven much more challenging than expected. Although the ramifications of this project are not of the same scale as the shock created five centuries ago when the Padua physician Andreas Vesalius challenged the ancient Greek physician Galen's description of the human body, the initial interest created by the availability of the Visible Human data was enormous.

Where does the medical community stand now, 10 years after the initiation of the Visible Human Project? Do medical schools show interest in using 3D models derived from the data sets in their anatomy curricula? Do advances in image processing and 3D visualization match the initial expectations that both cadavers would be segmented swiftly and completely? What remains to be done to realize the full potential of this famous data set? After initial enthusiastic response to the Visible Human data, only a few medical institutions remain committed to the implementation of the original vision: Columbia University's Vesalius Project1 is one example. Here, we address the challenges surrounding the incorporation of 3D virtual anatomy into our medical curriculum, a challenge we remain committed to meeting.

We also address another related question: Might the methodologies used to process the Visible Human Male and Female data be extendible and scalable to provide segmentation and visualization of patient data in simulations and clinical training applications? How should such training systems be designed to best serve medical training and education? What evaluation process should be applied to the systems and the users to show the benefits of incorporating image-based technology into medical training and clinical practice? These are difficult questions that do not yield easy answers, yet the answers may provide insights into how the translational research from image-based technology to clinical application should proceed. This leads to two distinct educational application classes: anatomy teaching and clinical training.

The Vesalius Project at Columbia University

The most prominent, and originally intended, role of the Visible Human Project was to provide an image library of structural anatomy. Two factors make the data suitable for this purpose: it contains anatomical detail not found in any other anatomical data, and it depicts the complete anatomy of an adult male and an adult female. From this data, using proper imaging, 3D reconstruction, and modeling tools, one can extract 3D anatomical structures, texture them with the photorealistic color of fresh human tissue, and depict the 3D spatial relationships among them. We call a 3D model of an anatomical structure a maximal model if its shape matches the spatial resolution, and its texture inherits the full-color resolution, of the original voxel set. In addition, each maximal model must be accurately labeled. Generating such maximal models is important because they serve as the building blocks for 3D depictions of anatomy.

The success of such an effort hinges on two challenges: segmentation and 3D reconstruction. The need to generate maximal models motivated us to explore and compare various approaches to segmentation of anatomical structures. We used both hand and automated segmentation and developed our own tools for 3D modeling and visualization: the 3D Vesalius Visualizer [2]. The result is that individual anatomical structures are segmented from the raw color Visible Human data, stored in 2D/3D binary masks, surface-modeled, and textured in the original volumetric data. We can build 3D anatomical scenes with an arbitrary number of anatomical structures, in correct 3D spatial relationships. The resulting 3D photorealistic models/scenes are labeled and used in the teaching of anatomy.

Despite the availability of the Visible Human data sets for the past 10 years, there are no complete and coherent libraries of 3D anatomy. To address this, our effort is directed at systematically developing maximal models for specific body regions (such as male pelvic anatomy [12] and foot anatomy) and incorporating the visualizations into network-based segments of the anatomy curriculum at Columbia University College of Physicians and Surgeons. This is a long-term and tedious task that requires strong interdisciplinary collaboration among experts in the areas of image processing, computer graphics, 3D visualization, anatomy, cognitive psychology, computational linguistics, and multimedia. It is our belief that without all, or at least most, of these talents engaged in creating appropriate, user-friendly teaching and learning tools from the Visible Human data sets, the effort will not succeed. This is a major contributing factor to why there has been so little use made of the data—it is an extremely large, complex, and challenging raw resource. To turn this resource into useful educational applications is both a technological task (generation of maximal models) and a pedagogic task (content determination, cognitive, and multimedia design). Either task undertaken apart from the other will fail. Without strong interaction among scientists, content experts, and designers in the context of a health sciences education setting there is little hope for success. The visually impressive Visible Human melt-throughs and fly-throughs have no educational value, nor do the poorly segmented and partially labeled anatomy CD-ROMs on the market. The strength and success of our work is tied directly to the interactions of the multidisciplinary team.

We have identified the following steps that must be present in developing anatomy teaching applications:

  • Segmentation of individual anatomical structures;
  • Generation of maximal models for all segmented structures;
  • Supplementing the maximal models with illustrations of structures that are not present in the data set but are crucial for anatomy teaching;
  • Design of anatomy lessons that combine 3D models, illustrations, and interactive labeling; and
  • Formal evaluation of the electronic material in the classrooms to demonstrate the benefits of using this technology for teaching and learning.


Hand vs. Automated Segmentation

Automated segmentation of the Visible Human data sets is still an unresolved problem. It is as challenging as the segmentation of clinical anatomical data; indeed, because the Visible Human data contains much greater detail and rich color texture, automated outlining of anatomical structures becomes even more difficult. We have been involved in a project, funded by the National Library of Medicine (NLM), to create a Visible Human Project Segmentation and Registration Toolkit (more information is available at the Insight Consortium Web site: www.itk.org) [4]. Even though the results are promising, for anatomy teaching applications the quality and precision of automatically segmented images are unlikely ever to surpass hand-segmentations done by a highly qualified anatomist/illustrator. This is especially true for small tubular structures, such as blood vessels. For this reason we have been pursuing both hand and automated segmentation. The hand-segmentations done by our anatomist/illustrator are currently used in building the maximal models for our electronic anatomy curriculum.

The (semi)-automated segmentation algorithms for the Visible Human data set, developed under the Insight Consortium, were also applied to segmentation of clinical image data. These methods provide us with a new generation of (semi)-automated segmentation and registration tools with which to obtain 3D models of anatomy for clinical training tools. In fact, inclusion of patient images in the Insight Consortium enabled us to develop a formal approach to validation of segmentation [11]. This cross-fertilization, or sharing of methods between Visible Human and clinical images, has benefited both communities, and provided appropriate segmentation tools for both anatomy teaching and clinical training applications.
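Overlap metrics are the usual starting point for validating an automated segmentation against a hand-drawn reference; the Dice coefficient is one standard measure (the validation framework in [11] is considerably more extensive). A minimal sketch in Python, on toy binary masks:

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    total = seg.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / total

# Toy example: an automated mask vs. a hand-drawn reference shifted by one row
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True  # 16 pixels
hand = np.zeros((8, 8), dtype=bool); hand[3:7, 2:6] = True  # 16 pixels
print(dice_coefficient(auto, hand))  # 12-pixel overlap -> 2*12/32 = 0.75
```

A score of 1.0 means the two masks agree exactly; values near 0 indicate little overlap.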

Hybrid Segmentation

Automatic internal organ segmentation from various medical imaging modalities, including the color Visible Human data, is an open research problem. Over the past several years a variety of segmentation methods have been developed. Boundary-based techniques such as snakes [7] start with a deformable boundary and attempt to align it with the edges in the image; they use only information near the boundary and ignore the object's interior. Region-based techniques, by contrast, classify pixels according to statistics computed over the object's interior, so image information inside the object is considered as well as that on the boundary. However, there is no provision in the region-based framework for including the shape of the region, which can lead to noisy boundaries and holes in the interior of the object.
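The region-based idea can be illustrated with seeded region growing, where a pixel joins the region when its intensity is close to the running region mean. This is a generic textbook sketch, not the method used in our work; the tolerance parameter `tol` is an arbitrary choice:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Seeded region growing: add 4-connected pixels whose intensity is
    within `tol` of the running region mean. Region-based: the decision
    uses interior statistics rather than edge information."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Bright square (value 100) on a dark background (value 0)
img = np.zeros((16, 16)); img[4:12, 4:12] = 100.0
mask = region_grow(img, seed=(8, 8), tol=10.0)
print(mask.sum())  # 64: exactly the 8x8 bright square
```

On real tissue, intensity alone is rarely this clean, which is exactly why shape information (and hence the hybrid framework below) matters.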

Like several other recent approaches [1, 5], our design integrates the boundary and region-based techniques into a hybrid framework. By combining these we gain greater robustness than found in either technique alone. Most of the earlier approaches use prior models for their region-based statistics, something we would rather avoid in order to increase usefulness in situations where a comprehensive set of prior models may not be available.

We have recently developed a new segmentation method that integrates the deformable model [5] with fuzzy connectedness [10] and Voronoi Diagram (VD) classification [3]. We have tested this method, which requires minimal manual initialization, on Visible Human and clinical data [4]. We start with the fuzzy connectedness algorithm to generate a region containing a sample of the tissue. From the sample, a homogeneity operator is automatically generated. The homogeneity operator is used by the VD classification to produce an estimate of the boundary. We then apply the deformable model to determine the final boundary. We have segmented various tissue types with this method, with promising results (see Figure 1, parts a–c).
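The first stage, fuzzy connectedness, can be sketched as a best-path computation: the connectedness of a pixel is the strength of the strongest path from the seed, where a path is only as strong as its weakest pairwise affinity along the way. The Gaussian affinity and its `sigma` parameter below are simplifying assumptions of ours; the full formulation in [10] is richer:

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=20.0):
    """Fuzzy connectedness map: conn[p] is the strength of the best path
    from `seed` to p, where path strength is the minimum affinity along
    the path and affinity between 4-neighbors decays with intensity
    difference (simplified Gaussian affinity; `sigma` is hypothetical)."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]  # max-heap via negated strengths
    while heap:
        strength, (r, c) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[r, c]:
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w:
                diff = float(img[nr, nc]) - float(img[r, c])
                affinity = np.exp(-(diff * diff) / (2 * sigma * sigma))
                s = min(strength, affinity)  # weakest link on the path
                if s > conn[nr, nc]:
                    conn[nr, nc] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn

# Uniform object on a contrasting background
img = np.zeros((12, 12)); img[3:9, 3:9] = 100.0
conn = fuzzy_connectedness(img, seed=(5, 5))
obj = conn > 0.5  # threshold the map to obtain the fuzzy connected component
```

In the toy image, every pixel inside the uniform object is reachable by a path of affinity 1.0, while any path to the background must cross the 100-level intensity jump, so its strength collapses toward zero.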



Visualization of Large Color Data Sets

The 3D Vesalius Visualizer [2] creates high-resolution, surface-based models from the segmented Visible Human data. The initial models, represented by polygonal meshes, are created using an extension of the alligator algorithm [6], itself a variation of the marching cubes algorithm. The method uses a winged-edge structure and creates conforming meshes without surface anomalies such as holes. The meshes are textured by "color-dipping" the vertices in the original volumetric data and interpolating each vertex color from a collection of the nearest voxels. The maximal 3D mesh models are generated at the full resolution of the data sets. The 3D Vesalius Visualizer uses a simple shading algorithm.
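The "color-dipping" step amounts to sampling the color volume at each mesh vertex. The sketch below uses plain trilinear interpolation over the eight surrounding voxels, a simplification of the nearest-voxel interpolation described above, and assumes vertices lie strictly inside the volume:

```python
import numpy as np

def dip_color(volume, vertex):
    """Trilinearly interpolate an RGB color at a (possibly fractional)
    vertex position from the 8 surrounding voxels of an RGB volume
    with shape [z, y, x, 3]. A simplified sketch of "color-dipping"."""
    z, y, x = vertex
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    dz, dy, dx = z - z0, y - y0, x - x0
    color = np.zeros(3)
    # Weight each corner voxel by its trilinear coefficient
    for kz, wz in ((z0, 1 - dz), (z0 + 1, dz)):
        for ky, wy in ((y0, 1 - dy), (y0 + 1, dy)):
            for kx, wx in ((x0, 1 - dx), (x0 + 1, dx)):
                color += wz * wy * wx * volume[kz, ky, kx]
    return color

# Two-slice gradient from black to white; the midpoint interpolates to gray
vol = np.zeros((2, 2, 2, 3)); vol[1, :, :, :] = 255.0
print(dip_color(vol, (0.5, 0.0, 0.0)))  # [127.5 127.5 127.5]
```

Averaging over neighboring voxels in this way is what lets vertex colors stay smooth even when the mesh cuts between sample points.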

The maximal models of anatomical structures are highly detailed, and therefore very large and expensive to render. High-precision surface triangulations in which texture is computed for each vertex cannot be manipulated in real time on an arbitrary platform or transmitted efficiently. Therefore simpler, smaller, less-expensive-to-manipulate models must be derived from the maximal mesh models. We are developing an automated mesh-reduction algorithm that produces a user-specified hierarchy of multiresolution meshes in which the original texture is perceptually preserved.
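Our texture-preserving multiresolution reduction is more involved, but the basic idea of shrinking a mesh while averaging per-vertex color can be illustrated by simple grid-based vertex clustering (a toy stand-in, not our algorithm):

```python
import numpy as np

def cluster_decimate(verts, faces, colors, cell=1.0):
    """Grid-based vertex clustering: snap vertices into cubic cells of
    size `cell`, merge each cell into one representative vertex
    (averaging positions and per-vertex colors), and drop triangles
    that collapse. Larger `cell` -> coarser mesh."""
    keys = [tuple(np.floor(v / cell).astype(int)) for v in verts]
    cells = {}                       # cell key -> new vertex index
    new_verts, new_colors, remap = [], [], []
    for v, c, k in zip(verts, colors, keys):
        if k not in cells:
            cells[k] = len(new_verts)
            new_verts.append([])
            new_colors.append([])
        idx = cells[k]
        new_verts[idx].append(v)
        new_colors[idx].append(c)
        remap.append(idx)
    new_verts = np.array([np.mean(vs, axis=0) for vs in new_verts])
    new_colors = np.array([np.mean(cs, axis=0) for cs in new_colors])
    new_faces = []
    for f in faces:
        g = [remap[i] for i in f]
        if len(set(g)) == 3:         # keep only non-degenerate triangles
            new_faces.append(g)
    return new_verts, np.array(new_faces), new_colors

# Toy mesh: two nearly coincident vertices merge at cell size 1.0
verts = np.array([[0.0, 0, 0], [0.1, 0, 0], [2.0, 0, 0], [0.0, 2, 0]])
colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1, 1]])
faces = [[0, 1, 2], [0, 2, 3], [1, 2, 3]]
v2, f2, c2 = cluster_decimate(verts, faces, colors, cell=1.0)
print(len(v2), len(f2))  # 3 2 (one vertex merged, one triangle collapsed)
```

Running the clustering repeatedly with growing cell sizes yields a crude multiresolution hierarchy; a production algorithm would instead choose merges to bound geometric and color error.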

Virtual Anatomy Lessons

Until the technical challenges related to image processing, visualization, representation, storage, and manipulation of complex color data sets like the Visible Human data are resolved, systematic building of an anatomy curriculum cannot evolve beyond a "boutique" operation. The mission of the Vesalius Project at Columbia reaches far beyond such a boutique operation: it intends to introduce an electronic anatomy curriculum in a broad and systematic fashion. 3D visualizations obtained from the Visible Human data are used in lectures on male pelvic anatomy [12], and since spring 2002 the foot anatomy lesson has been taught using the Foot Atlas [8] derived from Visible Human-based visualizations. These electronic applications require thorough evaluation to demonstrate their effectiveness in teaching and learning, a process now under way for the foot material.

3D Visualization of Foot Anatomy

The foot anatomy was hand-segmented from the Visible Human Male data and modeled with the 3D Vesalius Visualizer, as shown in Figure 2, parts a) and b). These 3D visualizations have been augmented by detailed illustrations of all the anatomical structures that are not visible, or not fully segmentable, in the original data set. The 3D Visible Human-based models are used as a starting point for generating layered illustrations of the standard views used in teaching foot anatomy, as shown in parts c) and d) of Figure 2. The 3D models and illustrations of the foot have been incorporated by a multimedia and HCI expert into an interactive program that simulates the style of anatomy teaching at Columbia [8]. These electronic lessons have been approved and are being used in Columbia University's anatomy curriculum.

Biomedical Imaging Informatics

We are in the process of implementing a biomedical imaging informatics curriculum in the Department of Biomedical Informatics to train interdisciplinary experts who will provide medicine with image-based and robotics-based systems that are clinically needed, are useful for teaching and training, and will help reduce clinical error. The program will also focus on evaluation and validation of these technologies. Figure 3 shows our various research and development projects that integrate medical imaging, visualization, robotics, computer vision, and medicine. Development of the electronic anatomy curriculum fits into the framework of this program. The diagram depicts three layers around which the core of the curriculum is designed, with the rings representing the following broad areas:

  • Applications: Image-based, robotics-based, and vision-based systems; understanding the technology in order to develop the systems best suited to medicine's needs.
  • Evaluation: Qualitative and quantitative evaluation of image-based and robotics-based systems, along with evaluation of the skills of the systems' operators.
  • Certification (future): Understanding the requirements of the American Board of Surgery (and other standards-setting organizations) for certification of medical professionals, and designing formal protocols for skills testing on approved image-based and robotics-based systems in medicine.

Translational research from medical image technology to clinical applications differs from biomedical engineering and computer science in one key aspect: the audience that will evaluate the merit of the work will be physicians and the clinical community. It is important to note that research and development efforts in biomedical imaging informatics are clinically centered, unlike the typical engineering-centered projects in medical imaging.

The Future

It is clear that we have charted a very challenging course, one that will be useful not only for our own students, but for others as well. The promise outlined in [9] may have been too ambitious but it was not without basis. NLM's funding of projects such as the Insight Consortium, and its expansion to include clinical images, helped the user community more quickly identify the barriers to the effective use of the Visible Human data for both medical education and clinical training. Still, there is a shortage of serious funding for developing electronic anatomy atlases and other teaching tools. There are more challenges to come. For example, we are experimenting with downloading images to wireless and mobile computing devices to help students capture information in the classroom and take their learning to the patient's bedside. When the Visible Human data was first made available many people thought it meant an electronic curriculum in anatomy would soon follow—we now understand why this effort has only begun.

References

1. Chakraborty, A. and Duncan, J.S. Integration of boundary finding and region-based segmentation using game theory. In Y. Bizais, et al., Eds., Information Processing in Medical Imaging, Kluwer (1995), 189–201.

2. Imielinska, C., Laino-Pepper, L., Thumann, R., and Villamil, R. Technical challenges of 3D visualization of large color data sets. In Proceedings of the 2nd User Conference of the NLM's Visible Human Project, 1998.

3. Imielinska, C., Downes, M., and Yuan, W. Semiautomated color segmentation of anatomical tissue. Computerized Medical Imaging and Graphics 24 (Apr. 2000), 173–180.

4. Imielinska, C., Udupa, J., Metaxas, D., Jin, Y., Angelini, T., Chen, T., and Zhuge, Y. Hybrid segmentation methods. In T. Yoo, Ed., Insight into Images: Principles and Practices for Segmentation, Registration, and Image Analysis. A.K. Peters, Ltd., 2004.

5. Jones, T. and Metaxas, D. Image segmentation based on the integration of pixel affinity and deformable models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Santa Barbara, CA, June 1998).

6. Kalvin, A.D. Segmentation and surface-based modeling of objects in three-dimensional biomedical images. Ph.D. dissertation, NYU, March 1991.

7. Kass, M., Witkin, A., and Terzopoulos, D. Snakes: Active contour models. International Journal of Computer Vision 1, 4 (1988), 321–331.

8. Sinav, A., Imielinska, C., Ng, E., Soliz, R., Thumann, R., Ambron, P., and Molholt, P. A new illustrated foot anatomy atlas based on the Visible Human data set. In Proceedings of the Visible Human Project Conference (2002).

9. Spitzer, V., Ackerman, M.J., Scherzinger, A.L., and Whitlock, D. The Visible Human Male: A technical report. Journal of the American Medical Informatics Association 3, 2 (1996), 118–130.

10. Udupa, J.K. and Samarasekera, S. Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation. Graphical Models and Image Processing 58, 3 (1996), 246–261.

11. Udupa, J.K. et al. A methodology for evaluating image segmentation algorithms. In Proceedings of SPIE: Medical Imaging (San Diego, CA, 2002), 266–277.

12. Venuti, J.M., Imielinska, C., and Molholt, P. New views of pelvis anatomy: Role of computer-generated 3D images. Clinical Anatomy 17, 3 (2004), 261–271.

Authors

Celina Imielinska ([email protected]), co-founder of the Vesalius Project, is an associate research scientist with the College of Physicians and Surgeons at Columbia University in New York.

Pat Molholt ([email protected]), co-founder of the Vesalius Project, is the associate dean for scholarly resources with the College of Physicians and Surgeons and senior lecturer in the Department of Biomedical Informatics at Columbia University in New York.

Footnotes

1The Vesalius Project is named after Andreas Vesalius, a sixteenth century anatomist whose work laid the foundation for all subsequent anatomical research. Columbia University trademarked the name Vesalius in 1997 for use in the production of educational software.

This work was supported in part by NLM contract on the VHP Segmentation and Registration Toolkit; NLM99-103/DJH.

Figures

F1Figure 1. a) Automated segmentation of temporalis muscle: (1) color VH Male slice, (2) a fuzzy connected component, (3–5) iterations of the VD-based algorithm, (6) an outline of the boundary; b) 3D segmentation of the left kidney: (1) input data, (2) fuzzy connectedness, (3, 4) VD classification, (5) deformable model, (6) hand segmentation; c) 3D segmented and visualized left kidney derived from the Visible Human Male data set—3D models of: (1) fuzzy connectedness, (2) Voronoi Diagram classification, (3) deformable model, (4) hand segmentation.

F2Figure 2. Foot anatomy: a) flexor muscles (oblique view), b) all the structures (oblique view), c) a "reference" 3D model (plantar view), d) corresponding medical illustration based on the model in c).

F3Figure 3. Issues in biomedical imaging informatics.


©2005 ACM  0001-0782/05/0200  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2005 ACM, Inc.


 
