Digital Cadavers (TM): An Environment for the Study and Visualization of Anatomic Data

Steven Senger
Department of Computer Science
Univ. of Wisconsin - La Crosse

INTRODUCTION

The University of Wisconsin - La Crosse has been actively developing a software environment that allows undergraduate students in anatomy and physiology to exploit the potential offered by the Visible Human Project (TM) data sets and similar anatomic data. The term Digital Cadavers (TM) describes the working environment produced by this software system. The system does not attempt to simulate the physical activity of dissection. Rather, it supports an intellectual activity in which students build upon existing knowledge to find and present anatomical structures. To accomplish this, the environment supports individual access to the data set, autonomous generation of arbitrary cross-section images and volume-rendered reconstructions, and the ability to preserve these images, together with descriptive textual discussion, in the form of a laboratory notebook.

PEDAGOGY

The working environment produced by this project uses the Visible Human data set as more than a basis for an atlas of prepared images, going well beyond the standard array of resources used in the undergraduate curriculum. The environment provides students the tools necessary to support a teaching method in which students have primary responsibility for the examination, interpretation and presentation of anatomical structures. In this environment, the images students derive from the Visible Human data set carry the imprint of their decisions about how the structures are most appropriately presented. This involves students in the learning process to an extent that the simple examination of images produced by others cannot approach. This active involvement produces a cycle of investigation in which students build upon existing knowledge as they uncover and present new detail. This cycle is fundamental to the development and long-term retention of anatomical understanding.

CLIENT INTERFACE

The software environment is designed using a client/server model. The client interface provides a data set browser for viewing the data set images and a collection of tools for working with the volume data. Data sets are organized as a collection of 2D section images taken at periodic intervals over the length of the cadaver. For example, the male cryosection data consist of axial section images taken at 1 mm intervals. The client interface can be used with MRI, CT and cryosection image data sets. The client interface communicates requests to compute images derived from the data set (both arbitrary cross-section and 3D volume-reconstructed images) to the data set server.
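Given the regular 1 mm spacing described above, converting between an axial position along the cadaver and a section-image index is a simple scaling. A minimal sketch (function names are illustrative, not part of the actual client):

```python
def slice_index(z_mm, spacing_mm=1.0):
    """Map an axial position (mm from the first section) to the
    nearest section-image index, given the sampling interval."""
    return round(z_mm / spacing_mm)

def slice_position(index, spacing_mm=1.0):
    """Inverse mapping: section index back to an axial position in mm."""
    return index * spacing_mm
```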

The client interface is document-centric and provides a set of tools for constructing image requests. Each tool supports the creation and manipulation of a specific set of document types. Document types include positional information such as anatomical landmarks, axes and bounding boxes. To support the description of volume-reconstructed images, the system also provides color sample and stain document types. Color sample documents record the colors occurring in user-selected anatomical structures (muscle, bone, etc.). Stain documents record the regions occupied by selected structures. Tools are provided to extract and manipulate the data for these documents from the raw section images of the data set.

The cross section and volume reconstruction tools allow the user to collect documents of various types into an image request. The cross section tool works with positional documents, which are used to describe the image plane and extent of the computed cross section. In addition to positional documents, the volume reconstruction tool allows color sample and stain documents to be flexibly combined into an image request. This design allows the user to create and maintain a large number of related documents and to pick and choose from these documents in order to obtain the desired images.
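The document-and-request model described above might be sketched as follows. All class and field names here are illustrative assumptions, not the system's actual types:

```python
from dataclasses import dataclass, field

# Hypothetical document types mirroring those described in the text.
@dataclass
class Landmark:
    name: str
    position: tuple          # (x, y, z) voxel coordinates

@dataclass
class ColorSample:
    name: str
    colors: list             # RGB triples sampled from a structure

@dataclass
class Stain:
    name: str
    sections: dict           # section index -> encoded stained region

@dataclass
class ImageRequest:
    kind: str                # "cross_section" or "volume"
    documents: list = field(default_factory=list)

    def add(self, doc):
        """Collect a document into the request; returns self for chaining."""
        self.documents.append(doc)
        return self

# A volume request combining a positional and a color sample document.
req = ImageRequest("volume")
req.add(Landmark("heart apex", (512, 300, 1340)))
req.add(ColorSample("muscle", [(150, 60, 60)]))
```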

VOLUME RENDERING TOOL

Volume rendering is a technique for producing 3D reconstructions of volumetric data [1,2]. While not offering the real-time interaction of texture-mapped polygonal surface systems, volume rendering does offer the advantage of potentially allowing all voxels of a data set to contribute to the final image. The main impediment to effectively using this technique is the need to associate opacity values with data set voxels. Typically, volume rendering systems provide the user with an interface that allows various ranges of colors or pseudo-intensity values to be associated with specific opacity values. Such systems tend to require substantial technical sophistication in order to reliably produce useful images.

The volume rendering engine and user interface contained in this system present the user with an intuitive interface that requires the user to specify a minimum of information. The minimum information required consists of a focus view point, a bounding rectangle, an orientation and a collection of color sample and stain documents to be used in generating the image. Color sample and stain documents can be specified as representing either primary or secondary material. If primary, those portions of the anatomy are rendered fully opaque. If secondary, it is assumed that the structures provide a surrounding context for the primary structures, and they are rendered partially transparent. The exact degree of transparency is computed by the system so as to ensure that the primary structures remain visible in the final image. This simplification of the imaging model provides a reliable mechanism for users to obtain useful images at relatively little loss in overall flexibility.
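One plausible rule for the automatic transparency computation is to pick a single opacity for secondary material such that, after light passes through all context layers in front of a primary structure, a guaranteed fraction of that structure remains visible. This is a sketch of the idea only; the threshold value and function name are assumptions, not documented behavior:

```python
def secondary_opacity(n_layers, min_transmittance=0.4):
    """Choose one opacity alpha for secondary (context) material so that
    n_layers of it still transmit at least min_transmittance of the light
    from a fully opaque primary structure behind them.

    Each layer transmits (1 - alpha), so n layers transmit (1 - alpha)**n.
    Solving (1 - alpha)**n >= min_transmittance for the largest alpha:
    """
    if n_layers <= 0:
        return 0.0
    return 1.0 - min_transmittance ** (1.0 / n_layers)
```

The more secondary layers the viewing ray crosses, the more transparent each one is made, so the primary anatomy never disappears behind its context.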

STAINING TOOL

The stain tool provides an intuitive, user-directed scene segmentation mechanism. It is used to select specific anatomy throughout the data set section images. The user is presented with the illusion that a color-selective stain flows out from the mouse position and penetrates into the data set section image. The stain respects boundaries present in the section images. The results of staining are kept in a compact encoded form separate from the section image data. Stain documents are composed of stain information for a range of section images. The tool provides a mechanism for propagating stain information to adjacent section images. The result is a 3D description of the selected anatomy.
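The boundary-respecting behavior described above resembles a tolerance-limited flood fill, with propagation to a neighboring section achieved by reusing the stained pixels as fresh seeds. The sketch below illustrates the concept on grey-level images; it is not the system's actual algorithm:

```python
from collections import deque

def stain(image, seed, tolerance=20):
    """Flood-fill style stain: starting from the seed pixel, spread to
    4-connected neighbours whose intensity is within `tolerance` of the
    seed value, so the stain stops at sharp intensity boundaries.
    `image` is a list of rows of grey-level intensities; returns the
    set of stained (row, col) coordinates."""
    rows, cols = len(image), len(image[0])
    target = image[seed[0]][seed[1]]
    stained, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in stained or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - target) > tolerance:
            continue
        stained.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return stained

def propagate(next_image, stained_prev, tolerance=20):
    """Propagate stain to an adjacent section by reusing the previous
    section's stained pixels as seeds, as the tool's propagation does."""
    result = set()
    for seed in stained_prev:
        if seed not in result:
            result |= stain(next_image, seed, tolerance)
    return result
```

Applying this slice by slice over a range of sections yields the 3D description of the selected anatomy that a stain document records.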

IMAGING SERVER

Because of the data set's size, all derived images are computed on the server machine hosting the data set. Server processes running on this machine receive image requests from the client application and return the computed image incrementally as it is generated. This allows the user to abort an image request before completion.
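The incremental, abortable delivery can be sketched as a scanline-at-a-time loop that emits each finished line to the client and checks an abort flag between lines. Names and the placeholder pixel computation are assumptions:

```python
import threading

def compute_image(rows, cols, abort_event, emit):
    """Server-side sketch: compute an image one scanline at a time,
    emitting each finished scanline immediately so the client can
    display partial results, and checking an abort flag between lines.
    Returns True if the image completed, False if aborted."""
    for r in range(rows):
        if abort_event.is_set():
            return False                 # request aborted early
        # Placeholder pixel computation standing in for the real renderer.
        scanline = [(r * 7 + c) % 256 for c in range(cols)]
        emit(r, scanline)
    return True

# Client side: collect scanlines and abort after the third one arrives.
received = []
abort = threading.Event()

def on_scanline(row, data):
    received.append(row)
    if len(received) == 3:
        abort.set()

completed = compute_image(10, 8, abort, on_scanline)
```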

ANNOTATION AND PRESERVATION

Derived images can be annotated and preserved along with textual discussion using standard word processors. The annotations are maintained separately from the image data and can be edited after creation.

ACKNOWLEDGEMENTS

This work was supported in part by a grant from the University of Wisconsin System Undergraduate Teaching Improvement Council. The author would like to thank Drs. Brice, Mahrer and Mowbray of the UW-L biology department and Dr. Gendreau of the UW-L computer science department for their participation in this project.

Author's email: senger@csfac.uwlax.edu
Project Web Site: http://www.visu.uwlax.edu

REFERENCES

1. Elvins, T.T., "A Survey of Algorithms for Volume Visualization", Computer Graphics, Volume 26, Number 3, August 1992, 194-201.

2. Elvins, T.T. and Nadeau, D.R., "NetV: An Experimental Network-based Volume Visualization System," Proceedings of the IEEE Visualization '91 Conference, IEEE Computer Society Press, October 1991, 239-245.