3-D Imaging In Virtual Environment: A Scientific, Clinical And Teaching Tool

Muriel D. Ross
Biocomputation Center
NASA Ames Research Center
Moffett Field, CA 94035

The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered an entirely computerized method for reconstructing objects from serial sections studied in a transmission electron microscope (TEM). Software development was driven by a scientific question, the true 3-D organization of mammalian gravity sensors, because space-related research had shown these sensors to have an architecture compatible with parallel processing of information. The results of 3-D reconstructions of gravity sensors made with our method have been directly applicable to resolving the question of their wiring pattern. The software, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements and is applied in fields as disparate as geology, botany, biology, and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report describes the Virtual Surgery Workstation Project, underway with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

First, a brief description of ROSS. For our gravity sensor research, the software captures images automatically from thin sections visualized in a TEM: a video camera connected to a Silicon Graphics Indy workstation digitizes the micrographs, which are then sent electronically to a Silicon Graphics Onyx workstation in the Biocomputation Center, where they are mosaicked to reproduce the sections. This part of the procedure is specific to TEM research, but the remaining steps can be used to reconstruct any material that is sectioned and studied by any means. Contours of selected objects are traced; the contours are then registered and smoothed; and finally the contours are connected by polygons. This last step, grid generation, describes the surface as a mesh that can then be rendered in color-coded solid or semi-transparent form.
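
To make the grid-generation step concrete, the sketch below connects two traced, registered contours from adjacent sections into a band of triangles. It is a minimal illustration in Python: the even resampling and the one-to-one stitching rule are simplifying assumptions standing in for the actual ROSS algorithms, which are not detailed in this report.

    # Hedged sketch: joining two registered contours from adjacent serial
    # sections into a triangle mesh (the "grid generation" step). The
    # resampling and stitching rules here are illustrative assumptions.
    import math

    def resample_closed(contour, n):
        """Resample a closed 2-D contour (list of (x, y)) to n evenly spaced points."""
        pts = contour + [contour[0]]                      # close the loop
        seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
        total = sum(seg)
        out, acc, i = [], 0.0, 0
        for k in range(n):
            target = total * k / n                        # arc length of point k
            while acc + seg[i] < target:                  # advance to its segment
                acc += seg[i]
                i += 1
            t = (target - acc) / seg[i] if seg[i] else 0.0
            (x0, y0), (x1, y1) = pts[i], pts[i + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    def stitch(lower, upper, z0, z1, n=64):
        """Return triangles (triples of 3-D vertices) joining two section contours."""
        a = [(x, y, z0) for x, y in resample_closed(lower, n)]
        b = [(x, y, z1) for x, y in resample_closed(upper, n)]
        tris = []
        for i in range(n):
            j = (i + 1) % n
            tris.append((a[i], a[j], b[i]))               # two triangles per
            tris.append((a[j], b[j], b[i]))               # quad of the band
        return tris

    # Example: a square contour on one section, an octagon on the next.
    square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    octagon = [(3, 0), (7, 0), (10, 3), (10, 7), (7, 10), (3, 10), (0, 7), (0, 3)]
    mesh = stitch(square, octagon, z0=0.0, z1=0.5)
    print(len(mesh), "triangles")                         # 128 triangles for n=64

Applying the stitch to every pair of adjacent sections yields the surface mesh that is then rendered.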

An important addition to the code is the ability to generate meshes of branched objects. Once reconstructed, objects can be viewed from any angle and animated to facilitate viewing, and functions can be simulated. Reconstructions have also been rendered in a virtual environment, providing new insights to researchers and a new method of teaching science and medicine.

Attention has now turned toward using combined 3-D reconstruction and virtual environment technologies to train clinicians and to help surgeons plan patient-specific, complex procedures in craniofacial reconstructive and plastic surgery. The method interrelates laser scans of the face and head with computerized tomography (CAT) and magnetic resonance images so that features of the skull can be visualized relative to the soft tissues of the face. It permits virtual tools to be used in the sequence actually followed in surgery, and 3-D sound is being added to mimic the sounds of bone drilling and cutting. For this research, contours are captured automatically from CAT scans, and the skull and soft tissues are registered automatically. A unique grid generation method is under development to permit virtual surgical procedures to reconstruct the affected craniofacial features. Currently, a grid describing the skull in great detail is generated. A drill is simulated, and holes are made in the portion of the skull to be used as a bone graft. The lines between the holes are then cut so that the bone can be moved to a new position in situ, or removed and placed in a different region of the skull or face. When the reconstructive work is complete, the soft tissues are replaced so that the surgeon can visualize the result. If the outcome of the virtual surgery is unsatisfactory, the procedure can be repeated until a satisfactory result is achieved, without touching the patient. This research represents the first step toward fully interactive workbenches that will find their way into every modern clinic for teaching and surgery-planning purposes. When fully interactive, the workbenches will also be useful for providing expert help in clinical procedures to health workers at remote sites.
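
To illustrate the drilling and repositioning steps, the following minimal Python sketch models the drill as a test that discards mesh triangles near a drill axis, and models repositioning of the freed bone fragment as a rigid (rotation plus translation) transform. The geometric tests and names are illustrative assumptions, not the project's actual implementation.

    # Hedged sketch: virtual drilling and rigid repositioning of a bone
    # fragment on a triangle mesh. All geometry here is a toy stand-in.
    import numpy as np

    def drill(triangles, center, radius):
        """Discard triangles whose centroid lies within `radius` of a vertical
        drill axis through `center` (x, y); a stand-in for real drill geometry."""
        kept = []
        for tri in triangles:
            cx, cy, _ = tri.mean(axis=0)
            if np.hypot(cx - center[0], cy - center[1]) > radius:
                kept.append(tri)
        return kept

    def reposition(triangles, in_fragment, R, t):
        """Apply the rigid transform v -> R @ v + t to every triangle selected
        by the predicate `in_fragment`; leave the rest of the skull fixed."""
        return [np.array([R @ v + t for v in tri]) if in_fragment(tri) else tri
                for tri in triangles]

    # Toy "skull plate": two triangles. Drill away one, then shift the other
    # (the graft) to a new position.
    plate = [np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]]),
             np.array([[10.0, 0, 0], [10, 10, 0], [0, 10, 0]])]
    plate = drill(plate, center=(6.7, 6.7), radius=1.0)
    print(len(plate), "triangle(s) remain after drilling")   # 1

    R = np.eye(3)                      # no rotation in this toy example
    t = np.array([-2.0, 0.0, 1.0])     # shift the graft 2 units over, 1 unit up
    moved = reposition(plate, lambda tri: tri.mean(axis=0)[0] < 5.0, R, t)

In the actual workstation, the same operations are driven interactively with virtual tools, in the sequence used in surgery.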

The important role played by the Visible Human dataset is to provide normal examples of craniofacial features, both skeletal and soft tissue, for comparison with patients suffering from craniofacial trauma or disfigurement. This comparison gives the novice a clear sense of the degree of change present in affected patients.

[Figure: Reconstruction from a CAT scan of an infant with craniosynostosis]

[Figure: Reconstruction from a CAT scan of the Visible Human Female]

[Figure: Reconstruction from serial sections of the Visible Human Male]

(Data courtesy of Dr. Cesar Compadre and Todd Nolte, University of Arkansas)

It is important to note that the method under development is general and applicable to other medical fields. It is expected to find widespread use in reducing the time required to reach surgical proficiency, by shifting training to computer simulation rather than the more expensive arena of the surgical suite. In craniofacial surgery, for example, it may take 20 years or more to train a surgeon. If this period were reduced by just one-half, ten years would be added to the surgeon's independent productivity. Moreover, the outcome for the patient should be better, leading to a fuller and more productive life.

The Biocomputation Center team that developed this method consists of Rei Cheng, Sam Linton, and Kevin Montgomery. This work is supported by NASA grants to Muriel Ross.