Segmentation and Reconstruction Strategies for the Visible Man

John E. Stewart*, James H. Johnson#, and William C. Broaddus*

* Division of Neurosurgery and Department of Biomedical Engineering
# Department of Anatomy
Virginia Commonwealth University / Medical College of Virginia, Richmond, Virginia 23298
jestewart@gems.vcu.edu

Abstract

This paper describes a collection of techniques designed to create photo-realistic computer models of the National Library of Medicine's Visible Man. An image segmentation algorithm is described which segments anatomical structures independently of the variation in color within each structure. The generation of pseudo-radiographic images from the 24-bit digital color images is also described. Three-dimensional manifold surfaces are generated from these images with the appropriate anatomical colors assigned to each surface vertex. Finally, three separate smoothing algorithms -- surface, normal, and color -- are applied to the surface to create surprisingly realistic surfaces. A number of examples are presented, including solid surfaces, surface cutaways, and mixed opaque and translucent models.

1 Introduction

"The Visible Man" is the name given to the male cadaver of the National Library of Medicine's Visible Human Project [1]. This project involved the medical imaging and physical sectioning of a male and a female cadaver. We have focused our attention on the Visible Man data primarily because these were the first data made available to the public. The Visible Man was imaged using both Magnetic Resonance (MR) and Computed Tomography (CT) imaging techniques. MR images were spaced at 5 mm intervals, while CT images were spaced at 1 mm intervals. The pixel size of these image sets varied depending on the field of view of the scanner. CT images were obtained both before and after the cadaver was frozen. Color digital (24-bit) images were taken after each 1 mm section of the cadaver was removed. These images are 2048 x 1216 pixels with a pixel size of 1/3 mm. The memory required to store all image data for the Visible Man is approximately 15 gigabytes.

There are currently a large number of institutions, both public and private, which are looking at possible uses for these data. Most of these are focused around educational uses such as interactive anatomy atlases or three-dimensional (3D) surfaces for virtual surgery. A basic requirement of these applications is that each image set be segmented to identify the anatomical structures contained within the image. 3D surfaces can then be constructed using these segmentations. In order to achieve the maximal visual effect, each surface should be colored using the anatomical colors contained in the 24-bit image data and shaded as realistically as possible. Techniques which can be used to achieve these goals are the topic of this paper.

2 Image Segmentation

The first step in creating 3D surfaces is segmentation. Segmentation of color digital images offers a unique challenge to those who are used to dealing with CT and MR grayscale images. Twenty-four-bit color images contain three 8-bit variables at each pixel -- red, green, and blue. The approach to segmenting these images is to create a new image which combines these three variables into a single variable at each pixel. The resulting image would then be analogous to a CT or MR image which could be segmented via thresholding. The challenge is to create a grayscale image from the digital color images which has high-intensity pixel values for the anatomical object of interest and lower-intensity pixel values elsewhere. This is particularly difficult when one realizes that most anatomical structures are made up of multiple colors that may be quite different from one another. Techniques which perform segmentation based on a range of red, green, or blue values will often produce segmentations which include much more than the object of interest. Other techniques based on color gradients may divide an anatomical structure into two parts if two colors are present in the structure.

The choice was made to look at the colors in the image as existing in a 3D Cartesian coordinate system, with red increasing along the x-axis, green increasing along the y-axis, and blue increasing along the z-axis (Fig. 1). Each axis has a range of 0 to 255 creating a 3D color volume capable of representing every color in the 24-bit digital color images. A 1-byte variable is allocated for each point in this volume resulting in a total storage requirement of 16.7 megabytes. The values contained in these 1-byte variables will be used to transform the 24-bit color images into 8-bit grayscale images.

Fig. 1. 3D color volume.

All 1-byte variables in the color volume are initialized to a value of 0. A collection of colors is then interactively selected from the Visible Human Project 24-bit color images. These "selected pixels" belong to the anatomical structure to be segmented. A software system developed at Virginia Commonwealth University entitled IsoView [2] allows a user to perform this task. It is not necessary to meticulously identify a large number of pixels at this stage. A small group of pixels is often sufficient to select a structure. A value of 255 is placed in the 1-byte variables of the 3D color volume for all colors represented by the selected pixels. All other values in the volume remain equal to 0. If the selected pixels were drawn opaque, a set of small surfaces would appear in the 3D color volume (Fig. 1). Note that typically there are multiple, disconnected surfaces.

A dilation algorithm is then applied to the 3D color volume in order to fill in all 1-byte variables with a non-zero number. This algorithm begins with the 1-byte variables in the color volume which equal 255 but border a 1-byte variable equal to 0. These bordering variables are set equal to 254. This process is repeated for those 1-byte variables which equal 254 such that the bordering 1-byte variables which equal 0 are set equal to 253. Each iteration of this dilation algorithm adds a layer on top of the original selected pixels. The result is that every color in the 3D color volume has a value indicating its distance from the colors of the selected pixels.
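The dilation pass can be sketched as a breadth-first expansion over the color cube. The fragment below is a minimal illustration, not the IsoView implementation; a tiny 8x8x8 volume stands in for the full 256^3 cube, and the single seed color is hypothetical:

```python
import numpy as np
from collections import deque

def dilate_color_volume(volume):
    """Spread decreasing values (254, 253, ...) outward from the seed
    entries (255), so every cell ends up recording how far its color
    lies from the selected colors."""
    frontier = deque(zip(*np.nonzero(volume == 255)))
    while frontier:
        next_frontier = deque()
        for r, g, b in frontier:
            value = int(volume[r, g, b])
            for dr, dg, db in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nr, ng, nb = r + dr, g + dg, b + db
                if (0 <= nr < volume.shape[0] and 0 <= ng < volume.shape[1]
                        and 0 <= nb < volume.shape[2]
                        and volume[nr, ng, nb] == 0):
                    # Clamp at 1 so every cell is filled with a non-zero value.
                    volume[nr, ng, nb] = max(value - 1, 1)
                    next_frontier.append((nr, ng, nb))
        frontier = next_frontier
    return volume

# Tiny stand-in for the 256^3 color cube; one hypothetical seed color.
vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[4, 4, 4] = 255
dilate_color_volume(vol)
```

With 6-connected neighbors, each breadth-first layer corresponds to one iteration of the dilation, so a cell one step from a seed receives 254, a cell two steps away 253, and so on.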

A 1-byte pseudo-radiographic grayscale image is generated for each 24-bit color image by using the 3D color volume as a lookup table. Colors in close proximity to the selected colors map to high intensities, while those distant from the selected colors map to low intensities. This allows the anatomical object of interest to be segmented by thresholding. The threshold is interactively adjusted until contours wrap around the anatomical structure of interest in the image. If the segmentation does not capture the entire structure, the currently segmented pixels, plus additional manually selected pixels, become the new set of selected pixels, and the process described above is repeated. The grayscale images are then regenerated using the adjusted 3D color volume and a new threshold. The segmentation of the images generated in this step can then be saved to a file. This file contains an index to each segmented image and a 1-bit variable set to 1 for pixels above the threshold and 0 for pixels below it.
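The lookup and thresholding steps might look like the following sketch. The 2x2 test image and the single bright color are hypothetical, and the dilation step is omitted for brevity, so here only exact color matches are bright:

```python
import numpy as np

def color_to_grayscale(rgb_image, color_volume):
    """Map each 24-bit pixel to the 8-bit value stored for its color
    in the 3D color volume (used as a lookup table)."""
    r = rgb_image[..., 0]
    g = rgb_image[..., 1]
    b = rgb_image[..., 2]
    return color_volume[r, g, b]

def segment(gray_image, threshold):
    """1-bit segmentation: 1 for pixels above the threshold, else 0."""
    return (gray_image > threshold).astype(np.uint8)

# Full-size cube (16.7 MB of 1-byte variables) with one "selected" color.
volume = np.zeros((256, 256, 256), dtype=np.uint8)
volume[200, 30, 30] = 255          # a hypothetical reddish tissue color

# Hypothetical 2x2 24-bit image: two matching pixels, two background pixels.
image = np.array([[[200, 30, 30], [10, 10, 10]],
                  [[200, 30, 30], [0, 0, 255]]], dtype=np.uint8)
gray = color_to_grayscale(image, volume)
mask = segment(gray, threshold=128)
```

NumPy's integer-array indexing performs the per-pixel table lookup in one vectorized operation, which is what makes the per-slice regeneration fast enough to adjust interactively.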

3 Segmentation of Bone

Segmentation of bone is performed differently from that of other anatomical structures. Because bone is well delineated on CT images, the CT images of the Visible Man replace the 1-byte grayscale images described above. This requires that the CT images be registered with the 24-bit digital color images such that there is a one-to-one correspondence between pixels in both sets of images. Fortunately, there is a CT image of the Visible Man taken in the same plane as almost every color image. Unfortunately, these images are not registered with the color images and do not have the same pixel size as the color images. To remedy this problem, a separate program was written which reads in the CT images, finds the pixel size, resamples the CT images using bilinear interpolation, and outputs new images with the same pixel size as the 24-bit color images. The x and y offsets of the CT images are read from the command line, allowing the new CT images to be shifted into registration with the 24-bit color images. This program thus allows a user to manually register the CT images with the color images. The registered CT images are then stored for future use.
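A resampling program of this kind might be sketched as follows. The bilinear interpolation is written out directly in NumPy, and the pixel sizes and offsets in the example are hypothetical, not the actual Visible Man scanner values:

```python
import numpy as np

def resample_ct(ct, ct_pixel_mm, out_pixel_mm, x_offset=0.0, y_offset=0.0):
    """Resample a CT slice to a new pixel size using bilinear
    interpolation, shifted by (x_offset, y_offset) in output pixels
    so the result can be registered with the color images."""
    h = int(ct.shape[0] * ct_pixel_mm / out_pixel_mm)
    w = int(ct.shape[1] * ct_pixel_mm / out_pixel_mm)
    # Output pixel centers mapped back into CT pixel coordinates.
    ys = (np.arange(h) - y_offset) * out_pixel_mm / ct_pixel_mm
    xs = (np.arange(w) - x_offset) * out_pixel_mm / ct_pixel_mm
    y0 = np.clip(np.floor(ys).astype(int), 0, ct.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, ct.shape[1] - 2)
    fy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    fx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    # Four neighboring CT samples, blended by the fractional offsets.
    tl = ct[y0][:, x0]
    tr = ct[y0][:, x0 + 1]
    bl = ct[y0 + 1][:, x0]
    br = ct[y0 + 1][:, x0 + 1]
    top = tl * (1 - fx) + tr * fx
    bot = bl * (1 - fx) + br * fx
    return top * (1 - fy) + bot * fy

# Hypothetical 0.5 mm CT slice resampled onto the 1/3 mm color grid.
ct = np.arange(16, dtype=float).reshape(4, 4)
out = resample_ct(ct, ct_pixel_mm=0.5, out_pixel_mm=1 / 3)
```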

4 Surface Creation

Surfaces are created using the software system IsoView [2]. This system uses the Marching Cubes [3] and Border Case Comparison [4] algorithms to create 3D manifold surfaces made up of triangles. IsoView runs on a Silicon Graphics Indigo2 with an Extreme graphics engine. Because the Extreme graphics engine does not support texture mapping, a color is stored at each vertex to be used later in rendering. This color comes from the 24-bit digital color images and is determined at the time the vertex is created. Figure 2 illustrates a typical voxel triangulation as defined by Marching Cubes. The color assigned to each edge vertex is the color of the "on" voxel vertex of that edge, as demonstrated by the upper- and lower-case letters in Fig. 2: each edge vertex carries the same letter as the on voxel vertex of its edge. Linear interpolation of the red, green, and blue colors of the image is not appropriate, since this would mix the colors outside the surface with those inside the surface. A single 4-byte variable stores the color and alpha value; the alpha value will be used later to permit surface transparency.
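The vertex color rule -- interpolate the vertex position along the voxel edge, but copy the color of the "on" corner rather than blending -- can be sketched as follows (all values hypothetical):

```python
import numpy as np

def edge_vertex(p_on, p_off, v_on, v_off, iso, color_on):
    """Place the surface vertex on the voxel edge by linear
    interpolation of the scalar values, but assign it the color of
    the 'on' corner unmixed, so colors from outside the surface
    never bleed into it."""
    t = (iso - v_on) / (v_off - v_on)
    position = p_on + t * (p_off - p_on)
    return position, color_on

# Hypothetical edge: 'on' corner value 200, 'off' corner value 40,
# isovalue 120, so the vertex lands halfway along the edge.
p, c = edge_vertex(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                   v_on=200.0, v_off=40.0, iso=120.0,
                   color_on=(180, 90, 70))
```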

Fig. 2. Marching Cubes triangulation showing edge vertex color assignment.

Surface normals are computed by taking an area-weighted average of the triangle normals surrounding a vertex. These normals are then normalized to unit length and stored at the vertex to permit surfaces to be Gouraud shaded.
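This normal computation can be sketched as follows. The cross product of two triangle edges has a magnitude of twice the triangle area, so summing the raw cross products at each vertex weights each face by its area automatically (a standard construction; the mesh below is hypothetical):

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Area-weighted average of surrounding triangle normals at each
    vertex, normalized to unit length for Gouraud shading."""
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in triangles:
        # Unnormalized face normal: magnitude = 2 * triangle area.
        n = np.cross(vertices[i1] - vertices[i0],
                     vertices[i2] - vertices[i0])
        normals[i0] += n
        normals[i1] += n
        normals[i2] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0, 1, lengths)

# Two triangles in the z = 0 plane: every vertex normal is (0, 0, 1).
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
tris = [(0, 1, 2), (1, 3, 2)]
n = vertex_normals(verts, tris)
```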

5 Smoothing

It became apparent soon after creating the first few 3D surfaces from the Visible Man that some degree of smoothing was necessary if these surfaces were to appear lifelike. After experimenting with a number of different algorithms, three separate types of smoothing were incorporated into IsoView -- surface smoothing, normal smoothing, and color smoothing. All three are essentially Laplacian smoothers with constraints applied to them. All smoothing is vertex centered rather than triangle centered. The surface and normal smoothers are constrained by the angle formed between the central vertex normal and the surrounding normals. If this angle exceeds 45 degrees, smoothing is not performed for that vertex. This constraint preserves the appearance of surface edges. The color smoother is constrained by the difference between the red, green, and blue indices of the central vertex and those of the surrounding vertices. If any of these indices differs by more than 25 from the corresponding central vertex index, the vertex is not smoothed. This constraint permits large changes in surface color to be preserved.
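A single pass of the surface smoother, with the 45-degree normal constraint, might be sketched as follows. The mesh, normals, and adjacency here are hypothetical, unit normals are assumed, and only the surface smoother is shown; the normal and color smoothers follow the same pattern with their own constraints:

```python
import numpy as np

def smooth_vertices(vertices, normals, neighbors, max_angle_deg=45.0):
    """One vertex-centered Laplacian pass: move each listed vertex
    toward the mean of its neighbors, but skip any vertex whose
    (unit) normal differs from a surrounding normal by more than
    max_angle_deg, preserving surface edges."""
    cos_limit = np.cos(np.radians(max_angle_deg))
    out = vertices.copy()
    for i, nb in neighbors.items():
        if all(np.dot(normals[i], normals[j]) >= cos_limit for j in nb):
            out[i] = vertices[nb].mean(axis=0)
    return out

# Hypothetical flat strip: vertex 1 sits off the line between 0 and 2,
# all normals agree, so the constraint passes and vertex 1 is relaxed.
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.3, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
adjacency = {1: [0, 2]}
smoothed = smooth_vertices(verts, norms, adjacency)
```

Running several such passes, with the result displayed after each, matches the interactive iteration-count selection described above.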

The number of iterations for each type of smoothing is determined by the user. The results of the smoothing are displayed immediately to permit the interactive selection of the most effective smoothing criteria.

6 Results

Figures 3 and 4 are representative 3D surfaces of the Visible Man created using the techniques described above. Each figure is a composite of a number of surfaces created from the Visible Man data. The skull seen in these figures is created from both the registered CT images and the 24-bit color images. The entire skull is made up of 881,000 triangles and requires 30 seconds to generate on a Silicon Graphics Indigo2 Extreme with a 150 MHz processor and 192 Mbytes of RAM. The muscle and brain surfaces are generated from the 24-bit color images using the pseudo-radiographic images created from these 24-bit color images.

Fig. 3. Brain, skull and muscle composite picture.

Fig. 4. 3D surface of the head with 50% transparency applied to the skin.

Figure 3 demonstrates the use of cut-away surfaces to display the internal anatomy of the Visible Man. These cutaways are generated by simply limiting the range of pixels which can be used to create the 3D manifold surfaces. Transparency can also be used to display the internal anatomy through solid surfaces. Figure 4 demonstrates the internal anatomy of the head with the skin made 50% transparent. This is accomplished by setting the alpha value of the 32-bit triangle vertex color to 128. An MPEG movie has also been produced which shows the head with consecutive layers removed until only the white matter remains.
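Packing the color and alpha into the single 4-byte vertex variable described in Sect. 4 can be sketched as follows (the RGBA byte order is an assumption, not necessarily the layout used by IsoView):

```python
def pack_rgba(r, g, b, a=255):
    """Pack 8-bit red, green, blue, and alpha channels into one
    32-bit vertex color variable (assumed RGBA byte order)."""
    return (r << 24) | (g << 16) | (b << 8) | a

# 50% transparency for a hypothetical skin color: alpha = 128.
skin = pack_rgba(220, 180, 150, a=128)
```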

7 Conclusion

The Visible Human Project has provided an amazing (and somewhat overwhelming) amount of anatomical data which can be used for a multitude of applications. In this report, we have focused on the generation of photo-realistic 3D anatomical surfaces for the purposes of education. Although these surfaces clearly illustrate the human anatomy, they are too complex to be rendered at interactive speeds with typically available hardware. Recent efforts have been aimed at surface simplification (decimation) which should provide a means of interactively manipulating these surfaces in 3D. The ultimate goal of this project is to create a database of 3D surfaces which can be used for medical education, surgical simulation, and finite-element modeling.

8 Acknowledgments

This work was supported in part by the Jeffress Memorial Trust and the Virginia Commonwealth University / Medical College of Virginia M.D./Ph.D. Fund.

References

  1. Ackerman, M.J.: The Visible Human Project. J. Biocomm. vol. 18, no. 2 (1991) 14
  2. Stewart, J.E.: IsoView: An Interactive Software System for the Construction and Visualization of Three-Dimensional Anatomical Surfaces. Va. Med. Q. vol. 121, no. 4 (1994) 256
  3. Lorensen, W.E. and Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Comput. Graphics vol. 21, no. 4 (1987) 163-169
  4. Stewart, J.E., Samareh, J.A., and Broaddus, W.C.: Border Case Comparison: A Topological Solution to the Ambiguity of Marching Cubes. IEEE Comput. Graphics, submitted.