
Introduction

The Task of Finding Transfer Functions

Transfer functions make a volume dataset visible by assigning renderable optical properties to the numerical values that make up the dataset. The most general transfer functions are those that assign opacity, color, and emittance [12]. Useful renderings can often be obtained, however, from transfer functions that assign only opacity, with color and brightness derived from simulated lights which illuminate the volume according to some shading model. We use the term opacity functions to refer to this limited subset of transfer functions. During rendering, the sampled and interpolated data values are passed through the opacity function to determine their contribution to the final image. Since the opacity function does not normally take into account the position of the region being rendered, its role is to make opaque those data values which consistently correspond, across the whole volume, to features of interest. This paper addresses only the problem of setting opacity functions, as this is a non-trivial yet manageable problem whose solution is pertinent to more general transfer function specification issues.
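As a concrete illustration, the following minimal sketch in C (with hypothetical names such as composite_ray; the paper does not prescribe any particular implementation) shows how an opacity function, stored as a lookup table over 8-bit data values, might be applied to the samples along one ray during front-to-back compositing:

#include <stddef.h>

/* Opacity lookup: maps each 8-bit data value to an opacity in [0,1].
 * Note that it depends only on the data value, not on position. */
float composite_ray(const unsigned char *samples,  /* interpolated values */
                    size_t num_samples,
                    const float opacity[256])      /* the opacity function */
{
    float accumulated = 0.0f;               /* total opacity seen so far */
    for (size_t i = 0; i < num_samples; i++) {
        float a = opacity[samples[i]];
        /* standard front-to-back compositing of opacity */
        accumulated += (1.0f - accumulated) * a;
        if (accumulated > 0.99f)            /* early ray termination */
            break;
    }
    return accumulated;
}

A full renderer would also composite shaded color at each sample; the point here is only that every sample's contribution to the image is gated by the opacity assigned to its data value.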

Finding a good transfer function is critical to producing an informative rendering, but even when opacity is the only variable to be set, it is a difficult task. Looking through slices of the volume dataset allows one to spatially locate features of interest, and a means of reading off data values at a user-specified point on a slice can help in setting an opacity function to highlight those features, but there is no way to know how representative of the whole feature, in three dimensions, these individually sampled values are. User interfaces for opacity function specification typically allow the user to alter the opacity function by directly editing its graph, usually as a series of linear ramps joining adjustable control points. This interface does not itself guide the user towards a useful setting, as the movement of the control points is unconstrained and unrelated to the underlying data. Thus finding a good opacity function tends to be a slow and frustrating process of trial and error, with seemingly minor changes to the opacity function leading to drastic changes in the rendered image. This is made more confusing by the interaction of other rendering parameters such as shading, lighting, and viewing angle.
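To make the graph-editing interface concrete, the following minimal sketch in C (hypothetical names; actual interfaces vary in detail) evaluates a piecewise-linear opacity function defined by a sorted list of user-adjustable control points:

typedef struct {
    float value;    /* data value (horizontal axis of the graph) */
    float opacity;  /* opacity in [0,1] (vertical axis) */
} ControlPoint;

/* Evaluate the opacity function at data value v, given n control
 * points sorted by increasing value. */
float eval_opacity(const ControlPoint *pts, int n, float v)
{
    if (v <= pts[0].value)   return pts[0].opacity;
    if (v >= pts[n-1].value) return pts[n-1].opacity;
    for (int i = 1; i < n; i++) {
        if (v <= pts[i].value) {  /* v lies on the ramp pts[i-1]..pts[i] */
            float t = (v - pts[i-1].value) /
                      (pts[i].value - pts[i-1].value);
            return pts[i-1].opacity +
                   t * (pts[i].opacity - pts[i-1].opacity);
        }
    }
    return pts[n-1].opacity;  /* unreachable for sorted input */
}

Nothing in this representation constrains the control points to meaningful positions, which is exactly why unguided editing devolves into trial and error.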


Direct Volume Rendering of Boundaries

A significant assumption made in this paper is that the features of interest in the scalar volume are the boundary regions between areas of relatively homogeneous material³. For instance, this is often true of datasets from medical imaging. But if the goal is to render the boundaries of objects, why use direct volume rendering, and not isosurface rendering? Although this question itself deserves investigation, it is widely accepted that direct volume rendering avoids the binary classification inherent in isosurface rendering: either the isosurface passes through a voxel or it does not [11]. To the extent that an object's surface is associated with a range of values, an opacity function can make a range of values opaque or translucent. This becomes especially useful when noise or measurement artifacts upset the correlation between data value and material type.
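The distinction can be stated in a few lines of code (a minimal sketch in C; the function names and the tent-shaped opacity profile are illustrative assumptions, not a method proposed in this paper):

#include <math.h>

/* Isosurfacing classifies in a binary fashion: the surface either
 * passes between two neighboring sample values or it does not. */
int cell_contains_isosurface(float v0, float v1, float isovalue)
{
    return (v0 - isovalue) * (v1 - isovalue) <= 0.0f;  /* values straddle it */
}

/* An opacity function can instead assign partial opacity to a whole
 * range of values around a boundary value; here, a tent function of
 * adjustable width. */
float boundary_opacity(float value, float boundary_value, float width)
{
    float d = fabsf(value - boundary_value) / width;
    return (d < 1.0f) ? (1.0f - d) : 0.0f;
}

Widening the tent trades surface crispness for tolerance to noise in the correlation between data value and material.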

As a quick illustration of this, consider a dataset generated from limited-angle tomography [6], in which streaks and blurriness often appear in the data because projections are unavailable over some range of angles. This type of data is studied in the Collaboratory for Microscopic Digital Anatomy [19], an ongoing project aimed at providing remote, networked access to sophisticated microscopy resources. Fig. 1 shows two renderings of a mammalian neuron dataset, using the same viewing angle, shading, and lighting parameters, but rendered with different algorithms: a non-polygonal ray-cast isosurface rendering and a shear-warp direct volume rendering produced with the Stanford VolPack rendering library [10]. Towards the bottom of the direct volume rendered image, there is some fogginess surrounding the surface, and the surface itself is not very clear. As can be confirmed by looking directly at slices of the data, this corresponds exactly to a region of the dataset where the material boundary is in fact poorly defined. The isosurface rendering, however, shows as distinct a surface here as everywhere else, and in this case the poor surface definition in the data is manifested as a region of rough texture. This can be misleading, as there is no way to know from this rendering alone that the rough texture is due to measurement artifacts, and not a feature of the dendrite itself.

Figure 1: Two renderings of a spiny dendrite from a cortical pyramidal neuron. The volume dataset was reconstructed from images of a 2-micron-thick section acquired with an intermediate high voltage electron microscope at the National Center for Microscopy and Imaging Research, San Diego, California, using single-tilt axis tomography. Specimen kindly provided by Prof. K. Hama of the National Institute for Physiological Sciences, Okazaki, Japan.
[Figure 1: (a) isosurface rendering; (b) direct volume rendering]



Footnotes

³ We use boundary to refer not to an infinitesimally thin separating surface between two areas of disparate data value, but to the thin region wherein the data value transitions from one material value to the other.

Gordon Kindlmann
1999-07-25