Since our particular goal is the visualization of material boundaries,
we have chosen a model for what constitutes an ideal boundary and
developed methods around that. We assume that at their boundary,
objects have a sharp, discontinuous change in the physical property
measured by the values in the dataset, but that the measurement
process is band-limited with a Gaussian frequency response, causing
measured boundaries to be blurred by a Gaussian.
Fig. 2 shows a step function representing an ideal
boundary prior to measurement, the Gaussian which performs the
band-limiting by blurring, and the resulting measured boundary (prior
to sampling). The resulting curve happens to be the integral of a
Gaussian, which is called the *error function* [9].
Actual measurement devices band-limit, so they always blur boundaries
somewhat, though their frequency response is never exactly Gaussian,
since a true Gaussian has infinite support. Although certain mathematical
properties of the Gaussian are exploited later, we have not found the
inexact match of real-world sampling to the Gaussian ideal to limit
application of our technique. A final assumption made for the
purposes of this analysis is that the blurring is isotropic, that is,
uniform in all directions. Again, our methods will often work even if
a given dataset does not have this characteristic, but results may be
improved if it is pre-processed to approximate isotropic blurring.
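This boundary model is simple enough to state in a few lines of code. The sketch below is our own illustration (the function and parameter names are not from the paper): a step between two material values v_low and v_high, blurred by a Gaussian of standard deviation sigma, becomes a scaled and shifted error function.

```python
import math

def boundary(x, v_low=0.0, v_high=100.0, sigma=1.0):
    """Ideal measured boundary centered at x = 0: a step from v_low to
    v_high convolved with a Gaussian of std. dev. sigma, i.e. a scaled
    and shifted error function."""
    t = x / (sigma * math.sqrt(2.0))
    return v_low + (v_high - v_low) * (1.0 + math.erf(t)) / 2.0

print(boundary(-10.0))   # ≈ 0   (well inside the first material)
print(boundary(0.0))     # 50.0  (exact middle of the boundary)
print(boundary(10.0))    # ≈ 100 (well inside the second material)
```

A larger sigma models a more aggressively band-limiting measurement device, spreading the same boundary over a wider range of positions.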

Directional Derivatives along the Gradient

Although it was suggested in Section 1.2 that isosurfaces are not always sufficient for visualizing objects in real-world volume data, the method presented in this paper still indirectly employs them as an indicator of object shape. That is, based on the mathematical property that the gradient vector at a given position is always perpendicular to the isosurface through that position, we use the gradient vector as a way of finding the direction which passes perpendicularly through the object boundary. Even though isosurfaces do not always conform to the local shape of the underlying object, averaged over the whole volume, the gradient vector does tend to point perpendicular to the object boundary. We rely on the statistical properties of the histogram to provide the overall picture of the boundary characteristics.

The directional derivative of a scalar field f along a vector v, denoted D_v f, is the derivative of f as one moves along a straight path in the v direction. This paper studies f and its derivatives as one cuts directly through the object boundary -- moving along the gradient direction -- in order to create an opacity function. Because the direction along which we are computing the directional derivative is always that of the gradient, we employ a mild abuse of notation, using f' and f'' to signify the first and second directional derivatives along the gradient direction, even though these would be more properly denoted D_v f and D_v(D_v f), with v = ∇f/|∇f|. We treat f as if it were a function of just one variable, keeping in mind that the axis along which we analyze f always follows ∇f, which constantly changes orientation depending on position. Fig. 3 shows how the gradient direction changes with position to stay normal to the isosurfaces of a simple object.
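To make this convention concrete, the sketch below (our own finite-difference illustration, not the paper's implementation) estimates both quantities on a sampled volume. With v = ∇f/|∇f|, the first directional derivative along the gradient reduces to the gradient magnitude |∇f|, and the second is the quadratic form v^T H v, where H is the Hessian of f (treating the direction as locally constant).

```python
import numpy as np

def derivatives_along_gradient(vol):
    """Estimate f' and f'' along the gradient direction at every voxel,
    using central differences. vol is indexed [z, y, x]."""
    gz, gy, gx = np.gradient(vol)                 # gradient components
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)         # f' = gradient magnitude
    eps = 1e-12                                   # guard against |grad| = 0
    nx, ny, nz = gx / (gmag + eps), gy / (gmag + eps), gz / (gmag + eps)
    # Hessian entries via central differences of the gradient components
    hxx, hxy, hxz = (np.gradient(gx, axis=a) for a in (2, 1, 0))
    hyy, hyz = (np.gradient(gy, axis=a) for a in (1, 0))
    hzz = np.gradient(gz, axis=0)
    f2 = (nx * nx * hxx + ny * ny * hyy + nz * nz * hzz
          + 2 * (nx * ny * hxy + nx * nz * hxz + ny * nz * hyz))
    return gmag, f2
```

On a synthetic volume containing a single blurred boundary, the returned first derivative peaks at the boundary center while the second derivative changes sign there.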

Fig. 4 analyzes one segment of the cross-section of this same object. Shown are plots of the data value f and of the first and second derivatives, f' and f'', as one moves across the boundary. Because of band-limiting, the measured boundary is spread over a range of positions, but an exact location for the boundary can be defined by either the maximum in f' or the zero-crossing in f''. Indeed, two edge detectors common in computer vision, Canny [4] and Marr-Hildreth [14], use the f' and f'' criteria, respectively, to find edges.
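Both criteria are easy to demonstrate numerically. In this hypothetical 1-D sketch of ours, using an error-function profile as the measured boundary, the maximum of f' and the zero-crossing of f'' pick out the same boundary location:

```python
import numpy as np
from math import erf, sqrt

x = np.linspace(-5.0, 5.0, 101)                       # boundary centered at 0
f = np.array([(1 + erf(xi / sqrt(2))) / 2 for xi in x])
f1 = np.gradient(f, x)                                # first derivative f'
f2 = np.gradient(f1, x)                               # second derivative f''

x_canny = x[np.argmax(f1)]                            # maximum of f'
crossings = np.where(np.diff(np.sign(f2)) != 0)[0]    # sign changes of f''
x_marr = x[crossings[0]]                              # zero-crossing of f''

print(x_canny, x_marr)   # both near 0, the true boundary location
```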

Relationship Between f, f', and f''

As our goal is to find functions of data value which highlight
boundary regions, our problem is rather different from that addressed
by edge detectors, which locate boundaries in the spatial domain.
Because the opacity function will be applied throughout the volume
irrespective of position, we must locate the boundary not in the
spatial domain, but in the range of data values.
Yet, we still want to borrow from
computer vision the notion that boundaries are somehow associated with
a maximum in f' and/or a zero-crossing in f''. To see how this is
possible, consider just the relationship between f and f'. As
both of these are functions of position, they can be plotted with a
three-dimensional graph, as in Fig. 5. The
three-dimensional curve can be projected downward to form the plot of
data value versus position, and projected to the right to show first
derivative versus position. Projecting the curve along the position
axis, however, eliminates the position information, and reveals the
relationship between data value and first derivative. Because the
data value increases monotonically, there is a (non-linear) one-to-one
relationship between position and data value, so the first derivative
f', which had been a function of *position* x, can also be
expressed as a function of *data value* v. This is what the
third projection in Fig. 5 depicts.

The same projections can be done for data value and its second derivative, as seen in Fig. 6. Projecting the curve downward or to the right produces the graphs of data value or second derivative versus position (first seen in Fig. 4), while projecting along the position axis reveals the relationship between data value and its second derivative.
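This projection can be stated in code. The following numerical sketch of ours uses the error-function boundary model: because f increases monotonically across the boundary, position can be "projected out" and f' and f'' re-indexed by data value v.

```python
import numpy as np
from math import erf, sqrt

x = np.linspace(-4.0, 4.0, 201)
f = np.array([(1 + erf(xi / sqrt(2))) / 2 for xi in x])   # data value v(x)
f1 = np.gradient(f, x)                                    # f' over position
f2 = np.gradient(f1, x)                                   # f'' over position

# f is strictly increasing, so (f, f1) and (f, f2) are valid graphs of
# first/second derivative versus data value -- no position axis remains.
v = f
f1_of_v = np.interp(0.5, v, f1)   # f' at the middle data value
f2_of_v = np.interp(0.5, v, f2)   # f'' at the middle data value
print(f1_of_v, f2_of_v)           # f' is maximal, f'' near zero there
```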

Finally, having ``projected out'' position information, one can make a
three-dimensional graph of the first and second derivatives as
functions of *data value*, as seen in Fig. 7. The
significance of this curve is that it provides a basis for
automatically generating opacity functions. If a three-dimensional
record of the relationship between f, f', and f'' for a given
dataset contains curves of the type shown in Fig. 7, we
can assume that they are manifestations of boundaries in the volume.
With a tool to detect those curves and their position, one could
generate an opacity function which makes the data values corresponding
to the middle of the boundary (indicated with cross-hairs in
Fig. 7) the most opaque, and the resulting rendering
should show the detected boundaries. Short of that, one could use a
measure which responds to some specific feature of the curve (say, the
zero crossing in f'') and base an opacity function on that. This is
what the current paper seeks to do.
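One such measure can be sketched as follows. This is a hypothetical illustration of ours, not the paper's algorithm (the function name, bin count, and width constant are our own): bin samples by data value, average f'' within each bin, and make opacity fall off as the average |f''| grows, so it peaks where f'' crosses zero.

```python
import numpy as np

def opacity_from_f2(values, f2, nbins=64, width=0.2):
    """values, f2: flattened samples of data value and second directional
    derivative. Returns bin centers and an opacity per data-value bin."""
    edges = np.linspace(values.min(), values.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(values, edges) - 1, 0, nbins - 1)
    mean_f2 = np.array([f2[idx == b].mean() if np.any(idx == b) else 0.0
                        for b in range(nbins)])
    scale = np.abs(mean_f2).max() + 1e-12
    # Opacity decays as the average |f''| grows; peak at its zero crossing.
    return centers, np.exp(-(mean_f2 / (width * scale)) ** 2)
```

Fed with samples from the error-function boundary model, this assigns near-full opacity to data values at the middle of the boundary and strongly suppresses values where the average f'' is large.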