Other approaches are possible. It is extremely convenient and
simple to interpolate the
diffusion tensor matrices
component-wise; this is useful for cases where the eigensystem is
not required (as with hue-balls and deflection opacity maps). On the
other hand, it is worth noting that the tensor matrix itself is not
actually measured directly by the MRI machine; it is computed from a
set of measured ``diffusion-weighted'' images. Since we are
accustomed to interpolating measured *scalar* CT or MRI data as
part of volume rendering, one could argue that interpolating the
raw diffusion-weighted images is the most defensible choice;
previous work has followed this path [7]. We believe
the issue of tensor interpolation deserves more research, and we
present our initial investigation into three different schemes for
tensor interpolation.

Working at the lowest level of representation in the tensor data, we
can interpolate the measured diffusion-weighted images which come off
the MRI machine; we call this approach *channel interpolation*.
Our tensor data was derived from seven measured images
(``channels''), notated $A_k$, $k \in \{1, \ldots, 7\}$. Each channel
corresponds to a pair of gradient encoding directions used during the
scan acquisition. In channel interpolation, values from these seven
images are stored at each sample point in the volume. These values
are interpolated to produce image values at intermediate points in the
volume, from which the diffusion tensor is calculated
[3]. First, knowing the direction-independent
diffusion weighting $b$ (about 900 sec/mm$^2$), we
calculate a set of log image values $l_k$,
$k \in \{2, \ldots, 7\}$:

\[ l_k = -\frac{1}{b} \ln\!\left(\frac{A_k}{A_1}\right) \]

From these, the diffusion tensor $\mathbf{D}$ is calculated: the six
independent components of the symmetric matrix $\mathbf{D}$ are linear
combinations of $l_2, \ldots, l_7$, determined by the gradient
encoding directions used in the acquisition [3].
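The paper's explicit component formula is not reproduced here; as a hedged sketch, assuming the Stejskal-Tanner relation $l_k = \hat{g}_k^{\mathsf T} \mathbf{D} \hat{g}_k$ and a hypothetical (but standard) set of six dual-gradient directions, recovering $\mathbf{D}$ from the six log values amounts to a $6 \times 6$ linear solve:

```python
import numpy as np

# Hypothetical set of six gradient encoding directions (not the paper's
# exact scheme), normalized to unit length.
dirs = np.array([
    [1, 1, 0], [0, 1, 1], [1, 0, 1],
    [1, -1, 0], [0, 1, -1], [1, 0, -1],
], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

def design_row(g):
    """Coefficients of (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz) in g^T D g."""
    x, y, z = g
    return [x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z]

M = np.array([design_row(g) for g in dirs])

# Forward-simulate log values from a known symmetric tensor,
# then recover the tensor by solving the linear system.
D_true = np.array([[1.0, 0.2, 0.1],
                   [0.2, 0.8, 0.0],
                   [0.1, 0.0, 0.5]])
l = np.array([g @ D_true @ g for g in dirs])

Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = np.linalg.solve(M, l)
D = np.array([[Dxx, Dxy, Dxz],
              [Dxy, Dyy, Dyz],
              [Dxz, Dyz, Dzz]])
print(np.allclose(D, D_true))  # prints True
```

This makes explicit the claim used below: given fixed gradient directions, the tensor components are *linear* in the log values $l_k$.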
While this approach to interpolation has the benefit of working with
the original measured data, there is considerable computational
expense associated with evaluating the natural logs in the calculation
of the $l_k$. This expense can be avoided with *matrix
interpolation*, wherein the tensor matrix is calculated once per
sample point and then interpolated component-wise to produce a tensor
at intermediate locations. Were the components of the tensor matrix
simply linear combinations of the measured channels, then matrix
interpolation would be equivalent to channel interpolation. However,
the two schemes differ because of the non-linearity of the logarithm
in Equation 16.
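To make the distinction concrete, here is a hedged numerical sketch (the channel values are invented; the $b$ value and log formula follow the text) showing that interpolating the measured channels before taking logs gives a different result than interpolating the log values, in which the tensor components are linear, component-wise:

```python
import numpy as np

b = 900.0  # direction-independent diffusion weighting, sec/mm^2

def log_values(channels):
    """Log image values l_k = -ln(A_k / A_1) / b, k = 2..7."""
    A1 = channels[0]
    return -np.log(channels[1:] / A1) / b

# Synthetic channel values A_1..A_7 at two neighboring sample points.
A = np.array([1000.0, 600.0, 500.0, 550.0, 520.0, 580.0, 560.0])
B = np.array([1000.0, 300.0, 450.0, 400.0, 420.0, 380.0, 410.0])

t = 0.5  # interpolate at the midpoint between the two samples

# Channel interpolation: interpolate the measured images first, log after.
l_channel = log_values((1 - t) * A + t * B)

# Matrix interpolation: log first (tensor components are linear in the
# l_k), then interpolate component-wise.
l_matrix = (1 - t) * log_values(A) + t * log_values(B)

# The two disagree because ln() is nonlinear.
print(np.max(np.abs(l_channel - l_matrix)))
```

The printed difference is nonzero for any pair of samples whose channel ratios differ, which is precisely the non-equivalence the paragraph above describes.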

As mentioned above, it is also conceivable that rather than
interpolating channels or matrices, we interpolate the necessary
information derived from them, such as eigenvalues or eigenvectors; we
term this process *eigensystem interpolation*. In our work, this has
immediate relevance for the barycentric map methods of assigning color
and shading, and for the lit-tensor shading model. This style of
interpolation has the tremendous benefit of performing all the
eigensystem calculations once, as a pre-process, and storing the
results at each dataset point.

Despite its obvious benefits, there is a subtle issue of
*correspondence* which complicates eigensystem interpolation. Given a
set of three eigenvectors $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$
at one sample point $\mathbf{x}$, and
another three eigenvectors $\{\mathbf{e}'_1, \mathbf{e}'_2, \mathbf{e}'_3\}$
at a second sample point $\mathbf{x}'$, we
do not immediately know the correspondence between the two sets of
eigenvectors. However, it is necessary to define some correspondence
between the eigensystems at the two sample points in order to perform
eigenvector interpolation. Knowing the *continuous* tensor field
from which the discrete dataset is sampled, we could possibly learn
the ``correct'' correspondence. As the tensor field is sampled
continuously on a path between $\mathbf{x}$ and $\mathbf{x}'$, we
could determine whether (for example) the principal eigenvector at
$\mathbf{x}$ smoothly becomes the principal eigenvector at
$\mathbf{x}'$. An analogous correspondence problem complicates
eigenvalue interpolation. Efficiently determining the ``correct''
correspondence from the measured data, and characterizing those
situations where the ``correct'' correspondence may be undefined (such
as passing through a region of total or nearly total isotropy), is a
topic of our current research.

We have evaluated one specific form of eigensystem interpolation that
makes an assumption about the sampling density of the data. In *eigenvalue interpolation*, we interpolate eigenvalues based on the
correspondence induced by sorting. Thus, the largest eigenvalues at
two sample points are assumed to correspond, and are interpolated with
standard scalar interpolation; likewise for the middle and smallest
eigenvalues^{5}. Because we learn
the association between eigenvalues and eigenvectors as part of the
eigensystem calculation, we note that the correspondence induced by
eigenvalue sorting also determines a correspondence between
eigenvectors (useful for lit-tensors), although such a correspondence
could also be determined by a more complicated analysis of
similarities in eigenvector orientation.
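A minimal sketch of this sorting-based correspondence (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def interp_eigenvalues(evals_a, evals_b, t):
    """Interpolate two sets of eigenvalues using the correspondence
    induced by sorting: largest pairs with largest, and so on."""
    a = np.sort(np.asarray(evals_a, dtype=float))[::-1]  # descending
    b = np.sort(np.asarray(evals_b, dtype=float))[::-1]
    return (1 - t) * a + t * b

# Unsorted eigenvalues at two neighboring sample points (synthetic).
lam_a = np.array([0.2, 1.5, 0.9])
lam_b = np.array([1.1, 0.4, 0.7])

# Midpoint: pairs (1.5, 1.1), (0.9, 0.7), (0.2, 0.4) are averaged.
print(interp_eigenvalues(lam_a, lam_b, 0.5))
```

The same sorted index pairing can be carried over to the associated eigenvectors, giving the eigenvector correspondence used for lit-tensors.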

The ``correct'' eigenvalue correspondence is undefined if we know *a priori* there is a point (called a *critical point*
[9]) between the two samples where two eigenvalues are
equal (one of the anisotropy measures $c_l$ or $c_p$ is zero).
This means the magnitudes of two eigenvalues crossed paths and changed
sorting order. The eigenvectors associated with the (double)
eigenvalue are also not uniquely defined at critical points.
Fortunately, none of the methods presented in this paper crucially
depend on non-unique eigenvectors or eigenvalues. For instance, the
lit-tensor shading model (Section 3.2.1) becomes
insensitive to eigenvector direction precisely when the direction is
ill-defined, and the barycentric anisotropy coordinates vary
continuously across critical points in continuous tensor data.
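The text does not reproduce the definitions of the anisotropy measures; a plausible sketch, assuming Westin-style barycentric measures $(c_l, c_p, c_s)$ normalized by the eigenvalue sum, illustrates the continuity claim:

```python
import numpy as np

def barycentric(evals):
    """Barycentric anisotropy coordinates from eigenvalues l1 >= l2 >= l3
    (Westin-style c_l, c_p, c_s -- an assumed formulation)."""
    l1, l2, l3 = np.sort(np.asarray(evals, dtype=float))[::-1]
    s = l1 + l2 + l3
    cl = (l1 - l2) / s        # linear anisotropy
    cp = 2.0 * (l2 - l3) / s  # planar anisotropy
    cs = 3.0 * l3 / s         # isotropy
    return cl, cp, cs

# Approaching a critical point where the two largest eigenvalues merge,
# c_l shrinks smoothly to zero rather than jumping.
for eps in (0.1, 0.01, 0.0):
    cl, cp, cs = barycentric([1.0 + eps, 1.0, 0.5])
    print(cl, cl + cp + cs)
```

With this normalization the three coordinates always sum to one, and $c_l$ vanishes exactly at the double eigenvalue, consistent with the continuity across critical points noted above.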

Thus, the sampling density assumption made in eigenvalue interpolation is that any important feature located near a critical point is also represented by neighboring sample points. More generally, the anisotropy properties that identify a feature need to be present at nearby sample points, instead of falling between them. We justify this assumption by noting that the features we are interested in visualizing tend to have a characteristic size larger than the sampling grid resolution of the tensor data.

^{5} - This explanation implies the use of an interpolation kernel that overlaps only two samples at a time. Using a larger kernel would require determining the simultaneous correspondence over a larger set of eigensystems.