Next: Evaluation Up: Strategies for Direct Volume Previous: Reaction-Diffusion Textures

Diffusion Tensor Interpolation

One important decision to make when rendering and manipulating datasets is the method of interpolation. In scalar volume rendering, the usual technique is to resample data values by trilinear interpolation, and then map them through the transfer function. One could also resample vectors by interpolating the vector data component-wise. It is less clear, however, how best to interpolate diffusion tensor data for direct volume rendering. This is because sometimes, as in the case of barycentric maps or lit-tensors, the underlying tensor matrix is not required for rendering, only some quantity derived from its eigensystem. In these cases, it would seem best to interpolate pre-computed eigenvalues or eigenvectors.

Other approaches are possible. It is extremely convenient and simple to interpolate the $ 3 \times 3$ diffusion tensor matrices component-wise; this is useful for cases where the eigensystem is not required (as with hue-balls and deflection opacity maps). On the other hand, it is worth noting that the tensor matrix itself is not actually measured directly by the MRI machine; it is computed from a set of measured ``diffusion-weighted'' images. Since we are accustomed to interpolating measured scalar CT or MRI data as part of volume rendering it, one could argue that interpolating the raw diffusion-weighted images is the most defensible choice; previous work has followed this path [7]. We believe the issue of tensor interpolation deserves more research, and we present our initial investigation into three different schemes for tensor interpolation.

Working at the lowest level of representation in the tensor data, we can interpolate the measured diffusion-weighted images which come off the MRI machine; we call this approach channel interpolation. Our tensor data was derived from seven measured images (``channels''), notated $A_i$, $i = 0 \ldots 6$. Each channel corresponds to a pair of gradient encoding directions used during the scan acquisition. In channel interpolation, values from these seven images are stored at each sample point in the volume. These values are interpolated to produce image values at intermediate points in the volume, from which the diffusion tensor is calculated [3]. First, knowing the direction-independent diffusion weighting $b$ (about 900 sec/mm$^2$) we calculate a set of log image values $I_i$, $i = 1 \ldots 6$:

$\displaystyle I_{i} = \frac{\ln(A_{i}) - \ln(A_0)}{b}$ (16)

From these, the diffusion tensor $ D$ is calculated:
$\displaystyle D_{xx} = -I_1 - I_2 + I_3 + I_4 - I_5 - I_6$ (17)
$\displaystyle D_{xy} = -I_5 + I_6$
$\displaystyle D_{xz} = -I_1 + I_2$
$\displaystyle D_{yy} = +I_1 + I_2 - I_3 - I_4 - I_5 - I_6$
$\displaystyle D_{yz} = -I_3 + I_4$
$\displaystyle D_{zz} = -I_1 - I_2 - I_3 - I_4 + I_5 + I_6$
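As a concrete sketch, the channel-to-tensor computation of Equations 16 and 17, and channel interpolation built on top of it, might look as follows. The function names and the NumPy formulation are our own; the $b$-value of 900 sec/mm$^2$ is taken from the text.

```python
import numpy as np

def tensor_from_channels(A, b=900.0):
    """Compute the diffusion tensor D from seven channel values A[0..6].

    Follows Equations 16 and 17: log image values I_1..I_6 are formed
    from the measured channels, then combined into the six unique
    components of the symmetric 3x3 tensor matrix.
    """
    A = np.asarray(A, dtype=float)
    I1, I2, I3, I4, I5, I6 = (np.log(A[1:7]) - np.log(A[0])) / b  # Eq. 16
    Dxx = -I1 - I2 + I3 + I4 - I5 - I6                            # Eq. 17
    Dxy = -I5 + I6
    Dxz = -I1 + I2
    Dyy = +I1 + I2 - I3 - I4 - I5 - I6
    Dyz = -I3 + I4
    Dzz = -I1 - I2 - I3 - I4 + I5 + I6
    return np.array([[Dxx, Dxy, Dxz],
                     [Dxy, Dyy, Dyz],
                     [Dxz, Dyz, Dzz]])

def channel_interp(Aa, Ab, t, b=900.0):
    """Channel interpolation between two sample points: interpolate the
    seven measured values first, then compute the tensor at the
    intermediate point."""
    A = (1.0 - t) * np.asarray(Aa, float) + t * np.asarray(Ab, float)
    return tensor_from_channels(A, b)
```

For a sanity check: if all six weighted channels are equal, the off-diagonal terms cancel and the three diagonal terms agree, i.e. the reconstructed tensor is isotropic, as the sign pattern of Equation 17 suggests.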

While this approach to interpolation has the benefit of working with the original measured data, there is considerable computational expense associated with evaluating the natural logarithms in the calculation of $I_i$. This expense can be avoided with matrix interpolation, wherein the tensor matrix $D$ is calculated once per sample point and then interpolated component-wise to produce a tensor at intermediate locations. Were the components of the tensor matrix simply linear combinations of the measured channels, matrix interpolation would be equivalent to channel interpolation. The two differ because of the non-linearity introduced by Equation 16.
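A minimal sketch of matrix interpolation, and of why it differs from channel interpolation (the names are ours; the scalar comparison at the end illustrates the non-linearity of Equation 16 in one dimension):

```python
import math
import numpy as np

def lerp_tensor(D1, D2, t):
    """Component-wise (matrix) interpolation of two 3x3 diffusion tensors.
    Interpolating entry-by-entry preserves symmetry, so the result
    still has a real eigensystem."""
    return (1.0 - t) * np.asarray(D1, float) + t * np.asarray(D2, float)

# Why the two schemes differ: tensor components are built from ln(A_i)
# (Equation 16), and ln is nonlinear, so interpolating A_i first
# (channel interpolation) and interpolating the log values (effectively
# what matrix interpolation does) give different midpoint results:
A1, A2, t = 400.0, 900.0, 0.5
via_channels = math.log((1 - t) * A1 + t * A2)          # ln of interpolated A
via_matrix = (1 - t) * math.log(A1) + t * math.log(A2)  # interpolated ln(A)
```

Because the logarithm is concave, the channel-interpolated value is always at least as large as the matrix-interpolated one, with equality only when the two channel values agree.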

As mentioned above, it is also conceivable that rather than interpolating channels or matrices, we interpolate the necessary information derived from them, such as eigenvalues or eigenvectors, a process we term eigensystem interpolation. In our work, this has immediate relevance for the barycentric map methods of assigning color and shading, and for the lit-tensor shading model. This style of interpolation has the tremendous benefit of doing all the eigensystem calculations once as a pre-process and storing the results at each dataset point.

Despite its obvious benefits, a subtle issue of correspondence complicates eigensystem interpolation. Given a set of three eigenvectors at one sample point $\mathbf{x}_1$, and another three eigenvectors at a second sample point $\mathbf{x}_2$, we do not immediately know the correspondence between the two sets of eigenvectors. Yet some correspondence between the eigensystems at the two sample points must be defined in order to perform eigenvector interpolation. If we knew the continuous tensor field from which the discrete dataset is sampled, we could possibly learn the ``correct'' correspondence: sampling the tensor field continuously along a path between $\mathbf{x}_1$ and $\mathbf{x}_2$, we could determine whether (for example) the principal eigenvector at $\mathbf{x}_1$ smoothly becomes the principal eigenvector at $\mathbf{x}_2$. An analogous correspondence problem complicates eigenvalue interpolation. Efficiently determining the ``correct'' correspondence from the measured data, and characterizing those situations where it may be undefined (such as passing through a region of total or nearly total isotropy), is a topic of our current research.
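One heuristic way to define a correspondence between the two eigenvector sets is to pair vectors by orientation similarity. The following sketch (the function name and the greedy strategy are our own illustration, not the paper's method) pairs eigenvectors by largest absolute dot product; the absolute value matters because an eigenvector has no intrinsic sign.

```python
import numpy as np

def match_eigenvectors(E1, E2):
    """Pair eigenvectors at two sample points by orientation similarity.

    E1, E2: 3x3 arrays whose columns are unit eigenvectors. Returns a
    permutation p such that column j of E2 corresponds to column p[j]
    of E1, chosen greedily by largest |dot product| (sign-insensitive).
    """
    sim = np.abs(E1.T @ E2)  # |cos| of angles between all column pairs
    p = -np.ones(3, dtype=int)
    for _ in range(3):
        # Assign the most similar remaining pair, then retire its
        # row (E1 column) and column (E2 column) from consideration.
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        p[j] = i
        sim[i, :] = -1.0
        sim[:, j] = -1.0
    return p
```

A greedy assignment is not guaranteed to be globally optimal; an optimal pairing could instead be computed by solving the small assignment problem exactly. Either way, the heuristic breaks down exactly where the correspondence itself is undefined, such as near isotropic regions where the eigenvector directions are not unique.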

We have evaluated one specific form of eigensystem interpolation that makes an assumption about the sampling density of the data. In eigenvalue interpolation, we interpolate eigenvalues based on the correspondence induced by sorting. Thus, the largest eigenvalues at two sample points are assumed to correspond, and are interpolated with standard scalar interpolation; likewise for the middle and smallest eigenvalues (Footnote 5). Because we learn the association between eigenvalues and eigenvectors as part of the eigensystem calculation, we note that the correspondence induced by eigenvalue sorting also determines a correspondence between eigenvectors (useful for lit-tensors), although such a correspondence could also be determined by a more complicated analysis of similarities in eigenvector orientation.
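Under the sorting-induced correspondence, eigenvalue interpolation might be sketched as follows (the function name and NumPy usage are ours; in practice the sorted eigenvalues would be precomputed and stored at every sample point rather than recomputed):

```python
import numpy as np

def eigenvalue_interp(D1, D2, t):
    """Eigenvalue interpolation between two symmetric tensors.

    Eigenvalues are sorted in descending order at each sample point;
    the sort order induces the correspondence (largest with largest,
    middle with middle, smallest with smallest), and corresponding
    eigenvalues are interpolated as ordinary scalars.
    """
    w1 = np.sort(np.linalg.eigvalsh(np.asarray(D1, float)))[::-1]
    w2 = np.sort(np.linalg.eigvalsh(np.asarray(D2, float)))[::-1]
    return (1.0 - t) * w1 + t * w2
```

Note the contrast with matrix interpolation: for $D_1 = \mathrm{diag}(3,2,1)$ and $D_2 = \mathrm{diag}(1,2,3)$, eigenvalue interpolation at the midpoint yields the eigenvalues $(3,2,1)$, preserving the anisotropy, whereas component-wise matrix interpolation yields $\mathrm{diag}(2,2,2)$, which is completely isotropic.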

The ``correct'' eigenvalue correspondence is undefined if we know a priori there is a point (called a critical point [9]) between the two samples where two eigenvalues are equal (one of the anisotropy measures $c_l$ or $c_p$ is zero). At such a point the magnitudes of two eigenvalues cross paths and change sorting order. The eigenvectors associated with the (double) eigenvalue are also not uniquely defined at critical points. Fortunately, none of the methods presented in this paper crucially depend on non-unique eigenvectors or eigenvalues. For instance, the lit-tensor shading model (Section 3.2.1) becomes insensitive to eigenvector direction precisely when the direction is ill-defined, and the barycentric anisotropy coordinates vary continuously across critical points in continuous tensor data.

Thus, the sampling density assumption made in eigenvalue interpolation is that any important feature located near a critical point is also represented by neighboring sample points. More generally, the anisotropy properties that identify a feature need to be present at nearby sample points, instead of falling between them. We justify this assumption by noting that the features we are interested in visualizing tend to have characteristic size larger than the sampling grid resolution of the tensor data.



Footnotes

Footnote 5 (... eigenvalues)
This explanation implies the use of an interpolation kernel that overlaps only two samples at a time. Using a larger kernel would require determining the simultaneous correspondence over a larger set of eigensystems.


Gordon Kindlmann 2001-09-15