
Hardware Considerations


While this paper is conceptually focused on setting and applying higher-dimensional transfer functions, the quality of interaction and exploration described would not be possible without modern graphics hardware. Our implementation relies heavily on an OpenGL extension known as pixel textures, or dependent textures. This extension can be used for both classification and shading. In this section, we describe our modifications to the classification portion of the traditional hardware volume rendering pipeline. We also describe a multi-pass/multi-texture method for adding interactive shading to the pipeline.

The volume rendering pipeline utilizes separate data and shading volumes. The data volume, or ``VGH'' in Figures 6 and 7, encodes data value (``V''), gradient magnitude (``G''), and second derivative (``H'', for Hessian) in the color components of a 3D texture, using eight bits for each of these three quantities. The quantized normal volume, or ``QN'' in Figure 6, encodes normal direction as a 16-bit unsigned short in two eight-bit color components of a 3D texture. The ``Normal'' volume in Figure 7 encodes the normal in a scaled and biased RGB texture, one normal component per color channel. Conceptually, both the data and shading/normal volumes are spatially coincident. A slice through one volume can be represented by the same texture coordinates in the other volume.
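For illustration, the VGH packing can be expressed as in the following minimal sketch, which assumes the data value, gradient magnitude, and second derivative have already been quantized to eight bits; the array names and dimensions are placeholders rather than part of our implementation:

// Sketch: pack value (V), gradient magnitude (G), and second derivative (H)
// into the RGB channels of a single 3D texture. Assumes each measure has
// already been quantized to 8 bits; names and dimensions are illustrative.
#include <GL/gl.h>
#include <vector>

GLuint createVGHTexture(const unsigned char* V,
                        const unsigned char* G,
                        const unsigned char* H,
                        int dimX, int dimY, int dimZ)
{
    std::vector<unsigned char> vgh(3 * dimX * dimY * dimZ);
    for (int i = 0; i < dimX * dimY * dimZ; ++i) {
        vgh[3*i + 0] = V[i];  // data value        -> red
        vgh[3*i + 1] = G[i];  // gradient magnitude -> green
        vgh[3*i + 2] = H[i];  // second derivative  -> blue (scaled and biased)
    }
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, dimX, dimY, dimZ, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, vgh.data());
    return tex;
}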


Pixel Texture


Pixel texture is a hardware extension that has proven useful in computer graphics and visualization [6,9,22,33]. Pixel texture and dependent texture are names for operations that use the color components of a fragment to generate texture coordinates, then replace the fragment with the corresponding entry from a texture. This operation essentially amounts to an arbitrary function evaluation via a lookup table, where the number of function parameters equals the number of color components in the fragment being modified. For example, to pixel texture an RGB fragment, each channel value is scaled to the range zero to one, and the resulting triple is used as a texture coordinate into a 3D texture. The color values at that location in the 3D texture replace the original RGB values. Either nearest-neighbor or linear interpolation can be used to generate the replacement values. The ability to scale and interpolate color channel values is a convenient feature of the hardware: it allows the number of elements along a dimension of the pixel texture to differ from the number of bit planes in the component that generated the texture coordinate. Without this flexibility, the size of a 3D pixel texture would be prohibitively large.
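To make the operation concrete, the following CPU-side sketch mimics what a nearest-neighbor 3D pixel texture lookup does to a single RGB fragment; the table layout and dimensions are illustrative, not a description of the hardware path:

// Sketch: software analogue of a nearest-neighbor 3D pixel texture lookup.
// Each color channel of the incoming fragment is scaled to [0,1] and used
// as a texture coordinate into a lookup table whose RGBA entry replaces
// the fragment. Table dimensions are illustrative and need not be 256.
struct RGBA { unsigned char r, g, b, a; };
struct RGB  { unsigned char r, g, b; };

RGBA pixelTextureLookup(const RGB& frag,
                        const RGBA* table,      // sizeX*sizeY*sizeZ entries
                        int sizeX, int sizeY, int sizeZ)
{
    // Scale each 8-bit channel to [0,1], then to the table resolution.
    // Note that the table resolution can differ from the 256 possible
    // channel values, which is what keeps 3D pixel textures tractable.
    int x = (frag.r * (sizeX - 1) + 127) / 255;
    int y = (frag.g * (sizeY - 1) + 127) / 255;
    int z = (frag.b * (sizeZ - 1) + 127) / 255;
    return table[(z * sizeY + y) * sizeX + x];
}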


Classification


Each voxel in our data volume contains three values, so we require a 3D pixel texture to specify the color and opacity of a sample. It would be prohibitively expensive to give each axis of the pixel texture full eight-bit resolution, i.e., 256 entries along each axis. We feel that variation in data value and gradient magnitude warrants full eight-bit resolution. Because we are primarily concerned with the zero crossings of the second derivative, we choose to limit the resolution of this axis. Since the second derivative is a signed quantity, we must maintain its sign (with a scale and bias) in order to properly interpolate it. We can choose a limited number of control points for this axis and represent them as ``sheets'' in the pixel texture. Specifically, we can exert linear control over the opacity of second derivative values with three control points: one each for negative, zero, and positive values. The opacity on the center sheet, representing zero second derivatives, is directly controlled by the classification widgets. The opacity on the outer sheets, representing positive and negative second derivatives, is scaled from the opacity on the central sheet according to the boundary emphasis slider. It is important to note that if a global boundary emphasis is desired, i.e., one applying to all classification widgets equally, this could be made a separable portion of the transfer function simply by modulating the output of a 2D transfer function with the per-sample boundary emphasis value.
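A minimal sketch of assembling the three-sheet classification pixel texture from a 2D transfer function and per-sign boundary emphasis factors might look as follows; the function and parameter names are hypothetical stand-ins for values produced by the classification widgets:

// Sketch: assemble a 256 x 256 x 3 classification pixel texture.
// The center sheet (z = 1) holds the 2D transfer function in data value
// and gradient magnitude; the outer sheets (z = 0 and z = 2) reuse its
// colors but scale opacity by the boundary emphasis factors for negative
// and positive second derivatives.
struct RGBA { unsigned char r, g, b, a; };

void buildClassificationTexture(const RGBA* tf2D,   // 256*256 entries
                                float emphasisNeg,  // in [0,1]
                                float emphasisPos,  // in [0,1]
                                RGBA* out)          // 256*256*3 entries
{
    const int N = 256 * 256;
    for (int i = 0; i < N; ++i) {
        RGBA c = tf2D[i];
        out[0*N + i] = c;                                     // negative H sheet
        out[0*N + i].a = (unsigned char)(c.a * emphasisNeg);
        out[1*N + i] = c;                                     // zero H sheet
        out[2*N + i] = c;                                     // positive H sheet
        out[2*N + i].a = (unsigned char)(c.a * emphasisPos);
    }
}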


Shading


Shading is a fundamental component of volume rendering because it is a natural and efficient way to express information about the shape of structures in the volume. However, much previous work with texture-memory based volume rendering lacks shading. We include a description of our shading method here not because it is especially novel, but because it dramatically increases the quality of our renderings with a negligible increase in rendering cost.

Since there is no efficient way to interpolate normals in hardware that avoids redundancy and truncation in the encoding, normals are encoded using a 16-bit quantization scheme, and we use nearest-neighbor interpolation for the pixel texture lookup. Quantized normals are lit using a 2D pixel texture, since there is essentially no difference between a 16-bit 1D nearest-neighbor pixel texture and an eight-bit-per-axis 2D nearest-neighbor pixel texture: the first eight bits of the quantized normal are encoded in the red channel and the second eight bits in the green channel. We currently generate the shading pixel texture on a per-view basis in software; the performance cost of this operation is minimal. It could, however, easily be performed in hardware as well: each quantized normal could be represented as a point with its corresponding normal, rendered to the frame buffer using hardware lighting, and then copied from the frame buffer to the pixel texture. Some hardware implementations, however, are not flexible enough to support this operation.
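As an illustration, the per-view shading table could be rebuilt in software roughly as follows. Here decodeQuantizedNormal() is a hypothetical placeholder for the inverse of whatever 16-bit quantization scheme is used, and the lighting model shown is simple Lambertian rather than our exact shading:

// Sketch: rebuild the 256 x 256 shading pixel texture for the current view.
// Each (red, green) pair in the quantized-normal volume is treated as the
// low/high byte of a 16-bit index; the table stores the lit intensity for
// that normal. decodeQuantizedNormal() is a hypothetical decoder matching
// the quantization scheme; diffuse lighting only, for brevity.
struct Vec3 { float x, y, z; };

Vec3 decodeQuantizedNormal(unsigned short index);   // hypothetical decoder

void buildShadingTexture(const Vec3& lightDir,      // unit length, view space
                         unsigned char* lum)        // 65536 luminance entries
{
    for (int i = 0; i < 65536; ++i) {
        Vec3 n = decodeQuantizedNormal((unsigned short)i);
        float diffuse = n.x*lightDir.x + n.y*lightDir.y + n.z*lightDir.z;
        if (diffuse < 0.0f) diffuse = 0.0f;          // clamp back-facing normals
        lum[i] = (unsigned char)(diffuse * 255.0f);
    }
}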

Figure 6: Octane2 volume rendering pipeline. Updating the shade volume (right) happens after the volume has been rotated. Once updated, the volume is re-rendered.

Figure 7: GeForce3 volume rendering pipeline. Four-way multi-texture is used; the textures are VGH, the VG dependent texture, the H dependent texture, and the normal texture (for shading). The central box indicates the register combiner stage. The Blend VG&H Color stage is not usually executed, since we rarely vary color along the second derivative axis. The Multiply VG&H Alpha stage, however, is required, since we must compose our 3D transfer function separably as a 2D$\times$1D transfer function.


Hardware Implementation


We have implemented a volume renderer using multi-dimensional transfer functions on the SGI Octane2 with the V series graphics cards, and on the nVidia GeForce3 series graphics adapter. The V series platform supports 3D pixel texture, albeit only during a glDrawPixels() or glCopyPixels() operation. Since pixel texture does not occur directly on a per-fragment basis during rasterization, we must first render the slice to a buffer and then pixel texture it using a glCopyPixels() operation. Our method requires a scratch, or auxiliary, buffer, since each slice must be rendered individually and then composited. If shading is enabled, a matching slice from a shading volume is rendered and modulated (multiplied) with the current slice. The slice is then copied from the scratch buffer and blended with previously rendered slices in the frame buffer. A key observation about this volume rendering process is that while the transfer function is being manipulated, the viewpoint is static, and vice versa. This means that we only need to use the pixel texture operation on the portion of the volume rendering that is currently changing. When the user is manipulating the transfer function, the raw data values (VGH) are used for the volume texture and a pre-computed RGBA shade volume is used.
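A minimal sketch of the per-slice loop follows. Details such as buffer selection, raster position setup, and the call that enables the pixel-texture path vary with the driver, so drawSlice(), drawShadeSlice(), enablePixelTexture(), and disablePixelTexture() are hypothetical stand-ins:

// Sketch: per-slice rendering with a scratch (auxiliary) buffer.
// Each slice is rasterized into the scratch buffer, classified via the
// pixel texture during glCopyPixels(), optionally modulated by the matching
// shade slice, and then blended into the frame buffer. Assumes the raster
// position is already set to the lower-left corner of the slice region.
#include <GL/gl.h>

void drawSlice(int s);            // rasterize one VGH slice (placeholder)
void drawShadeSlice(int s);       // rasterize matching shade slice (placeholder)
void enablePixelTexture();        // enable classification lookup (placeholder)
void disablePixelTexture();

void renderVolume(int numSlices, int width, int height, bool shadingEnabled)
{
    for (int s = 0; s < numSlices; ++s) {
        glDrawBuffer(GL_AUX0);                  // scratch buffer
        glDisable(GL_BLEND);
        drawSlice(s);                           // raw VGH slice

        enablePixelTexture();                   // classify during the copy
        glReadBuffer(GL_AUX0);
        glCopyPixels(0, 0, width, height, GL_COLOR);
        disablePixelTexture();

        if (shadingEnabled) {
            glEnable(GL_BLEND);
            glBlendFunc(GL_ZERO, GL_SRC_COLOR); // modulate (multiply) with shade
            drawShadeSlice(s);
        }

        glDrawBuffer(GL_BACK);                  // composite into frame buffer
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glReadBuffer(GL_AUX0);
        glCopyPixels(0, 0, width, height, GL_COLOR);
    }
}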

The left side of Figure 6 illustrates the rendering process. The slices from the VGH data volume are first rendered (1) and then pixel textured (2). The ``Shade'' slice is rendered and modulated with the classified slice (3), then blended into the frame buffer (4). When the volume is rotated, lighting must be updated (shown on the right side of Figure 6). For interactive efficiency, we only update the shade volume once a rotation has been completed. A new quantized normal pixel texture (for shading) is generated, and each slice of the quantized normal volume is rendered orthographically into the scratch buffer (1) and then pixel textured (2). The slice is then copied from the scratch buffer to the corresponding slice in the shade volume (3). The volume is then re-rendered with the updated shade volume. Updating the shade volume in hardware requires that the quantized normal slices always be smaller than the scratch buffer's dimensions.
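The copy from the scratch buffer into the shade volume (step 3) can be expressed with glCopyTexSubImage3D when the shade volume is resident as a 3D texture; the following sketch is illustrative and assumes the lit slice sits at the lower-left corner of the scratch buffer:

// Sketch: update one slice of the shade volume from the scratch buffer.
// Assumes the lit, pixel-textured quantized-normal slice has just been
// rendered into the lower-left corner of the scratch buffer, and that the
// shade volume is bound as a 3D texture (requires OpenGL 1.2).
void updateShadeSlice(GLuint shadeTex, int slice, int sliceW, int sliceH)
{
    glReadBuffer(GL_AUX0);                       // scratch buffer
    glBindTexture(GL_TEXTURE_3D, shadeTex);
    glCopyTexSubImage3D(GL_TEXTURE_3D, 0,        // target, mip level
                        0, 0, slice,             // x, y, z offset in the texture
                        0, 0,                    // lower-left of scratch buffer
                        sliceW, sliceH);         // must fit within the buffer
}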

The GeForce3 series platform supports dependent texture reads on a per-fragment basis as well as four-way multi-texture (see Figure 7). This eliminates the need for a scratch buffer, which significantly improves rendering performance by avoiding several expensive copy operations. Unfortunately, this card only supports 2D dependent texture reads. This constrains the 3D transfer function to be a separable product of a 2D transfer function (in data value and gradient magnitude) and a 1D transfer function (in second derivative), but it also allows us to take full advantage of the eight-bit resolution of the dependent texture along the second derivative axis. The second derivative axis is implemented with the nVidia register combiner extension. Shading can either be computed as described above, or using the register combiners.
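In effect, the combiner stage realizes the following per-fragment composition, shown here as a CPU-side sketch rather than combiner setup code; the lookup tables stand in for the VG and H dependent textures:

// Sketch: the separable 2D x 1D transfer function evaluated in software.
// tfVG is a 256 x 256 RGBA table indexed by (value, gradient magnitude);
// tfH is a 256-entry alpha table indexed by second derivative. The final
// alpha is the product of the two lookups, mirroring the "Multiply VG&H
// Alpha" stage; color is usually taken from the VG table alone.
struct RGBA { unsigned char r, g, b, a; };

RGBA classifySeparable(unsigned char v, unsigned char g, unsigned char h,
                       const RGBA* tfVG,            // 256*256 entries
                       const unsigned char* tfH)    // 256 alpha entries
{
    RGBA out = tfVG[g * 256 + v];
    out.a = (unsigned char)((out.a * tfH[h]) / 255);
    return out;
}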


