We'll start with a brief video showing our system in action. All of our videos were captured directly from the video output of the graphics card.
teaser0.mov, 9.91MB: This is the frozen CT of the Visible Male. We turn the dataset around, rendering with a "default" transfer function. We can improve on that by adding a clipping plane and then querying the data values and gradient magnitudes at several locations. Based on that feedback, we add a classification widget for the skin and change its color, then add a second classification widget for another feature, changing its color with one mouse motion and its opacity with another. When we remove the clipping plane, turn the dataset to the front, and zoom in, we see that we have produced a visualization of the frontal and maxillary sinuses in the context of the skin.
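The idea of a classification widget can be sketched as a small region of influence in the 2D domain of data value and gradient magnitude, each contributing a color and opacity to the transfer-function lookup. The sketch below is a minimal illustration under assumed names and parameters (Gaussian falloff, additive blending); it is not the system's actual widget implementation.

```python
import numpy as np

def make_widget(center_v, center_g, width_v, width_g, rgba):
    """One hypothetical classification widget: a Gaussian blob in
    (data value, gradient magnitude) space with an attached RGBA."""
    return dict(cv=center_v, cg=center_g, wv=width_v, wg=width_g,
                rgba=np.asarray(rgba, dtype=float))

def evaluate_tf(widgets, value, grad_mag):
    """Look up RGBA for one sample by summing each widget's
    Gaussian-weighted contribution, then clamping to [0, 1]."""
    out = np.zeros(4)
    for w in widgets:
        weight = np.exp(-((value - w["cv"]) / w["wv"]) ** 2
                        - ((grad_mag - w["cg"]) / w["wg"]) ** 2)
        out += weight * w["rgba"]
    return np.clip(out, 0.0, 1.0)

# Two illustrative widgets: one placed on the skin boundary
# (moderate value, high gradient magnitude), one on a second feature.
skin   = make_widget(0.30, 0.60, 0.05, 0.20, (1.0, 0.8, 0.7, 0.4))
second = make_widget(0.10, 0.50, 0.04, 0.25, (0.9, 0.2, 0.2, 0.8))
widgets = [skin, second]

# Querying at the skin widget's center returns (essentially) its RGBA.
rgba = evaluate_tf(widgets, 0.30, 0.60)
```

Dragging a widget's color or opacity control in the interface corresponds here to editing a single entry of its `rgba`, which is why one mouse motion can retune a whole material class at once.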