Color Model Based on the Human Vision System

This project explores color/intensity control for rendering. The general idea is to design a color model based on the human vision system. With the new model, controls can then be designed that simplify image color manipulation, such as recoloring an image using warm-cool tones. The new color model, known as the DKL model, was created by one of the graduate students. My job consisted of 1) designing a set of filters to be used for image color manipulation, and 2) using a global illumination renderer to render our 3D scenes; the generated images are then used as input to the filters. Each section below describes the different stages of the project in more detail.
New Color Model

Why a new model? What's wrong with the RGB model? The RGB model has been adequate for most graphics purposes so far. However, it does not accurately represent how the human eye works. It would be interesting to create a new color model that better approximates how our eyes perceive color. Given an RGB image, we can convert it to the new model's values, then apply a number of filtering and coloring techniques to create more interesting results. Some of those techniques include:
• Increase/decrease luminance contrast across the entire image
• Create a luminance transfer curve that maximizes/minimizes contrast in a region
• Use color contrast to enhance areas where there is no luminance contrast
• Increase contrast in regions with high luminance/contrast, decrease it in others
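A minimal sketch of the first technique, scaling luminance contrast about the image's mean. The function name and the clipping to [0, 1] are illustrative choices here, not the project's actual controls:

```python
import numpy as np

def scale_luminance_contrast(lum, gain):
    """Scale luminance contrast about the mean luminance.

    gain > 1 increases contrast, gain < 1 decreases it.
    Output is clipped back into the [0, 1] luminance range.
    """
    mean = lum.mean()
    return np.clip(mean + gain * (lum - mean), 0.0, 1.0)

# Toy luminance image in [0, 1]
lum = np.array([[0.2, 0.4],
                [0.6, 0.8]])
boosted = scale_luminance_contrast(lum, 1.5)  # darks darker, brights brighter
```

Scaling about the mean (rather than about zero) keeps the overall brightness of the image unchanged while stretching the differences between pixels.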
Filters

In image processing, filters are essentially n x n matrices that process pixel data. For example, a Gaussian filter is a smoothing filter that takes a weighted average of the neighboring pixels and replaces the current pixel with that average. The result is a blurred image, as shown below.
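A sketch of the idea, assuming a normalized Gaussian kernel and a naive sliding-window convolution with replicated borders (the padding choice is an assumption, not stated in the project):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """Build a size x size Gaussian smoothing kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2-D filtering: each output pixel is the kernel-weighted
    average of its neighborhood. Borders are handled by replication."""
    kh, kw = kernel.shape
    pad = kh // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

Because the kernel sums to 1, a constant image passes through unchanged; on a real image the weighted averaging smears out sharp intensity changes, producing the blur.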

(7x7 Gaussian matrix, standard deviation = 2)

Contrast (Edge) Filters
For this project, we are interested in designing a set of edge-detection filters. We define an edge to be the area of the image in which there exists a sharp contrast in intensity.

A number of edge filters already exist, so why aren't we using them? In a sense we are. Our edge filters are based on the Prewitt filter model and are either 3x3 or 5x5 matrices. However, what's different is the shape of our filters. A Prewitt edge filter looks like this:

 -1  0  1
 -1  0  1
 -1  0  1

To visualize this filter, we can re-map its values to fall in the range (0, 1) and view it as an image:
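The remapping above is a simple linear rescaling; a sketch:

```python
import numpy as np

# Prewitt filter: responds to horizontal intensity changes (vertical edges)
prewitt = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]], dtype=float)

def remap01(f):
    """Linearly re-map filter values into (0, 1) so the filter can be
    displayed as a grayscale image (-1 -> black, 0 -> gray, 1 -> white)."""
    return (f - f.min()) / (f.max() - f.min())

viewable = remap01(prewitt)
```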
We divided the filters into two categories based on their shape:
• Edge contrast filters
• Center-surround filters
Edge Contrast Filters

Given an image containing intensity values, the edge filters are shaped like "blobs," with positive values on one side and negative values on the other. The filters were originally created at 20x20 and later downsampled to either 3x3 or 5x5. Here are a few examples of our 20x20 filters:
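The project does not say how the 20x20 filters were downsampled; one plausible approach is block averaging, sketched here with a simple two-blob filter standing in for the real ones:

```python
import numpy as np

def downsample(filt, out_size):
    """Down-sample a square filter by averaging non-overlapping blocks.
    The filter side must be divisible by out_size (e.g. 20 -> 5)."""
    n = filt.shape[0]
    b = n // out_size
    return filt.reshape(out_size, b, out_size, b).mean(axis=(1, 3))

# Illustrative 20x20 "blob" edge filter: negative left half, positive right
big = np.hstack([-np.ones((20, 10)), np.ones((20, 10))])
small = downsample(big, 5)  # 5x5 filter preserving the left/right sign split
```

Averaging (rather than simply picking every fourth value) keeps the positive/negative balance of the blob shape intact at the smaller size.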

 Horizontal Filters: Vertical Filters:
Center Surround Filters

These filters are similar to the edge contrast filters, again taking an intensity image as input. They were designed to reflect the concept of "double-opponent" cells in our vision system. The cover of The Journal of Neuroscience (vol. 21, no. 8) shows the spatial mapping of receptive fields of cortical color cells.
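Center-surround receptive fields are commonly modeled as a difference of Gaussians (an excitatory center minus a broader inhibitory surround); the sigmas below are illustrative, not the project's actual values:

```python
import numpy as np

def center_surround(size=20, sigma_c=2.0, sigma_s=4.0):
    """Difference-of-Gaussians kernel: a narrow excitatory center Gaussian
    minus a wider inhibitory surround Gaussian, each area-normalized."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

k = center_surround()  # positive near the middle, negative ring around it
```

Because the two Gaussians have nearly equal total weight, the filter responds to local contrast (a spot differing from its surround) and is close to zero on uniform regions.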

Radiance

Radiance is a collection of lighting and rendering programs used for physical lighting simulations. For the purposes of this project, Radiance is the rendering tool for our 3D scenes. Given a set of geometry and material files, and a specification of the lighting conditions, Radiance renders the scene and produces two outputs:
• a color image
• a text file containing spectral radiance values
After generating the radiance values, the data is converted to a new set of values using the color model. Once the coloring and filtering techniques have been applied, the modified data is imported back into Radiance for rendering. The new render is a color image displaying the effects of the coloring and filtering methods applied earlier.
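The round trip above (RGB values into the color model, edits, then back out) can be sketched with an invertible linear transform. The matrix below is a toy stand-in with one luminance-like axis and two opponent axes; it is not the actual DKL conversion, whose cone-response weights are not reproduced here:

```python
import numpy as np

# Hypothetical opponent-color transform (NOT the real DKL matrix):
# row 0 ~ luminance, row 1 ~ red-green, row 2 ~ yellow-blue
M = np.array([[0.30,  0.59,  0.11],
              [1.00, -1.00,  0.00],
              [0.50,  0.50, -1.00]])
M_inv = np.linalg.inv(M)

def rgb_to_opponent(rgb):
    """Map RGB rows into the opponent space."""
    return rgb @ M.T

def opponent_to_rgb(opp):
    """Invert the transform to recover RGB for re-rendering."""
    return opp @ M_inv.T

# Round trip: filtering/recoloring would happen in the opponent space
pix = np.array([[0.8, 0.2, 0.1]])
restored = opponent_to_rgb(rgb_to_opponent(pix))
```

The key property the pipeline relies on is invertibility: whatever edits are made in the model's space, the data can always be mapped back to RGB for Radiance to render.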