"God created the whole numbers;
all the rest is man's work."
- Kronecker
This page has links to my solutions to homeworks for CS684-Winter'98,
Pete's Wicked Render-Rama
, taught by the brilliant, bubbly (yet buff) modern renaissance dude,
Pete Shirley.
There's no guarantee that anything you see in my answers has any bearing
on reality or semblance of truth. Peruse at your own risk, believe at your
own peril.
A Peck Of Pickled Peppers
Since Pete more often than not says useful and interesting things, I
occasionally take notes in class. In case you take the couch potato
attitude in class but now wish you had notes, here are mine:
Assignment 8: Indirect Lighting
This, the final assignment, was to produce an indirect-lighting
solution to the Cornell box. The idea is to produce a purely computational
image of a scene that matches a
photograph of a physical model (I hope that's
what this image I stole from the Cornell page is...)
as closely as possible. It's really amazing just how much indirect lighting
contributes to the appearance of a scene.
New and exciting things this week include:
- Lighting via indirect surface interaction. The indirect
component
of the energy incident on a surface is computed as a Monte Carlo
integral over the hemisphere above the surface. Random scattering
vectors are chosen in hemispherical-polar coordinates (theta down
from surface normal). Values for phi are chosen uniformly and values
for theta follow a cos-distribution that captures the geometry of
diffuse emission. In addition to the pixel sampling and light
sampling distributions, the indirect sampling distribution is a
third Fun And Exciting Knob to twiddle when looking for good noise
behavior.
- Working spectral power distributions. Currently implemented
by Pete's
"multiply the sample values and don't think about it" method,
complete with linear interpolation between samples. All images here
carry 9 samples through the computation on the interval [400nm,700nm].
I need to look into energy conservation and come up with a spectrum
representation that doesn't give me the screaming heebie-jeebies.
- Chromaticity-preserving clipping to RGB-space. The idea is
that you
figure out what the biggest value in the (r,g,b) triple is, and if
it's too big you scale the (x,y,z) triple by a constant chosen so
that the maximum (r,g,b) is 1.0. This way the light ends up with the
right chromaticity, and isn't just white because it's real bright.
I had hoped that it would look like a nice happy yellowish
incandescent bulb, but it doesn't. It's pink. This
bothered me a lot until I looked at the Cornell Box photo (above)
and noticed that the scene there is lit with pink light, too.
Like, \/\/hat-everrrrah.
- Whitepoints from Planck's Law. I've got code now to compute
whitepoint chromaticities given a color temperature by integrating
the Planck's Law blackbody radiation curve with the CIE
tristimulus-response curves. It's reassuring that the numbers
from Sony for the 9300K whitepoint (w=<0.283,0.298>) agree with the
blackbody results. This is how I'm able to produce an image for
a 6500K Trinitron (below), which is probably close to what you're
looking at.
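The cos-distributed scattering directions described above can be sketched as follows. This is a minimal illustration in local (surface-normal-aligned) coordinates, not the actual assignment code; the inversion cos(theta) = sqrt(1 - u) is the standard way to realize the cosine density for diffuse emission.

```python
import math

def cosine_sample_hemisphere(u1, u2):
    """Map two uniform [0,1) numbers to a unit direction on the hemisphere
    about +z, distributed proportionally to cos(theta).  phi is uniform in
    [0, 2*pi); cos(theta) = sqrt(1 - u2) inverts the cosine-weighted CDF."""
    phi = 2.0 * math.pi * u1
    cos_t = math.sqrt(1.0 - u2)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
```

Feeding jittered or multijittered (u1, u2) pairs into this mapping gives the third Fun And Exciting Knob mentioned above.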
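The chromaticity-preserving clip described above amounts to a single uniform scale. A minimal sketch, operating on the converted (r,g,b) triple, which is equivalent to scaling the (x,y,z) triple first, since the conversion is linear:

```python
def clip_preserving_chromaticity(rgb):
    """If any channel exceeds 1.0, scale the whole triple so the largest
    channel lands exactly at 1.0.  A uniform scale preserves the channel
    ratios, hence the chromaticity: a too-bright pink light stays pink
    instead of clipping to white."""
    peak = max(rgb)
    if peak <= 1.0:
        return tuple(rgb)
    return tuple(c / peak for c in rgb)
```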
These parameters apply to all of the following images:
Scene: planarized Cornell Box
Pixel Sampling: 16x16 multijittered
Light Sampling: 2x2 multijittered (why break the trend?)
Indirect Lighting Integral Sampling: 1x1 jittered (single
random sample)
If you're counting, at single indirection this is sort of comparable to 4096
samples/pixel with single light and single indirect sampling; however, it's
significantly faster to oversample the indirect lighting integral and sample
pixels less, since that's what really varies the most, and it avoids a lot
of costly intersection computations.
Single Indirection: Indirection
paths in this image are limited to one hop, which yields surprisingly good
results. The color conversion matrix is the one Milan Ikits pulled out of
his rear, which seems to compensate nicely for the fact that the cbox light
source's spectrum is "just the wrong shade of pink". Milan's Magic
Whitepoint is (w=<0.380,0.360>) and his phosphor chromaticities are
(r=<0.625,0.340>, g=<0.280,0.595>, b=<0.155,0.070>). Gamma adjusted to 1.7.
This took 6 hours to compute on a 300MHz Pentium-II. The shadow behind the
big block has somethin' weird goin' on...
More on the way unless surreal dies...
Assignment 7: Direct Lighting
This assignment was to produce a direct-lighting solution to the Cornell
box. The "magic iris value" used to scale the final energy distribution
into legal RGB values was determined by trial-and-error, and chosen to
produce a reasonably bright image. Some clipping of the highest-intensity
values may have resulted in the walls' hot-spots.
In all cases, the scene rendered is the
planarized Cornell Box reportedly
due to Dave Weinstein and Steve Parker. Good work, guys.
RGB Color Model: Pixel Sampling is 4x4
Multijittered, Light Sampling is 4x4 Multijittered (256 rays/pixel total).
Note that the colors of the walls and light are pure full-on red, green
and white, and bear only superficial resemblance (not even that, really)
to the real Cornell Box spectra.
Spectral Energy Model: Same
sampling as above, but this time a discrete spectral radiance distribution
was calculated. Even with the extremely low spectral sampling (samples at
400,500,600,700nm), the colors are recognizable. The spectrum was
converted to CIE tristimulus-response triples, then to RGB triples via
a conversion matrix of dubious origin. I'll do a better conversion in the
indirect image.
Assignment 6: Metal and Glass
The purpose of this assignment is to model the surface and volumetric
material properties of metal and glass. Glass is modelled by assuming
that reflectance is a function of incidence angle. In particular,
reflectivity goes to 1 when the incidence angle is 90 degrees. The effects
of this reflectance shift propagate into lower incidence angles with
increasing index of refraction. The reflection coefficient is computed
with a function that produces attractive results:
reflectivity(theta) = R + (1 - R) * (1 - cos(theta))^5
R = ((1 - nGlass) / (1 + nGlass))^2
The portion of the incident ray not reflected is transmitted by refraction.
As the ray propagates through the glass, it is attenuated by an exponential
spectral attenuation function which can produce green "coke bottle" or
grey/brown smoked glass effects.
Metal is modelled by spectral attenuation at the point of reflection.
Attenuation functions characteristic of particular metals can be found in
physical databooks.
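The two formulas above are Schlick's approximation to Fresnel reflectance. A direct transcription (function and parameter names are mine):

```python
def schlick_reflectivity(cos_theta, n_glass):
    """Schlick approximation: reflectance at normal incidence derived from
    the index of refraction, rising to 1.0 at grazing incidence
    (cos_theta -> 0, i.e. incidence angle -> 90 degrees)."""
    r0 = ((1.0 - n_glass) / (1.0 + n_glass)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

For nGlass = 1.5 this gives about 4% reflectance at normal incidence, which is in line with common glass.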
Here are some sample images to demonstrate the functionality:
Pyramid of Smoked Glass Spheres: Index
of refraction is 1.08,
attenuation is 25% per linear unit. The fourteen identical spheres are
tightly packed in a pyramid resting on a checkerboard. The camera has
an extremely wide field of view (90 degrees), so the top sphere looks
large due to being "right in your face" and undergoing the usual implicit
perspective transformation.
Copper, Gold and Aluminum Spheres: Three
spheres, one copper, one gold, and one aluminum, demonstrating metallic
reflectance. An RGB color model was used, so don't expect the metal
colors to be perfect. Note that the sky is black, which is
what's reflected off the top of the metal spheres.
Sheet of Glass: Rectangular piece of
glass with a slight green-producing absorption. Demonstrates the familiar
green-glass-edge effect of long pathlength due to repeated total internal
reflection. Real glass introduces nonuniform refraction to rays that
spend a long time in the medium (the ones that color the edge). That's
why you can't normally "see things" through the edge of a sheet of glass.
One can see the grid through the edge here, since reflection and refraction
are perfect. Maximum recursion depth (15) was exceeded on the lower part
of the lefthand edge.
One of these days I'll make a real scene and ditch that checkerboard. All
these were rendered on a 133MHz Pentium laptop in less time than it takes to
watch Babylon 5.
Assignment 5: Make It Go Fast
This assignment is an implementation of an efficiency structure for
intersection testing. In this case, spatial partitioning (uniform gridding)
was implemented. Rendering times are given here for
scenes of many spheres. The gridded times can be compared against their
related linear search times to get some feel for the efficiency that
gridding affords. The number of grid cells for each scene was chosen to
optimize the render time. Runtimes are wall times via gettimeofday().
Sphere counts are linked to images in the table. Images were rendered at
512x512 with one sample per pixel.
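The heart of uniform gridding is mapping a point to its cell. A minimal sketch (bounds and resolution are illustrative; the actual ray traversal steps cell-to-cell with a 3D-DDA walk rather than hashing individual points):

```python
def cell_index(p, lo, hi, n):
    """Map a point p inside the axis-aligned box [lo, hi] to integer grid
    coordinates in an n x n x n uniform grid.  Points on the upper
    boundary are clamped into the last cell."""
    idx = []
    for axis in range(3):
        t = (p[axis] - lo[axis]) / (hi[axis] - lo[axis])
        idx.append(min(int(t * n), n - 1))
    return tuple(idx)
```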
It should be noted that the viewpoint for these images was (accidentally)
EyePos=<0,0,-6.2> EyeDir=<0,0,1> EyeUp=<0,1,0> Angle=Pi/8, not the
recommended
EyePos=<0,0,2> EyeDir=<0,0,-1> EyeUp=<0,1,0> Angle=Pi/2. This change
of viewpoint makes my render times about 20% better than they should be. I'll
rerun the timings with the right eye as soon as the gridding code works
again.
| Spheres | Cells/axis | Rays/ms (Grid) | Render Time (Grid, s) | Rays/ms (Linear) |
| 1       | 3          | 250.5          | 1.05                  | 356.0            |
| 10      | 4          | 228.2          | 1.16                  | 168.5            |
| 100     | 12         | 172.3          | 1.52                  | 27.2             |
| 1000    | 20         | 121.8          | 2.15                  | 2.4              |
| 10000   | 42         | 74.5           | 3.52                  | -                |
| 100000  | 63         | 25.3           | 10.36                 | -                |
| 1000000 | 156        | 8.6            | 30.48                 | -                |
Assignment 4: Thin-Lens Camera
This assignment is an implementation of a thin-lens focussing model for
a ray tracer. The following images were generated by a camera situated
at <0,0,1> looking in direction <5,1,-1> with up vector <0,0,1> in a
scene consisting only of an infinite checkered plane. The checker boxes
on the plane are of unit size. The film is positioned to achieve a
field-of-view of 90 degrees. Three sets of three images are given, each
set demonstrating a different depth-of-field (controlled by aperture size)
and each image within each set demonstrating a different target focus
distance (3 units, 8 units and infinity).
You can count squares on the floor to verify the focus distance meets
expectations. Focus-at-infinity can be verified by noting that the horizon
line is in sharp focus.
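The thin-lens model can be sketched like this: the point the pinhole ray would hit at the focus distance stays fixed, and the ray origin is jittered across the aperture, so only geometry at the focus distance stays sharp. This is an illustrative sketch assuming the lens lies in the camera-space xy-plane; the names are hypothetical, and a square aperture stands in for the usual disc:

```python
import random

def thin_lens_ray(eye, pixel_dir, focus_dist, aperture, rng=random):
    """Generate a thin-lens ray: keep the focal point fixed, jitter the
    origin across a square aperture centered on the eye."""
    # The point that must remain in focus.
    focal_point = tuple(e + focus_dist * d for e, d in zip(eye, pixel_dir))
    # Jitter the origin on the lens plane (camera-space xy, by assumption).
    dx = (rng.random() - 0.5) * aperture
    dy = (rng.random() - 0.5) * aperture
    origin = (eye[0] + dx, eye[1] + dy, eye[2])
    direction = tuple(f - o for f, o in zip(focal_point, origin))
    return origin, direction
```

With aperture = 0 this degenerates to the ordinary pinhole camera; larger apertures blur everything off the focus distance, which is exactly the depth-of-field knob in the images below.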
Extremely Short Depth-of-Field (aperture=5 units) focussed at:
3 units,
8 units,
infinity
Medium Depth-of-Field (aperture=1 unit) focussed at:
3 units,
8 units,
infinity
Largish Depth-of-Field (aperture=0.3 units) focussed at:
3 units,
8 units,
infinity
Assignment 3: Image Sampling
This assignment is an extension to my spiffy little ray tracer. I added
uniform (over)sampling, jittered sampling and multijittered sampling to the
little beastie. These images are filtered with a 1-pixel box filter, which
allows the program to run in constant space regardless of the number
of samples per pixel, since arithmetic averages can be extended
incrementally. Homie don't do core dumps.
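The constant-space claim rests on the fact that an arithmetic mean can be updated incrementally; a minimal sketch:

```python
def running_mean(samples):
    """Average a stream of samples in O(1) space using the incremental
    update mean += (x - mean) / n, so no per-sample storage is needed
    no matter how many samples per pixel are taken."""
    mean, n = 0.0, 0
    for x in samples:
        n += 1
        mean += (x - mean) / n
    return mean
```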
Checkerboard - a checkerboard of unit square-size with camera one unit
above its surface looking out in direction of <1,1,0>.
L(x,y) = 1/2 + 1/2*(1-y/ymax)^3*sin(2*Pi*(x/xmax)*e^(5*(x/xmax))) - a wavy
test pattern of increasing amplitude in -y and frequency in +x.
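That test pattern, transcribed as code (xmax and ymax are the image dimensions):

```python
import math

def wavy(x, y, xmax, ymax):
    """Wavy test pattern: amplitude grows toward -y (small y), frequency
    grows toward +x via the e^(5*u) term inside the sine."""
    u, v = x / xmax, y / ymax
    return 0.5 + 0.5 * (1.0 - v) ** 3 * math.sin(
        2.0 * math.pi * u * math.exp(5.0 * u))
```

The value always stays in [0, 1], so it maps directly onto pixel luminance.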
An adaptive sampling strategy is in the works, but not yet available.
Assignment 2: Color Theory
This assignment includes a derivation of a nicely regular general form
for the conversion matrix from XYZ-space to RGB screen-space. It's
in terms only of phosphor chromaticities and whitepoint chromaticity.
I think my discussion is easier to follow than Foley & van Dam's, but maybe
that's just 'cause I wrote it.
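The general form the writeup derives can be sketched as the standard construction: build the RGB-to-XYZ matrix whose columns are the phosphor XYZ vectors, scaled so the whitepoint maps to RGB = (1,1,1), then invert. The code below is my sketch of that recipe (pure Python, adjugate 3x3 inverse), not necessarily the writeup's closed form:

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def _inv3(m):
    """Inverse of a 3x3 matrix via the adjugate, cyclic-index form."""
    d = _det3(m)
    return [[(m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
            - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]) / d
             for i in range(3)] for j in range(3)]

def xyz_to_rgb_matrix(r, g, b, w):
    """Build the XYZ -> RGB matrix from phosphor chromaticities r, g, b
    (each an (x, y) pair) and whitepoint chromaticity w."""
    def col(c):  # chromaticity -> XYZ direction with Y = 1
        x, y = c
        return [x / y, 1.0, (1.0 - x - y) / y]
    cr, cg, cb = col(r), col(g), col(b)
    white = col(w)
    # Solve [cr cg cb] * s = white for the per-phosphor scales s.
    cols = [[cr[i], cg[i], cb[i]] for i in range(3)]
    inv = _inv3(cols)
    s = [sum(inv[i][k] * white[k] for k in range(3)) for i in range(3)]
    rgb_to_xyz = [[cr[i] * s[0], cg[i] * s[1], cb[i] * s[2]] for i in range(3)]
    return _inv3(rgb_to_xyz)
```

Plugging in Milan's phosphor chromaticities and Magic Whitepoint from Assignment 8 above reproduces a matrix that, by construction, sends the whitepoint to (1, 1, 1).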
You've your choice of the document as a big
PostScript file or as GIFs of individual
pages:
1,
2,
3,
4,
5,
6
The bummer about the GIFs is that they're only 8-bit color, so the
pictures lose important color detail. So here are the three images as
TIFs for:
sigma=7200,
sigma=350,
sigma=1000
And here's the original assignment in
PostScript.
Assignment 1: Math Stuff
I went to a lot of trouble to LaTeX this thing, and after nearly 40
hours of struggling with cryptic nonsense, I have developed enough
skill laying out math crud to give the following advice: LaTeX is
a complete waste of time - do it with a pencil and scan it.
You can get the whole shebang in PostScript for
your printing enjoyment (it really is pretty on paper, even if wrong).
Alternatively, the individual problems are available here in GIF format
(anti-aliased even!) so you can look at 'em in your little web browser:
1,
2,
3,
4,
5,
6,
7,
8,
9ab,
9cd,
9ef,
9g,
9h,
10,
11,
12,
13,
14-page1,
14-page2,
14-page3,
14-page4
Here's the original assignment in
PostScript for posterity.
You can mail me at mcq@cs.utah.edu,
but you'll have to type the address yourself. Did you know that webcrawlers
collect email addresses gleaned from mailto: references in web pages
and sell them to junkmail providers? Insidious, isn't it?