Make your system progressive. Start with a large error tolerance and
decrease epsilon over several to many iterations. Use an error metric
that includes the radiance of the source patch. Describe the error
metric, tell how you decrease epsilon, show an image for each of the
iterations (refinement iterations, not gather iterations), and give the
CPU time for each iteration.
Here are some images from a progressive radiosity solution. The idea of
progressive radiosity is that instead of trying to fully refine a "dark"
scene, then propagate all the energy around, you refine the scene a little,
propagate the energy, refine it some more, propagate some more and so on.
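In pseudocode form, the driver is just an alternation of refinement and energy propagation under a shrinking error tolerance. The C++ sketch below is illustrative only: Scene and its methods are hypothetical stand-ins for the real data structures, and the quartering schedule at every stage is an assumption based on the starting tolerance described later for the Cornell Box run.

    // Progressive outer loop: refine a little, propagate, tighten, repeat.
    // Scene, maxPathError(), refineLinks() and propagateEnergy() are
    // hypothetical names standing in for the actual system.
    struct Scene {
        virtual double maxPathError() const = 0;        // worst per-path energy error
        virtual void   refineLinks(double epsilon) = 0; // subdivide where error > epsilon
        virtual void   propagateEnergy() = 0;           // redistribute energy on current mesh
        virtual ~Scene() = default;
    };

    void solveProgressively(Scene &scene, int numStages)
    {
        // Loose initial tolerance: a quarter of the worst path error in the scene.
        double epsilon = scene.maxPathError() / 4.0;
        for (int stage = 0; stage < numStages; ++stage) {
            scene.refineLinks(epsilon);   // refine the scene a little
            scene.propagateEnergy();      // propagate the energy
            epsilon /= 4.0;               // assumed schedule: tighten 4x per stage
        }
    }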
Progressive radiosity is a Good Deal for several reasons. Numerically,
energy propagation tends to converge to a correct solution faster because
gross energy transport is carried out on a small matrix, and detailed
matrices need only be iterated to perturb an already mostly-correct
solution to completion.
More importantly, progressive radiosity lets us involve energy in
our error terms. Whereas in assignment 5 we needed to specify separate
target error values for form-factor error, visibility error and so forth,
with knowledge of energy we can unify all these error metrics into a single
currency, energy error. For a given target energy error value, we
refine by considering sources of error on each path as follows:
- The L-infinity norm of the source's energy times the path's
form factor and visibility represents the amount of energy error
the receiver will see as a result of source cell aggregation. If
this error exceeds threshold, the source is subdivided.
- The L-infinity norm of the form-factor times the path's visibility
times the source's energy represents the amount of energy error the
receiver will see as a result of form-factor error. If this error
exceeds threshold, the receiver is subdivided.
- The L-infinity norm of the visibility times the path's form factor
times the source's energy represents the amount of energy error
some point on the receiver may see due to a shadow cast by an
intervening object. If this error exceeds threshold, the receiver
is subdivided.
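One way to package these three tests is a per-path refinement oracle along the lines of the C++ sketch below. The Link record and its field names are hypothetical; each *Linf field stands for the L-infinity bound named in the corresponding item above.

    // Which end of an energy-transport path to subdivide, if any.
    enum RefineAction { KEEP, SUBDIVIDE_SOURCE, SUBDIVIDE_RECEIVER };

    // Hypothetical per-path record; the *Linf fields hold the L-infinity
    // bounds described in the list above.
    struct Link {
        double sourceEnergyLinf;   // ||B_s||_inf over the source cell
        double formFactorLinf;     // ||F||_inf over the path
        double visibilityLinf;     // ||V||_inf over the path
        double sourceEnergy;       // representative source radiosity B_s
        double formFactor;         // representative form factor F
        double visibility;         // representative fractional visibility V
    };

    RefineAction refineOracle(const Link &path, double epsilon)
    {
        // 1. Energy error due to source-cell aggregation.
        if (path.sourceEnergyLinf * path.formFactor * path.visibility > epsilon)
            return SUBDIVIDE_SOURCE;
        // 2. Energy error due to form-factor error on the receiver.
        if (path.formFactorLinf * path.visibility * path.sourceEnergy > epsilon)
            return SUBDIVIDE_RECEIVER;
        // 3. Energy error due to unresolved visibility (shadows) on the receiver.
        if (path.visibilityLinf * path.formFactor * path.sourceEnergy > epsilon)
            return SUBDIVIDE_RECEIVER;
        return KEEP;   // all three estimates fall below the energy-error threshold
    }

Because all three tests are expressed in the same currency, a single epsilon drives every refinement decision.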
Following are images of a five-stage progressive solution to the Cornell
Box. Note that, holistically, energy error reflects the amount of confidence
you can have in a cell's value at an arbitrary point within the cell. As
you wish to become more certain of your continuously-varying solution, it
is only natural that the cells become smaller.
We start out by computing the maximum error in the scene and
quartering it. This produces some refinement for form-factor
error.
| Path generation/refinement: 3s
| Energy Propagation: 0.246s
On the next refinement, further form factor refinement occurs
for the hotspots on the walls, but more importantly, refinement
to capture shadow detail from the room light occurs on the
floor.
| Path generation/refinement: 20s
The shadow refinement has reached the minimum cell size in some
locations. Form-factor refinement continues on the walls in
the hot spots.
| Path generation/refinement: 1m 42s
Now we're getting a nice image. Most regions of fast perceptual
change are reasonably well-refined, including shadow penumbrae.
The wall/ceiling corners are beginning to refine to capture the
dark areas there, but not excessively.
| Path generation/refinement: 4m 50s
And just because I can, one more refinement brings the hotspots
to the minimum feature size. At this point, the border between
any two adjacent cells is almost imperceptible, even without
a higher-order display mesh.
| Path generation/refinement: 33m 50s
| Energy Propagation: 2m 8s
A Note on Performance
No real effort has been made to achieve efficient code or reuse values
that are expensive to compute. Visibility terms are computed by drawing
100 samples from the source, and visibility errors are estimated from 100
such visibility evaluations. Much of this is likely unnecessary computation.
I estimate a factor of five speedup to be achievable with a solid day's
effort.
For example, all form factors, visibilities and errors are recomputed
on each refinement
iteration. This is tremendously wasteful, because the amount of work
done each time is a function of the total number of paths rather than the
number of paths newly introduced. A few hours of reorganization will yield
a factor of two speedup from this problem alone.
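A sketch of that reorganization might look like the following; the Link fields and helper names here are hypothetical, not the current code's, and the placeholder bodies stand in for the expensive Stokes-theorem and shadow-ray computations.

    #include <vector>

    // Hypothetical per-path cache of the expensive quantities.
    struct Link {
        double formFactor = 0.0;
        double visibility = 0.0;
        bool   fresh      = false;   // false until the cached values are computed
    };

    // Placeholders standing in for the expensive computations.
    double computeFormFactor(const Link &) { return 0.0; }
    double computeVisibility(const Link &) { return 1.0; }

    // Recompute only the links introduced since the last refinement pass;
    // existing links keep their cached values, so the per-pass cost scales
    // with the number of new paths rather than the total number of paths.
    void updateLinks(std::vector<Link> &links)
    {
        for (Link &l : links) {
            if (l.fresh)
                continue;                         // reuse the cached values
            l.formFactor = computeFormFactor(l);
            l.visibility = computeVisibility(l);
            l.fresh = true;
        }
    }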
Create a matrix represented as links for the Cornell box. Solve
the system of equations using either Gauss-Seidel or Jacobi. What
was your termination criterion? Roughly how much faster was
this than PA 1 (running time, not coding time)?
Here are two radiosity solutions to the Cornell Box.
Energy propagation was terminated when no basis coefficient changed
by more than 1/1000th of the light source intensity. Tracing was
performed with a pinhole lens model and 16 regularly distributed samples
per image pixel.
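For concreteness, a Gauss-Seidel sweep with that termination criterion looks roughly like the C++ sketch below. The dense-matrix layout and parameter names are illustrative only; the actual solver keeps its coefficients on the links rather than in an explicit matrix.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Gauss-Seidel solve of  B_i = E_i + rho_i * sum_j FV[i][j] * B_j,
    // where FV[i][j] is the form factor from element i to element j times
    // their mutual visibility. Iteration stops when no coefficient changes
    // by more than 1/1000th of the light source intensity.
    void gaussSeidel(const std::vector<double> &E,               // emission
                     const std::vector<double> &rho,             // reflectance
                     const std::vector<std::vector<double>> &FV, // F * V
                     std::vector<double> &B,                     // radiosity, in/out
                     double lightIntensity)
    {
        const std::size_t n = B.size();
        const double tolerance = lightIntensity / 1000.0;
        double maxChange;
        do {
            maxChange = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                double gathered = 0.0;
                for (std::size_t j = 0; j < n; ++j)
                    if (j != i)                       // no self-transfer
                        gathered += FV[i][j] * B[j];  // uses already-updated values
                const double updated = E[i] + rho[i] * gathered;
                maxChange = std::max(maxChange, std::fabs(updated - B[i]));
                B[i] = updated;
            }
        } while (maxChange > tolerance);
    }

Jacobi differs only in deferring the updates to B until the end of each sweep, which is why Gauss-Seidel typically converges in fewer iterations.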
| Basis: Rectangular, constant
| Form Factors: Exact node-to-polygon by Stokes' Theorem
| Visibility: 4x4 regular grid of visibility rays from node to energy source
| Paths: 1,807,680 energy transport paths were constructed, forming a
|   non-hierarchical complete O(N^2) connection of 1,345 elements.
| Runtime: 2h 30m (path construction), 18m (energy propagation),
|   2h (ray tracing) on surreal
| RGB Color Model
| Colors: Light: <1,1,1>, White: <0.8,0.8,0.8>, Red: <0.8,0,0>,
|   Green: <0,0.8,0>
| Mesh: 16x16 (walls), 4x4 (boxes)
| Notes: The ceiling is a single mesh; the mesh elements partially
|   eclipsed by the light source receive no radiation, hence the
|   black border around the light.
| Spectral Color Model
| Colors: Physical Cornell Box spectra (gamma 2.5 for display)
| Mesh: 16x16 (walls), 4x4 (boxes), 9x9/4x4 (ceiling fragments)
| Notes: The ceiling has been (manually) split into four fragments
|   which exactly border the light source in order to eliminate
|   the icky light-source border.