Next: Conclusions Up: Efficiency Issues for Ray Previous: Caching Objects

Results

Now we look at the cumulative effects on shadow rays of the main optimizations described in the paper. First we speed up the bounding box tests. Next we speed up the traversal using the different methods from Section 4.3. We then treat shadow rays differently from intersection rays, and lastly we add a shadow cache. In all of the experiments, 1,000,000 rays are generated by choosing random pairs of points from within a bounding box 20% larger than the bounding box of the environment. In the last experiment, 500,000 rays are generated and each generated ray is cast twice, so 1,000,000 rays are still cast overall. The first two test cases are real environments; the rest are composed of randomly oriented and positioned unit right triangles. For the random environments, the number gives the number of triangles, and small, mid, and big refer to the space the triangles fill: small environments are 20 units cubed, mid are 100 units cubed, and big are 200 units cubed. The theater model has 46,502 polygons and the science center model has 4,045 polygons. The code was run on an SGI O2 with a 180 MHz R5000 using the SGI compiler with full optimization turned on. No shading or other computation was done, and the time to build the hierarchies is not included.
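The ray-generation scheme above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the names (Vec3, enlarge, randomPoint) and the use of rand() are assumptions.

```cpp
#include <cstdlib>

// Sketch of the experimental setup: pick random pairs of points
// inside a box 20% larger than the scene's bounding box; each pair
// defines one shadow ray. Names and layout are illustrative.
struct Vec3 { double x, y, z; };

static double frand(double lo, double hi) {
    return lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
}

// Enlarge a bounding box by 20% about its center.
void enlarge(const Vec3& lo, const Vec3& hi, Vec3& elo, Vec3& ehi) {
    Vec3 c = { (lo.x + hi.x) / 2, (lo.y + hi.y) / 2, (lo.z + hi.z) / 2 };
    elo = { c.x + 1.2 * (lo.x - c.x), c.y + 1.2 * (lo.y - c.y),
            c.z + 1.2 * (lo.z - c.z) };
    ehi = { c.x + 1.2 * (hi.x - c.x), c.y + 1.2 * (hi.y - c.y),
            c.z + 1.2 * (hi.z - c.z) };
}

// One random ray endpoint inside the enlarged box.
Vec3 randomPoint(const Vec3& lo, const Vec3& hi) {
    return { frand(lo.x, hi.x), frand(lo.y, hi.y), frand(lo.z, hi.z) };
}
```

Casting rays between unrelated random points gives essentially no coherence between consecutive rays, which matters for the shadow-cache experiments below.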

The experiments reported in Table 1 are explained in more detail below:

1.
Bounding box test computes intersection point, traversal uses recursion, and shadow rays are treated as intersection rays.
2.
Bounding box test replaced by slab version from Section 4.1.
3.
Recursive traversal replaced by iterative traversal using left child, right sibling, and parent pointers as in Section 4.3.
4.
Skip pointer used to speed up traversal as in Section 4.3.
5.
Tree traversal replaced by array traversal as in Section 4.3.
6.
Intersection rays replaced by shadow rays as in Section 4.2.
7.
Shadow caching used as in Section 4.4.
8.
Shadow caching used, but each generated ray is cast twice before a new ray is generated, so the same total number of rays is cast as in experiment 7.
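The bounding box change in experiment 2 can be sketched as below. This is a hedged reconstruction of a standard slab test in the spirit of Section 4.1, not the paper's exact code; the Ray layout and function names are assumptions.

```cpp
#include <algorithm>

// Slab-style bounding box test: intersect the ray's parametric
// interval against each pair of axis-aligned slabs and report
// overlap only. Unlike the test in experiment 1, no intersection
// point is computed.
struct Ray {
    double org[3];
    double invDir[3];  // precomputed 1/direction per axis
};

bool rayHitsBox(const Ray& r, const double lo[3], const double hi[3],
                double tmin, double tmax) {
    for (int a = 0; a < 3; ++a) {
        double t0 = (lo[a] - r.org[a]) * r.invDir[a];
        double t1 = (hi[a] - r.org[a]) * r.invDir[a];
        if (t0 > t1) std::swap(t0, t1);   // handle negative directions
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;    // slab intervals do not overlap
    }
    return true;
}
```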
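The traversal changes in experiments 4 and 5 can be illustrated together: the hierarchy is flattened into an array in depth-first order, and each node stores the index of the next node to visit when its bounding box is missed. Traversal becomes a simple loop with no recursion or stack. All names here are assumptions sketching the idea of Section 4.3, not the paper's code.

```cpp
#include <vector>

// One node of the flattened hierarchy.
struct FlatNode {
    int skip;       // index of next node to visit if this box is missed
    bool isLeaf;
    int primitive;  // leaf payload (placeholder)
};

// Count leaves whose boxes the ray enters; boxHit(i) stands in for
// a real slab test against node i's bounds.
template <class BoxHit>
int traverse(const std::vector<FlatNode>& nodes, BoxHit boxHit) {
    int hits = 0;
    int i = 0;
    while (i < (int)nodes.size()) {
        if (boxHit(i)) {
            if (nodes[i].isLeaf) ++hits;
            ++i;                 // hit: step to first child / next node
        } else {
            i = nodes[i].skip;   // miss: skip the whole subtree
        }
    }
    return hits;
}
```

Because the array stores subtrees contiguously, a miss advances the index past the entire subtree in one step, which is what makes this cheaper than following child and parent pointers.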


Table 1: Results of the different experiments described in the text on different environments. Times are in seconds, rounded to the nearest second.

                   1     2     3     4     5     6     7     8
theater           64    36    30    21    22    11    10     6
lab               79    41    32    22    20    12    12     7
10,000 small     415   223   191   142   110    48    50    27
10,000 mid       392   185   154   103    81    77    79    65
10,000 big       381   179   152   104    82    79    77    69
100,000 small    995   620   550   449   351    62    63    33
100,000 mid      932   473   424   324   230   146   148    89
100,000 big     1024   508   442   332   240   210   212   156
300,000 mid     1093   597   536   421   312   120   121    64

The first thing to notice is that the real models require much less work than the random polygons. This is because their polygons are distributed very unevenly and vary greatly in size. The theater has much more open space and even more variation in polygon size than the lab, resulting in many inexpensive rays and a faster average time. In spite of this, the results show very similar trends for all models. The first five experiments use no model-specific knowledge; they simply reduce the amount of work done. Special shadow rays and caching are more model-specific. Shadow rays are most effective when there are many intersections along the ray and perform almost the same as intersection rays when there are zero or one. Shadow caching depends on ray coherence and on the likelihood of an intersection. In experiment 7 there is an unrealistically low amount of coherence (none, since the rays are random). In experiment 8 we guaranteed significant coherence by casting each ray twice.
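The interaction between the shadow-ray early out and the shadow cache can be sketched as below. This is an illustrative reconstruction of the ideas in Sections 4.2 and 4.4, not the paper's code: ShadowCache, inShadow, and the blocks(i) predicate (a stand-in for a real ray/triangle test over candidate occluders) are assumptions.

```cpp
// A shadow ray only asks whether *any* occluder exists, so the
// search stops at the first hit (early out), and the object that
// blocked the previous shadow ray is remembered and tested first
// on the next one. With coherent consecutive rays (experiment 8)
// the cached object usually blocks again and the search is skipped.
class ShadowCache {
    int cached = -1;  // index of the last occluder, -1 if none
public:
    // Returns true iff some object in [0, n) blocks the ray.
    template <class Blocks>
    bool inShadow(int n, Blocks blocks) {
        if (cached >= 0 && blocks(cached)) return true;  // cache hit
        for (int i = 0; i < n; ++i) {
            if (i != cached && blocks(i)) {
                cached = i;    // remember the occluder
                return true;   // early out: one hit is enough
            }
        }
        cached = -1;           // ray reached the light unblocked
        return false;
    }
};
```

With the random rays of experiment 7 the cached object almost never blocks the next ray, so the cache check is pure overhead; that matches the near-identical times in columns 6 and 7 of Table 1.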


Comments: Brian Smits
1999-02-19