Refreshments 3:20 p.m.
Abstract
Much of computer science revolves around experimentally validating new
ideas. Experimental validation is hard. A slight mistake in the
methodology may perturb results enough to lead to incorrect
conclusions. Recent studies of the reproducibility of systems
measurements yield depressing results: changing an environment
variable can significantly affect execution time. But this problem is
not unique to computer science. This talk attempts to shed light on
how to conduct rigorous systems evaluation by looking at the methods
used in an area of science that has faced, and overcome, similar
challenges: very-high-energy gamma-ray astrophysics. The
astrophysicist is a passive observer who has little hope of truly
reproducing a measurement, since the phenomena being measured are
beyond our control. In gamma-ray science, the problems
are multiplied by noise and bias introduced by the complex optics,
electronics, and software used to detect light. The experimenter is a
source of further bias: after all, an interesting result is crucial
for one's career. Yet the field has largely overcome these problems
and continues to flourish. I will begin by introducing the methods
used in astrophysics, drawing analogies to those used in
computer science, and conclude with practical suggestions for
improving the robustness of our systems evaluation methodology.
Bio
Filip Pizlo is finishing his PhD in computer science at Purdue
University under Professors Jan Vitek and Tony Hosking. His work
focuses on developing new compiler, concurrency, and memory management
techniques that improve the performance and robustness of high-level
programming languages. Artifacts of this work include the
Fiji VM bare-metal Java runtime, the Schism concurrent real-time
garbage collector, and the Jikes RVM high-throughput locking
infrastructure. Filip is also involved in gamma-ray astrophysics as
part of VERITAS, the world's most sensitive very-high-energy gamma-ray
telescope.