The School of Computing at the University of Utah has research efforts in compilers, computer architecture, embedded systems, networks, operating systems, parallel and distributed systems, programming languages, security, and software engineering.
Research
Computer Architecture and VLSI
Computer architecture and VLSI design are inextricably intertwined. At Utah, architecture and VLSI researchers are tackling issues arising from the synergy of these fields, including multiple efforts to understand and reduce the architectural impact of interprocessor communication and a project designing custom hardware for interactive ray tracing.
On modern multi-core chips, it is critical that on-chip interconnects and coherence protocols enable fast and power-efficient data transfers between parallel threads. Professor Balasubramonian's research focuses on architectural mechanisms to improve the efficiency of on-chip communication by employing heterogeneous wires, novel interconnect topologies, and optimal caching structures. He is also involved in efforts to minimize on-chip communication with intelligent placement of data in large caches and with scalable transactional memory protocols. He and Professors Davis and Hall are also collaborating to optimize on-chip memory controller organizations in future multi-cores.
Professors Brunvand and Davis, in conjunction with computer graphics colleagues, are designing special-purpose hardware for ray tracing, a rendering technique that produces much higher quality and more realistic images than commodity graphics chips do. The resulting processor employs multiple ray tracing pipelines and builds on previous work on domain-specific processors that support run-time configuration of the datapath. This configurability allows it to operate very close to the speed and power efficiency of a fully custom pipeline, yet with enough programmability to support a variety of ray tracing algorithms.
Mobile and Embedded Systems
Mobile and embedded computer systems have become pervasive in all aspects of human life, including transportation, entertainment, and communication, and this trend will continue. Professor Kessler's group is working on context-aware web search systems, where context can include physical location and activity type.
Professor Regehr's efforts address problems in creating dependable embedded software, particularly for highly constrained platforms. His group has created tools that take existing sensor network applications and enforce type-safe execution, detect the possibility of stack overflow, and reduce RAM usage through data compression. Professor Davis's group is designing high-performance, low-power, flexible domain-specific architectures for tasks such as speech recognition and visual feature recognition. They are also designing tools to automate the design process.
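To make the stack-overflow analysis concrete, here is a minimal Python sketch of the underlying idea: given a call graph annotated with per-function frame sizes, compute the deepest possible stack and compare it with the RAM reserved for the stack. This is not Professor Regehr's actual tool; the function names, frame sizes, and stack limit are hypothetical, and the sketch assumes a recursion-free call graph.

    # Minimal sketch (not the group's actual tool): estimate whether a
    # sensor-network program can overflow its stack by taking the deepest
    # path through its call graph.  All names and sizes are hypothetical.
    CALL_GRAPH = {            # caller -> callees
        "main":        ["read_sensor", "send_packet"],
        "read_sensor": ["adc_sample"],
        "send_packet": ["radio_tx"],
        "adc_sample":  [],
        "radio_tx":    [],
    }
    FRAME_BYTES = {           # per-function stack frame sizes in bytes
        "main": 32, "read_sensor": 48, "send_packet": 64,
        "adc_sample": 16, "radio_tx": 80,
    }
    STACK_LIMIT = 256         # RAM reserved for the stack on the target device

    def worst_case_usage(fn):
        """Worst-case stack usage starting at fn (recursion-free call graph)."""
        deepest_callee = max(
            (worst_case_usage(callee) for callee in CALL_GRAPH[fn]), default=0)
        return FRAME_BYTES[fn] + deepest_callee

    usage = worst_case_usage("main")
    verdict = "fits" if usage <= STACK_LIMIT else "possible overflow"
    print(f"worst-case stack usage: {usage} bytes ({verdict})")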
Networking and Operating Systems
Computer systems research at Utah spans operating systems, distributed systems, networking, and security. Much of this work is done in collaboration with researchers in programming languages and compilers, mobile and embedded systems, software engineering, and formal methods.
The Flux Research Group, led by several systems faculty and senior technical staff, develops the internationally acclaimed Emulab network emulation testbed. Technologies and ideas from Emulab will help form the basis of NSF's new GENI network instrumentation and experimentation infrastructure. Current Flux projects also include building an experimentation workbench, which integrates scientific workflow with Emulab, and developing a programmable wireless testbed using software radios. Past projects include operating systems like Fluke and Janos, which pushed the envelope in terms of OS design and features.
Professor Kasera's networking research encompasses mobile systems and wireless networks, network security, new network architectures, and network measurements. Ongoing research includes developing novel methods for enhancing wireless network security using unique device fingerprints and link signatures, developing distributed medium access protocols for next-generation wireless communication technologies, building social networking platforms to facilitate collective decision making, and building robust overload control mechanisms. Professor Kasera's group is also building mobile ad hoc network routing and security mechanisms based on accurate network performance characterizations and cross-layer approaches.
Parallel Computing
We are entering the multi-core era, in which every computer, whether embedded, laptop, desktop, server, or supercomputer, is a parallel computer. As parallel computing reaches the masses, faculty at Utah are developing new courses and expanding their research, in collaboration with the architecture and VLSI work described above, to embrace the changes in programming tools and systems software that this paradigm shift demands.
Professor Hall is developing performance tuning tools, called autotuners, designed to ease the programming burden in the face of the growing complexity and diversity of modern computer architectures. Autotuners experiment with a set of alternative strategies for mapping application code to hardware and automatically select the mapping that yields the best performance. Such tools increase programmer productivity by reducing the effort of porting to new architectures and by allowing programmers to maintain code that is simpler and architecture independent.
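As an illustration of the autotuning idea (and not of Professor Hall's actual tools), the following Python sketch times a handful of candidate block sizes for a blocked matrix-vector product and keeps whichever runs fastest on the machine at hand; the problem size and candidate set are purely illustrative.

    # Minimal autotuning sketch: empirically pick the best-performing variant.
    import random
    import time

    N = 512
    A = [[random.random() for _ in range(N)] for _ in range(N)]
    x = [random.random() for _ in range(N)]

    def matvec_blocked(A, x, block):
        """Matrix-vector product computed in column blocks of width `block`."""
        y = [0.0] * len(A)
        for j0 in range(0, len(x), block):
            j1 = min(j0 + block, len(x))
            for i, row in enumerate(A):
                s = 0.0
                for j in range(j0, j1):
                    s += row[j] * x[j]
                y[i] += s
        return y

    def time_one(block):
        start = time.perf_counter()
        matvec_blocked(A, x, block)
        return time.perf_counter() - start

    def autotune(candidates, trials=3):
        """Time each candidate block size a few times; return the fastest."""
        best_block, best_time = None, float("inf")
        for block in candidates:
            t = min(time_one(block) for _ in range(trials))
            if t < best_time:
                best_block, best_time = block, t
        return best_block

    print("selected block size:", autotune([16, 32, 64, 128, 256]))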
Advances in parallel computing are ultimately tied to delivering correct and efficient systems. Professor Gopalakrishnan's group is developing push-button formal verification tools that accept C programs written using the MPI, Pthreads, and MCAPI (multicore communications API) libraries and automatically search for deadlocks, resource leaks, and assertion violations. These techniques work by automatically instrumenting the program and running it under the control of a verification scheduler. The scheduler plays out one interleaving (schedule), analyzes process/thread action dependencies in that schedule, and, for each dependency, forces another replay, until the inherently exponential interleaving space has been parsimoniously examined. Using this technique, their tool ISP (joint work with Professor Kirby) recently took a 14 KLOC MPI/C program, determined that it had only one relevant interleaving, and hence finished verification in a few seconds. Similar results are being obtained for Pthreads and (soon) MCAPI code.
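The following Python sketch illustrates the core idea of replaying interleavings, though it is not ISP itself: two hypothetical threads each perform a non-atomic increment (read x, then write x + 1), and a depth-first scheduler replays every interleaving of their steps, reporting those that violate the assertion that both increments land. Real tools such as ISP analyze dependencies so that only the relevant interleavings are replayed; this sketch simply explores them all.

    # Minimal interleaving-exploration sketch (not ISP): replay every schedule
    # of two threads that each do a non-atomic increment of a shared variable.
    def explore(x, tmp, pcs, trace, violations):
        """Depth-first replay of every interleaving of the two threads.

        x    -- shared variable
        tmp  -- per-thread local copy read from x
        pcs  -- per-thread program counter: 0 = about to read,
                1 = about to write, 2 = finished
        """
        if all(pc == 2 for pc in pcs.values()):
            if x != 2:                           # assertion: both increments land
                violations.append((trace, x))
            return
        for t, pc in pcs.items():
            if pc == 2:
                continue
            if pc == 0:                          # read step: tmp[t] = x
                new_x, new_tmp = x, {**tmp, t: x}
            else:                                # write step: x = tmp[t] + 1
                new_x, new_tmp = tmp[t] + 1, tmp
            explore(new_x, new_tmp, {**pcs, t: pc + 1},
                    trace + [(t, "read" if pc == 0 else "write")], violations)

    violations = []
    explore(0, {}, {"t0": 0, "t1": 0}, [], violations)
    print(len(violations), "of 6 interleavings violate the assertion")
    for schedule, final_x in violations:
        print("lost update, final x =", final_x, "schedule:", schedule)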
Programming Languages and Compilers
Professor Flatt's group is investigating how multiple programming languages can be made to interoperate without forcing all interactions to be defined in terms of a single shared language (such as JVM bytecode). They are also exploring how individual programming languages can be made more extensible.
Professor Kessler's research group is developing a new requirements specification technique that incorporates use cases, class diagrams, and simple state machines to automatically generate concrete scenarios for the various stakeholders. They are also investigating techniques to improve meta-modeling in UML.
Professor Hall's research group is developing compiler optimization technology that exploits performance-enhancing features of modern architectures, including multi-core microprocessors, SIMD compute engines, accelerators, complex memory hierarchies, and software-controlled storage.
Professor Might's research team investigates software analysis technology that optimizes performance, detects security vulnerabilities, improves parallelism, and eliminates bugs.

