University of Utah School of Computing
John B. Carter

Associate Director
School of Computing
University of Utah
Professor Carter joined the Department of Computer Science in January 1993. His research interests include operating systems, parallel and distributed computing, and multiprocessor computer architecture. Of particular interest are memory and storage systems, both hardware and software. Professor Carter is also very interested in the issues associated with developing and managing high-tech startup companies. (C.V.)


  • Operating Systems (Fall 2007)
  • Architecture Seminar (Fall 2007)
  • Ultra Large Scale Systems Seminar (Fall 2007)
  • Industry Forum (Fall 2007)

  • Research

    o Wide-area distributed systems
    Swarm is a middleware system designed to manage the data replication needs of diverse distributed services. Its design was motivated by an extensive survey we performed of the data sharing needs of distributed applications and services [1]. Swarm implements a novel composable consistency model that can express a broader range of consistency requirements than existing models. Swarm supports aggressive replication and composable consistency behind a file system-like interface. It can be used to implement wide area proxies for a variety of distributed services that operate on cached service data. Swarm builds on our earlier wide-area distributed storage system, Khazana.
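    The idea behind composable consistency is that a client composes a small set of orthogonal options per access session rather than picking one monolithic model. The sketch below is a hypothetical illustration in Python; the option names (`max_staleness_secs`, `read_most_recent`) and the checking logic are assumptions for exposition, not Swarm's actual interface.

    ```python
    from dataclasses import dataclass
    import time

    # Hypothetical illustration of "composable consistency": a client
    # composes orthogonal options per session instead of choosing one
    # monolithic model (e.g., strict vs. eventual).
    @dataclass
    class ConsistencyOptions:
        allow_concurrent_writes: bool = False   # concurrency control
        max_staleness_secs: float = 0.0         # how out-of-date a read may be
        read_most_recent: bool = True           # strength of the read guarantee

    @dataclass
    class ReplicaState:
        last_sync: float          # when this replica last synced with a peer
        write_locked: bool = False

    def read_permitted(opts: ConsistencyOptions, replica: ReplicaState,
                       now: float) -> bool:
        """A local read may proceed only if this replica is fresh enough."""
        if opts.read_most_recent and opts.max_staleness_secs == 0.0:
            return False  # must fetch from the home copy instead
        return (now - replica.last_sync) <= opts.max_staleness_secs

    # Example: a wide-area web proxy willing to serve data up to 30s stale.
    proxy_opts = ConsistencyOptions(max_staleness_secs=30.0,
                                    read_most_recent=False)
    replica = ReplicaState(last_sync=time.time() - 10.0)
    print(read_permitted(proxy_opts, replica, time.time()))  # True: 10s < 30s
    ```

    A service with stronger needs (e.g., a directory requiring up-to-date reads) would simply compose different options over the same replicated data, which is the flexibility a single fixed model cannot offer.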

    WASCo is a Wide Area Surrogate Computing system that allows users to exploit resources on computers spread around the Internet. WASCo runs remotely managed virtual machine instances on surrogates to exploit their resources. Our original design ([1,2]) allowed clients to run arbitrary code on surrogates, e.g., to offload energy-intensive operations from energy-constrained devices, execute bandwidth-intensive queries near a data source, or replicate a website near underserved clients. Our recent work focuses on harnessing surrogates into managed Collectives, in which collective managers run only approved applications on surrogates. We are developing a cheat-resilient incentive model to compensate users for providing their computer's resources to the Collective. To show the technical and financial value of Collectives, we are developing collective services that exploit both the idle resources and the geographic dispersion of Collective nodes.

    o Advanced memory system design
    The memory hierarchies in modern chip multiprocessors (CMPs) closely resemble those of traditional shared memory multiprocessors. We are investigating CMP memory hierarchy designs that exploit the unique characteristics of CMPs to optimize power and performance. For example, mapping coherence traffic to global wires with different characteristics (e.g., mapping time-critical operations to low latency wires and latency insensitive bulk transfers to slow power-efficient wires) can dramatically improve performance and reduce power.
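    The wire-mapping idea reduces to a small policy decision per coherence message. The Python sketch below illustrates one such policy; the message names, wire classes, and size threshold are made-up placeholders for exposition, not parameters from the project.

    ```python
    from enum import Enum

    class WireClass(Enum):
        # Hypothetical global-wire implementations with different trade-offs.
        L_WIRE = "low-latency"       # fat, widely spaced wires: fast but power-hungry
        PW_WIRE = "power-efficient"  # thin, slow wires: dense and low power

    def choose_wires(msg_type: str, size_bytes: int) -> WireClass:
        """Map a coherence message to a wire class (illustrative policy only).

        Time-critical, small control messages (e.g., invalidations, read
        requests) ride the fast L-wires; latency-insensitive bulk transfers
        (e.g., writebacks of dirty cache lines) take the slow,
        power-efficient wires.
        """
        critical = {"read_request", "invalidate", "ack"}
        if msg_type in critical and size_bytes <= 16:
            return WireClass.L_WIRE
        return WireClass.PW_WIRE

    print(choose_wires("invalidate", 8))   # WireClass.L_WIRE
    print(choose_wires("writeback", 64))   # WireClass.PW_WIRE
    ```

    The point of the design is that the classification can be made locally at each cache controller from information already in the message header, so no extra coordination is needed to reap the power savings.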

    Professor Carter led Utah's portion of SGI's Ultraviolet DARPA HPCS project. The UV team was a collaboration of architects, systems programmers, and developers from SGI, Utah, MIT, Minnesota, and GWU. We designed a peta-scale system targeted for the 2010 timeframe. Professor Carter's group subsequently collaborated with SGI and NSA researchers to develop mechanisms that enhance the performance of large shared memory multiprocessors, including active memory operations that enable select computation to be performed within the memory system, adaptive coherence mechanisms that observe how data is being shared and push data to where it will likely be needed, and novel scalable synchronization mechanisms.

    The Impulse memory controller supports a variety of physical-to-physical address remappings. In traditional memory systems, when a cache requests a line (32-256 bytes) of memory, the memory controller reads this data from contiguous physical memory. In Impulse, software can configure the memory controller to fill/store cache lines from/to discontiguous regions of memory. Impulse supports a large number of memory optimizations, including efficient vector/stream memory operations and shadow superpages that use a single TLB entry to map discontiguous superpages. We built a working Impulse prototype by replacing an SGI O200 motherboard (including memory controller) with one of our own design. For applications with sparse but predictable memory access patterns, the Impulse prototype delivered 20%-2500% speedup.
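    An Impulse-style gather remapping can be illustrated in simulation: software configures a remap function for a shadow region, and a "cache line fill" from that region gathers discontiguous (here, strided) physical words into one dense line. The sketch below is a toy Python model; the function names and 8-word line size are illustrative assumptions, not the prototype's actual configuration interface.

    ```python
    # Toy model of Impulse-style gather remapping: cache-line fills from a
    # "shadow" region are satisfied by gathering discontiguous physical
    # words, so a strided access pattern (e.g., a matrix column) arrives at
    # the cache densely packed instead of wasting most of each line.

    CACHE_LINE_WORDS = 8  # words per cache line in this toy model

    def configure_strided_remap(base: int, stride: int):
        """Return a remap function: shadow word i -> physical word base + i*stride."""
        return lambda shadow_index: base + shadow_index * stride

    def fill_cache_line(memory: list, remap, shadow_line_index: int) -> list:
        """Fill one dense cache line by gathering from discontiguous memory."""
        start = shadow_line_index * CACHE_LINE_WORDS
        return [memory[remap(start + i)] for i in range(CACHE_LINE_WORDS)]

    # Example: memory holds a row-major 8x8 matrix; gather column 3 so the
    # whole column occupies a single cache line.
    mem = [r * 8 + c for r in range(8) for c in range(8)]
    col3 = configure_strided_remap(base=3, stride=8)
    line = fill_cache_line(mem, col3, 0)
    print(line)  # [3, 11, 19, 27, 35, 43, 51, 59]: column 3, one dense line
    ```

    Without remapping, touching that column would pull in eight separate cache lines and use one word from each; with the gather, cache and bus utilization approach 100% for the strided pattern, which is the source of the large speedups reported above.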


    Office: 3144 Merrill Engineering Building
    Email: retrac AT cs DOT utah DOT edu
    Phone: (801) 585-5474
    Fax: (801) 581-5843
    Mailing Address:
    50 South Central Campus Drive, Room 3190
    School of Computing
    University of Utah
    Salt Lake City, UT 84112