OSDI '94 Tutorials


Monday, November 14, 1994

9:00-12:30 Morning
1:30-5:00 Afternoon

The Spring Operating System: Internals Overview (AM1)

Instructor: Thomas W. Doeppner, Brown University, Consultant to SunSoft

Intended Audience: People knowledgeable about operating-system design, experienced object-oriented programmers, and those contemplating using Spring.

Spring is a new operating system from Sun Microsystems Laboratories and SunSoft. It not only supports distributed object-oriented programming, but is completely object-oriented itself. Additional features include microkernel design, modularity with IDL interfaces, and improved security. It provides direct support for CORBA and can be viewed as an alternative system platform for distributed object-oriented programming. A research distribution of the system including full sources will be available to universities and R&D labs in early 1995. Details are available now on the Internet via Mosaic.

This half-day tutorial first covers the overall architecture of Spring, then explains the internal workings of Spring's microkernel and how the virtual-memory system and the file system are built on top of it.

1. Overview and Architecture

2. Spring Internals

As in other microkernel-based systems, Spring has a nucleus providing the basic essentials. All other parts of the system make use of the nucleus's functionality, but are separate from it. We explain the workings of the nucleus, including its support for threads and interprocess communication. Two key, but separate, components are the virtual-memory system and the file system. We cover the design of these components, showing how they are supported by the nucleus, how they exploit object-oriented technology, and how they help form an integrated distributed operating system.
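
To make the object-oriented structure concrete, here is a small, purely illustrative C sketch; the names and types are invented for this description and are not Spring's actual IDL-defined interfaces. It shows a component such as a user-level file server exporting its operations through an interface that clients invoke without knowing the implementation:

    /* Hypothetical sketch: an "interface" as a table of operations, in the
     * spirit of Spring's IDL-defined objects.  These names do not
     * correspond to Spring's actual interfaces. */
    #include <stdio.h>
    #include <string.h>

    typedef struct file_ops {
        long (*read)(void *self, char *buf, long len, long off);
        long (*write)(void *self, const char *buf, long len, long off);
    } file_ops;

    typedef struct file_object {
        const file_ops *ops;    /* operations table ("interface") */
        void           *state;  /* implementation-private state   */
    } file_object;

    /* One possible implementation, living in a user-level file server
     * rather than in the nucleus. */
    static long mem_read(void *self, char *buf, long len, long off) {
        const char *data = self;
        long n = (long)strlen(data) - off;
        if (n <= 0) return 0;
        if (n > len) n = len;
        memcpy(buf, data + off, (size_t)n);
        return n;
    }
    static long mem_write(void *self, const char *buf, long len, long off) {
        (void)self; (void)buf; (void)off;
        return len;             /* pretend the write succeeded */
    }
    static const file_ops mem_file_ops = { mem_read, mem_write };

    int main(void) {
        char data[] = "hello from a user-level file server\n";
        file_object f = { &mem_file_ops, data };
        char buf[64];
        long n = f.ops->read(f.state, buf, sizeof buf - 1, 0);
        buf[n] = '\0';
        fputs(buf, stdout);     /* the client invokes the object only
                                   through its interface */
        return 0;
    }

In Spring itself such an invocation can cross address spaces, or machines, through the nucleus's interprocess communication; the sketch shows only the separation of interface from implementation.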

Tutorial Slides

Thomas W. Doeppner is a faculty member in the Computer Science Department of Brown University, where he has been since 1976. He received his PhD from Princeton in 1977. His research interests are in operating systems and multithreaded programming. He is currently a consultant with the Spring group at SunSoft, where he is writing a multi-day course for them on Spring. Since 1984 he has been associated with the Institute for Advanced Professional Studies, of Cambridge, MA, with whom he develops and delivers courses in the areas of Operating Systems and Distributed Computing. He has presented tutorials on the internals of UNIX, OSF/1 and Mach at USENIX conferences over the past seven years.


Reliable Distributed Computing Using the Isis and Horus Systems (AM2)

Instructor: Ken Birman, Cornell University

Intended Audience: Programmers and managers involved in the design and implementation of software for distributed computing applications where availability or fault tolerance is an important design objective. Participants should gain a practical understanding of how the Isis and Horus group-programming concepts and technology can simplify their task.

This tutorial focuses on building distributed computing systems in which availability and fault tolerance are central concerns.

The topics covered are:

Most of the technology described in this tutorial is available in the form of commercial products, and several public domain communications tools embody at least some aspects of what we will be discussing. The material draws heavily from a collection of papers on Isis, which can be obtained from IEEE Computer Society Press: Reliable Distributed Computing Using the Isis Toolkit, by Ken Birman and Robbert van Renesse, 1994.

Kenneth P. Birman received his PhD in 1981 from U.C. Berkeley and is now Professor of Computer Science at Cornell University in Ithaca, New York. He is the leader of the Isis and Horus projects, which have resulted in technology now used in settings such as the New York Stock Exchange, the Iridium cellular telecommunications system, Sematech, the next generation of a European air traffic control system, and in many financial, telecommunications and manufacturing applications. In 1988, Birman founded Isis Distributed Systems, Inc.; he has served as Chief Scientist for Isis since its acquisition in 1993 by Stratus Computer Inc. of Boston.


The Architecture of the GNU Hurd (AM3)

Instructor: Michael I. Bushnell, Free Software Foundation

Intended Audience: Programmers, managers and researchers involved in the design and implementation of decomposed operating systems and microkernels, and those contemplating using the GNU Hurd. Attendees should be familiar with microkernel principles, the C language, and the facilities offered by traditional UNIX operating systems, but expertise is not required.

The GNU Hurd is a multi-server operating system which runs on Mach 3.0. In UNIX and most Mach-based systems, the majority of system facilities are concentrated in a single entity (called variously the 'kernel' or the 'single server'). The GNU Hurd has divided these facilities into eight servers, but still achieves reasonable performance by placing a great deal of traditional UNIX functionality in the user-mode library. The goal of this tutorial is to describe the architecture of the Hurd with special attention to its innovative aspects, as well as to provide guidance to programmers who wish to program or extend the Hurd. At least one machine running the GNU Hurd will be available at the tutorial for inspection and experimentation. All the source code for the GNU Hurd is freely redistributable under the terms of the GNU General Public License.

All but three of the servers that make up the GNU Hurd run unprivileged, which makes the system extensible: any user can write a server for a filesystem and mount it into the directory hierarchy. This extends to arbitrary services, having nothing to do with I/O, that merely use the filesystem name space; libraries are provided which make this easy to do. As a major technical departure, the Hurd implements Posix signals entirely in user space, which brings both simplicity and significant performance advantages. The Hurd is also very portable, with fewer than 1000 lines of machine-dependent code.
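
As a purely illustrative sketch of the last point (this is not the Hurd's code; it uses POSIX threads and a pipe as stand-ins for the Mach-based mechanisms the Hurd actually uses), the following C program handles "signals" entirely in user space: a dedicated signal thread waits for requests and runs the handler, with no kernel signal machinery deciding what the signal means:

    /* Illustrative only: user-space signal delivery via a dedicated
     * thread, in the spirit of the Hurd's approach.  A pipe stands in
     * for the IPC port a real multi-server system would use. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int sig_pipe[2];                /* stand-in for the signal port */
    static void (*handlers[32])(int);      /* library-level handler table  */

    static void *signal_thread(void *arg) {
        int signo;
        (void)arg;
        while (read(sig_pipe[0], &signo, sizeof signo) == sizeof signo) {
            if (signo > 0 && signo < 32 && handlers[signo])
                handlers[signo](signo);    /* delivery decided in user space */
        }
        return NULL;
    }

    static void on_usr1(int signo) {
        printf("handler ran for signal %d with no kernel policy involved\n", signo);
    }

    int main(void) {
        pthread_t t;
        int signo = 10;                    /* "SIGUSR1" for this sketch */
        pipe(sig_pipe);
        handlers[signo] = on_usr1;
        pthread_create(&t, NULL, signal_thread, NULL);

        /* another task would normally post the signal; simulate it here */
        write(sig_pipe[1], &signo, sizeof signo);
        close(sig_pipe[1]);                /* lets the signal thread exit  */
        pthread_join(t, NULL);
        return 0;
    }

The Hurd's actual mechanism uses Mach ports and its own libraries; the point of the sketch is simply that signal semantics can live entirely in library code within the receiving process.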

This tutorial will describe the existing Hurd servers and the library. In addition, it will cover subjects such as:

Participants should leave the tutorial with a confident understanding of the overall structure of the Hurd and be able to begin writing servers for it.

Slides, 4-up Slides

Michael Bushnell is the principal architect of the GNU Hurd. He has been actively programming UNIX-like operating systems for nine years, and for the last four has worked for the Free Software Foundation doing operating systems development.


The Architecture of CHORUS (PM1)

Instructor: Jim Lipkis, Chorus Systemes

Intended Audience: Operating system researchers, developers, and users who are interested in the CHORUS microkernel technology, its design and evolution, and its application in realtime, distributed, fault tolerant, and standards compliant computer systems. Familiarity with modern operating system concepts is helpful, but no specific knowledge of CHORUS or other systems is assumed.

CHORUS is an underlying technology for operating systems on platforms ranging from massively parallel multicomputers to small embedded processors. This tutorial introduces the basic concepts, structure, and facilities of the CHORUS microkernel and of the OS personalities built on top of it. Emphasis is placed on the themes of software modularity and policy-mechanism separation, both of which are key objectives of CHORUS.
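
As a generic illustration of policy-mechanism separation (an invented example, not CHORUS code), consider a pager in which the fixed mechanism of evicting a page frame is kept apart from the pluggable policy that chooses the victim:

    /* Generic illustration of policy/mechanism separation; not CHORUS
     * code.  The eviction mechanism is fixed; the victim-selection
     * policy is supplied from outside. */
    #include <stdio.h>

    #define NFRAMES 4

    /* Policy: given the frame ages, pick a victim frame. */
    typedef int (*victim_policy)(const int ages[], int nframes);

    /* Mechanism: evict whatever frame the chosen policy selects. */
    static int evict_one(const int ages[], int nframes, victim_policy choose) {
        int victim = choose(ages, nframes);
        printf("evicting frame %d\n", victim);
        return victim;
    }

    /* One possible policy: least recently used (largest age). */
    static int lru_policy(const int ages[], int nframes) {
        int i, victim = 0;
        for (i = 1; i < nframes; i++)
            if (ages[i] > ages[victim])
                victim = i;
        return victim;
    }

    int main(void) {
        int ages[NFRAMES] = { 3, 9, 1, 5 };    /* ticks since last use */
        evict_one(ages, NFRAMES, lru_policy);  /* a different policy can be
                                                  swapped in without touching
                                                  the mechanism */
        return 0;
    }

Keeping the two apart in this way is what lets an OS personality or subsystem supply its own policies on top of generic kernel mechanisms.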

A goal of the tutorial is to place the CHORUS design approach in the context of the evolution of OS design over the last 5-10 years and, perhaps, the next 5-10. After a brief comparison with some other microkernel-based operating systems (both research and industrial), we will attempt to identify and evaluate some of the current trends in microkernel design. Lessons learned in the past, especially those involving the tradeoffs among performance, compatibility, and software engineering, are useful indicators of future research and product directions.

Specific topics include:

Jim Lipkis has been a senior engineer and architect at Chorus Systemes for the last five years, and has spent a fair amount of that time giving talks and teaching courses on CHORUS. He has worked in various areas of parallel operating system and programming language design at Chorus and, previously, at the Ultracomputer Lab at New York University.

Distributed Shared Memory: Principles, Practices, and Packages (PM2)

Instructor: John Carter, University of Utah

Intended Audience: Programmers, managers, and researchers involved in the design and implementation of applications for parallel or distributed systems; software tools to aid in the writing of parallel applications; or hardware support for scalable shared memory multiprocessing. Participants should be familiar with basic operating system, memory management, and networking concepts, but expertise is not required.

A distributed shared memory (DSM) system allows shared memory parallel programs to be executed on distributed memory multiprocessors (both dedicated multiprocessors such as the CM-5 and networks of workstations). The message passing systems that are traditionally used to program these machines (e.g., PVM and p4) force programmers to specify all of the communication and synchronization requirements of the program, complicating the already difficult task of writing parallel programs. A DSM system, on the other hand, supports the abstraction of a single shared address space spanning the processors of a distributed memory multiprocessor, which simplifies the task of programming these machines and allows parallel programs to be ported easily. The challenge of building a DSM system is to achieve performance comparable to message passing over a wide range of shared memory programs, which translates to reducing the amount of communication of DSM programs to that performed by equivalent message passing programs.
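
To make the contrast concrete, the sketch below shows the shared-memory style of a parallel sum on a single machine, using fork and an anonymous shared mapping; a DSM system of the kind the tutorial covers provides the same shared-array view across a network, which this illustration does not attempt. Where message passing would require each worker to marshal and send its partial result, here a worker simply stores into shared data:

    /* Single-machine illustration of the shared-memory model that a DSM
     * system extends across a network.  Not DSM itself. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4
    #define N        1000

    int main(void) {
        /* "Shared memory": all processes see the same pages.  Under a
         * DSM system the same array could span machines. */
        long *partial = mmap(NULL, NWORKERS * sizeof(long),
                             PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        long total = 0;
        int w, i;

        for (w = 0; w < NWORKERS; w++) {
            if (fork() == 0) {              /* worker w */
                long sum = 0;
                for (i = w; i < N; i += NWORKERS)
                    sum += i;
                partial[w] = sum;           /* just a store into shared data;
                                               message passing would need an
                                               explicit send of sum here */
                _exit(0);
            }
        }
        for (w = 0; w < NWORKERS; w++)
            wait(NULL);                     /* crude barrier */
        for (w = 0; w < NWORKERS; w++)
            total += partial[w];
        printf("sum 0..%d = %ld\n", N - 1, total);
        return 0;
    }

Under a DSM system the workers could be processes on different machines, and the runtime would propagate the modified pages (or, with relaxed consistency, just the updates) between them.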

DSM is a hot research area, as evidenced by the number of DSM-related papers appearing at this year's OSDI conference. In addition, DSM is becoming an increasingly viable area for commercial use. This tutorial will summarize and synthesize the historical and current trends in DSM research, discuss the inherent strengths and weaknesses of DSM systems, compare a number of existing DSM packages, introduce the state of the art in scalable hardware distributed shared memory systems (e.g., the KSR-2 and Convex Exemplar), and explore future trends.

This tutorial will cover issues such as:

Participants should come away with a practical understanding of how to design, implement and use both hardware and software distributed shared memory systems.

Tutorial Slides (minus some of the graphs, to be incorporated soon).

John Carter is an Assistant Professor of Computer Science at the University of Utah, where he teaches the operating systems, advanced operating systems, and advanced networking classes. He has been instrumental in the design, implementation, and evaluation of the Munin and Quarks distributed shared memory systems, and has extensive hands-on experience both implementing and using real DSM systems. John is currently co-leading two ARPA-funded research projects that involve DSM. The first entails restructuring of the Mach operating system, part of which involves making DSM a fundamental operating system service used by both user applications and system services. The second project involves the design and implementation of a scalable shared-memory multiprocessor that incorporates many of the design features found in software DSM systems, including relaxed consistency and application driven consistency protocols.


The x-kernel: Operating System Support for High-Speed Networking (PM3)

Instructor: Larry Peterson, University of Arizona

Intended Audience: Programmers and managers involved in the design and implementation of software for high-speed networks. Participants should be familiar with basic networking and operating system concepts, but expertise is not required. People who have experience with Ethernet-based networks should come away with a practical understanding of how to implement protocols for higher-speed networks like FDDI and ATM.

This tutorial focuses on the design and implementation of network software running on the end hosts connected to high-speed networks. Using the x-kernel - a freely available object-oriented protocol implementation framework - as an example, the tutorial considers such issues as:

The tutorial discusses standard protocols such as TCP/IP and RPC, and describes the principles behind the design of emerging protocols for high-speed networks, with particular emphasis on ATM.
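
The key structural idea is that every protocol implements a uniform set of operations (open, push, pop, demux in the x-kernel literature) so that layers compose into a protocol graph. The C fragment below is only a schematic of that idea; the types and signatures are invented for this description and are not the x-kernel's actual interface:

    /* Schematic only: a uniform per-layer operations structure, in the
     * spirit of the x-kernel's protocol graph.  The types and signatures
     * are invented; this is not the real x-kernel API. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct msg      { const char *data; size_t len; } msg;
    typedef struct protocol protocol;

    struct protocol {
        const char *name;
        protocol   *below;                  /* next layer down on the send path */
        int       (*push)(protocol *self, msg *m);
    };

    /* A trivial layer: "add a header" (here, just log the call) and hand
     * the message to the layer below, as a real protocol's push would. */
    static int log_push(protocol *self, msg *m) {
        printf("%s: push %lu bytes\n", self->name, (unsigned long)m->len);
        return self->below ? self->below->push(self->below, m) : 0;
    }

    int main(void) {
        protocol eth = { "eth", NULL, log_push };
        protocol ip  = { "ip",  &eth, log_push };
        protocol tcp = { "tcp", &ip,  log_push };
        msg m = { "payload", 7 };
        return tcp.push(&tcp, &m);          /* an outgoing message descends
                                               the protocol graph */
    }

Because every layer presents the same operations, new protocols can be inserted into the graph without changing their neighbors, which is the composability the x-kernel framework is built around.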

After completing this tutorial, participants will be aware of the key issues in implementing network protocols for high-speed networks and will understand several techniques for addressing them. Note that although the tutorial uses the x-kernel as an example, the techniques described are widely applicable.

Larry Peterson is an Associate Professor of Computer Science at the University of Arizona, where he directs the Network Systems Research Group. He has been involved in the design and implementation of several software systems, including the x-kernel operating system, the Profile and Univers naming services, and the Psync communication protocol. His current research focuses on operating system support for high-speed networks. Larry works in both the operating system and networking research communities. He generally teaches the OS and networking classes at Arizona, is an Associate Editor of the ACM Transactions on Computer Systems and the IEEE/ACM Transactions on Networking, and has served on program committees for SOSP and SIGCOMM. Larry is also a member of the Internet's End-to-End research group.
