Research projects
Replay-based performance analysis of complex software systems
Performance analysis as it exists today often degrades to performance debugging: a time-consuming,
iterative, "measure and modify" procedure aimed at tuning the system. Analysis is used to provide hints about
performance bottlenecks, but not a general understanding of the system's performance.
Our work aims to provide an engineering foundation for the development of formal, precise, and computationally intensive methods of performance analysis. We split analysis into two stages. First, we capture the complete run-time state of the system by means of full-system deterministic replay. Then, we run the analysis offline on a complete copy of the original execution. This eliminates the major restrictions on run-time performance analysis: complexity, nondeterminism, and the lack of a global view of the entire system state. To enable performance analysis during replay, we extend traditional replay approaches with the ability to recreate a nonfunctional property of the execution: its performance.
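As an illustration only (not our actual implementation), the C sketch below shows one plausible shape for such a replay log: each record of a nondeterministic input also carries the cycle count at which it was delivered, so replay can reconstruct timing as well as function. All names and fields are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical log record: a replay log captures every nondeterministic
 * input (interrupts, DMA, timestamp reads).  To replay performance as
 * well as function, each record also carries the cycle count at which
 * the event was delivered in the original run. */
struct replay_event {
    uint64_t seq;          /* order of the event in the original run   */
    uint64_t tsc;          /* cycle counter at delivery (performance)  */
    uint32_t kind;         /* e.g. interrupt vector, I/O port read ... */
    uint64_t payload;      /* the nondeterministic value itself        */
};

/* During recording: append the event to the log. */
static void record_event(FILE *log, const struct replay_event *ev)
{
    fwrite(ev, sizeof *ev, 1, log);
}

/* During replay: fetch the next event; the replay engine injects it at
 * the same point in execution, and the offline analyzer is free to run
 * arbitrarily expensive measurements between events. */
static int replay_next(FILE *log, struct replay_event *ev)
{
    return fread(ev, sizeof *ev, 1, log) == 1;
}

int main(void)
{
    /* Trivial round trip over a one-event log, just to exercise the API. */
    struct replay_event ev = { 1, 123456789ULL, 0x21, 0x55 }, back;
    FILE *log = tmpfile();
    record_event(log, &ev);
    rewind(log);
    if (replay_next(log, &back))
        printf("replayed event %llu at tsc %llu\n",
               (unsigned long long)back.seq, (unsigned long long)back.tsc);
    fclose(log);
    return 0;
}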
Time-travel for distributed systems
Development of a replay debugging environment for distributed systems consisting of hundreds of machines. Our
time-travel system will run alongside an experiment launched in Emulab and capture the original execution of an entire
network of virtual machines. In case of an error, a researcher will be able to roll back execution of the network to a
point before the crash and inspect its state or debug it with a traditional step-by-step debugger. On each node we replay
the entire OS along with its running processes. (Poster [PDF]).
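For illustration, here is a toy C sketch of the coordinator-side idea, with entirely hypothetical names: every node keeps periodic checkpoints plus a replay log, and rolling the network back means restoring all nodes to a common checkpoint epoch and then replaying each node deterministically forward.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical coordinator-side view of time travel: each node keeps a
 * ring of periodic checkpoints plus a replay log; rolling the network
 * back means restoring every node to the same checkpoint epoch and then
 * replaying its log deterministically up to the instant of interest. */
#define MAX_EPOCHS 4

struct node {
    char name[16];
    uint64_t checkpoints[MAX_EPOCHS]; /* log offsets of saved epochs        */
    uint64_t replay_pos;              /* where deterministic replay resumes */
};

/* Roll every node back to `epoch`, then replay forward to `target`. */
static void rollback_network(struct node *nodes, int n,
                             int epoch, uint64_t target)
{
    for (int i = 0; i < n; i++) {
        nodes[i].replay_pos = nodes[i].checkpoints[epoch];
        printf("%s: restore epoch %d, replay %llu -> %llu\n",
               nodes[i].name, epoch,
               (unsigned long long)nodes[i].replay_pos,
               (unsigned long long)target);
    }
}

int main(void)
{
    struct node net[2] = {
        { "node0", { 0, 1000, 2000, 3000 }, 0 },
        { "node1", { 0,  900, 2100, 2900 }, 0 },
    };
    /* A crash observed after epoch 2: rewind both nodes and replay. */
    rollback_network(net, 2, 2, 2500);
    return 0;
}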
Stateful Swapout (true time-sharing for computer networks)
Emulab includes more than 350 PC nodes. Some 1500 users run experiments of 1-300 nodes on a first-come,
first-served basis. Demand far exceeds capacity, so Emulab provides a weak form of time-sharing. We would like
to schedule Emulab experiments much as an OS schedules its processes.
The Emulab scheduler could preempt
low-priority or idle experiments, save the full state of the entire network, and replace them with
other experiments. Swapout and swapin must be transparent to the
experiment and, except for scheduling delays, transparent to users. In the
context of Emulab, this means that we can preempt large, long-running
experiments without compromising the correctness of experimental results.
(Eurosys'09 paper [PDF],
Prashanth's thesis [PDF],
Poster [PDF]).
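To make the process analogy concrete, a toy victim-selection loop might look like the C sketch below. The policy, thresholds, and all names are hypothetical, not the actual Emulab scheduler.

#include <stdio.h>

/* Hypothetical scheduler core: experiments resemble processes, and
 * swapout resembles swapping a process to disk.  A victim is chosen
 * among idle or low-priority experiments, its full network state is
 * saved, and its nodes are reassigned to a queued experiment. */
struct experiment {
    const char *name;
    int nodes;        /* PCs held by this experiment       */
    int priority;     /* lower value = easier to preempt   */
    int idle_minutes; /* time since last observed activity */
};

/* Pick the most attractive victim: must be idle, then lowest priority. */
static struct experiment *pick_victim(struct experiment *e, int n)
{
    struct experiment *victim = NULL;
    for (int i = 0; i < n; i++) {
        if (e[i].idle_minutes < 30)       /* still active: skip */
            continue;
        if (!victim || e[i].priority < victim->priority)
            victim = &e[i];
    }
    return victim;
}

int main(void)
{
    struct experiment running[] = {
        { "dht-study",  120, 5, 240 },
        { "bgp-replay",  40, 9,  10 },
        { "worm-model",  75, 2,  95 },
    };
    struct experiment *v = pick_victim(running, 3);
    if (v)
        printf("swap out %s, freeing %d nodes\n", v->name, v->nodes);
    return 0;
}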
Some ideas I've never really started
Benchmarks Can Make Sense
Poster [PDF]
Operating Systems as Parallel Pipelines
Mainstream operating systems are starting to face the need to support
multi-core platforms. We argue that pipeline parallelism is one of the practical
ways to exploit the performance offered by multiple cores with minimal
changes to the existing software stack. Similar to the organization of hardware
processors, execution of a system can be structured as a set of pipeline
stages running in parallel on separate cores. The explicit data path defined by
the pipeline provides simple, isolated stage-to-stage synchronization,
controlled data sharing, and the possibility of developing communication
primitives that manage memory as a first-class communication medium.
(Poster [PDF]).
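A minimal sketch of the idea, assuming a two-stage pipeline connected by a single-producer/single-consumer ring so that the only synchronization is the pair of ring indices (all names hypothetical, not an actual implementation):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define RING 256

/* Work items flow from stage 1 to stage 2 through this ring; each index
 * is written by exactly one side, so no locks are needed. */
struct ring {
    int slots[RING];
    _Atomic unsigned head;   /* written by producer only */
    _Atomic unsigned tail;   /* written by consumer only */
};

static int ring_put(struct ring *r, int v)
{
    unsigned h = atomic_load_explicit(&r->head, memory_order_relaxed);
    if (h - atomic_load_explicit(&r->tail, memory_order_acquire) == RING)
        return 0;                       /* full */
    r->slots[h % RING] = v;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 1;
}

static int ring_get(struct ring *r, int *v)
{
    unsigned t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    if (t == atomic_load_explicit(&r->head, memory_order_acquire))
        return 0;                       /* empty */
    *v = r->slots[t % RING];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 1;
}

static struct ring stage_q;

/* Stage 1: produce work items (e.g. decoded requests) on one core. */
static void *stage1(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++)
        while (!ring_put(&stage_q, i))
            ;                           /* back-pressure: spin until space */
    return NULL;
}

/* Stage 2: consume and process items on another core. */
static void *stage2(void *arg)
{
    (void)arg;
    int v, sum = 0;
    for (int i = 0; i < 1000; i++) {
        while (!ring_get(&stage_q, &v))
            ;
        sum += v;
    }
    printf("stage 2 processed 1000 items, sum %d\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}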
Aggressive Server Consolidation through Pageable Virtual Machines
Historically, virtual machine monitors (VMMs) have
chosen isolation as a primary goal. This design choice
prohibits almost any form of resource sharing; the only
shared resource is the physical CPU. There are many situations, however, in which it would
be desirable to keep hundreds of VMs around.
We extend the Xen virtual machine monitor with the ability to host hundreds of virtual machines on a single physical node. Similarly to demand paging of virtual memory, we page out idle virtual machines, making them available on demand. Paging is transparent: an idle virtual machine remains fully operational and is able to respond to external events with a delay comparable to that under a medium load. (Poster [PDF]).
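For illustration, a toy C state machine of a pageable VM's life cycle (hypothetical names, not the actual Xen extension): an idle guest is evicted to disk, and an external event faults it back in on demand, much like a page fault.

#include <stdio.h>

/* Hypothetical life cycle of a pageable VM: an idle guest is paged out
 * to disk, but its network identity stays registered so that an
 * incoming packet faults the VM back in, much like a page fault. */
enum vm_state { VM_RUNNING, VM_PAGED_OUT };

struct vm {
    const char *name;
    enum vm_state state;
};

static void page_out(struct vm *vm)
{
    /* Save guest memory and device state to disk, release the RAM. */
    vm->state = VM_PAGED_OUT;
    printf("%s: paged out\n", vm->name);
}

static void deliver_packet(struct vm *vm)
{
    if (vm->state == VM_PAGED_OUT) {
        /* "VM fault": restore state on demand before delivery. */
        vm->state = VM_RUNNING;
        printf("%s: paged in on demand\n", vm->name);
    }
    printf("%s: packet delivered\n", vm->name);
}

int main(void)
{
    struct vm guest = { "vm042", VM_RUNNING };
    page_out(&guest);       /* idle detector decided to evict it       */
    deliver_packet(&guest); /* external event transparently revives it */
    return 0;
}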
Work done before joining the University of Utah
August - November 2004
PhD Practicum at the National ICT Australia, University of New South Wales
Supervision: Prof. Gernot Heiser, Dr. Ihor Kuz.
Implementation of the µITRON real-time embedded operating system on top of the L4 microkernel
µITRON is a specification of a real-time embedded operating system kernel. Minimalist
yet powerful, it became the de facto standard on the Japanese embedded systems market. Implementing a
µITRON-compatible kernel on top of L4 was a good test of both the completeness and the flexibility of the L4 API
and of its potential applicability to the development of embedded systems. (Informal report [PDF],
source code [.TGZ])
2000 - 2005
Postgraduate research
E1 distributed operating system
The E1 distributed operating system project aims to provide a reliable
software platform for distributed applications. The state and functionality of all operating system components
are encapsulated by distributed objects. A complete or partial copy of a distributed object's state is placed
on each network node where the object is used. Copy coherence is ensured by replication algorithms.
(Design paper [PDF]).
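As a toy illustration of the model (hypothetical names and a deliberately trivial coherence rule, not E1's actual replication protocol), a write to a replicated object updates the local copy and the coherence layer then propagates it to the other replicas:

#include <stdio.h>

/* Hypothetical distributed object: every node that uses the object
 * holds a local replica; a write updates the local copy and is then
 * propagated to the other replicas by the coherence protocol (here a
 * trivial broadcast of the newest version). */
#define NODES 3

struct replica {
    int node_id;
    int value;      /* the replicated state */
    int version;    /* coherence metadata   */
};

/* Apply a local write, then push it to every stale replica. */
static void object_write(struct replica *r, int nodes, int writer, int v)
{
    r[writer].value = v;
    r[writer].version++;
    for (int i = 0; i < nodes; i++)
        if (i != writer && r[i].version < r[writer].version) {
            r[i].value = r[writer].value;
            r[i].version = r[writer].version;
        }
}

int main(void)
{
    struct replica obj[NODES] = { {0, 0, 0}, {1, 0, 0}, {2, 0, 0} };
    object_write(obj, NODES, 1, 42);   /* node 1 updates the object */
    for (int i = 0; i < NODES; i++)
        printf("node %d sees value %d (v%d)\n",
               obj[i].node_id, obj[i].value, obj[i].version);
    return 0;
}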
1999 - 2000
Undergraduate research
Agent based operating system
This research investigated the applicability of an agent-oriented programming methodology to the construction of operating systems capable of operating in complex, dynamic environments by relying on an intensional logic. All operating system components were treated as intelligent software agents cooperating to achieve common goals.
Updated: July, 2010