Each talk is scheduled for 20 minutes, with a 5-minute interval for questions.
We are still finalizing our schedule. Thank you for your patience.
Existing frameworks for programming non-volatile memory (NVM) are hard to use: they require programmers to specify which data structures or objects should reside in NVM. This requirement increases the burden on programmers and introduces opportunities for bugs.
We propose to dramatically change the status quo by asking programmers to only identify durable roots -- namely, the entry points to the persistent data structures at recovery time. The AutoPersist programming framework then automatically ensures that all the data structures reachable from these roots are in NVM, and that stores to these data structures are persistently completed in an intuitive order. We demonstrate that AutoPersist requires minimal code modifications, and significantly outperforms expert-marked NVM applications.
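To illustrate the durable-roots idea, here is a minimal, self-contained Java sketch. The `@DurableRoot` annotation below is hypothetical, defined locally for illustration only; AutoPersist's actual annotation and API may differ. The point is the shape of the programming model: the programmer marks only the entry point, and the framework is responsible for persisting everything reachable from it, with no explicit flushes or fences in application code.

```java
import java.lang.annotation.*;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a durable-root marker annotation;
// the real AutoPersist marker may have a different name and form.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface DurableRoot {}

public class KvStore {
    // Only the entry point is marked. Under the durable-roots model, the
    // framework would ensure the map, its entries, and the strings they
    // reference all reside in NVM and that stores persist in order.
    @DurableRoot
    static Map<String, String> root = new HashMap<>();

    public static void main(String[] args) {
        root.put("user:1", "alice");  // an ordinary store; no manual flush/fence
        System.out.println(root.get("user:1"));
    }
}
```

In this sketch the code runs as plain Java; the annotation has no effect without the framework, which is exactly the appeal of the approach: recovery-oblivious code stays almost unchanged.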
This work is in collaboration with Tom Shull and Jian Huang.
Modern computing systems employ numerous tunable settings that affect performance. These settings reside in their processors, firmware, application frameworks, applications, and even in the choice of compiler flags. Tailoring these settings to the currently applied workload may significantly improve the workload's performance. However, what used to be easy to tune manually with only a handful of settings is now impossible with today's hundreds of tunables. Moreover, applications have different phases, and each phase may have a different optimal system configuration. In this talk, the concepts of automatic static and dynamic real-time tuning will be presented, along with case studies of using Concertio's tuning tools at Intel, Marvell, and Mellanox.
This talk will cover the design decisions that drove significant performance improvements in the new generation of the Neoverse N1 platform for cloud-native workloads. These improvements have been driven by our work on characterizing software behavior on Arm systems, with particular attention to runtime environments. Compared against Cortex-A72 systems, these efforts yielded an impressive 70% performance uplift.
Alibaba Dragonwell JDK aims at a Java runtime scalable in the cloud, supporting a variety of application scenarios covering e-commerce, logistics and e-finance. In this talk, we will describe the challenges in scaling different mixes of applications in the cloud and the solutions we are introducing in Alibaba Dragonwell JDK. We will describe some techniques in dynamic AOT code deployment and elastic memory management that might be applicable to other managed runtime systems in the cloud.
Implementations of application workloads in the cloud can range from monolithic processes on a dedicated server to microservices running in containers that themselves run in VMs. In this diverse environment, the 'performance envelope' in terms of the number of cores and available memory varies dramatically.
Microsoft's .NET runtime powers the full spectrum of workloads, including Microsoft Bing, Microsoft Exchange, AWS Lambda, Azure Functions, and more. In this talk we will cover how the .NET runtime dynamically adjusts itself to support this diverse environment with minimal effort from the developer. We will cover our most recent work on memory-limited scenarios in a Docker container. We will also discuss the challenges with scaling up the number of cores.