Workshop on MLIR for HPC

October 21, 2019, Georgia Institute of Technology, Atlanta, GA

Rooms 1123 and 1116, Klaus Advanced Computing Building

Held in conjunction with International Workshop on Languages and Compilers for Parallel Computing (LCPC 2019)

Call for Participation

Migrating compiler and programming systems research for HPC into practice has always been difficult, due to the complexity of this technology, the continuous evolution of HPC programming languages such as C++ and Fortran, the relatively small HPC market, and capability gaps between open-source compilers such as Clang/LLVM and hardware vendor compilers. If we look at where industry is making extensive investments in compiler technology, it is in deep learning applications. There is significant overlap between the requirements of deep learning and HPC applications: (1) abundant parallelism; (2) large data sets demanding optimizations to manage data movement; (3) a diversity of target architectures; and (4) the need for scalability.

Among the efforts focused on deep learning compilers, of particular interest is Google's recent introduction of the MLIR intermediate representation. MLIR, part of the Google TensorFlow framework, can be lowered to LLVM IR, making it compatible with a widely used open-source compiler ecosystem. A key idea in MLIR is a set of higher-level abstractions (e.g., tensors) that permit MLIR to perform the array and loop optimizations common to parallelizing compilers more naturally than at the C-like IR abstraction offered by LLVM. Google has recently committed to contributing MLIR to the LLVM Foundation (see link). At present, there are significant gaps in MLIR's capability, but because it is new, this is an ideal time to envision how it might support HPC applications in the future.
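To illustrate the level of abstraction in question, the following is a sketch of a SAXPY loop written in MLIR's affine dialect (syntax as of the 2019 releases; details have evolved since, so treat this as illustrative rather than definitive). The loop bounds and array accesses remain explicit affine expressions, which is what lets transformations such as tiling, fusion, and parallelization be applied directly, rather than being recovered from pointer arithmetic as in LLVM IR:

```mlir
// Sketch: y[i] = a * x[i] + y[i] in MLIR's affine dialect.
// The affine.for op keeps the loop structure and affine access
// functions first-class, enabling classical loop optimizations.
func @saxpy(%a: f32, %x: memref<1024xf32>, %y: memref<1024xf32>) {
  affine.for %i = 0 to 1024 {
    %xi = affine.load %x[%i] : memref<1024xf32>
    %yi = affine.load %y[%i] : memref<1024xf32>
    %p  = mulf %a, %xi : f32
    %s  = addf %p, %yi : f32
    affine.store %s, %y[%i] : memref<1024xf32>
  }
  return
}
```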

This workshop will gather researchers from the LCPC community interested in advancing the availability of state-of-the-art compiler technology for parallel computing in open-source compilers. The first part of the workshop will be a series of brief presentations in which participants highlight their experiences and identify requirements for MLIR to support HPC applications. In the second part of the workshop, the participants will outline a path forward.

Final Program

Time       Title                                                              Speaker
08:45 AM   Welcome and Workshop Plan                                          Mary Hall, University of Utah
09:00 AM   LLVM/MLIR in Exascale Computing Project                            Hal Finkel, Argonne
09:15 AM   MLIR Tutorial                                                      Jacques Pienaar, Google; Sana Damani, Georgia Tech
10:15 AM   BREAK
10:40 AM   Loop Nest Optimization, Parallelization and Acceleration in MLIR   Albert Cohen, Google
11:00 AM   On Using MLIR to Optimize Tensor Contractions                      Rui Li & P. Sadayappan, U. Utah; Atanas Rountev, Ohio State
11:20 AM   Compiler-Managed Cache on Commodity Hardware                       Chen Ding, University of Rochester
11:40 AM   Accelerating Climate Modeling: GPU Mapping for Stencil Graphs      Jean-Michel Gorius & Tobias Wicky, ENS Rennes & ETH
12:00 PM   MLIR for Fortran                                                   Vivek Sarkar, Georgia Tech; Nelson Amaral, U. Alberta
12:20 PM   LUNCH
01:30 PM   Flash Talks                                                        Student Poster Presenters
01:45 PM   Breakout Groups
03:15 PM   BREAK and Student Posters
03:45 PM   Breakout Group Reports                                             Scribes
04:45 PM   Panel Discussion on Breakout Groups                                Scribes
05:30 PM   Adjourn

Organizing Committee

Albert Cohen, Google

Uday Bondhugula, Indian Institute of Science

Tobias Grosser, ETH

Mary Hall, University of Utah

Santosh Pande, Georgia Tech

P. Sadayappan, University of Utah

V. Sarkar, Georgia Tech

Michelle Strout, University of Arizona

Reid Tatge, Google

Contact

Submit a Talk or Poster Abstract

To give a 20-minute talk, or to present a poster and give a 5-minute flash talk, at the workshop, please fill out the submission form.

Submission deadline (Extended): Sept. 27, 2019

Register to Attend

To register your attendance so that we get an accurate head count, please fill out the registration form.