The Journal of Instruction-Level Parallelism

3rd JILP Workshop on Computer Architecture Competitions (JWAC-3):
Memory Scheduling Championship (MSC)

June 9, 2012, in conjunction with ISCA-39


Final Program

Final Results


The Workshop on Computer Architecture Competitions is a forum for holding competitions that evaluate computer architecture research topics. The third workshop is organized around a competition for memory scheduling algorithms: the Memory Scheduling Championship (MSC) invites contestants to submit their memory scheduling code. Contestants must develop algorithms that optimize multiple metrics on a common evaluation framework provided by the organizing committee.

Objective
The goal of this competition is to compare different memory scheduling algorithms in a common framework. Algorithms will be evaluated in three tracks: (1) Delay (or Performance), (2) Energy-Delay Product (EDP), and (3) Performance-Fairness Product (PFP).

Prizes
The best entry in each track will receive a trophy commemorating the triumph (or some other prize to be determined later). Top submissions will be invited to present at the workshop, where results will be announced. All source code, write-ups, and results will be made publicly available through the JWAC-3 website. Authors of accepted workshop papers will be invited to submit full papers for possible inclusion in a special issue of the Journal of Instruction-Level Parallelism (JILP). Inclusion in the special issue will depend on the outcome of JILP's peer-review process: invited papers will be held to the same standard as regular submissions.

Submission Requirements
Each contestant is allowed a maximum of three submissions to the competition. Each submission should include the following:

A paper of at most six pages (double-column, single-spaced) describing the scheduling algorithm and analyzing its results on the common framework.
The scheduler.c and scheduler.h source files implementing the algorithm in the distributed framework.

Competition Rules
The competition will proceed as follows. Contestants are responsible for implementing and evaluating their algorithm in the distributed framework. Submissions will be compiled and run with the original version of the framework. Quantitatively assessing the cost/complexity of auxiliary structures is difficult; to simplify the review process, maximize transparency, and minimize the role of subjectivity in selecting a champion, MSC will make no attempt to assess the cost/complexity of scheduling algorithms. Instead, contestants have a simulated hardware storage budget of (64K + 4K) bytes, i.e., a total of 68 kilobytes, or 69632 bytes. Contestants must also justify that their algorithm can be implemented by the memory controller within a few processor cycles. Clear documentation, in the code as well as the paper write-up, must be provided to establish that this is the case.
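One straightforward way to document compliance with the storage budget is to gather all auxiliary scheduler state into a single structure and bound its size at compile time. The sketch below shows the idea in C; the structure fields are hypothetical placeholders, and only the 69632-byte limit comes from the rules above.

    /* Sketch: bounding auxiliary scheduler state against the MSC budget.
     * The fields are hypothetical examples; the 69632-byte (64K + 4K)
     * limit is the budget defined by the competition rules. */
    #include <stdint.h>

    #define MSC_STORAGE_BUDGET_BYTES 69632  /* (64K + 4K) bytes */

    struct sched_state {
        uint32_t row_hit_count[4][8];       /* hypothetical per-bank counters   */
        uint16_t thread_priority[16];       /* hypothetical per-thread priority */
        uint64_t last_activate_cycle[4][8]; /* hypothetical per-bank timestamps */
    };

    /* Compile-time check (works in C89 and later): the array size becomes
     * negative, and compilation fails, if the state exceeds the budget. */
    typedef char msc_budget_check[
        (sizeof(struct sched_state) <= MSC_STORAGE_BUDGET_BYTES) ? 1 : -1];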

Evaluation Metric
A given scheduling algorithm will be executed on a suite of single- and multi-programmed workloads, with two different memory system configurations. The metric for each track will be computed across all simulated configurations, with one exception: the PFP metric is computed only over the multi-programmed configurations. Delay is measured as the sum of execution times of all involved programs in all simulations. EDP is measured as the sum of EDPs from each simulation, where a simulation's EDP is the product of the system energy for that simulation and the delay to finish the last program in that workload. For each multi-programmed experiment, we will compute the slowdown of each program relative to its single-thread execution; the fairness metric for that experiment is the ratio of the min slowdown to the max slowdown (a number between 0 and 1, with 1 being perfectly fair); the PFP metric is derived by dividing the average fairness across all multi-programmed workloads by the sum of delays of all involved programs. ** We will likely change the fairness metric for an experiment to be the slowdown of the program that suffers the maximum slowdown, i.e., the winner of this track has to minimize the product of max-slowdown and sum of delays of all involved programs. **
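For concreteness, the track metrics described above can be written as follows, where D_{s,i} is the execution time of program i in simulation s, D_i^single is its single-threaded execution time, and E_s is the system energy of simulation s (this notation is ours, not the framework's):

    % Delay: sum of execution times over all programs in all simulations.
    \mathrm{Delay} = \sum_{s} \sum_{i \in s} D_{s,i}

    % EDP: per-simulation system energy times the finish time of the
    % last program in that workload.
    \mathrm{EDP} = \sum_{s} E_s \cdot \max_{i \in s} D_{s,i}

    % Slowdown and fairness, over multi-programmed simulations only.
    \mathrm{slowdown}_{s,i} = D_{s,i} / D_i^{\mathrm{single}}, \qquad
    \mathrm{fairness}_s = \frac{\min_i \mathrm{slowdown}_{s,i}}{\max_i \mathrm{slowdown}_{s,i}}

    % PFP: average fairness divided by the total delay.
    \mathrm{PFP} = \frac{\mathrm{avg}_s\, \mathrm{fairness}_s}{\sum_{s} \sum_{i \in s} D_{s,i}}

    % Under the revised metric flagged in the note above, fairness_s would
    % instead be max_i slowdown_{s,i}, and the winner minimizes the product
    % of that max-slowdown and the sum of delays.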

Acceptance Criteria
In the interest of assembling a quality program for workshop attendees and future readers, there will be an overall selection process, of which ranking (in any metric) is the primary component. To be considered, submissions must conform to the submission requirements described above. Submissions will be selected to appear in the workshop on the basis of the ranking, novelty, and overall quality of the paper and commented code. Novelty is not a strict requirement; for example, a contestant may submit a previously published design or make incremental enhancements to a previously proposed design. In such cases, ranking is a heavily weighted criterion, as is the overall quality of the paper (for example, analysis of new results on the common framework).

Simulation Infrastructure
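For orientation, the sketch below shows the shape of a minimal first-come-first-served scheduler that issues the oldest issuable read on each channel. All interface names used here (schedule, init_scheduler_vars, request_t, read_queue_head, command_issuable, issue_request_command) are assumptions about a USIMM-style framework, not definitions from this page; consult scheduler.h in the distributed release.

    /* Sketch of a minimal FCFS scheduler. Every identifier below that
     * refers to the framework (request_t, read_queue_head,
     * command_issuable, issue_request_command) is an assumption about
     * the distributed code, not something defined in this announcement. */
    #include "scheduler.h"

    void init_scheduler_vars(void)
    {
        /* Plain FCFS keeps no auxiliary state. */
    }

    void schedule(int channel)
    {
        request_t *req;

        /* Walk the read queue in arrival order and issue the first
         * request whose next DRAM command is legal this cycle. */
        for (req = read_queue_head[channel]; req != NULL; req = req->next) {
            if (req->command_issuable) {
                issue_request_command(req);
                return;
            }
        }
    }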

Submission
To submit, register and upload your paper PDF on the submission site. See the paper formatting guidelines above (six pages, double-column, single-spaced). Blind submissions are allowed but not required. Also email your scheduler.c and scheduler.h files to nil@cs.utah.edu, indicating your paper ID in the email.

Important Dates
Evaluation framework available: February 20, 2012
Submissions due: Tuesday April 24, 2012, 9pm PDT
Acceptance notification: May 1, 2012
Final version due (code and report): May 15, 2012
Results announced: at workshop (June 9, 2012)

Committees

Organizing Committee: Rajeev Balasubramonian (Univ. of Utah), Niladrish Chatterjee (Univ. of Utah), Zeshan Chishti (Intel)

Steering Committee: Alaa R. Alameldeen (Intel), Eric Rotenberg (NC State)

Program Committee:
Howard David, Intel
Hillery Hunter, IBM
Engin Ipek, Univ. of Rochester
Naveen Muralimanohar, HP Labs
Onur Mutlu, CMU
Mike O'Connor, AMD
Lixin Zhang, ICT/CAS
Zhao Zhang, Iowa State

Submission Chair: Manjunath Shevgoor (Univ. of Utah)