Reproducibility initiative at ACM ASPLOS'20 (the 1st artifact evaluation)

Results

Important dates

Paper decision: November 20, 2019
Artifact submission: December 4, 2019
Artifact decision: January 15, 2020
Camera-ready paper: January 20, 2020
Conference: March 16-20, 2020

Reproducibility chairs

Motivation

Authors of accepted ASPLOS'20 papers are invited to formally submit their supporting materials (code, data, models, workflows, results) to the Artifact Evaluation (AE) process. AE is run by a separate committee whose task is to assess how the submitted artifacts support the work described in the accepted papers and to reproduce at least some of the experiments. This submission is voluntary and will not influence the final decision regarding the papers.

Since a full validation of computer architecture experiments is not always trivial and may require expensive computational resources, we use a multistage Artifact Evaluation: at ASPLOS'20 we will validate only the "availability" and "functionality/reusability" of submitted artifacts. Depending on the evaluation results, camera-ready papers will include the Artifact Appendix and will receive at most two ACM stamps of approval printed on the first page (note that authors still need to provide a small sample data set to test the functionality of their artifacts):

[ACM "Artifacts Available" badge] or [ACM "Artifacts Evaluated – Functional / Reusable" badge]

Based on the successful ASPLOS-ReQuEST'19 experience, we think that the second AE stage could be a special reproducibility session or an open tournament at the next conference, performing a full validation of the experimental results from the above papers with artifacts. This is still under discussion, so feel free to provide your feedback to the ASPLOS AE chairs!

Artifact Evaluation Committee

  • Maaz Bin Safeer Ahmad (University of Washington)
  • Ismail Akturk (University of Missouri, Columbia)
  • Basavesh Ammanaghatta Shivakumar (Purdue University)
  • Vinay Banakar (Hewlett Packard Labs)
  • Gilbert Bernstein (UC Berkeley)
  • Abhishek Bhattacharyya (U. Wisconsin)
  • James Bornholt (University of Texas at Austin)
  • Rangeen Basu Roy Chowdhury (Intel Corporation)
  • Tapan Chugh (University of Washington)
  • Basile Clément (École Normale Supérieure and Inria)
  • Alexei Colin (USC Information Sciences Institute)
  • Weilong Cui (Google)
  • Deeksha Dangwal (University of California, Santa Barbara)
  • Davide Del Vento (UCAR)
  • Murali Emani (Argonne National Laboratory)
  • Vitor Enes (HASLab / INESC TEC and Universidade do Minho)
  • Haggai Eran (Technion - Israel Institute of Technology & Mellanox Technologies)
  • Ata Fatahi (Penn State University)
  • Swapnil Gandhi (Indian Institute of Science (IISc))
  • Hervé Guillou (CodeReef & cTuning foundation)
  • Faruk Guvenilir (Microsoft, The University of Texas at Austin)
  • Bastian Hagedorn (University of Münster, Germany)
  • Kartik Hegde (UIUC)
  • Qijing Huang (UC Berkeley)
  • Sergio Iserte (Universitat Jaume I)
  • Shehbaz Jaffer (University of Toronto)
  • Tanvir Ahmed Khan (University of Michigan)
  • Sung Kim (University of Michigan)
  • Marios Kogias (EPFL)
  • Iacovos G. Kolokasis (University of Crete and FORTH-ICS)
  • Tzu-Mao Li (UC Berkeley)
  • Chien-Yu Lin (U. Washington)
  • Hongyuan Liu (College of William and Mary)
  • Sihang Liu (University of Virginia)
  • Joseph McMahan (University of Washington)
  • Fatemeh Mireshghallah (University of California-San Diego)
  • Thierry Moreau (University of Washington)
  • Chandrakana Nandi (University of Washington, Seattle)
  • Mohammad Nasirifar (U. Toronto)
  • Asmita Pal (University of Wisconsin-Madison)
  • Pratyush Patel (University of Washington)
  • Arash Pourhabibi (EPFL)
  • Thamir Qadah (Purdue University, West Lafayette)
  • Alexander Reinking (University of California Berkeley)
  • Emily Ruppel (Carnegie Mellon University)
  • Gururaj Saileshwar (Georgia Institute of Technology)
  • Solmaz Salimi (Sharif University of Technology (SUT))
  • Gus Smith (U. Washington)
  • Linghao Song (Duke University)
  • Akshitha Sriraman (University of Michigan)
  • Tom St. John (Tesla)
  • Pengfei Su (College of William and Mary)
  • Arun Subramaniyan (University of Michigan)
  • Mark Sutherland (EPFL)
  • Iman Tabrizian (University of Toronto)
  • Dmitrii Ustiugov (EPFL)
  • Jyothi Vedurada (IIT Madras)
  • Di Wu (Department of ECE, University of Wisconsin-Madison)
  • Xi Yang (University of Sydney)
  • Felippe Zacarias (UPC/BSC)
  • Rui Zhang (Ohio State University)
  • Fang Zhou (Ohio State University)

Public discussion

We plan to organize an open session at ASPLOS to discuss the artifact evaluation results and a common methodology for performing a full validation and comparison of computer architecture experiments (see the related SIGARCH blog posts "A Checklist Manifesto for Empirical Evaluation: A Preemptive Strike Against a Replication Crisis in Computer Science" and "Artifact Evaluation for Reproducible Quantitative Research").

Artifact submission

Prepare your submission and the Artifact Appendix using the submission guidelines and register it at the ASPLOS'20 AE website. Your submission will then be reviewed according to the reviewing guidelines. Please do not forget to provide a list of hardware, software, benchmark, and data set dependencies in your artifact abstract - this is essential for finding appropriate evaluators!
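For illustration only (this is a hypothetical excerpt, not part of the official ASPLOS'20 guidelines or any actual submission), a dependency list in an artifact abstract might look as follows:

  • Hardware: x86-64 server with 32 GB RAM; optionally a CUDA-capable GPU
  • Software: Ubuntu 18.04, GCC 7 or later, Python 3.6, CMake 3.10
  • Benchmarks: SPEC CPU2017 (license required) or the included sample workloads
  • Data set: a small sample data set (provided with the artifact) sufficient to test functionality

Listing dependencies at this level of detail helps the AE chairs match your artifact with evaluators who have access to the required hardware and software.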

Papers that successfully go through AE will receive a set of ACM badges of approval printed on the papers themselves and available as meta information in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will have the option to include up to two pages of their Artifact Appendix in the camera-ready paper.

At the end of the process, we will let you know how to add the badges to your camera-ready paper.

Questions and feedback

Please check the AE FAQ and feel free to ask questions or share your feedback and suggestions via the dedicated AE discussion group.