Help others to build upon the contributions of your paper!

The Artifact Evaluation process is a service provided by the community to help authors of accepted papers provide more substantial supplements to their papers so future researchers can more effectively build on and compare with previous work.

Authors of papers that pass Round 1 of PACMPL (OOPSLA) will be invited to submit an artifact that supports the conclusions of their paper. The AEC will read the paper and explore the artifact to give feedback about how well the artifact supports the paper and how easy it is for future researchers to use the artifact.

This submission is voluntary. Papers that go through the Artifact Evaluation process successfully will receive a seal of approval printed on the first page of the paper. Authors of papers with accepted artifacts are encouraged to make these materials publicly available upon publication of the proceedings, by including them as “source materials” in the ACM Digital Library.

Artifacts Evaluated: Functional and/or Reusable

  • A Derivation Framework for Dependent Security Label Inference
  • Automatic Diagnosis and Correction of Logical Errors for Functional Programming Assignments
  • Bidirectional Evaluation with Direct Manipulation
  • Collapsible Contracts: Pruning Pathological Performance for Sound Gradual Typing
  • Concurrency-aware Object-oriented Programming with Roles
  • Conflict Resolution for Structured Merge via Version Space Algebra
  • Cross-Component Garbage Collection
  • Distributed System Development with ScalaLoci
  • Empowering Union and Intersection Types with Integrated Subtyping
  • ExceLint: Automatically Finding Spreadsheet Formula Errors
  • Finding Broken Promises in Asynchronous JavaScript Programs
  • Finding Code That Explodes Under Symbolic Evaluation (Distinguished Artifact)
  • FlashProfile: A Framework for Synthesizing Data Profiles
  • goSLP: Globally Optimized Superword Level Parallelism Framework
  • Gradual Liquid Type Inference
  • GraphIt - A High-Performance Graph DSL
  • Horn-ICE Learning for Synthesizing Invariants and Contracts
  • Identifying Refactoring Opportunities for Replacing Type Code with Subclass and State
  • Incrementalizing Lattice-Based Program Analyses
  • Julia Subtyping: a Rational Reconstruction
  • Leto: Verifying Application-Specific Fault Tolerance via First-Class Execution Models
  • NVM ReConstruction: Object-Oriented Recovery for Non-Volatile Memory
  • One Tool, Many Languages: Language-Parametric Transformation with Incremental Parametric Syntax
  • Optimal Stateless Model Checking under the Release-Acquire Semantics
  • Precise and Scalable Points-to Analysis via Data-Driven Context Tunneling
  • Precision-Guided Context Sensitivity for Pointer Analysis
  • Reconciling High-level Optimizations and Low-level Code in LLVM
  • Scopes as Types
  • Sound Deadlock Prediction
  • Speeding up Symbolic Reasoning for Relational Queries
  • Test Generation for Higher-Order Functions in Dynamic Languages
  • The Root Cause of Blame: Contracts for Intersection and Union Types
  • Thread-Safe Reactive Programming
  • Unified Sparse Formats for Tensor Algebra Compilers
  • Virtual Machine Design for Parallel Dynamic Programming Languages (Distinguished Artifact)

Call for Artifacts

This process was inspired by the ECOOP 2013 artifact evaluation process by Jan Vitek, Erik Ernst, and Shriram Krishnamurthi.

Selection Criteria

The artifact is evaluated in relation to the expectations set by the paper. Thus, in addition to just running the artifact, the evaluators will read the paper and may try to tweak provided inputs or otherwise slightly generalize the use of the artifact from the paper in order to test the artifact’s limits.

Artifacts should be:

  • consistent with the paper,
  • as complete as possible,
  • well documented, and
  • easy to reuse, facilitating further research.

The AEC strives to place itself in the shoes of such future researchers and then to ask: how much would this artifact have helped me?

Submission Process

If your paper makes it past Round 1 of the review process, the AEC chairs will contact you with submission instructions.

Your submission should consist of three pieces: an overview of your artifact, a URL pointing to a single file containing the artifact, and an md5 hash of that file (use the md5 or md5sum command-line tool to generate the hash). The URL must be a Google Drive or Dropbox URL, to help protect the anonymity of the reviewers. You may upload your artifact directly if it’s less than 15 MB.
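
If md5 or md5sum is not available on your platform, any equivalent tool or script that produces the MD5 hex digest of the archive is fine. As a minimal sketch (the file name artifact.zip below is only a placeholder for your actual archive), the digest can be computed with Python's standard library:

    import hashlib

    def md5_of_file(path, chunk_size=1 << 20):
        """Return the MD5 hex digest of a file, reading it in 1 MB chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "artifact.zip" is only a placeholder; use your actual archive name.
    print(md5_of_file("artifact.zip"))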

Overview of the Artifact

Your overview should consist of two parts:

  • a Getting Started Guide and
  • Step-by-Step Instructions for how you propose to evaluate your artifact (with appropriate connections to the relevant sections of your paper).

The Getting Started Guide should contain setup instructions (including, for example, a pointer to the VM player software, its version, passwords if needed, etc.) and basic testing of your artifact that you expect a reviewer to be able to complete in 30 minutes. Reviewers will follow all the steps in the guide during an initial kick-the-tires phase. The Getting Started Guide should be as simple as possible, and yet it should stress the key elements of your artifact. Anyone who has followed the Getting Started Guide should have no technical difficulties with the rest of your artifact.

The Step-by-Step Instructions explain how to reproduce any experiments or other activities that support the conclusions in your paper. Write this for readers who have a deep interest in your work and are studying it to improve it or compare against it. If your artifact runs for more than a few minutes, point this out and explain how to run it on smaller inputs.

Where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files expected to be generated by your tool on the given inputs); if there are warnings that are safe to be ignored, explain which ones they are.

The artifact’s documentation should include the following:

  • A list of claims from the paper supported by the artifact, and how/why.
  • A list of claims from the paper not supported by the artifact, and how/why.

Examples: performance claims cannot be reproduced in a VM, the authors are not allowed to redistribute specific benchmarks, etc. Artifact reviewers can then center their reviews and evaluation around these specific claims.

Packaging the Artifact

When packaging your artifact, please keep in mind (a) how accessible you are making your artifact to other researchers, and (b) the fact that AEC members have limited time in which to assess each artifact.

Your artifact can contain a bootable virtual machine image with all of the necessary libraries installed. Using a virtual machine provides a way to make an easily reproducible environment — it is less susceptible to bit rot. It also helps the AEC have confidence that errors or other problems cannot cause harm to their machines.

You should make your artifact available as a single archive file and use the naming convention <paper #>.<suffix>, where the appropriate suffix is used for the given archive format. Please use a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2). Please use open formats for documents.
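
As an illustration of this convention (the paper number 123 and the directory name artifact below are placeholders, not prescribed values), a tar-and-gzip archive can be created with Python's standard library:

    import tarfile

    # Placeholders: replace "123" with your paper number and "artifact"
    # with the directory that contains your artifact.
    with tarfile.open("123.tgz", "w:gz") as archive:
        archive.add("artifact", arcname="artifact")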

COI

Conflicts of interest for AEC members are handled by the chairs. Conflicts of interest involving one of the two AEC chairs are handled by the other AEC chair, or by the PC of the conference if both chairs are conflicted. To be validated, such artifacts must be unambiguously accepted, and they may not be considered for the Distinguished Artifact award.