ASE 2021
Mon 15 - Fri 19 November 2021 Melbourne, Australia

Call for Papers

Authors of papers accepted to any track of ASE 2021 are invited to submit the artifacts associated with those papers to the ASE Artifact Track for evaluation as candidate reusable, available, replicated, or reproduced artifacts. Each accepted artifact will receive one (and only one) of the badges below, displayed on the front page of the authors’ paper and in the proceedings.

Also, authors of any prior SE work (published at ASE or elsewhere) are invited to submit an artifact to the ASE Artifact Track for evaluation as a candidate replicated or reproduced artifact. If the artifact is accepted:

  • Authors will be invited to give lightning talks on this work at ASE’21.
  • We will do our best to work with the IEEE Xplore and ACM Portal administrators to add badges to the electronic versions of the authors’ paper(s).

The badge levels are defined as follows:

  • Functional (no badge awarded; see Evaluation Criteria below): artifacts are documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
  • Reusable: Functional + very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, the norms and standards of the research community for artifacts of this type are strictly adhered to.
  • Available: Functional + placed on a publicly accessible archival repository. A DOI or link to this repository, along with a unique identifier for the object, is provided.
  • Replicated: Available + the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the authors.
  • Reproduced: Available + the main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

Papers with such badges contain reusable products that other researchers can use to bootstrap their own research. Experience shows that such papers earn increased citations and greater prestige in the research community. Artifacts of interest include (but are not limited to) the following.

  • Software: implementations of systems or algorithms potentially useful in other studies.
  • Data repositories: data (e.g., logging data, system traces, raw survey data) that can be used across multiple software engineering approaches.
  • Frameworks: tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.

This list is not exhaustive. If a proposed artifact is not of one of these types, the authors are asked to email the chairs before submitting.

Evaluation Criteria

The ASE artifact evaluation track uses a single-blind review process. Artifacts will be evaluated against the badge criteria defined above.

Review will be via GitHub. All submitting authors must supply a valid GitHub ID that identifies them.

The goal of this track is to encourage reusable research products. Hence, no functional badges will be awarded.

Best Artifact Awards

There will be two ASE 2021 Best Artifact Awards to recognize the effort of authors who create and share outstanding research artifacts.

Submission and Review

IMPORTANT NOTE: different badges have different submission procedures. See below.

Note that all submissions, reviewing, and notifications for this track will be via the GitHub repository.

For Replicated and Reproduced Badges

For “replicated” and “reproduced” badges, authors will need to offer appropriate documentation that their artifacts have reached that stage.

By January 27, 2021, create a subdirectory in the review repository named after the corresponding author (followed by a unique number if the same author is submitting multiple artifacts; e.g., menzies1, menzies2) and add a one-page (max) abstract in PDF format covering the following points (an example layout is sketched after this list):

  • TITLE: A (Partial)? (Replication|Reproduction) of XYZ. Please add the term partial to your title if only some of the original work could be replicated/reproduced.
  • WHO: name the original authors (and paper) and the authors that performed the replication/reproduction. Include contact information (emails). Mark one author as the corresponding author.
    IMPORTANT: also include a web link to a publicly available URL directory containing (a) the original paper (that is being replicated/reproduced) and (b) any subsequent paper(s)/documents/reports that do the replication/reproduction.
  • WHAT: describe the “thing” being replicated/reproduced;
  • WHY: clearly state why that “thing” is interesting/important;
  • HOW: say how the work was done in the original study;
  • WHERE: describe the replication/reproduction. If the replication/reproduction was only partial, then explain what parts could be achieved or had to be missed.
  • DISCUSSION: What aspects of this “thing” made it easier/harder to replicate/reproduce? What lessons learned from this work would enable more replication/reproduction in the future, for other kinds of tasks or other kinds of research?
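For illustration only (the directory and file names here are hypothetical, not prescribed by the track), a single replicated/reproduced submission might look like:

  menzies1/
    abstract.pdf   (one page covering TITLE, WHO, WHAT, WHY, HOW, WHERE, and DISCUSSION, including the required web links)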

Two PC members will review each abstract, possibly reaching out to the authors of the abstract or original paper. Abstracts will be ranked as follows.

  • If PC members do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected.
  • Any abstract that is judged to be unnecessarily critical of prior work will be rejected (*).
  • The remaining abstracts will be sorted according to (a) interestingness and (b) correctness.
  • The top-ranked abstracts will be invited to give lightning talks.

(*) Our goal is to foster a positive environment that supports and rewards researchers for conducting replications and reproductions. To that end, we require that all abstracts and presentations pay due respect to the work they are reproducing/replicating. Criticism of prior work is acceptable only as part of a balanced and substantive discussion of prior accomplishments.

For Reusable and Available Badges

Create a subdirectory in the review repository named after the corresponding author (followed by a unique number if the same author is submitting multiple artifacts; e.g., menzies1, menzies2).

Authors need to write and submit documentation explaining, in sufficient detail, how to obtain the artifact package, how to unpack the artifact, how to get started, and how to use the artifacts. The artifact submission must describe only the technicalities of the artifacts and uses of the artifact that are not already described in the paper. The submission should contain the following documents (in Markdown plain-text format); an example layout is sketched after this list.

  • A file listing emails and GitHub IDs (important) for the authors. Please mark one author as the corresponding author.
  • A file listing the names of the artifact evaluation committee members with whom the authors have a COI. This file may be empty, but it must be supplied.
    • Important: if you are a member of that committee, then say so on line one of that file.
  • A main file describing what the artifact does and where it can be obtained (with hidden links and an access password if necessary). It should also clearly describe how to reproduce the results presented in the paper.
  • A file stating which badge(s) the authors are applying for, as well as the reasons why the authors believe the artifact deserves those badge(s).
  • A file describing the distribution rights. Note that to score “available” or higher, that license needs to be some form of open-source license.
  • A file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation. This could be, for instance, information on what output to expect confirming that the code is installed and working, and that the code is doing something interesting and useful.
  • A copy of the accepted paper in PDF format.
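For illustration only (the file names below are hypothetical; the track does not prescribe specific names), a reusable/available submission directory might look like:

  menzies1/
    AUTHORS.md   (author emails and GitHub IDs, with the corresponding author marked)
    COI.md       (committee members with whom the authors have a COI; may be empty)
    README.md    (what the artifact does, where to obtain it, how to reproduce the paper’s results)
    BADGE.md     (the badge(s) applied for, and why the artifact deserves them)
    LICENSE.md   (distribution rights; an open-source license for “available” or higher)
    INSTALL.md   (installation instructions plus a basic usage example or installation test)
    paper.pdf    (a copy of the accepted paper)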

The review process will be interactive as follows:

  • Prior to reviewing, there may be some interaction to handle setup and installation. Before the actual evaluation, reviewers will check the integrity of the artifact and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that won’t start, immediate crashes on the simplest example). The Evaluation Committee may contact the authors to request clarification on the basic installation and start-up procedures or to resolve simple installation problems.
  • Reviewing will be on GitHub, with reviewers commenting via anonymous GitHub IDs. Authors should not comment until at least two reviewers have added comments; earlier author comments will be deleted.
  • Authors are informed of the outcome and, in case of technical problems, they can help solve them during a brief 48-hour author response period. Authors may update their research artifacts after submission only for changes requested by reviewers in the rebuttal phase.