Proposal: Reproducibility Reviews for Information Systems

Each paper can, at the authors' discretion, be accompanied by a
"reproducibility review" in which a member (or members) of the
reproducibility committee determines whether the paper is reproducible.
If so, that reviewer (or those reviewers) becomes a co-author of a
companion paper that discusses reproducibility in its own right, to be
published in a companion electronic-only journal called Information
Systems Methods, whose first editor-in-chief, we recommend, will be
Philippe Bonnet.

Submission procedure
--------------------

Authors of accepted papers submit (i) the software necessary to
reproduce the experiments (i.e., the system under test, data generation
scripts, experimentation scripts, and graph generation scripts) and
(ii) guidelines for running the experiments and for producing the
graphs or tables contained in the paper. The software is preferably
submitted as a virtual machine on which all required software
components are readily installed. If that is not possible, the source
code of the software components is submitted together with installation
scripts and instructions. (A minimal illustrative driver script is
sketched at the end of this proposal.)

Review procedure
----------------

The committee will reach a decision within four weeks. The committee
reviews the submitted experiments and decides whether or not to award
the following labels:

- Reproducible label: the committee was able to successfully reproduce
  the central results reported in the paper. In that case, the
  procedure used to establish reproducibility will be stated, as well
  as any issues and/or limitations.

- Sharable label: the authors have made the experiments (code + data)
  available to the community, and a URL is provided. In this case,
  there will not necessarily be a publication.

How does the committee assess whether the central results reported in
the paper were successfully reproduced? To earn the reproducible label,
a submission must satisfy the following three criteria:

1. [depth] Each submitted experiment contains:
   - the complete experimental setup (system configuration and
     initialization, scripts, workload, measurement protocol) used to
     produce the raw data;
   - prototype systems provided as white boxes (source, configuration
     files, build environment); commercial systems are fully specified.

2. [portability] The results can be reproduced in a different
   environment (i.e., on a different OS or machine).

3. [coverage] The central results and claims of the paper are supported
   by the submitted experiments.
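
To make the submission guidelines concrete, here is a minimal sketch of
the kind of top-level driver script a reproducibility reviewer could run
end to end: it generates a workload, invokes the system under test, and
regenerates one graph from the paper. It is an illustration only, not a
requirement of this proposal; the file names, the ./sut binary, and the
two-column output format are all hypothetical.

    # run_experiment.py -- hypothetical driver for a single experiment.
    # Regenerates one figure of the paper in three steps:
    # data generation -> measurement -> graph generation.
    import csv
    import random
    import subprocess

    def generate_workload(path, n=10_000, seed=42):
        """Write a synthetic workload; the fixed seed keeps runs repeatable."""
        rng = random.Random(seed)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for _ in range(n):
                writer.writerow([rng.randint(0, 1_000_000)])

    def run_system_under_test(workload, results):
        """Invoke the packaged system under test (the white-box binary or
        script shipped with the submission) and capture its raw output."""
        with open(results, "w") as out:
            subprocess.run(["./sut", "--input", workload],  # hypothetical binary
                           stdout=out, check=True)

    def plot_results(results, figure):
        """Turn the raw measurements (assumed here to be two CSV columns:
        workload size, latency) into the graph shown in the paper."""
        import matplotlib
        matplotlib.use("Agg")  # headless backend: no display needed in a VM
        import matplotlib.pyplot as plt
        xs, ys = [], []
        with open(results) as f:
            for row in csv.reader(f):
                xs.append(float(row[0]))
                ys.append(float(row[1]))
        plt.plot(xs, ys, marker="o")
        plt.xlabel("workload size")
        plt.ylabel("latency (ms)")
        plt.savefig(figure)

    if __name__ == "__main__":
        generate_workload("workload.csv")
        run_system_under_test("workload.csv", "results.csv")
        plot_results("results.csv", "figure_x.png")

A single entry point of this kind, together with a short README, is in
practice what "guidelines for running experiments" amounts to: the
reviewer should be able to go from a fresh environment to the paper's
graphs with one documented command.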