CCGrid 2026 will award three IEEE reproducibility badges to papers accepted to the main Research Tracks: Open Research Objects (ORO), Reusable/Research Objects Reviewed (ROR), and Results Reproduced (ROR-R). More information about each of these badges is available below. Authors may optionally request the badge(s) for their accepted paper by submitting relevant artifact(s) and a 2-page artifact description. If their submission is successfully reviewed, they will be awarded the badge(s). The badges will appear as part of the paper in the conference proceedings.
We recommend depositing your artifacts in permanent archival repositories that promote the findable, accessible, interoperable, and reusable (FAIR) principles, such as Zenodo, Dryad, or figshare.
Hosting your software and/or data on GitHub or GitLab is not sufficient. You must additionally assign your artifact a persistent identifier using Zenodo, figshare, etc.
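For GitHub-hosted software, one common route (a suggestion, not a requirement of this call) is Zenodo's GitHub integration, which archives a tagged release and mints a DOI; a `.zenodo.json` file at the repository root can supply the deposit metadata. A minimal sketch, with all field values illustrative:

```json
{
  "title": "Artifact for: <paper title>",
  "upload_type": "software",
  "description": "Code and data accompanying our CCGrid 2026 paper.",
  "license": "Apache-2.0",
  "creators": [
    {"name": "Doe, Jane", "affiliation": "Example University"}
  ],
  "keywords": ["CCGrid 2026", "reproducibility", "artifact"]
}
```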
This badge signals that the author-created digital objects used in the research (including data and code) are permanently archived in a public repository that assigns a global identifier and guarantees persistence, and that they are made available under standard open licenses that maximize artifact availability.
Review of both software and data will include the following criteria:
This badge signals that all relevant author-created digital objects used in the research (including data and software) were reviewed according to the following criteria.
A review of a software artifact will include the following criteria, which must be satisfied in the artifact:
A review of a data artifact will include the following criteria, which must be satisfied in the artifact:
This badge is awarded when evaluators have successfully reproduced the key computational results of the paper using the author-created research objects, methods, code, and conditions of analysis. The goal is not to recreate the exact results, especially when they are hardware-dependent, but rather to reproduce the behavior and validate the central claims of the research as follows:
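Because hardware-dependent runs rarely match reported numbers exactly, evaluators typically check that a reproduced metric agrees with the paper's reported value within some tolerance rather than bit-for-bit. A minimal sketch of such a check; the metric values and the 5% relative tolerance are illustrative assumptions, not CCGrid policy:

```python
import math

def claim_holds(reported: float, reproduced: float, rel_tol: float = 0.05) -> bool:
    """Accept the reproduction if it is within rel_tol of the reported value."""
    return math.isclose(reported, reproduced, rel_tol=rel_tol)

# Small hardware-dependent drift still validates the claim:
print(claim_holds(12.4, 12.1))  # True
# A qualitatively different result does not:
print(claim_holds(12.4, 8.0))   # False
```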
The following additional criteria must be met for the ROR-R badge:
The ROR-R badge process will involve:
The ROR-R badge will appear next to the ROR badge in the conference proceedings, signifying that the paper has achieved this high standard of computational reproducibility.
Authors seeking one or both of these badges must submit a 2-page artifact description document that includes a brief description of the artifact(s) and any details needed for them to be reviewed as part of the CCGrid 2026 artifact review process, and then used by future readers.
For software: a link to the artifact and a description that includes the language, the compilation and run environment (tools, pre-built binaries, hardware), the input dataset (if available), the expected output and metrics, the estimated time for all compilation and run steps, etc.
For data: a link to the artifact and a description as outlined in the review criteria above.

Make connections between the specific artifact(s) and their role and context within the relevant parts of the accepted paper. You must also explicitly reference and cite your artifacts in this document, including a persistent identifier for each (e.g., a DOI from Zenodo or figshare) and, for software, optionally a link to the URL where it is developed and maintained (e.g., GitHub). Given the 2-page limit, include the key details in the description document and the more exhaustive steps in the persistent artifact link.

Format this document using the IEEE 2-column conference proceedings template. If the artifacts are successfully evaluated, the authors will be allowed to add an updated 2-page version of their artifact description as an appendix to the camera-ready paper. The review of the artifacts will follow a single-blind process.
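One way to cite an archived artifact in the description document is a BibTeX `@software` entry carrying the persistent identifier. A sketch with placeholder authors, title, and DOI (the all-zero DOI is deliberately not a real identifier):

```bibtex
@software{doe2026artifact,
  author    = {Doe, Jane and Roe, Richard},
  title     = {Artifact for ``Paper Title''},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.0000000},
  url       = {https://doi.org/10.5281/zenodo.0000000}
}
```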
The artifact badging process will occur between author notifications and the camera-ready paper submission deadline:
| Event | Date |
|---|---|
| Artifact submission for accepted papers | |
| Artifact review assignments made | |
| Artifact review midpoint check-in | |
| Artifact review deadline | |
| Artifact review results announced to authors | |
We are planning to have a midpoint check-in with authors to catch early mistakes that are easily fixable (missing files, missing imports, etc.).
Note: Artifacts should be able to run on commodity workstations/laptops for the evaluation. If the artifact is tightly coupled to a specific class of system or requires more than a generic workstation to be meaningfully evaluated (e.g., an HPC cluster, cloud resources, specialized accelerators), authors should provide access to such an evaluation environment for the artifact reviewers to use. This prerequisite should be stated clearly in the artifact submission and the EasyChair abstract. The relevant credentials for the specialized resource may be shared by email with the Artifact Evaluation Committee Chairs, to be passed on to the reviewers anonymously. If you require further guidance, please get in touch with the Artifact Evaluation Committee Chairs before the submission deadline.