CCGrid 2026 will award three IEEE reproducibility badges to accepted main research-track papers.
Artifact evaluation is post-acceptance only and does not influence acceptance decisions. Authors of accepted main research-track papers may request badges by submitting their artifacts and a 2-page artifact description. If the artifacts are successfully reviewed, the 2-page description and the awarded badge logo are included in the camera-ready submission.
Hosting software or data on GitHub/GitLab alone is not sufficient; artifacts must also be assigned a persistent identifier (e.g., a DOI minted via Zenodo or figshare). We recommend archival repositories such as Zenodo, Dryad, or figshare, which promote the FAIR principles.
The Open Research Objects (ORO) badge indicates that author-created digital objects (data/code) are permanently stored in a public repository with a globally unique identifier, guaranteed persistence, and an open license that maximizes access. It aligns with the ACM “Artifacts Available” and COS “Open Data/Materials” badges for digital objects.
The Artifact Evaluation Committee (AEC) and the authors together decide which objects are “relevant.” Provide enough documentation for reviewers to understand the core functionality of the code and the context of the data.
A higher tier than ORO, and it requires ORO; it corresponds to the IEEE “Code Reviewed” badge. All author-created digital objects used in the research (data and software) are reviewed.
The highest tier, awarded when evaluators reproduce the paper’s key results using the authors’ objects, methods, code, and analysis conditions. The focus is on reproducing the observed behavior and claims, not exact numerical values (especially where results depend on hardware).
Authors seeking any badge must submit a 2-page artifact description that gives a brief overview of the artifacts and the details reviewers need:
Clearly map each artifact to the specific parts of the accepted paper it supports. Cite artifacts with a persistent identifier (e.g., a DOI from Zenodo or figshare); optionally, add a development URL (e.g., GitHub). Use the IEEE conference template.
If artifacts pass evaluation, update the 2-page description and include it as an appendix to the camera-ready paper, along with the badge logo. This step is required for publication with a badge.
Between author notification (expected ) and the camera-ready deadline.
We will hold a midpoint check-in to catch easy issues early (e.g., missing files/imports).
Provide docker compose build and docker compose run entry points (or Apptainer equivalents), bind-mount outputs to ./out/, and include a 30–60 minute smoke test with expected outputs (files, metrics, hashes, plots).
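As a minimal, illustrative sketch (not a required layout): the script below assumes a hypothetical compose service named "experiments" and a hypothetical expected output file out/results.csv; adapt the service name, output files, and reference hashes to your own artifact.

    #!/usr/bin/env bash
    # smoke_test.sh: illustrative artifact smoke test. The service name
    # "experiments" and the output file out/results.csv are placeholders;
    # replace them with your artifact's actual names and expected outputs.
    set -euo pipefail

    mkdir -p ./out

    # Build the images and run the short experiment defined by the compose
    # service, bind-mounting ./out/ so results land on the host.
    docker compose build
    docker compose run --rm -v "$(pwd)/out:/out" experiments

    # Verify the expected output exists and print its checksum for comparison
    # against the value documented in the artifact description.
    test -f ./out/results.csv || { echo "missing ./out/results.csv" >&2; exit 1; }
    sha256sum ./out/results.csv
    echo "Smoke test finished."

A run of such a script should fit within the 30–60 minute budget and leave the listed files under ./out/ so reviewers can compare them against the expected outputs stated in the artifact description.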