This is a Preprint and has not been peer reviewed. This is version 2 of this Preprint.
Abstract
GIScience conference authors and researchers face the same computational reproducibility challenges as authors and researchers in other disciplines who use computers to analyse data. To assess the reproducibility of GIScience research, we apply a rubric to 75 papers published in the GIScience conference series in the years 2012–2018. The rubric and assessment process were previously applied to publications of the AGILE conference series. The results for the GIScience papers are in line with those previous findings: workflow descriptions and the inclusion of data and software suffice to explain the presented work, but they do not enable a third party to reproduce the findings with reasonable effort. We summarise and adapt previous recommendations for improving this dire situation and invite the GIScience community to start a broad discussion on the reusability, quality, and openness of its research. The code and data for this article are published at https://doi.org/10.5281/zenodo.4032875.
DOI
https://doi.org/10.31223/X5ZK5V
Subjects
Geographic Information Sciences
Keywords
reproducibility, reproducible research, open science, GIScience
Dates
Published: 2020-10-29 18:02
License
CC BY Attribution 4.0 International
Additional Metadata
Conflict of interest statement:
None
Data Availability:
https://doi.org/10.5281/zenodo.4032875