Abstract: Citizen Science with mobile and wearable technology holds the promise of unprecedented observation systems. Experts and policy makers are torn between enthusiasm and scepticism regarding the value of the resulting data, as their decision making traditionally relies on high-quality instrumentation and trained personnel measuring in a standardized way. In this paper, we (1) present an empirical behavior taxonomy of errors exhibited in non-expert smartphone-based sensing, derived from four small exploratory studies, and discuss measures to mitigate their effects. We then present a large summative study (N=535) that compares instructions and technical measures for addressing these errors, evaluating both the reduction in error frequency and the perceived usability. Our results show that (2) technical measures without explanation notably reduce perceived usability and (3) technical measures and instructions complement each other well: their combination achieves a significant reduction in observed error rates without negatively affecting the user experience.
Source: Budde, M., Schankin, A., Hoffmann, J., Danz, M., Riedel, T., Beigl, M., 2017. Participatory Sensing or Participatory Nonsense? Mitigating the Effect of Human Error on Data Quality in Citizen Science. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1(3) No. 39. DOI: 10.1145/3131900