Abstract: Citizen science involves volunteers who participate in scientific research by collecting data, monitoring sites, and even taking part in the whole process of scientific inquiry (Roy et al. 2012, Scyphers et al. 2015). In the past two decades, citizen science (also called participatory or community-based monitoring) has gained tremendous popularity (Bonney et al. 2009, Danielsen et al. 2014), due in part to the increasing realization among scientists of the benefits of engaging volunteers (Silvertown 2009, Danielsen et al. 2014, Aceves-Bueno et al. 2015, Scyphers et al. 2015). In particular, the cost-effectiveness of citizen science data offers the potential for scientists to tackle research questions with large spatial and/or temporal scales (Brossard et al. 2005, Holck 2007, Levrel et al. 2010, Szabo et al. 2010, Belt and Krausman 2012). Today, citizen science projects span a wide range of research topics concerning the preservation of marine and terrestrial environments, from invasive species monitoring (e.g., Scyphers et al. 2015) to ecological restoration and from local indicators of climate change to water quality monitoring (Silvertown 2009). They include well-known conservation examples like the Audubon Christmas Bird Count (Butcher et al. 1990) and projects of the Cornell Lab of Ornithology (Bonney et al. 2009).

Despite the growth in the number of citizen science projects, scientists remain concerned about the accuracy of citizen science data (Danielsen et al. 2005, Crall et al. 2011, Gardiner et al. 2012, Law et al. 2017). Some studies evaluating data quality have found volunteer data to be more variable than professionally collected data (Harvey et al. 2002, Uychiaoco et al. 2005, Belt and Krausman 2012, Moyer-Horner et al. 2012), whereas others have found volunteers’ performance to be comparable to that of professionals or scientists (Hoyer et al. 2001, 2012, Canfield et al. 2002, Oldekop et al. 2011). For example, Danielsen et al. (2005) concluded that the 16 comparative case studies they reviewed provided only cautious support for volunteers’ ability to detect changes in populations, habitats, or patterns of resource use. In a more recent review, Dickinson et al. (2010) found that the potential of citizen scientists to produce datasets with error and bias is poorly understood.

The evidence of problems with citizen science data accuracy (e.g., Hochachka et al. 2012, Vermeiren et al. 2016) indicates a need for a more systematic analysis that synthesizes the individual studies of data accuracy. To our knowledge, despite useful qualitative reviews (e.g., Lewandowski and Specht 2015), no review has yet combined these case studies to quantitatively evaluate the quality of citizen science data. In this paper, we conduct a quantitative review of citizen science data in the areas of ecology and environmental science. We focus on the universe of peer-reviewed studies in which researchers compare citizen science data to reference data, either as part of the validation mechanisms of a citizen science project or through experiments designed to test whether volunteers can collect sufficiently accurate data. For each study, we code both the authors’ qualitative assessment of data accuracy and the quantitative accuracy measures reported. This enables us to evaluate whether the authors believe the data to be accurate enough to achieve the goals of the program, as well as the degree of accuracy reflected in the quantitative comparisons. We then use a linear regression model to assess correlates of accuracy. With citizen science playing an increasingly important role in expanding scientific knowledge and enhancing the management of the environment, we conclude with recommendations for assessing data quality and for designing citizen science tasks that are more likely to produce accurate data.
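To make the final analytical step concrete, the sketch below shows one way coded study-level accuracy scores could be regressed on candidate correlates using ordinary least squares. It is a minimal illustration only: the variable names, coding scheme, and values are hypothetical stand-ins, not the authors' actual dataset or model specification.

```python
# Illustrative sketch only: columns and values are hypothetical,
# not the variables or data coded by Aceves-Bueno et al. (2017).
import pandas as pd
import statsmodels.formula.api as smf

# Each row represents one published comparison of citizen science data
# against reference data, coded from the study's reported results.
studies = pd.DataFrame({
    # quantitative accuracy measure, e.g., agreement with reference data (0-1)
    "accuracy":      [0.92, 0.71, 0.85, 0.60, 0.78],
    # hypothetical correlates of accuracy coded for each study
    "training_hrs":  [8, 1, 4, 0, 2],   # volunteer training received (hours)
    "task_complex":  [1, 3, 2, 3, 2],   # ordinal task-complexity score
    "expert_review": [1, 0, 1, 0, 1],   # data validated by experts? (0/1)
})

# Linear regression of accuracy on the candidate correlates.
model = smf.ols(
    "accuracy ~ training_hrs + task_complex + expert_review",
    data=studies,
).fit()
print(model.summary())
```

In a regression of this form, the estimated coefficients indicate how each study characteristic is associated with data accuracy across the reviewed cases, which is the sense in which "correlates of accuracy" are assessed here.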

Source: Aceves-Bueno, E., et al. 2017. The Accuracy of Citizen Science Data: A Quantitative Review. The Bulletin of the Ecological Society of America, 98(4): 278–290. DOI: 10.1002/bes2.1336