One of the wonderful things about citizen science is the innate coupling of science research with science education – learning while doing. But does this equation actually hold up? This article reviews four categories of citizen science and asks whether participation actually improves public understanding of science. It concludes that while increased public understanding of science often does occur through participation, the field could be more purposeful in designing and delivering future projects to achieve it. –LFF

Abstract:

Over the past 20 years, thousands of citizen science projects engaging millions of participants in collecting and/or processing data have sprung up around the world. Here we review documented outcomes from four categories of citizen science projects, which are defined by the nature of the activities in which their participants engage – Data Collection, Data Processing, Curriculum-based, and Community Science. We find strong evidence that scientific outcomes of citizen science are well documented, particularly for Data Collection and Data Processing projects. We find limited but growing evidence that citizen science projects achieve participant gains in knowledge about science content and process, increase public awareness of the diversity of scientific research, and provide deeper meaning to participants’ hobbies. We also find some evidence that citizen science can contribute positively to social well-being by influencing the questions that are being addressed and by giving people a voice in local environmental decision making. While not all citizen science projects are intended to achieve a greater degree of public understanding of science, social change, or improved science–society relationships, those projects that do require effort and resources in four main categories: (1) project design, (2) outcomes measurement, (3) engagement of new audiences, and (4) new directions for research.

Photo Credit: EOL (CC BY). Citizen Science in action: Learning about biodiversity through games.

Source: Can citizen science enhance public understanding of science?

Abstract:

Citizen Science is part of a broader reconfiguration of the relationship between science and the public in the digital age: Knowledge production and the reception of scientific knowledge are becoming increasingly socially inclusive. We argue that the digital revolution now brings the “problem of extension” – identified by Collins and Evans in the context of science and technology governance – closer to the core of scientific practice. In order to grasp the implications of the inclusion of non-experts in science, the aim of this contribution is to define a role-set of non-certified knowledge production and reception, serving as a heuristic instrument for empirical clarifications.

Source: The “Problem of Extension” revisited: new modes of digital participation in science

Abstract:

The ability of volunteers to undertake different tasks and accurately collect data is critical for the success of many conservation projects. In this study, a simulated herpetofauna visual encounter survey was used to compare the detection and distance estimation accuracy of volunteers and more experienced observers. Experience had a positive effect on individual detection accuracy. However, lower detection performance of less experienced volunteers was not found in the group data, with larger groups being more successful overall, suggesting that working in groups facilitates detection accuracy of those with less experience. This study supports the idea that by optimizing survey protocols according to the available resources (time and volunteer numbers), the sampling efficiency of monitoring programs can be improved and that non-expert volunteers can provide valuable contributions to visual encounter-based biodiversity surveys. Recommendations are made for the improvement of survey methodology involving non-expert volunteers.

Source: How useful are volunteers for visual biodiversity surveys? An evaluation of skill level and group size during a conservation expedition

Here we have a real existential crisis – one that those of us in the field grapple with continuously – a search for the true meaning of the term “citizen science”. The term means different things to different stakeholders, but to accurately track the contributions being made academically (and otherwise) by its various forms, we need more precise, context-driven definitions of “citizen science”. This article is an excellent place to start. –LFF

Abstract:

The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving environment and health.

In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time and which strands of research have adopted CS, and give an assessment of what scientific output has been achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms.

Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data.

Source: What Is Citizen Science? – A Scientometric Meta-Analysis

Human-machine collaboration is the key to solving the most complex issues of the world, an editorial published recently in the journal Science suggested.

Championing “human computation”, a system that combines the artificial intelligence of machines with the talents of humans, the authors claim the system could successfully tackle complex issues like climate change and geopolitical conflicts.

Authors Pietro Michelucci and Janis Dickinson also claim that the “human computation” system could help solve these issues without the existential risks posed by artificial intelligence and the technological singularity.

The idea is to develop an understanding of real-world problems online, test possible solutions to those problems in this computational space, and then apply the new knowledge back in the real world to bring about the desired changes.

Source: Human-machine collaboration could tackle world’s toughest issues

An article published in PLOS One tracking academic papers mentioning ‘citizen science’ caused a lot of discussion in the last month. My take is here, but Caren Cooper’s blog does a much better job of exploring the issues. –CJL

Citizen science is skyrocketing in popularity. Not just among participants (of whom there are millions), but also in its visibility in academic journals. A new article in PLOS ONE by Ria Follett and Vladimir Strezov tracks trends in academic articles containing the term “citizen science.” The authors identified patterns in 888 articles retrieved with the keyword search “citizen science,” revealing how adoption of the term has changed over time across disciplines and purposes.

“Citizen science,” by that specific phrase, first appeared in academic publications in 1997. After 2003, articles about methods and data validity began to appear. Papers about projects trickled into the literature until 2007, at which point the skyrocketing began. I suspect momentum had been slowly building since about 2002 as more and more projects and their data became accessible online: more access likely equates to more use, assuming the pattern in mentions of “citizen science” is a rough proxy for an actual increase in the adoption of citizen science.

Photo Credit: Cascades Butterfly Team, by Karlie Roland, NPS

Today is an important day for participation and innovation in the federal government. The White House officially launched the Federal Crowdsourcing and Citizen Science Toolkit, which provides information and resources to help federal agencies use the power of public participation to solve scientific and societal problems.

The launch of this toolkit solidifies the White House’s commitment to advancing the culture of innovation, learning, sharing and doing in the federal community. Through crowdsourcing, we can create approaches to educate, engage, and empower citizens to apply their curiosity and talents to a wide range of real-world problems.

Crowdsourcing is not new for us. Back in 2010, the Archivist of the United States introduced the concept of the Citizen Archivist, an effort to engage researchers, educators, historians and the public and provide them with the tools and support necessary to contribute their talents, knowledge and creativity to the mission of the National Archives.

Source: Introducing the Federal Crowdsourcing and Citizen Science Toolkit

Abstract:

Recent improvements in online information communication and mobile location-aware technologies have led to the production of large volumes of volunteered geographic information. Widespread, large-scale efforts by volunteers to collect data can inform and drive scientific advances in diverse fields, including ecology and climatology. Traditional workflows to check the quality of such volunteered information can be costly and time consuming as they heavily rely on human interventions. However, identifying factors that can influence data quality, such as inconsistency, is crucial when these data are used in modeling and decision-making frameworks. Recently developed workflows use simple statistical approaches that assume that the majority of the information is consistent. However, this assumption is not generalizable, and ignores underlying geographic and environmental contextual variability that may explain apparent inconsistencies. Here we describe an automated workflow to check inconsistency based on the availability of contextual environmental information for sampling locations. The workflow consists of three steps: (1) dimensionality reduction to facilitate further analysis and interpretation of results, (2) model-based clustering to group observations according to their contextual conditions, and (3) identification of inconsistent observations within each cluster. The workflow was applied to volunteered observations of flowering in common and cloned lilac plants (Syringa vulgaris and Syringa x chinensis) in the United States for the period 1980 to 2013. About 97% of the observations for both common and cloned lilacs were flagged as consistent, indicating that volunteers provided reliable information for this case study. Relative to the original dataset, the exclusion of inconsistent observations changed the apparent rate of change in lilac bloom dates by two days per decade, indicating the importance of inconsistency checking as a key step in data quality assessment for volunteered geographic information. Initiatives that leverage volunteered geographic information can adapt this workflow to improve the quality of their datasets and the robustness of their scientific analyses.
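
The three-step workflow described above maps naturally onto standard tooling. Below is a minimal sketch of the idea, not the authors' implementation: it assumes each observation carries a vector of contextual environmental covariates, reduces them with PCA, clusters observations with a Gaussian mixture model, and flags observations that fall in the low-density tail of their assigned cluster. The function name, the numbers of components and clusters, and the 3% tail threshold are all illustrative assumptions.

    # Minimal sketch of a consistency-checking workflow in the spirit of the
    # paper: PCA -> model-based clustering -> per-cluster outlier flagging.
    # Not the authors' implementation; parameters and names are assumptions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    def flag_inconsistent(X, n_components=3, n_clusters=5, tail_quantile=0.03):
        """X: (n_obs, n_features) array of contextual environmental covariates.
        Returns a boolean mask that is True where an observation looks inconsistent."""
        # Step 1: dimensionality reduction of the environmental context.
        Z = PCA(n_components=n_components).fit_transform(StandardScaler().fit_transform(X))

        # Step 2: model-based clustering of observations by contextual conditions.
        gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Z)
        labels = gmm.predict(Z)

        # Step 3: within each cluster, flag observations whose density under the
        # fitted model falls in the lower tail, i.e. is poorly explained by context.
        log_density = gmm.score_samples(Z)
        flags = np.zeros(len(Z), dtype=bool)
        for k in range(n_clusters):
            members = labels == k
            if not members.any():
                continue  # a mixture component may end up with no assigned points
            cutoff = np.quantile(log_density[members], tail_quantile)
            flags[members] = log_density[members] < cutoff
        return flags

Flagging within each cluster, rather than globally, mirrors the paper's point that apparent inconsistencies may simply reflect different geographic and environmental contexts.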

Source: PLOS ONE: Developing a Workflow to Identify Inconsistencies in Volunteered Geographic Information: A Phenological Case Study

Abstract:

Citizen science is key to the success of Future Earth Initiatives for urban sustainability. Emerging research in urban land teleconnections highlights the benefits of incorporating theoretical insights from political ecology and participatory action research. Reviewing some of the forces propelling the recent popularity of citizen science, this article outlines challenges to processes of collaboration between scientists and non-scientists. We distinguish these concerns from others that may arise from the data or other products resulting from citizen science projects. Careful consideration of the processes and products of citizen science could engender a more fruitful relationship between professional scientists and their research communities and help universities to build effective partnerships with those in wider society whose expertise comes from their life experience.

Source: Exploring the entry points for citizen science in urban sustainability initiatives

This paper from the ever-prolific Cornell Lab of Ornithology team describes the creation of a new dataset of annotated images. Those interested in volunteer citizen science may find another of its conclusions noteworthy: “We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost.” –CJL

Abstract:

We introduce tools and methodologies to collect high quality, large scale fine-grained computer vision datasets using citizen scientists – crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high quality dataset containing 48,562 images of North American birds with 555 categories, part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the web site of the Cornell Lab of Ornithology.
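
The headline comparison between annotator pools, and the class-label error rates quoted for CUB-200-2011 and ImageNet, both come down to scoring candidate labels against an expert-curated reference. The sketch below is a generic illustration of that bookkeeping, not the authors' evaluation code; the data layout and field names are assumptions.

    # Hedged sketch: per-annotator-pool label error rates measured against an
    # expert-curated reference, in the spirit of the paper's comparison of
    # citizen scientists with Mechanical Turk workers. Names are assumptions.
    from collections import defaultdict

    def error_rates(annotations, expert_labels):
        """annotations: iterable of (image_id, annotator_pool, label) tuples;
        expert_labels: dict mapping image_id -> expert reference label.
        Returns dict mapping annotator_pool -> fraction disagreeing with experts."""
        wrong = defaultdict(int)
        total = defaultdict(int)
        for image_id, pool, label in annotations:
            if image_id not in expert_labels:
                continue  # skip images without an expert reference label
            total[pool] += 1
            if label != expert_labels[image_id]:
                wrong[pool] += 1
        return {pool: wrong[pool] / total[pool] for pool in total}

    # Toy example: two pools labelling the same three images.
    expert = {"img1": "Northern Cardinal", "img2": "Blue Jay", "img3": "Blue Jay"}
    annotations = [
        ("img1", "citizen_scientist", "Northern Cardinal"),
        ("img2", "citizen_scientist", "Blue Jay"),
        ("img3", "citizen_scientist", "Blue Jay"),
        ("img1", "mturk", "Northern Cardinal"),
        ("img2", "mturk", "Steller's Jay"),
        ("img3", "mturk", "Blue Jay"),
    ]
    print(error_rates(annotations, expert))  # {'citizen_scientist': 0.0, 'mturk': 0.333...}

In the paper's real evaluation the reference labels themselves come from bird experts, which is its argument for an expert-curated test set like NABirds.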

Source: Building a Bird Recognition App and Large Scale Dataset With Citizen Scientists: The Fine Print in Fine-Grained Dataset Collection