Abstract:

The ability of volunteers to undertake different tasks and accurately collect data is critical for the success of many conservation projects. In this study, a simulated herpetofauna visual encounter survey was used to compare the detection and distance estimation accuracy of volunteers and more experienced observers. Experience had a positive effect on individual detection accuracy. However, lower detection performance of less experienced volunteers was not found in the group data, with larger groups being more successful overall, suggesting that working in groups facilitates detection accuracy of those with less experience. This study supports the idea that by optimizing survey protocols according to the available resources (time and volunteer numbers), the sampling efficiency of monitoring programs can be improved and that non-expert volunteers can provide valuable contributions to visual encounter-based biodiversity surveys. Recommendations are made for the improvement of survey methodology involving non-expert volunteers.

Source: How useful are volunteers for visual biodiversity surveys? An evaluation of skill level and group size during a conservation expedition
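A back-of-the-envelope way to see how larger groups can offset lower individual skill: if each observer is assumed to detect a target independently with some fixed probability (an assumption the paper does not make explicit), the probability that at least one group member detects it rises quickly with group size. A minimal sketch:

```python
# Illustrative only: assumes independent, equal detection probabilities per
# observer, which is a simplification not specified by the study.

def group_detection_probability(p_individual: float, group_size: int) -> float:
    """Probability that at least one of `group_size` independent observers detects a target."""
    return 1.0 - (1.0 - p_individual) ** group_size

# A novice who detects 40% of targets alone reaches ~87% in a group of four,
# close to a lone expert detecting 90% of targets.
print(group_detection_probability(0.4, 1))  # 0.4
print(group_detection_probability(0.4, 4))  # ~0.87
print(group_detection_probability(0.9, 1))  # 0.9
```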

Here we have a real existential crisis – one that those of us in the field grapple with continuously – a search for the true meaning of the term “citizen science”. This term has different meanings to various stakeholders, but to accurately track the contributions that are being made academically (and otherwise) by various forms of citizen science, we need to obtain more precise, context-driven definitions of the term “citizen science”. This article is an excellent place to start. –LFF

Abstract:

The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving environment and health.

In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time and which strands of research have adopted CS, and give an assessment of the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms.

Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data.

Source: What Is Citizen Science? – A Scientometric Meta-Analysis

Human-machine collaboration is the key to solving the world’s most complex issues, an editorial published recently in the journal Science suggested.

Championing “human computation”, a system that combines the artificial intelligence of machines with the talents of humans, the authors claim it could successfully tackle complex issues like climate change and geopolitical conflicts.

Authors Pietro Michelucci and Janis Dickinson also claim that the “human computation” system could help solve these issues without the existential risks posed by artificial intelligence and the technological singularity.

The idea is to develop an understanding of real-world problems online and test possible solutions to those problems in this computational space. The new knowledge should then be applied back in the real world to bring about the desired changes.

Source: Human-machine collaboration could tackle world’s toughest issues

An article published in PLOS One tracking academic papers mentioning ‘citizen science’ caused a lot of discussion in the last month. My take is here, but Caren Cooper’s blog does a much better job of exploring the issues. –CJL

Citizen science is skyrocketing in popularity. Not just among participants (of which there are millions), but also in its visibility in academic journals. A new article in PLOS ONE by Ria Follett and Vladimir Strezov tracks trends in academic articles containing the term “citizen science.” The authors deciphered patterns based on 888 articles summoned with the keyword search “citizen science” and revealed adoption of the term over time in different disciplines and for different purposes.

“Citizen science,” by that specific phrase, first appeared in academic publications in 1997. After 2003, articles about methods and data validity began to appear. Papers about projects trickled into the literature until 2007, at which time the skyrocketing began. I suspect momentum had been slowly building since about 2002 as more and more projects and their data became accessible online: more access likely equates to more use, assuming the patterns in “citizen science” are a rough proxy for an actual increase in the adoption of citizen science.

Photo Credit: Cascades Butterfly Team, by Karlie Roland, NPS

Today is an important day for participation and innovation in the federal government. The White House officially launched the Federal Crowdsourcing and Citizen Science Toolkit, a tool that provides information and resources to help federal agencies use the power of public participation to help solve scientific and societal problems.

The launch of this toolkit solidifies the White House’s commitment to advancing the culture of innovation, learning, sharing and doing in the federal community. Through crowdsourcing, we can create approaches to educate, engage, and empower citizens to apply their curiosity and talents to a wide range of real-world problems.

Crowdsourcing is not new for us. Back in 2010, the Archivist of the United States introduced the concept of the Citizen Archivist, an effort to engage researchers, educators, historians and the public and provide them with the tools and support necessary to contribute their talents, knowledge and creativity to the mission of the National Archives.

Source: Introducing the Federal Crowdsourcing and Citizen Science Toolkit

Abstract:

Recent improvements in online information communication and mobile location-aware technologies have led to the production of large volumes of volunteered geographic information. Widespread, large-scale efforts by volunteers to collect data can inform and drive scientific advances in diverse fields, including ecology and climatology. Traditional workflows to check the quality of such volunteered information can be costly and time consuming as they heavily rely on human interventions. However, identifying factors that can influence data quality, such as inconsistency, is crucial when these data are used in modeling and decision-making frameworks. Recently developed workflows use simple statistical approaches that assume that the majority of the information is consistent. However, this assumption is not generalizable, and ignores underlying geographic and environmental contextual variability that may explain apparent inconsistencies. Here we describe an automated workflow to check inconsistency based on the availability of contextual environmental information for sampling locations. The workflow consists of three steps: (1) dimensionality reduction to facilitate further analysis and interpretation of results, (2) model-based clustering to group observations according to their contextual conditions, and (3) identification of inconsistent observations within each cluster. The workflow was applied to volunteered observations of flowering in common and cloned lilac plants (Syringa vulgaris and Syringa x chinensis) in the United States for the period 1980 to 2013. About 97% of the observations for both common and cloned lilacs were flagged as consistent, indicating that volunteers provided reliable information for this case study. Relative to the original dataset, the exclusion of inconsistent observations changed the apparent rate of change in lilac bloom dates by two days per decade, indicating the importance of inconsistency checking as a key step in data quality assessment for volunteered geographic information. Initiatives that leverage volunteered geographic information can adapt this workflow to improve the quality of their datasets and the robustness of their scientific analyses.

Source: PLOS ONE: Developing a Workflow to Identify Inconsistencies in Volunteered Geographic Information: A Phenological Case Study
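The three-step workflow in the abstract (dimensionality reduction, model-based clustering on contextual conditions, within-cluster flagging of inconsistent observations) maps onto standard tooling. Below is a minimal sketch using scikit-learn on synthetic data; the choice of PCA, a Gaussian mixture, a 3-standard-deviation rule, and all variable names and sizes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins: contextual environmental covariates per sampling location
# (e.g., temperature, precipitation, elevation) and a volunteered observation
# (here, day of year of first bloom).
env_covariates = rng.normal(size=(500, 8))
bloom_day = 120 + 5 * env_covariates[:, 0] + rng.normal(scale=3, size=500)

# Step 1: dimensionality reduction of the contextual variables.
reduced = PCA(n_components=3).fit_transform(env_covariates)

# Step 2: model-based clustering groups locations with similar contextual conditions.
clusters = GaussianMixture(n_components=4, random_state=0).fit_predict(reduced)

# Step 3: within each cluster, flag observations whose bloom date deviates
# strongly from the cluster's distribution (here, more than 3 standard deviations).
consistent = np.ones(len(bloom_day), dtype=bool)
for c in np.unique(clusters):
    members = clusters == c
    z = (bloom_day[members] - bloom_day[members].mean()) / bloom_day[members].std()
    consistent[members] = np.abs(z) <= 3

print(f"{consistent.mean():.1%} of observations flagged as consistent")
```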

Abstract:
Citizen science is key to the success of Future Earth Initiatives for urban sustainability. Emerging research in urban land teleconnections highlights the benefits of incorporating theoretical insights from political ecology and participatory action research. Reviewing some of the forces propelling the recent popularity of citizen science, this article outlines challenges to processes of collaboration between scientists and non-scientists. We distinguish these concerns from others that may arise from the data or other products resulting from citizen science projects. Careful consideration of the processes and products of citizen science could engender a more fruitful relationship between professional scientists and their research communities and help universities to build effective partnerships with those in wider society whose expertise comes from their life experience.

Source: Exploring the entry points for citizen science in urban sustainability initiatives

This paper from the ever-prolific Cornell Lab of Ornithology team describes the creation of a new dataset of annotated images. Those interested in volunteer citizen science might be interested in another conclusion: “We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost.” –CJL

Abstract:

We introduce tools and methodologies to collect high quality, large scale fine-grained computer vision datasets using citizen scientists – crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high quality dataset containing 48,562 images of North American birds with 555 categories, part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the web site of the Cornell Lab of Ornithology.

Source: Building a Bird Recognition App and Large Scale Dataset With Citizen Scientists: The Fine Print in Fine-Grained Dataset Collection
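The claim that learning algorithms tolerate modest label noise when training sets are large is easy to probe on a toy problem. The sketch below is an illustration only: a linear classifier on synthetic data, not the paper's fine-grained setup on NABirds, with arbitrary dataset sizes and noise rates.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic multi-class problem standing in for an image classification task.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

rng = np.random.default_rng(0)
for noise_rate in (0.0, 0.04, 0.10):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    # Replace a fraction of training labels with random (possibly wrong) classes.
    y_noisy[flip] = rng.integers(0, 5, size=flip.sum())
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise_rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```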

The arrival of ‘ash dieback’, a fatal disease that aggressively infects trees, in the UK in 2012 was big news and generated a range of citizen science responses. This thoughtful article, from Judith Tsouvalis at the University of Nottingham, looks at the sometimes awkward relationship between such programs and the language used by those concentrating on issues of biosecurity. As citizen science programs work more closely with state agencies, these kinds of considerations will continue to crop up. — CJL.

Protecting tree and plant health remains a concern firmly embedded in the science-based, technocratic discourse of ‘biosecurity’ with its emphasis on regulation, surveillance, and control. Here, Judith Tsouvalis argues that this makes it difficult to have a broader debate on the deeper, more complex causes of the steep rise in tree and plant disease epidemics worldwide.

Much has changed since the trade-related arrival of ash dieback (Chalara) at a nursery in Buckinghamshire in February 2012. On the negative side, 652 sites across England, Scotland and Wales are now known to contain trees infected with the potentially fatal disease. It is also accepted that the spread of the disease cannot be stopped. There is hope for treatments, but they are currently still in the development phase and their wider ecological implications are unknown. Assuming therefore that ash dieback will run its course and take its toll, the estimate that the UK will lose at least fifty species identified as solely relying on the ash for their survival is tragic. On the positive side, Hymenoscyphus fraxineus, as the pathogenic fungus from East Asia that causes Chalara is now called, has spurred science and policy in the area of tree and plant health into action.

Source: How social and citizen science help challenge the limits of the biosecurity approach: the case of ash dieback.

Crowdsourcing. We talk about it. We teach people how to use it. But it is also an overused and underappreciated word, according to Forbes. Its influence is now spreading to government affairs thanks to the Internet. In 2013, President Obama called on federal agencies to use citizen science and crowdsourcing to tap the wisdom of the crowds (the citizens) to help solve scientific and societal problems.

In November 2014, the Office of Science and Technology Policy (OSTP) began developing the crowdsourcing toolkit through a “human-centered design workshop.” This is just one of many stories and initiatives in which the government is proactively harnessing collective wisdom and emerging technologies.

But how can governments use crowdsourcing and citizen science for effective citizen empowerment? The former is the practice of engaging a crowd or group for a common goal, while the latter, according to the White House, is “a form of open collaboration in which members of the public participate in the scientific process, including identifying research questions, collecting and analyzing data, interpreting results, and solving problems.”

Source: How Governments Apply Crowdsourcing To Spark Citizen Empowerment