This paper from the ever-prolific Cornell Lab of Ornithology team describes the creation of a new dataset of annotated images. Those interested in volunteer citizen science may be drawn to another of its conclusions: "We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost." –CJL
Abstract:
We introduce tools and methodologies to collect high quality, large scale fine-grained computer vision datasets using citizen scientists – crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high quality dataset containing 48,562 images of North American birds with 555 categories, part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the web site of the Cornell Lab of Ornithology.