


HCOMP 2015: San Diego, California, USA
- Elizabeth Gerber, Panos Ipeirotis: Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2015, November 8-11, 2015, San Diego, California, USA. AAAI Press 2015, ISBN 978-1-57735-741-4
Full Papers
- Elena Agapie, Jaime Teevan, Andrés Monroy-Hernández: Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. 2-11
- Sepehr Assadi, Justin Hsu, Shahin Jabbari: Online Assignment of Heterogeneous Tasks in Crowdsourcing Markets. 12-21
- Arpita Biswas, Deepthi Chander, Koustuv Dasgupta, Koyel Mukherjee, Mridula Singh, Tridib Mukherjee: PISCES: Participatory Incentive Strategies for Effective Community Engagement in Smart Cities. 22-31
- Manuel Blum, Santosh Srinivas Vempala: Publishable Humanly Usable Secure Password Creation Schemas. 32-41
- Luca de Alfaro, Vassilis Polychronopoulos, Michael Shavlovsky: Reliable Aggregation of Boolean Crowdsourced Tasks. 42-51
- Lili Dworkin, Michael J. Kearns: From "In" to "Over": Behavioral Experiments on Whole-Network Computation. 52-61
- Ting-Hao (Kenneth) Huang, Walter S. Lasecki, Jeffrey P. Bigham: Guardian: A Crowd-Powered Spoken Dialog System for Web APIs. 62-71
- Julie Hui, Amos Glenn, Rachel Jue, Elizabeth Gerber, Steven Dow: Using Anonymity and Communal Efforts to Improve Quality of Crowdsourced Feedback. 72-82
- Hyun Joon Jung, Matthew Lease: Modeling Temporal Crowd Work Quality with Limited Supervision. 83-91
- Ece Kamar, Ashish Kapoor, Eric Horvitz: Identifying and Accounting for Task-Dependent Bias in Crowdsourcing. 92-101
- Markus Krause, René F. Kizilcec: To Play or Not to Play: Interactions between Response Quality and Task Complexity in Games and Paid Crowdsourcing. 102-109
- Kurt Luther, Nathan Hahn, Steven P. Dow, Aniket Kittur: Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines. 110-119
- An Thanh Nguyen, Byron C. Wallace, Matthew Lease: Combining Crowd and Expert Labels Using Decision Theoretic Active Learning. 120-129
- Besmira Nushi, Adish Singla, Anja Gruenheid, Erfan Zamanian, Andreas Krause, Donald Kossmann: Crowd Access Path Optimization: Diversity Matters. 130-139
- Alexandra Papoutsaki, Hua Guo, Danaë Metaxa-Kakavouli, Connor Gramazio, Jeff Rasley, Wenting Xie, Guan Wang, Jeff Huang: Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters. 140-149
- Genevieve Patterson, Grant Van Horn, Serge J. Belongie, Pietro Perona, James Hays: Tropel: Crowdsourcing Detectors with Minimal Training. 150-159
- Filipe Rodrigues, Bernardete Ribeiro, Mariana Lourenço, Francisco C. Pereira: Learning Supervised Topic Models from Crowds. 160-168
- Elliot Salisbury, Sebastian Stein, Sarvapali D. Ramchurn: CrowdAR: Augmenting Live Video with a Real-Time Crowd. 169-177
- Akash Das Sarma, Ayush Jain, Arnab Nandi, Aditya G. Parameswaran, Jennifer Widom: Surpassing Humans and Computers with JELLYBEAN: Crowd-Vision-Hybrid Counting Algorithms. 178-187
- Antti Ukkonen, Behrouz Derakhshan, Hannes Heikinheimo: Crowdsourced Nonparametric Density Estimation Using Relative Distances. 188-197
- James Y. Zou, Kamalika Chaudhuri, Adam Tauman Kalai: Crowdsourcing Feature Discovery via Adaptively Chosen Comparisons. 198-205
Works in Progress Abstracts
- Masayuki Ashikawa, Takahiro Kawamura, Akihiko Ohsuga: Proposal of Grade Training Method in Private Crowdsourcing System. 2-3
- Carlo Bernaschina, Ilio Catallo, Piero Fraternali, Davide Martinenghi: On the Role of Task Design in Crowdsourcing Campaigns. 4-5
- Jie Gao, Hankz Hankui Zhuo, Subbarao Kambhampati, Lei Li: Acquiring Planning Knowledge via Crowdsourcing. 6-7
- Alastair J. Gill, Francisco Iacobelli: Understanding Socially Constructed Concepts Using Blogs Data. 8-9
- Mayumi Hadano, Makoto Nakatsuji, Hiroyuki Toda, Yoshimasa Koike: Assigning Tasks to Workers by Referring to Their Schedules in Mobile Crowdsourcing. 10-11
- Heeju Hwang: Moral Reminder as a Way to Improve Worker Performance on Amazon Mechanical Turk. 12-13
- Yongsung Kim, Emily Harburg, Shana Azria, Elizabeth Gerber, Darren Gergle, Haoqi Zhang: Enabling Physical Crowdsourcing On-the-Go with Context-Sensitive Notifications. 14-15
- Pavel Kucherbaev, Florian Daniel, Stefano Tranquillini, Maurizio Marchese: Modeling and Exploration of Crowdsourcing Micro-Tasks Execution. 16-17
- Fatma Layas, Helen Petrie, Christopher Power: A Cross-Cultural Study of Motivations to Participate in a Crowdsourcing Project to Support People with Disabilities. 18-19
- Hongwei Li, Qiang Liu: Cheaper and Better: Selecting Good Workers for Crowdsourcing. 20-21
- David Merritt, Mark S. Ackerman, Mark W. Newman, Pei-Yao Hung, Jacob Mandel, Erica Ackerman: Using Expertise for Crowd-Sourcing. 22-23
- Anirban Mondal, Gurulingesh Raravi, Amandeep Chugh, Tridib Mukherjee: LoRUS: A Mobile Crowdsourcing System for Efficiently Retrieving the Top-k Relevant Users in a Spatial Window. 24-25
- Ellie Pavlick, Chris Callison-Burch: Extracting Structured Information via Automatic + Human Computation. 26-27
- Yuko Sakurai, Satoshi Oyama, Masato Shinoda, Makoto Yokoo: Flexible Reward Plans to Elicit Truthful Predictions in Crowdsourcing. 28-29
- Mehrnoosh Sameki, Danna Gurari, Margrit Betke: Predicting Quality of Crowdsourced Image Segmentations from Crowd Behavior. 30-31
- Nobuyuki Shimizu, Atsuyuki Morishima, Ryota Hayashi: A Crowdsourcing Method for Obtaining Rephrased Questions. 32-33
- Adish Singla, Eric Horvitz, Pushmeet Kohli, Andreas Krause: Learning to Hire Teams. 34-35
- David Sirkin, Kerstin Fischer, Lars Christian Jensen, Wendy Ju: How Effective an Odd Message Can Be: Appropriate and Inappropriate Topics in Speech-Based Vehicle Interfaces. 36-37
- Barry Smyth, Rachael Rafter, Sam Banks: A Game with a Purpose for Recommender Systems. 38-39
- Beatrice Valeri, Shady Elbassuoni, Sihem Amer-Yahia: Acquiring Reliable Ratings from the Crowd. 40-41
- Ketan Madhavrao Vazirabadkar, Siddhant Surendra Gadre, Rajeev Sebastian, Nikhil Dwivedi: Crowdsourcing Based on GPS. 42-43
- Matteo Venanzi, Oliver Parson, Alex Rogers, Nick R. Jennings: The ActiveCrowdToolkit: An Open-Source Tool for Benchmarking Active Learning Algorithms for Crowdsourcing Research. 44-45
- Yu Wu, Jessica Kropczynski, Raquel O. Prates, John M. Carroll: The Rise of Curation on GitHub. 46-47
