An online approach for joint task assignment and worker evaluation in crowd-sourcing

Abstract

The paper tackles the problem of finding the correct solution to a set of multiple-choice questions or labeling tasks by adaptively assigning them to workers in a crowdsourcing system. When nothing is initially known (beyond common a priori statistics) about the workers and the questions involved, the problem becomes quite challenging: it requires jointly learning the workers' abilities and the questions' difficulties while adaptively assigning questions to the most appropriate workers, so as to maximize the chances of identifying the correct answers. To address this problem, we first cast it into a suitably constructed Bayesian framework that yields an analytically tractable (closed-form) single-question inference step, and we then handle the more general setting via the Expectation Propagation algorithm, an approximate iterative message-passing technique. We then exploit the (time-varying) information gathered by the inference framework as adaptive weights for a maximum-weight-matching task assignment policy, proposing a computationally efficient algorithm that maximizes the entropy reduction for the questions assigned at each step. Experimental results on both synthetic and real-world data show that the proposed algorithm can significantly improve accuracy in predicting the correct solution to multiple-choice questions. © 2018 Elsevier B.V.
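The core assignment idea described above, picking worker–question pairs that maximize expected entropy reduction via a maximum-weight matching, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simplified binary-question model in which each worker is summarized by a single accuracy parameter and each question by a current belief that the first option is correct; both the model and all numbers are hypothetical, and the matching uses SciPy's Hungarian-algorithm solver in place of the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def entropy(p):
    # Binary entropy in bits, clipped for numerical safety at 0/1.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_entropy_reduction(belief, acc):
    """Expected drop in answer entropy if a worker with accuracy `acc`
    answers a binary question whose current belief that option 1 is
    correct is `belief` (a toy stand-in for the paper's inference)."""
    h_prior = entropy(belief)
    # Probability the worker reports option 1.
    p_rep1 = acc * belief + (1 - acc) * (1 - belief)
    # Posterior belief after each possible report (Bayes' rule).
    post1 = acc * belief / p_rep1
    post0 = (1 - acc) * belief / (1 - p_rep1)
    h_post = p_rep1 * entropy(post1) + (1 - p_rep1) * entropy(post0)
    return h_prior - h_post

# Toy instance: 3 workers, 3 questions (all values hypothetical).
worker_acc = np.array([0.9, 0.7, 0.55])
beliefs = np.array([0.5, 0.8, 0.65])  # current P(option 1 correct)

gain = np.array([[expected_entropy_reduction(b, a) for b in beliefs]
                 for a in worker_acc])

# Maximum-weight matching: negate gains so the minimizer maximizes them.
rows, cols = linear_sum_assignment(-gain)
for w, q in zip(rows, cols):
    print(f"worker {w} -> question {q} (expected gain {gain[w, q]:.3f})")
```

In a full online loop, the gain matrix would be recomputed after every batch of answers from the updated posteriors, so the weights are time-varying exactly as the abstract describes.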

Publication
Pervasive and Mobile Computing