Relative Feedback Increases Disparities in Effort and Performance in Crowdsourcing Contests: Evidence from a Quasi-Experiment on Topcoder

Citation:

Milena Tsvetkova, Sebastian Müller, Oana Vuculescu, Haylee Ham, and Rinat A. Sergeev. 11/11/2022. “Relative Feedback Increases Disparities in Effort and Performance in Crowdsourcing Contests: Evidence from a Quasi-Experiment on Topcoder.” Proceedings of the ACM on Human-Computer Interaction, 6, CSCW2, pp. 1–27.

Abstract:

Rankings and leaderboards are often used in crowdsourcing contests and online communities to motivate individual contributions, but feedback based on social comparison can also have negative effects. Here, we study the unequal effects of such feedback on individual effort and performance for individuals of different ability. We hypothesize that the effects of social comparison differ for top performers and bottom performers in a way that increases the inequality between the two. We use a quasi-experimental design to test our predictions with data from Topcoder, a large online crowdsourcing platform that publishes computer programming contests. We find that in contests where the submitted code is evaluated against others' submissions, rather than on an absolute scale, top performers increase their effort while bottom performers decrease it. As a result, relative scoring leads to better outcomes for those at the top but lower engagement for those at the bottom. Our findings expose an important but overlooked drawback of gamified competitions, rankings, and relative evaluations, with potential implications for crowdsourcing markets, online learning environments, online communities, and organizations in general.