Crowdsourcing & Open Innovation

The Laboratory for Innovation Science at Harvard (LISH) is currently working on a number of studies, experiments, and projects centered on Crowdsourcing & Open Innovation. These initiatives open the search for solutions to complex problems to the crowd, rather than limiting it to the knowledge available within a single organization.

Listed below are examples of research that has benefited greatly from drawing on external knowledge rather than relying solely on in-house expertise, including a number of crowdsourcing challenges run with our partners at NASA, the Broad Institute, and others. Browse LISH’s Crowdsourcing & Open Innovation projects and papers below.

If you are looking to run a crowdsourcing challenge, LISH's Find A Crowdsourcing Platform tool can help you identify a platform that meets your needs.

Publications

Philip Brookins, Dmitry Ryvkin, and Andrew Smyth. 3/8/2021. “Indefinitely repeated contests: An experimental study.” Experimental Economics.
We experimentally explore indefinitely repeated contests. Theory predicts more cooperation, in the form of lower expenditures, in indefinitely repeated contests with a longer expected time horizon. Our data support this prediction, although this result attenuates with contest experience. Theory also predicts more cooperation in indefinitely repeated contests compared to finitely repeated contests of the same expected length, and we find empirical support for this. Finally, theory predicts no difference in cooperation across indefinitely repeated winner-take-all and proportional-prize contests, yet we find evidence of less cooperation in the latter, though only in longer treatments with more contests played. Our paper extends the experimental literature on indefinitely repeated games to contests and, more generally, contributes to an infant empirical literature on behavior in indefinitely repeated games with “large” strategy spaces.
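The “expected time horizon” in designs like this is typically induced with a random continuation rule: after each round the supergame continues with probability δ, so the expected length is 1/(1 − δ). The sketch below uses illustrative parameters rather than the paper’s actual design, and only shows how such a rule is commonly simulated.

```python
import random

def supergame_length(delta: float, rng: random.Random) -> int:
    """Length of one indefinitely repeated contest under a random
    continuation rule: after each round, play continues with
    probability delta (so lengths are geometrically distributed)."""
    rounds = 1
    while rng.random() < delta:
        rounds += 1
    return rounds

# Illustrative continuation probabilities, not the paper's parameters:
# the expected horizon is E[length] = 1 / (1 - delta).
rng = random.Random(0)
for delta in (0.5, 0.9):
    draws = [supergame_length(delta, rng) for _ in range(100_000)]
    print(f"delta={delta}: theory={1 / (1 - delta):.1f}, "
          f"simulated={sum(draws) / len(draws):.1f}")
```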
Philip Brookins and Paan Jindapon. 2/20/2021. “Risk preference heterogeneity in group contests.” Journal of Mathematical Economics.
We analyze the first model of a group contest with players that are heterogeneous in their risk preferences. In our model, individuals’ preferences are represented by a utility function exhibiting a generalized form of constant absolute risk aversion, allowing us to consider any combination of risk-averse, risk-neutral, and risk-loving players. We begin by proving equilibrium existence and uniqueness under both linear and convex investment costs. Then, we explore how the sorting of a compatible set of players by their risk attitudes into competing groups affects aggregate investment. With linear costs, a balanced sorting (i.e., minimizing the variance in risk attitudes across groups) always produces an aggregate investment level that is at least as high as an unbalanced sorting (i.e., maximizing the variance in risk attitudes across groups). Under convex costs, however, identifying which sorting is optimal is more nuanced and depends on preference and cost parameters.
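The abstract does not reproduce the utility specification; one standard single-parameter family that nests all three risk attitudes (shown here for illustration, not necessarily the authors’ exact functional form) is the constant-absolute-risk-aversion family:

```latex
% CARA utility indexed by the Arrow-Pratt coefficient a:
%   a > 0 risk-averse,  a = 0 risk-neutral,  a < 0 risk-loving.
u_a(x) =
\begin{cases}
  \dfrac{1 - e^{-a x}}{a}, & a \neq 0, \\[4pt]
  x, & a = 0,
\end{cases}
\qquad
-\frac{u_a''(x)}{u_a'(x)} = a \quad \text{for all } x.
```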
Karim R. Lakhani, Anne-Laure Fayard, Manos Gkeredakis, and Jin Hyun Paik. 10/5/2020. “OpenIDEO (B).”
In the midst of 2020, as the coronavirus pandemic was unfolding, OpenIDEO, an online open innovation platform focused on design-driven solutions to social issues, rapidly launched a new challenge to improve access to health information, empower communities to stay safe during the COVID-19 crisis, and inspire global leaders to communicate effectively. OpenIDEO was particularly suited to challenges that required cross-system or sector-wide collaboration because of its focus on social impact and ecosystem design, but its leadership pondered how it could continue to improve virtual collaboration and share its insights from nearly a decade of running online challenges. Conceived as an exercise in disruptive digital innovation, OpenIDEO had successfully created a strong open innovation community, but how could it sustain, or even improve, its support for community members and increase the social impact of its online challenges in the coming years?
Jin Paik, Martin Schöll, Rinat Sergeev, Steven Randazzo, and Karim R. Lakhani. 2/26/2020. “Innovation Contests for High-Tech Procurement.” Research-Technology Management, 63(2), 36-45.
Innovation managers rarely use crowdsourcing as an innovation instrument despite extensive academic and theoretical research. One barrier to adoption is the lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case studies with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy demonstrate potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests, developed from a high-tech industry perspective, that is applicable to other industries.
Raymond H. Mak, Michael G. Endres, Jin H. Paik, Rinat A. Sergeev, Hugo Aerts, Christopher L. Williams, Karim R. Lakhani, and Eva C. Guinan. 4/18/2019. “Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting.” JAMA Oncology, 5(5), 654-661.

Importance: Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

Objective: To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

Design, Setting, and Participants: We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55,000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median, 157 images per scan; 77,942 images in total; 8,144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and were given feedback on their performance throughout the contest, including from the expert clinician.

Main Outcomes and Measures: The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.

Results: A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phases 2 and 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.
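The Dice coefficient reported above measures overlap between an algorithm’s segmentation mask and the expert’s: twice the intersection divided by the sum of the two mask sizes, so 0 means no overlap and 1 a perfect match. A minimal sketch on hypothetical binary masks follows; the challenge’s custom S score is not reproduced here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), ranging from 0 to 1."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D masks (real inputs would be per-slice CT tumor contours)
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```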

Conclusions and Relevance: A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
