Crowdsourcing & Open Innovation

The Laboratory for Innovation Science at Harvard (LISH) is currently working on a number of studies, experiments, and projects centered on Crowdsourcing & Open Innovation. These initiatives optimize the search for solutions to complex problems by calling on the crowd, rather than limiting inquiry to the knowledge available within a single organization.

Listed below are examples of research that has benefited greatly from drawing on external knowledge rather than relying solely on in-house resources, including a number of crowdsourcing challenges from our partners at NASA, the Broad Institute, and others. Browse LISH's Crowdsourcing & Open Innovation projects and papers below.

If you are looking to run a crowdsourcing challenge, LISH's Find A Crowdsourcing Platform tool can help you identify a platform that meets your needs.

Publications

Elizabeth E. Richard, Jeffrey R. Davis, Jin H. Paik, and Karim R. Lakhani. 4/25/2019. "Sustaining open innovation through a 'Center of Excellence'." Strategy & Leadership.

This paper presents NASA’s experience using a Center of Excellence (CoE) to scale and sustain an open innovation program as an effective problem-solving tool and includes strategic management recommendations for other organizations based on lessons learned.

This paper defines four phases of implementing an open innovation program: Learn, Pilot, Scale, and Sustain. It provides guidance on the time required for each phase and recommendations for how to utilize a CoE to succeed. Recommendations are based on the experience of NASA's Human Health and Performance Directorate and the experience of the Laboratory for Innovation Science at Harvard running hundreds of challenges with research and development organizations.

Lessons learned include the importance of grounding innovation initiatives in the business strategy, assessing the portfolio of work to select problems most amenable to solving via crowdsourcing methodology, framing problems that external parties can solve, thinking strategically about early wins, selecting the right platforms, developing criteria for evaluation, and advancing a culture of innovation. Establishing a CoE provides an effective infrastructure to address both technical and cultural issues.

The NASA experience spanned more than seven years, from its initial exploration of open innovation concepts to the successful scaling and sustaining of an open innovation program; this paper provides recommendations on how to shorten that timeline to three years.

Andrea Blasco, Michael G. Endres, Rinat A. Sergeev, Anup Jonchhe, Max Macaluso, Rajiv Narayan, Ted Natoli, Jin H. Paik, Bryan Briney, Chunlei Wu, Andrew I. Su, Aravind Subramanian, and Karim R. Lakhani. 9/2019. "Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions." PLOS One, 14, 9.
Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
Andrea Blasco, Olivia S. Jung, Karim R. Lakhani, and Michael E. Menietti. 4/2019. "Incentives for Public Goods Inside Organizations: Field Experimental Evidence." Journal of Economic Behavior & Organization, 160, Pp. 214-229.

We report results of a natural field experiment conducted at a medical organization that sought contribution of public goods (i.e., projects for organizational improvement) from its 1200 employees. Offering a prize for winning submissions boosted participation by 85 percent without affecting the quality of the submissions. The effect was consistent across gender and job type. We posit that the allure of a prize, in combination with mission-oriented preferences, drove participation. Using a simple model, we estimate that these preferences explain about a third of the magnitude of the effect. We also find that these results were sensitive to the solicited person’s gender.

Herman B. Leonard, Mitchell B. Weiss, Jin H. Paik, and Kerry Herman. 2018. SOFWERX: Innovation at U.S. Special Operations Command. Harvard Business School Case. Harvard Business School.
James “Hondo” Geurts, the Acquisition Executive for U.S. Special Operations Command, was in the middle of his Senate confirmation hearing in 2017 to become Assistant Secretary of the Navy for Research, Development and Acquisition. The questions had a common theme: how would Geurts’s experience running an innovative procurement effort for U.S. Special Forces units enable him to change a much larger—and much more rigid—organization like the U.S. Navy? In one of the most secretive parts of the U.S. military, Geurts founded an open platform called SOFWERX to speed the rate of ideas to Navy SEALs, Army Special Forces, and the like. His team even sourced the idea for a hoverboard from a YouTube video. But how should things like SOFWERX and prototypes like the EZ-Fly find a place within the Navy writ large?
Luke Boosey, Philip Brookins, and Dmitry Ryvkin. 2018. “Contests between groups of unknown size.” Games and Economic Behavior.
We study group contests where group sizes are stochastic and unobservable to participants at the time of investment. When the joint distribution of group sizes is symmetric, the symmetric equilibrium aggregate investment is lower than in a symmetric group contest in which the group size is commonly known and fixed at its expected value. A similar result holds for two groups with asymmetric distributions of sizes. For the symmetric case, the reduction in individual and aggregate investment due to group size uncertainty increases with the variance in relative group impacts. When group sizes are independent conditional on a common shock, a stochastic increase in the common shock mitigates the effect of group size uncertainty unless the common and idiosyncratic components of group size are strong complements. Finally, group size uncertainty undermines the robustness of the group size paradox otherwise present in the model.
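For readers unfamiliar with the setup, the following is a sketch of a canonical group contest success function, the standard modeling framework in this literature; the paper's exact specification may differ. A group's probability of winning is proportional to its members' aggregate investment:

```latex
% Illustrative Tullock-style group contest success function (assumed
% standard form, not necessarily the paper's exact model).
% Group g has members i = 1, ..., n_g, each choosing investment x_i >= 0.
p_g = \frac{\sum_{i \in g} x_i}{\sum_{h} \sum_{j \in h} x_j}
```

Each member weighs the private cost of investing against the group's share of total investment. When the group sizes \(n_g\) are stochastic and unobserved, players best-respond to the expected contest environment rather than the realized one, which is the channel through which uncertainty can depress equilibrium investment.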
Philip Brookins, John P. Lightle, and Dmitry Ryvkin. 2018. “Sorting and communication in weak-link group contests.” Journal of Economic Behavior & Organization, 152, Pp. 64-80.
We experimentally study the effects of sorting and communication in contests between groups of heterogeneous players whose within-group efforts are perfect complements. Contrary to the common wisdom that competitive balance bolsters performance in contests, in this setting theory predicts that aggregate output increases with the variation in abilities between groups, i.e., it is maximized by the most unbalanced sorting of players. However, the data do not support this prediction. In the absence of communication, we find no effect of sorting on aggregate output, while in the presence of within-group communication aggregate output is 33% higher under balanced sorting than under unbalanced sorting. This reversal of the prediction is in line with the competitive balance heuristic. The results have implications for the design of optimal groups in organizations using relative performance pay.