Crowdsourcing & Open Innovation

Jin Paik, Martin Schöll, Rinat Sergeev, Steven Randazzo, and Karim R. Lakhani. 2/26/2020. “Innovation Contests for High-Tech Procurement.” Research-Technology Management 63 (2): 36-45.
Innovation managers rarely use crowdsourcing as an innovation instrument, despite extensive academic and theoretical research. One barrier to adoption is the lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case study examples with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy demonstrate potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests, developed from a high-tech industry perspective, that is applicable to other industries.
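The following minimal sketch illustrates the kind of phase-by-phase total-cost comparison the framework supports. All dollar figures are hypothetical placeholders, not the NASA or DOE numbers from the article; only the 27 and 33 percent savings figures above come from the paper.

import sys

# Illustrative comparison of total cost for a crowdsourcing contest versus
# traditional procurement, split into an upstream "framing" phase and a
# downstream "solving" phase. All figures are invented for illustration.

def total_cost(framing, solving):
    return framing + solving

traditional = total_cost(framing=20_000, solving=180_000)  # downstream cost-intensive
contest = total_cost(framing=60_000, solving=80_000)       # upstream cost-intensive

savings = 1 - contest / traditional
print(f"traditional: ${traditional:,}  contest: ${contest:,}  savings: {savings:.0%}")
# Prints a savings of 30% for these invented numbers, which happens to fall in
# the same range as the 27 and 33 percent reported for the NASA and DOE cases.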

Races vs. Tournaments

Contests are frequently used to raise worker productivity and innovation in business, government, and many other settings. They can take many different formats or designs, but two seem prevalent: the race and the tournament. Races set the incentives by rewarding the first person to meet a specified,...
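A toy simulation of the two award rules, under assumptions that are entirely illustrative (random finish times and qualities, a quality threshold of 0.8, and a deadline of 10): a race pays the first entrant to cross the threshold, while a tournament pays the best submission made by the deadline.

import random

random.seed(0)
solvers = [{"id": i, "finish_time": random.uniform(1, 10), "quality": random.random()}
           for i in range(20)]

THRESHOLD, DEADLINE = 0.8, 10.0

# Race: earliest finisher among those whose solution meets the quality threshold.
qualified = [s for s in solvers if s["quality"] >= THRESHOLD]
race_winner = min(qualified, key=lambda s: s["finish_time"]) if qualified else None

# Tournament: best quality among all submissions made before the deadline.
on_time = [s for s in solvers if s["finish_time"] <= DEADLINE]
tournament_winner = max(on_time, key=lambda s: s["quality"])

print("race winner:", race_winner["id"] if race_winner else None)
print("tournament winner:", tournament_winner["id"])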

Optimal Prize Structure

One of the most influential design parameters for contests is the prize structure, that is, the number and size of prizes. In developing best practices, we are working to give practitioners guidance on optimizing the use of prize funds. Optimal selection of prizes is a complex task. For tasks with diminishing returns to effort (the 100th hour of work improves the output less than the 1st hour),...
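A short numerical illustration of diminishing returns to effort, using a hypothetical concave output function output(h) = sqrt(h); the function is chosen only to make the parenthetical above concrete, not taken from the research.

import math

def output(hours):
    # Hypothetical concave production function: output grows with effort,
    # but each additional hour adds less than the one before it.
    return math.sqrt(hours)

first_hour_gain = output(1) - output(0)
hundredth_hour_gain = output(100) - output(99)
print(f"gain from the 1st hour:   {first_hour_gain:.3f}")     # 1.000
print(f"gain from the 100th hour: {hundredth_hour_gain:.3f}")  # ~0.050, about 20x smaller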

Best Management Practices

LISH is working to develop testable systems and methods that help open innovation (OI) practitioners identify best practices. To date, the lab has spent extensive time studying both contests and communities with for-profit companies, governments, academic research centers, and platforms. Research in these areas explores...

Andrea Blasco, Ted Natoli, Michael G. Endres, Rinat A. Sergeev, Steven Randazzo, Jin Paik, Max Macaluso, Rajiv Narayan, Karim R. Lakhani, and Aravind Subramanian. 4/6/2021. “Improving Deconvolution Methods in Biology through Open Innovation Competitions: An Application to the Connectivity Map.” Bioinformatics.
Do machine learning methods improve standard deconvolution techniques for gene expression data? This article uses a unique new dataset combined with an open innovation competition to evaluate a wide range of approaches developed by 294 competitors from 20 countries. The competition’s objective was to address a deconvolution problem critical to analyzing genetic perturbations from the Connectivity Map: separating the expression of individual genes from raw measurements obtained from gene pairs. We evaluated the outcomes using ground-truth data (direct measurements for single genes) obtained from the same samples.
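The actual Connectivity Map measurements are bead-level signals, and the competition's methods are more involved than what follows; the sketch below is a deliberately simplified, linear-pooling stand-in that only shows the evaluation pattern described above (recover per-gene values, then score them against ground truth). All model choices here are assumptions for illustration.

import numpy as np

# Simplified stand-in for the deconvolution task (not the CMap bead-level
# method): assume each raw measurement is the sum of two genes' expression
# values plus noise, recover per-gene values by least squares, then score
# the estimates against ground-truth single-gene measurements.

rng = np.random.default_rng(0)
n_genes, n_pairs = 50, 200

truth = rng.gamma(shape=2.0, scale=1.0, size=n_genes)   # ground-truth expression
pairs = rng.integers(0, n_genes, size=(n_pairs, 2))     # random gene pairings

A = np.zeros((n_pairs, n_genes))
for row, (g1, g2) in enumerate(pairs):
    A[row, g1] += 1.0
    A[row, g2] += 1.0

y = A @ truth + rng.normal(scale=0.1, size=n_pairs)     # pair-level measurements
estimate, *_ = np.linalg.lstsq(A, y, rcond=None)        # deconvolved estimates

print(f"correlation with ground truth: {np.corrcoef(truth, estimate)[0, 1]:.3f}")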
Raymond H. Mak, Michael G. Endres, Jin H. Paik, Rinat A. Sergeev, Hugo Aerts, Christopher L. Williams, Karim R. Lakhani, and Eva C. Guinan. 4/18/2019. “Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting.” JAMA Oncology 5 (5): 654-661.

Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist work force does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55,000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median of 157 images per scan; 77,942 images in total; 8,144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and given feedback on their performance throughout the contest, including from the expert clinician.

Main Outcomes and Measures: The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.
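One of the overlap metrics named in this abstract, the Dice coefficient, is straightforward to compute for a pair of binary masks; a minimal version follows. (The contest’s custom S score is not specified in the abstract and is not reproduced here.)

import numpy as np

def dice(pred: np.ndarray, expert: np.ndarray) -> float:
    # Dice coefficient between two binary segmentation masks:
    # 2 * |A intersect B| / (|A| + |B|).
    pred, expert = pred.astype(bool), expert.astype(bool)
    denom = pred.sum() + expert.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, expert).sum() / denom

# Tiny usage example with made-up 2D masks.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(f"Dice = {dice(a, b):.2f}")  # 2*2 / (3+3) = 0.67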

A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phase 2 and phase 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.
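The abstract does not say how the top algorithms were combined; a pixel-wise majority vote, sketched below, is one common and plausible way to ensemble several binary segmentations (an assumption, not the study’s documented method).

import numpy as np

def majority_vote(masks):
    # Pixel-wise majority vote across several binary masks of the same shape.
    stacked = np.stack([m.astype(bool) for m in masks])  # (n_models, H, W)
    return stacked.mean(axis=0) >= 0.5                   # True where most models agree

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
print(majority_vote([m1, m2, m3]).astype(int))  # [[1 1 0], [0 1 1]]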

A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.

Elizabeth E. Richard, Jeffrey R. Davis, Jin H. Paik, and Karim R. Lakhani. 4/25/2019. “Sustaining open innovation through a ‘Center of Excellence’.” Strategy & Leadership.

This paper presents NASA’s experience using a Center of Excellence (CoE) to scale and sustain an open innovation program as an effective problem-solving tool and includes strategic management recommendations for other organizations based on lessons learned.

This paper defines four phases of implementing an open innovation program: Learn, Pilot, Scale, and Sustain. It provides guidance on the time required for each phase and recommendations for how to utilize a CoE to succeed. The recommendations are based on the experience of NASA’s Human Health and Performance Directorate and on the Laboratory for Innovation Science at Harvard’s experience running hundreds of challenges with research and development organizations.

Lessons learned include the importance of grounding innovation initiatives in the business strategy, assessing the portfolio of work to select problems most amenable to solving via crowdsourcing methodology, framing problems that external parties can solve, thinking strategically about early wins, selecting the right platforms, developing criteria for evaluation, and advancing a culture of innovation. Establishing a CoE provides an effective infrastructure to address both technical and cultural issues.

The NASA experience spanned more than seven years, from initial learning about open innovation concepts to the successful scaling and sustaining of an open innovation program; this paper provides recommendations on how to shorten this timeline to three years.

Andrea Blasco, Michael G. Endres, Rinat A. Sergeev, Anup Jonchhe, Max Macaluso, Rajiv Narayan, Ted Natoli, Jin H. Paik, Bryan Briney, Chunlei Wu, Andrew I. Su, Aravind Subramanian, and Karim R. Lakhani. 9/2019. “Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions.” PLOS ONE 14 (9).
Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
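The paper evaluates gains quantitatively against established baselines; the exact metrics vary by problem and are not reproduced here. The sketch below shows only the generic evaluation pattern, with invented data and an invented metric (mean per-gene Spearman correlation against held-out ground truth).

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
truth = rng.normal(size=(100, 30))                           # held-out ground truth (samples x genes)
baseline = truth + rng.normal(scale=1.0, size=truth.shape)   # established algorithm (simulated output)
candidate = truth + rng.normal(scale=0.5, size=truth.shape)  # competition solution (simulated output)

def mean_per_gene_corr(pred, ref):
    # Average Spearman correlation between predicted and reference values, per gene.
    return float(np.mean([spearmanr(pred[:, g], ref[:, g])[0] for g in range(ref.shape[1])]))

b, c = mean_per_gene_corr(baseline, truth), mean_per_gene_corr(candidate, truth)
print(f"baseline: {b:.2f}  candidate: {c:.2f}  gain: {c - b:+.2f}")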
Herman B. Leonard, Mitchell B. Weiss, Jin H. Paik, and Kerry Herman. 2018. SOFWERX: Innovation at U.S. Special Operations Command. Harvard Business School Case. Harvard Business School.
James “Hondo” Geurts, the Acquisition Executive for U.S. Special Operations Command, was in the middle of his 2017 Senate confirmation hearing to become Assistant Secretary of the Navy for Research, Development and Acquisition. The questions had a common theme: how would Geurts’s experience running an innovative procurement effort for U.S. Special Forces units enable him to change a much larger, and much more rigid, organization like the U.S. Navy? In one of the most secretive parts of the U.S. military, Geurts founded an open platform called SOFWERX to speed the flow of ideas to Navy SEALs, Army Special Forces, and the like. His team even sourced the idea for a hoverboard from a YouTube video. But how should things like SOFWERX and prototypes like the EZ-Fly find a place within the Navy writ large?
Luke Boosey, Philip Brookins, and Dmitry Ryvkin. 2018. “Contests between groups of unknown size.” Games and Economic Behavior.
We study group contests where group sizes are stochastic and unobservable to participants at the time of investment. When the joint distribution of group sizes is symmetric, the symmetric equilibrium aggregate investment is lower than in a symmetric group contest whose fixed, commonly known group size equals the expected group size. A similar result holds for two groups with asymmetric distributions of sizes. For the symmetric case, the reduction in individual and aggregate investment due to group size uncertainty increases with the variance in relative group impacts. When group sizes are independent conditional on a common shock, a stochastic increase in the common shock mitigates the effect of group size uncertainty unless the common and idiosyncratic components of group size are strong complements. Finally, group size uncertainty undermines the robustness of the group size paradox otherwise present in the model.
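For readers unfamiliar with the setup, the standard lottery (Tullock) group contest success function is shown below in its generic textbook form; the paper's exact specification, including its group impact functions, may differ.

\[
  p_i(X_1,\dots,X_m) \;=\; \frac{X_i}{\sum_{j=1}^{m} X_j},
  \qquad X_i = \sum_{k \in \text{group } i} x_{ik},
\]

so group $i$ wins the prize with probability proportional to its aggregate investment $X_i$, and each member $k$ of group $i$ with individual investment $x_{ik}$ earns an expected payoff of roughly $p_i v - x_{ik}$ for prize value $v$. The comparison in the abstract concerns equilibrium investment in games of this type when the number of contributors to each $X_i$ is uncertain at the time of investment.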
