Crowdsourcing & Open Innovation

Working Paper
Jana Gallus, Olivia S. Jung, and Karim R. Lakhani. Working Paper. “Managerial Recognition as an Incentive for Innovation Platform Engagement: A Field Experiment and Interview Study at NASA.” HBS Working Paper Series, No. 20-059. Publisher's Version.
2023
Olivia S. Jung, Fahima Begum, Andrea Dorbu, Sara J. Singer, and Patricia Satterstrom. 7/17/2023. “Ideas from the Frontline: Improvement Opportunities in Federally Qualified Health Centers.” Journal of General Internal Medicine. Publisher's Version.

Background

Engaging frontline clinicians and staff in quality improvement is a promising bottom-up approach to transforming primary care practices. This may be especially true in federally qualified health centers (FQHCs) and similar safety-net settings where large-scale, top-down transformation efforts are often associated with declining worker morale and increasing burnout. Innovation contests, which decentralize problem-solving, can be used to involve frontline workers in idea generation and selection.

Objective

We aimed to describe the ideas that frontline clinicians and staff suggested via organizational innovation contests in a national sample of 54 FQHCs.

Interventions

Innovation contests solicited ideas for improving care from all frontline workers—regardless of professional expertise, job title, or organizational tenure, but excluding senior management—and offered opportunities to vote on ideas.

Participants

A total of 1,417 frontline workers across all participating FQHCs generated 2,271 improvement opportunities.

Approaches

We performed a content analysis and organized the ideas into codes (e.g., standardization, workplace perks, new service, staff relationships, community development) and categories (e.g., operations, employees, patients).

Key Results

Ideas from frontline workers in participating FQHCs called attention to standardization (n = 386, 17%), staffing (n = 244, 11%), patient experience (n = 223, 10%), staff training (n = 145, 6%), workplace perks (n = 142, 6%), compensation (n = 101, 5%), new service (n = 92, 4%), management-staff relationships (n = 82, 4%), and others. Voting results suggested that staffing resources, standardization, and patient communication were key issues among workers.

Conclusions

Innovation contests generated numerous ideas for improvement from the frontline. The issues described in this study have likely become even more salient today, as the COVID-19 pandemic has had devastating impacts on work environments and on the health and social needs of patients living in low-resourced communities. Continued work is needed to promote learning and information exchange about opportunities to improve and transform practices among policymakers, managers, and frontline providers and staff.

2022
Ademir Vrolijk and Zoe Szajnfarber. 12/20/2022. “The Opportunists in Innovation Contests: Understanding Whom to Attract and How to Attract Them.” Research-Technology Management, 66, 1, Pp. 30-40. Publisher's Version.
Organizations increasingly turn to innovation contests for solutions to their complex problems. But these contests still face a fundamental inefficiency: they need to attract many participants to find the right solution, resulting in high costs and uncertainty. Studies have identified multiple dichotomies of successful and unsuccessful solver types, but these typologies diverge. These studies also offer little guidance on how to attract successful solver types. We introduce the opportunist-transactor dichotomy, bridging whom to attract and how to attract them. Opportunists view the contest as an on-ramp to a new pursuit rather than a temporary undertaking. Characterizing solvers according to this new dichotomy was a better predictor of success than existing ones: in our context, most winners were opportunists. This type of solver was also reliably attracted by the seeker’s in-kind incentives, unlike those described by the other dichotomies. Our insights provide a deeper understanding of participants in complex contests and a concrete lever for influencing who shows up to solve.
Milena Tsvetkova, Sebastian Müller, Oana Vuculescu, Haylee Ham, and Rinat A. Sergeev. 11/11/2022. “Relative Feedback Increases Disparities in Effort and Performance in Crowdsourcing Contests: Evidence from a Quasi-Experiment on Topcoder.” Proceedings of the ACM on Human-Computer Interaction, 6, CSCW2, Pp. 1-27. Publisher's Version.
Rankings and leaderboards are often used in crowdsourcing contests and online communities to motivate individual contributions, but feedback based on social comparison can also have negative effects. Here, we study the unequal effects of such feedback on individual effort and performance for individuals of different ability. We hypothesize that the effects of social comparison differ for top performers and bottom performers such that the inequality between the two increases. We use a quasi-experimental design to test our predictions with data from Topcoder, a large online crowdsourcing platform that publishes computer programming contests. We find that in contests where the submitted code is evaluated against others' submissions, rather than using an absolute scale, top performers increase their effort while bottom performers decrease it. As a result, relative scoring leads to better outcomes for those at the top but lower engagement for bottom performers. Our findings expose an important but overlooked drawback of using gamified competitions, rankings, and relative evaluations, with potential implications for crowdsourcing markets, online learning environments, online communities, and organizations in general.
Frank Nagle, James Dana, Jennifer Hoffman, Steven Randazzo, and Yanuo Zhou. 3/2/2022. Census II of Free and Open Source Software — Application Libraries. The Linux Foundation, Harvard Laboratory for Innovation Science (LISH), and Open Source Security Foundation (OpenSSF). Publisher's Version.

Free and Open Source Software (FOSS) has become a critical part of the modern economy. There are tens of millions of FOSS projects, many of which are built into software and products we use every day. However, it is difficult to fully understand the health, economic value, and security of FOSS because it is produced in a decentralized and distributed manner. This distributed development approach makes it unclear how much FOSS, and precisely which FOSS projects, are most widely used. This lack of understanding is a critical problem faced by those who want to help enhance the security of FOSS (e.g., companies, governments, individuals) yet do not know which projects to start with. The problem has garnered widespread attention with the Heartbleed and Log4Shell vulnerabilities, which left hundreds of millions of devices susceptible to exploitation.

This report, Census II, is the second investigation into the widespread use of FOSS. It aggregates data from over half a million observations of FOSS libraries used in production applications at thousands of companies and aims to shed light on the most commonly used FOSS packages at the application library level. This effort builds on the Census I report, which focused on the lower-level critical operating system libraries and utilities, and improves our understanding of the FOSS packages that software applications rely on. Such insights will help identify critical FOSS packages so that resources can be prioritized to address security issues in this widely used software.

The Census II effort utilizes data from partner Software Composition Analysis (SCA) companies, including Snyk, the Synopsys Cybersecurity Research Center (CyRC), and FOSSA, which partnered with Harvard to advance the state of open source research. Our goal is not only to identify the most widely used FOSS but also to provide an example of how the distributed nature of FOSS requires a multi-party effort to fully understand the value and security of the FOSS ecosystem. Only through data-sharing, coordination, and investment will the value of this critical component of the digital economy be preserved for generations to come.

In addition to the detailed results on FOSS usage provided in the report, we identified five high-level findings: 1) the need for a standardized naming schema for software components, 2) the complexities associated with package versions, 3) the fact that much of the most widely used FOSS is developed by only a handful of contributors, 4) the increasing importance of individual developer account security, and 5) the persistence of legacy software in the open source space.

2021
Andrea Blasco, Ted Natoli, Michael G. Endres, Rinat A. Sergeev, Steven Randazzo, Jin Paik, Max Macaluso, Rajiv Narayan, Karim R. Lakhani, and Aravind Subramanian. 4/6/2021. “Improving Deconvolution Methods in Biology through Open Innovation Competitions: An Application to the Connectivity Map.” Bioinformatics. Publisher's Version.
Do machine learning methods improve standard deconvolution techniques for gene expression data? This article uses a unique new dataset combined with an open innovation competition to evaluate a wide range of approaches developed by 294 competitors from 20 countries. The competition’s objective was to address a deconvolution problem critical to analyzing genetic perturbations from the Connectivity Map. The issue consists of separating gene expression of individual genes from raw measurements obtained from gene pairs. We evaluated the outcomes using ground-truth data (direct measurements for single genes) obtained from the same samples.
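To make the task concrete, the toy sketch below simulates the kind of deconvolution problem described above: each gene pair is measured as a mixture of bead-level values with the two genes present in unequal proportions, a simple two-component Gaussian mixture recovers the two expression levels, and agreement with ground truth is scored with a rank correlation. The simulation parameters, the mixture-model baseline, and the correlation metric are illustrative assumptions, not the competition's benchmark or any winning method.

import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import spearmanr

# Toy deconvolution sketch (illustrative assumptions throughout).
rng = np.random.default_rng(0)
truth, estimates = [], []
for _ in range(50):                                  # 50 simulated gene pairs
    a, b = rng.uniform(4, 12, size=2)                # true expression of the two genes
    raw = np.concatenate([rng.normal(a, 0.3, 60),    # beads carrying gene A (larger share)
                          rng.normal(b, 0.3, 30)])   # beads carrying gene B
    gm = GaussianMixture(n_components=2, random_state=0).fit(raw.reshape(-1, 1))
    hi, lo = sorted(gm.means_.ravel(), reverse=True)
    truth += [max(a, b), min(a, b)]                  # align truth and estimates by magnitude
    estimates += [hi, lo]

rho, _ = spearmanr(truth, estimates)                 # agreement with direct single-gene measurements
print(f"rank correlation with ground truth: {rho:.3f}")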
Philip Brookins, Dmitry Ryvkin, and Andrew Smyth. 3/8/2021. “Indefinitely repeated contests: An experimental study.” Experimental Economics. Publisher's Version.
We experimentally explore indefinitely repeated contests. Theory predicts more cooperation, in the form of lower expenditures, in indefinitely repeated contests with a longer expected time horizon. Our data support this prediction, although this result attenuates with contest experience. Theory also predicts more cooperation in indefinitely repeated contests compared to finitely repeated contests of the same expected length, and we find empirical support for this. Finally, theory predicts no difference in cooperation across indefinitely repeated winner-take-all and proportional-prize contests, yet we find evidence of less cooperation in the latter, though only in longer treatments with more contests played. Our paper extends the experimental literature on indefinitely repeated games to contests and, more generally, contributes to an infant empirical literature on behavior in indefinitely repeated games with “large” strategy spaces.
Philip Brookins and Paan Jindapon. 2/20/2021. “Risk preference heterogeneity in group contests.” Journal of Mathematical Economics. Publisher's Version.
We analyze the first model of a group contest with players that are heterogeneous in their risk preferences. In our model, individuals’ preferences are represented by a utility function exhibiting a generalized form of constant absolute risk aversion, allowing us to consider any combination of risk-averse, risk-neutral, and risk-loving players. We begin by proving equilibrium existence and uniqueness under both linear and convex investment costs. Then, we explore how the sorting of a compatible set of players by their risk attitudes into competing groups affects aggregate investment. With linear costs, a balanced sorting (i.e., minimizing the variance in risk attitudes across groups) always produces an aggregate investment level that is at least as high as an unbalanced sorting (i.e., maximizing the variance in risk attitudes across groups). Under convex costs, however, identifying which sorting is optimal is more nuanced and depends on preference and cost parameters.
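For readers unfamiliar with the terminology, one standard parameterization of constant absolute risk aversion nests all three risk attitudes in a single parameter; the form below is a generic illustration, and the paper's generalized specification may differ.

\[
u(x) =
\begin{cases}
\dfrac{1 - e^{-r x}}{r}, & r \neq 0,\\
x, & r = 0,
\end{cases}
\qquad
-\frac{u''(x)}{u'(x)} = r,
\]

where r > 0 corresponds to risk aversion, r = 0 to risk neutrality, and r < 0 to risk loving, with the coefficient of absolute risk aversion constant in x.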
2020
Karim R. Lakhani, Anne-Laure Fayard, Manos Gkeredakis, and Jin Hyun Paik. 10/5/2020. “OpenIDEO (B).” Publisher's Version.
In mid-2020, as the coronavirus pandemic was unfolding, OpenIDEO, an online open innovation platform focused on design-driven solutions to social issues, rapidly launched a new challenge to improve access to health information, empower communities to stay safe during the COVID-19 crisis, and inspire global leaders to communicate effectively. OpenIDEO was particularly suited to challenges that required cross-system or sector-wide collaboration because of its focus on social impact and ecosystem design, but its leadership pondered how it could continue to improve virtual collaboration and share insights from nearly a decade of running online challenges. Conceived as an exercise in disruptive digital innovation, OpenIDEO had successfully created a strong open innovation community, but how could it sustain, or even improve, its support to community members and increase the social impact of its online challenges in the coming years?
Jin Paik, Martin Schöll, Rinat Sergeev, Steven Randazzo, and Karim R. Lakhani. 2/26/2020. “Innovation Contests for High-Tech Procurement.” Research-Technology Management, 63, 2, Pp. 36-45. Publisher's Version.
Innovation managers rarely use crowdsourcing as an innovation instrument despite extensive academic and theoretical research. The lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services is one barrier to adoption. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case study examples with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy demonstrate potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests developed from a high-tech industry perspective, which is applicable to other industries.
2019
Andrea Blasco, Michael G. Endres, Rinat A. Sergeev, Anup Jonchhe, Max Macaluso, Rajiv Narayan, Ted Natoli, Jin H. Paik, Bryan Briney, Chunlei Wu, Andrew I. Su, Aravind Subramanian, and Karim R. Lakhani. 9/2019. “Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions.” PLOS ONE, 14, 9. Publisher's Version.
Open data science and algorithm development competitions offer a unique avenue for the rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
Elizabeth E. Richard, Jeffrey R. Davis, Jin H. Paik, and Karim R. Lakhani. 4/25/2019. “Sustaining open innovation through a ‘Center of Excellence’.” Strategy & Leadership. Publisher's Version.

This paper presents NASA’s experience using a Center of Excellence (CoE) to scale and sustain an open innovation program as an effective problem-solving tool and includes strategic management recommendations for other organizations based on lessons learned.

This paper defines four phases of implementing an open innovation program: Learn, Pilot, Scale and Sustain. It provides guidance on the time required for each phase and recommendations for how to utilize a CoE to succeed. Recommendations are based upon the experience of NASA’s Human Health and Performance Directorate, and experience at the Laboratory for Innovation Science at Harvard running hundreds of challenges with research and development organizations.

Lessons learned include the importance of grounding innovation initiatives in the business strategy, assessing the portfolio of work to select problems most amenable to solving via crowdsourcing methodology, framing problems that external parties can solve, thinking strategically about early wins, selecting the right platforms, developing criteria for evaluation, and advancing a culture of innovation. Establishing a CoE provides an effective infrastructure to address both technical and cultural issues.

The NASA experience spanned more than seven years from initial learnings about open innovation concepts to the successful scaling and sustaining of an open innovation program; this paper provides recommendations on how to decrease this timeline to three years.

Raymond H. Mak, Michael G. Endres, Jin H. Paik, Rinat A. Sergeev, Hugo Aerts, Christopher L. Williams, Karim R. Lakhani, and Eva C. Guinan. 4/18/2019. “Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting.” JAMA Oncology, 5, 5, Pp. 654-661. Publisher's Version.

Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55,000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median of 157 images per scan; 77,942 images in total; 8,144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and given feedback on their performance throughout the contest, including from the expert clinician.

The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.

A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, the performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phase 2 and phase 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.

A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
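The Dice coefficient reported above is a standard overlap measure between two segmentations. The short sketch below shows one way such an overlap score can be computed for binary tumor masks; it is a generic NumPy illustration, and the challenge's custom S score is not reproduced here.

import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    # Dice = 2 * |A ∩ B| / (|A| + |B|): 1.0 is perfect agreement, 0.0 is no overlap.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:                            # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy usage: compare an algorithm's mask against an expert's mask on one CT slice.
expert = np.zeros((512, 512), dtype=bool)
expert[200:260, 180:240] = True               # expert-contoured tumor region (toy values)
algo = np.zeros_like(expert)
algo[205:265, 185:245] = True                 # automated segmentation (toy values)
print(round(dice_coefficient(expert, algo), 3))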

Andrea Blasco, Olivia S. Jung, Karim R. Lakhani, and Michael E. Menietti. 4/2019. “Incentives for Public Goods Inside Organizations: Field Experimental Evidence.” Journal of Economic Behavior & Organization, 160, Pp. 214-229. Publisher's Version.

We report results of a natural field experiment conducted at a medical organization that sought contributions of public goods (i.e., projects for organizational improvement) from its 1,200 employees. Offering a prize for winning submissions boosted participation by 85 percent without affecting the quality of the submissions. The effect was consistent across gender and job type. We posit that the allure of a prize, in combination with mission-oriented preferences, drove participation. Using a simple model, we estimate that these preferences explain about a third of the magnitude of the effect. We also find that these results were sensitive to the solicited person’s gender.

2018
Michael Menietti, M.P. Recalde, and L. Vesterlund. 2018. “Charitable Giving in the Laboratory: Advantages of the Piecewise Linear Public Good Game.” In The Economics of Philanthropy: Donations and Fundraising, edited by Mirco Tonin and Kimberley Scharf. MIT Press. Publisher's Version
Luke Boosey, Philip Brookins, and Dmitry Ryvkin. 2018. “Contests between groups of unknown size.” Games and Economic Behavior. Publisher's Version.
We study group contests where group sizes are stochastic and unobservable to participants at the time of investment. When the joint distribution of group sizes is symmetric, the symmetric equilibrium aggregate investment is lower than in a symmetric group contest in which the group size is fixed at its expected value and commonly known. A similar result holds for two groups with asymmetric distributions of sizes. For the symmetric case, the reduction in individual and aggregate investment due to group size uncertainty increases with the variance in relative group impacts. When group sizes are independent conditional on a common shock, a stochastic increase in the common shock mitigates the effect of group size uncertainty unless the common and idiosyncratic components of group size are strong complements. Finally, group size uncertainty undermines the robustness of the group size paradox otherwise present in the model.
Herman B. Leonard, Mitchell B. Weiss, Jin H. Paik, and Kerry Herman. 2018. SOFWERX: Innovation at U.S. Special Operations Command. Harvard Business School Case. Harvard Business School. Publisher's Version.
James “Hondo” Geurts, the Acquisition Executive for U.S. Special Operations Command, was in the middle of his Senate confirmation hearing in 2017 to become Assistant Secretary of the Navy for Research, Development and Acquisition. The questions had a common theme: how would Geurts’s experience running an innovative procurement effort for U.S. Special Forces units enable him to change a much larger—and much more rigid—organization like the U.S. Navy? In one of the most secretive parts of the U.S. military, Geurts founded an open platform called SOFWERX to speed the flow of ideas to Navy SEALs, Army Special Forces, and the like. His team even sourced the idea for a hoverboard from a YouTube video. But how should things like SOFWERX and prototypes like the EZ-Fly find a place within the Navy writ large?
Philip Brookins, John P. Lightle, and Dmitry Ryvkin. 2018. “Sorting and communication in weak-link group contests.” Journal of Economic Behavior & Organization, 152, Pp. 64-80. Publisher's Version.
We experimentally study the effects of sorting and communication in contests between groups of heterogeneous players whose within-group efforts are perfect complements. Contrary to the common wisdom that competitive balance bolsters performance in contests, in this setting theory predicts that aggregate output increases in the variation in abilities between groups, i.e., it is maximized by the most unbalanced sorting of players. However, the data does not support this prediction. In the absence of communication, we find no effect of sorting on aggregate output, while in the presence of within-group communication aggregate output is 33% higher under the balanced sorting as compared to the unbalanced sorting. This reversal of the prediction is in line with the competitive balance heuristic. The results have implications for the design of optimal groups in organizations using relative performance pay.
2017
Teppo Felin, Karim R. Lakhani, and Michael L. Tushman. 2017. “Firms, Crowds, and Innovation.” Strategic Organization, 15, 2, Special Issue on Organizing Crowds and Innovation, Pp. 119-140. Publisher's Version.

The purpose of this article is to suggest a (preliminary) taxonomy and research agenda for the topic of “firms, crowds, and innovation” and to provide an introduction to the associated special issue. We specifically discuss how various crowd-related phenomena and practices—for example, crowdsourcing, crowdfunding, user innovation, and peer production—relate to theories of the firm, with particular attention on “sociality” in firms and markets. We first briefly review extant theories of the firm and then discuss three theoretical aspects of sociality related to crowds in the context of strategy, organizations, and innovation: (1) the functions of sociality (sociality as extension of rationality, sociality as sensing and signaling, sociality as matching and identity); (2) the forms of sociality (independent/aggregate and interacting/emergent forms of sociality); and (3) the failures of sociality (misattribution and misapplication). We conclude with an outline of future research directions and introduce the special issue papers and essays.

Olivia Jung, Andrea Blasco, and Karim R. Lakhani. 2017. “Perceived Organizational Support For Learning and Contribution to Improvement by Frontline Staff.” Academy of Management Proceedings, 2017, 1. Publisher's Version.

Utilizing suggestions from clinicians and administrative staff is associated with process and quality improvement, an organizational climate that promotes patient safety, and added capacity for learning. However, realizing improvement through innovative ideas from staff depends on their ability and decision to contribute. We hypothesized that staff perception of whether the organization promotes learning is positively associated with their likelihood of engaging in problem solving and speaking up. We conducted our study in a cardiology unit in an academic hospital that hosted an ideation contest soliciting frontline staff for ideas to resolve issues encountered at work. Our primary dependent variable was staff participation in ideation. The independent variables measuring perception of support for learning were collected using the validated 27-item Learning Organization Survey (LOS). To examine the relationships between these variables, we used analysis of variance, logistic regression, and predicted probabilities. We also interviewed 16 contest participants to explain our quantitative results. The study sample consisted of the 30% of cardiology unit staff (n=354) who completed the LOS. In total, 72 staff submitted 138 ideas, addressing a range of issues including patient experience, cost of care, workflow, utilization, and access. Figuring out the cost of procedures in the catheterization laboratory and creating a smartphone application that helps patients navigate appointments and connect with providers were two of the ideas that won the most votes and received funding for implementation in the following year. Participation in ideation was positively associated with staff perception of a supportive learning environment. For example, a one-standard-deviation increase in perceived welcome for differences in opinions was associated with a 43% increase in the odds of participating in ideation (OR=1.43, p=0.04) and a 55% increase in the odds of suggesting more than one idea (OR=1.55, p=0.09). Experimentation, a practice that supports learning, was negatively associated with ideation (OR=0.36, p=0.02), and leadership that reinforces learning was not associated with ideation. The perception that new ideas are not sufficiently considered or experimented with could have motivated staff to participate, as the ideation contest enables experimentation and learning. Interviews with ideation participants revealed that the contest enabled systematic bottom-up contribution to quality improvement, promoted a sense of community, facilitated organizational exchange of ideas, and spread a problem-solving-oriented mindset. Enabling frontline staff to feel that their ideas are welcome and that making mistakes is permissible may increase their likelihood of engaging in problem solving and speaking up, contributing to organizational improvement.
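As a rough illustration of how the reported odds ratios are typically obtained (synthetic data and assumed variable names, not the authors' analysis), the sketch below standardizes a survey score, fits a logistic regression for participation, and exponentiates the coefficient so that it reads as the change in odds per one-standard-deviation increase.

import numpy as np
import statsmodels.api as sm

# Synthetic example (illustrative assumptions throughout).
rng = np.random.default_rng(1)
n = 354
raw_score = rng.normal(5, 1.2, size=n)                      # e.g., perceived welcome for differing opinions
score = (raw_score - raw_score.mean()) / raw_score.std()    # standardize to SD units
logit_p = -1.2 + 0.36 * score                               # assumed true relationship
participated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = submitted an idea

fit = sm.Logit(participated, sm.add_constant(score)).fit(disp=False)
odds_ratio = np.exp(fit.params[1])                          # e.g., 1.43 would mean 43% higher odds per SD
print(f"OR per 1 SD increase: {odds_ratio:.2f} (p = {fit.pvalues[1]:.3f})")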
