Working Paper
Misha Teplitskiy, Hardeep Ranu, Gary Gray, Michael Menietti, Eva Guinan, and Karim Lakhani. Working Paper. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.” HBS Working Paper Series. Publisher's Version
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early-stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
Jacqueline N. Lane, Eva C. Guinan, Ina Ganguli, Karim R. Lakhani, and Patrick Gaule. Working Paper. “Engineering Serendipity: The Role of Cognitive Similarity in Knowledge Sharing and Knowledge Production.” HBS Working Paper Series. Publisher's Version
Jana Gallus, Olivia S. Jung, and Karim R. Lakhani. Working Paper. “Managerial Recognition as an Incentive for Innovation Platform Engagement: A Field Experiment and Interview Study at NASA.” HBS Working Paper Series. Publisher's Version
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter E. Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. Working Paper. “Quantifying the Immediate Effects of the COVID-19 Pandemic on Scientists”. Publisher's Version
The COVID-19 pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. Here we report the results of a survey of approximately 4,500 Principal Investigators (PIs) at U.S.- and Europe-based research institutions. Distributed in mid-April 2020, the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. Scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity with a significant share reporting no change or even increases. Some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. However, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare. Reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. Amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond.
Michael G. Endres, Florian Hillen, Marios Salloumis, Ahmad R. Sedaghat, Stefan M., Olivia Quatela, Henning Hanken, Ralf Smeets, Benedicta Beck-Broichsitter, Carsten Rendenback, Karim R. Lakhani, Max Heiland, and Robert Gaudin. 6/24/2020. “Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs.” Diagnostics, 10, 6, Pp. 430. Publisher's Version
Periapical radiolucencies, which can be detected on panoramic radiographs, are one of the most common radiographic findings in dentistry and have a differential diagnosis including infections, granulomas, cysts and tumors. In this study, we investigate how accurately 24 oral and maxillofacial (OMF) surgeons assess the presence of periapical lucencies on panoramic radiographs, and we compare these findings to the performance of a predictive deep learning algorithm that we have developed using a curated data set of 2902 de-identified panoramic radiographs. The mean diagnostic positive predictive value (PPV) of OMF surgeons based on their assessment of panoramic radiographic images was 0.69 (±0.13), indicating that dentists on average falsely diagnose 31% of cases as radiolucencies. However, the mean diagnostic true positive rate (TPR) was 0.51 (±0.14), indicating that on average 49% of all radiolucencies were missed. We demonstrate that the deep learning algorithm achieves a better performance than 14 of 24 OMF surgeons within the cohort, exhibiting an average precision of 0.60 (±0.04) and an F1 score of 0.58 (±0.04), corresponding to a PPV of 0.67 (±0.05) and TPR of 0.51 (±0.05). The algorithm, trained on limited data and evaluated on clinically validated ground truth, has potential to assist OMF surgeons in detecting periapical lucencies on panoramic radiographs.
Timothy DeStefano, Richard Kneller, and Jonathan Timmis. 5/6/2020. “Cloud computing and firm growth.” VOX. Publisher's Version
The last decade has seen a fundamental shift in the way firms access technology, from physical hardware towards cloud computing. This shift not only significantly reduces the cost of such technologies but also allows for the possibility of remote and simultaneous access. This column presents evidence on the impact of cloud adoption using firm-level data from the UK. The effects differ markedly between young and incumbent firms: cloud adoption largely affects the growth of young firms, while for incumbent firms it affects their geography.
Roberto Verganti, Luca Vendraminelli, and Marco Iansiti. 3/19/2020. “Innovation and Design in the Age of Artificial Intelligence”. Publisher's Version

At the heart of any innovation process lies a fundamental practice: the way people create ideas and solve problems. This “decision making” side of innovation is what scholars and practitioners refer to as “design”. Decisions in innovation processes have so far been taken by humans. What happens when they can be substituted by machines? Artificial Intelligence (AI) brings data and algorithms to the core of innovation processes. What are the implications of this diffusion of AI for our understanding of design and innovation? Is AI just another digital technology that, akin to many others, will not significantly question what we know about design? Or will it create transformations in design that current theoretical frameworks cannot capture?

This article proposes a framework for understanding design and innovation in the age of AI. We discuss the implications for design and innovation theory. Specifically, we observe that, as creative problem solving is significantly conducted by algorithms, human design increasingly becomes an activity of sense making, i.e. understanding which problems should or could be addressed. This shift in focus calls for new theories and brings design closer to leadership, which is, inherently, an activity of sense making.

Our insights are derived from and illustrated with two cases at the frontier of AI, Netflix and Airbnb (complemented with analyses of Microsoft and Tesla), which point to two directions for the evolution of design and innovation in firms. First, AI enables an organization to overcome many past limitations of human-intensive design processes, by improving the scalability of the process, broadening its scope across traditional boundaries, and enhancing its ability to learn and adapt on the fly. Second, and maybe more surprising, while removing these limitations, AI also appears to deeply enact several popular design principles. AI thus reinforces the principles of Design Thinking, namely: being people-centered, abductive, and iterative. In fact, AI enables the creation of solutions that are more highly user-centered than human-based approaches (i.e., to an extreme level of granularity, designed for every single person); that are potentially more creative; and that are continuously updated through learning iterations across the entire product life cycle.

In sum, while AI does not undermine the basic principles of design, it profoundly changes the practice of design. Problem solving tasks, traditionally carried out by designers, are now automated into learning loops that operate without limitations of volume and speed. The algorithms embedded in these loops think in a radically different way than a designer who handles complex problems holistically with a systemic perspective. Algorithms instead handle complexity through very simple tasks, which are iterated continuously. This article discusses the implications of these insights for design and innovation management scholars and practitioners.

Marco Iansiti and Karim R. Lakhani. 3/3/2020. “From Disruption to Collision: The New Competitive Dynamics.” MIT Sloan Management Review.
In the age of AI, traditional businesses across the economy are being attacked by highly scalable data-driven companies whose operating models leverage network effects to deliver value.
Jin Paik, Martin Schöll, Rinat Sergeev, Steven Randazzo, and Karim R. Lakhani. 2/26/2020. “Innovation Contests for High-Tech Procurement.” Research-Technology Management, 63, 2, Pp. 36-45. Publisher's Version
Innovation managers rarely use crowdsourcing as an innovative instrument despite extensive academic and theoretical research. The lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services is one barrier to adoption. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case study examples with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy demonstrate a potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests developed from a high-tech industry perspective, which is applicable to other industries.
Andrea Blasco, Ted Natoli, Michael G. Endres, Rinat A. Sergeev, Steven Randazzo, Jin Paik, Max Macaluso, Rajiv Narayan, Karim R. Lakhani, and Aravind Subramanian. 1/2020. “Improving Deconvolution Methods in Biology through Open Innovation Competitions: An Application to the Connectivity Map.” bioRxiv. Publisher's Version
A recurring problem in biomedical research is how to isolate signals of distinct populations (cell types, tissues, and genes) from composite measures obtained by a single analyte or sensor. Existing computational deconvolution approaches work well in many specific settings, but they might be suboptimal in more general applications. Here, we describe new methods that were obtained via an open innovation competition. The goal of the competition was to characterize the expression of 1,000 genes from 500 composite measurements, which constitutes the approach of a new assay, called L1000, used to scale up the Connectivity Map (CMap) — a catalog of millions of perturbational gene expression profiles. The competition used a novel dataset of 2,200 profiles and attracted 294 competitors from 20 countries. The nine top-performing methods ranged from machine learning approaches (Convolutional Neural Networks and Random Forests) to more traditional ones (Gaussian Mixtures and k-means). These solutions were faster and more accurate than the benchmark and likely have applications beyond gene expression.
Tarun Khanna, Karim Lakhani, Shubhangi Bhadada, Nabil Khan, Saba Dave, Rasim Alam, and Meena Hewett. 10/2019. “Crowdsourcing Memories: Mixed Methods Research by Cultural Insiders-Epistemological Outsiders.” Academy of Management Perspectives. Publisher's Version
This paper examines the role that the two lead authors’ personal connections played in the research methodology and data collection for the Partition Stories Project, a mixed-methods approach to revisiting the much-studied historical trauma of the Partition of British India in 1947. The Project collected survivors’ oral histories, a data type that is a mainstay of qualitative research, and subjected their narrative data to statistical analysis to detect aggregated trends. In this paper, the authors discuss the process of straddling the dichotomies of insider/outsider and qualitative/quantitative, address the “myth of informed objectivity,” and argue for hybrid research structures with the intent to innovate in humanities projects such as this. In presenting key learnings from the project, this paper highlights the tensions that the authors faced between positivist and interpretivist methods of inquiry, between “insider” and “outsider” categories of positionality, and in the quantification of qualitative oral history data. The paper concludes with an illustrative example from one of the lead authors’ past research experiences to suggest that the tensions of this project are general in occurrence and global in applicability, beyond the specifics of the Partition case study explored here.
John Winsor, Jin Paik, Michael Tushman, and Karim Lakhani. 10/2019. “Overcoming cultural resistance to open source innovation.” Strategy & Leadership, 47, 6, Pp. 28-33. Publisher's Version

Purpose: This article offers insight on how to effectively help incumbent organizations prepare for global business shifts to open source and digital business models.

Design/methodology/approach: Discussion related to observation, experience and case studies related to incumbent organizations and their efforts to adopt open source models and business tools.

Findings: Companies that let their old culture reject the new risk becoming obsolete if doing so inhibits their rethinking of their future using powerful tools like crowdsourcing, blockchain, customer experience-based connections, integrating workflows with artificial intelligence (AI), automated technologies and digital business platforms. These new ways of working affect how and where work is done, access to information, an organization’s capacity for work and its efficiency. As important as technological proficiency is, managing the cultural shift required to embrace transformative industry architecture – the key to innovating new business models – may be the bigger challenge.

Research limitations/implications: Findings are based on original research and case studies. Insights are theoretically grounded in additional study, interviews, and research, but need to be tested through additional case studies.

Practical implications: The goal is to make the transition more productive and less traumatic for incumbent firms by providing a language and tested methods to help senior leaders use innovative technologies to build on their core even as they explore new business models.

Social implications: This article provides insights that will lead to more effective ideas for helping organizations adapt.

Originality/value: This article is based on original research and case experience. That research and experience have then been analyzed and viewed through the lens of models that have been known to work. The result is original insights and findings that can be applied in new ways to further adoption within incumbent organizations.

Andrea Blasco, Michael G. Endres, Rinat A. Sergeev, Anup Jonchhe, Max Macaluso, Rajiv Narayan, Ted Natoli, Jin H. Paik, Bryan Briney, Chunlei Wu, Andrew I. Su, Aravind Subramanian, and Karim R. Lakhani. 9/2019. “Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions.” PLOS One, 14, 9. Publisher's Version
Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
Elizabeth E. Richard, Jeffrey R. Davis, Jin H. Paik, and Karim R. Lakhani. 4/25/2019. “Sustaining open innovation through a ‘Center of Excellence’.” Strategy & Leadership. Publisher's Version

This paper presents NASA’s experience using a Center of Excellence (CoE) to scale and sustain an open innovation program as an effective problem-solving tool and includes strategic management recommendations for other organizations based on lessons learned.

This paper defines four phases of implementing an open innovation program: Learn, Pilot, Scale and Sustain. It provides guidance on the time required for each phase and recommendations for how to utilize a CoE to succeed. Recommendations are based upon the experience of NASA’s Human Health and Performance Directorate, and experience at the Laboratory for Innovation Science at Harvard running hundreds of challenges with research and development organizations.

Lessons learned include the importance of grounding innovation initiatives in the business strategy, assessing the portfolio of work to select problems most amenable to solving via crowdsourcing methodology, framing problems that external parties can solve, thinking strategically about early wins, selecting the right platforms, developing criteria for evaluation, and advancing a culture of innovation. Establishing a CoE provides an effective infrastructure to address both technical and cultural issues.

The NASA experience spanned more than seven years from initial learnings about open innovation concepts to the successful scaling and sustaining of an open innovation program; this paper provides recommendations on how to decrease this timeline to three years.

Raymond H. Mak, Michael G. Endres, Jin H. Paik, Rinat A. Sergeev, Hugo Aerts, Christopher L. Williams, Karim R. Lakhani, and Eva C. Guinan. 4/18/2019. “Use of Crowd Innovation to Develop an Artificial Intelligence–Based Solution for Radiation Therapy Targeting.” JAMA Oncology, 5, 5, Pp. 654-661. Publisher's Version

Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist work force does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55 000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median 157 images per scan; 77 942 images in total; 8144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and given feedback on their performance throughout the contest, including from the expert clinician.

The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated overlap of each algorithm’s automated segmentations with the expert’s segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.

A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phase 2 and phase 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.

A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
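For readers unfamiliar with the Dice coefficient used to score segmentation overlap in the study above, a minimal sketch in Python (the set-based masks and example values here are illustrative, not the contest's actual scoring code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary masks, each given as a set of pixel indices.

    Defined as 2|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap) to 1 (identical).
    """
    if not mask_a and not mask_b:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Hypothetical example: an expert contour and an algorithm's contour
expert = {1, 2, 3, 4}
algorithm = {2, 3, 4, 5}
score = dice_coefficient(expert, algorithm)  # 2*3 / (4+4) = 0.75
```

On real CT data the masks would be boolean image arrays rather than sets, but the overlap arithmetic is the same.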

Andrea Blasco, Olivia S. Jung, Karim R. Lakhani, and Michael E. Menietti. 4/2019. “Incentives for Public Goods Inside Organizations: Field Experimental Evidence.” Journal of Economic Behavior & Organization, 160, Pp. 214-229. Publisher's Version

We report results of a natural field experiment conducted at a medical organization that sought contribution of public goods (i.e., projects for organizational improvement) from its 1200 employees. Offering a prize for winning submissions boosted participation by 85 percent without affecting the quality of the submissions. The effect was consistent across gender and job type. We posit that the allure of a prize, in combination with mission-oriented preferences, drove participation. Using a simple model, we estimate that these preferences explain about a third of the magnitude of the effect. We also find that these results were sensitive to the solicited person’s gender.

Karim R. Lakhani, Patrick Ferguson, Sarah Fleischer, Jin H. Paik, and Steven Randazzo. 2019. KangaTech. Harvard Business School Case. Harvard Business School. Publisher's Version
On a warm January afternoon in 2019, Steve Saunders, Dave Scerri, Carl Dilena, and Nick Haslam (see Exhibit 1 for biographies), co-founders of KangaTech, wrapped up the latest round of discussions about the future direction of their sports-technology start-up. Focused on injury prediction and prevention in elite sport, the Melbourne, Australia-based KangaTech prepared to launch a new model of their core product, an integrated exercise frame and software system that used strength exercises to identify and mitigate the risk of soft-tissue and ligament injuries (see Exhibit 2 for overview of product). The team was excited about the new product and was confident that it improved upon many of the features of the previous model. However, Saunders and his co-founders couldn’t help but think about the long-term strategy of the company.
Spun off in 2015 out of an internal R&D initiative at the North Melbourne Football Club, KangaTech spent the past four years squarely focused on product development and gaining early traction in the elite sports markets in the U.S., the U.K., and Australia (see Exhibit 3 for company timeline). As of 2019, KangaTech had users across 15 different sites, including professional teams in the National Basketball Association, the English Premier League, and the Australian Football League. The company had also recently completed a successful round of financing, the proceeds of which were used to fund the new version of the KangaTech product.
Off the back of this recent success, the co-founders were focused on how they might be able to navigate the future ahead of them. Dilena explained, “We are going through a pretty robust strategy discussion at the moment. It is one of those decision points for us as to how we best proceed.” Dilena continued, “We’ve been largely product-based and product-development-based until now. How do we scale up? How do we take that next quantum leap as an organization? So part of that has been looking at where do we see the market opportunities?” Specifically, KangaTech weighed up three options for unlocking the full commercial value of the company’s technology: 1) Going deeper into the sports market; 2) Expanding into the allied health market; or, 3) Pursuing contracts in the defense industry. Evaluating the merits of each of these options was not straightforward. Which market had the greatest upside? Which market would expose the firm to the greatest risk? Which of these opportunities held the most promise for KangaTech?
Karim R. Lakhani, Andrea Blasco, and Olivia S. Jung. 2018. “Innovation Contest: Effect of Perceived Support for Learning on Participation.” Health Care Management Review.
Frontline staff are well positioned to conceive improvement opportunities based on first-hand knowledge of what works and does not work. The innovation contest may be a relevant and useful vehicle to elicit staff ideas. However, the success of the contest likely depends on perceived organizational support for learning; when staff believe that support for learning-oriented culture, practices, and leadership is low, they may be less willing or able to share ideas.

Purpose: We examined how staff perception of organizational support for learning affected contest participation, which comprised ideation and evaluation of submitted ideas.

Methodology/Approach: The contest held in a hospital cardiac center invited all clinicians and support staff (n = 1,400) to participate. We used the 27-item Learning Organization Survey to measure staff perception of learning-oriented environment, practices and processes, and leadership.

Results: Seventy-two frontline staff submitted 138 ideas addressing wide-ranging issues including patient experience, cost of care, workflow, utilization, and access. Two hundred forty-five participated in evaluation. Supportive learning environment predicted participation in ideation and idea evaluation. Perceptions of insufficient experimentation with new ways of working also predicted participation.

Conclusion: The contest enabled frontline staff to share input and assess input shared by other staff. Our findings indicate that the contest may serve as a fruitful outlet through which frontline staff can share and learn new ideas, especially for those who feel safe to speak up and believe that new ideas are not tested frequently enough.

Practice Implications: The contest’s potential to decentralize innovation may be greater under stronger learning orientation. A highly visible intervention, like the innovation contest, has both benefits and risks. Our findings suggest benefits such as increased engagement with work and community as well as risks such as discontent that could arise if staff suggestions are not acted upon or if there is no desired change after the contest.
Hila Lifshitz-Assaf, Michael Tushman, and Karim R. Lakhani. 2018. “A Study of NASA Scientists Shows How to Overcome Barriers to Open Innovation.” Harvard Business Review. Publisher's Version
Open innovation processes promise to enhance creative output, yet we have heard little about successful launches of new technologies, products, or services arising from these approaches. Certainly, crowdsourcing platforms (among other open innovation methods) have yielded striking solutions to hard scientific and technological problems—prominent examples being the Netflix predictive recommendation algorithm and the approach to reducing the weight of GE jet engine brackets. But most R&D organizations are still struggling to reap the very real rewards of open innovation. We believe we’ve hit on an important hidden factor for this failure and that it holds the key to a successful integration and execution of open innovation methods.
Michael Menietti, M.P. Recalde, and L. Vesterlund. 2018. “Charitable Giving in the Laboratory: Advantages of the Piecewise Linear Public Good Game.” In The Economics of Philanthropy: Donations and Fundraising, edited by Mirco Tonin and Kimberley Scharf. MIT Press. Publisher's Version