Publications

Working Paper
Hannah Mayer. Working Paper. “AI in Enterprise: AI Product Management.” Edited by Jin Paik, Jenny Hoffman, and Steven Randazzo.

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by Karim R. Lakhani, HBS professor and co-author of Competing in the Age of AI, the July virtual session featured Peter Skomoroch of DataWrangling, formerly of LinkedIn. Together, they discussed what differentiates AI product management from managing other tech products and how to adapt to the uncertainty of the AI product lifecycle.

Hannah Mayer. Working Paper. “AI in Enterprise: In Tech We Trust... Maybe Too Much?” Edited by Jin Hyun Paik and Jenny Hoffman.

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

In the September session of the AI in Enterprise series, Karim R. Lakhani, HBS professor and co-author of Competing in the Age of AI, spoke with Latanya Sweeney about algorithmic bias, data privacy, and the way forward for enterprises adopting AI. They explored how AI and ML can impact society in unexpected ways and what senior enterprise leaders can do to avoid negative externalities. Sweeney, Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences, director and founder of the Data Privacy Lab, and former Chief Technology Officer at the U.S. Federal Trade Commission, pioneered the field known as data privacy and launched the emerging area known as algorithmic fairness.

Jin Paik, Steven Randazzo, and Jenny Hoffman. Working Paper. “AI in the Enterprise: How Do I Get Started?”

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by Karim R. Lakhani, HBS professor and co-author of Competing in the Age of AI, the most recent virtual session drew over 240 attendees and featured Rob May, General Partner at PJC, an early-stage venture capital firm, and founder of Inside AI, a premier source for information on AI, robotics, and neurotechnology. Together, they discussed why interest in AI has risen, what managers should consider when wading into the AI waters, and what steps they can take when it is time to do so.

Misha Teplitskiy, Hardeep Ranu, Gary Gray, Michael Menietti, Eva Guinan, and Karim Lakhani. Working Paper. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.” HBS Working Paper Series. Publisher's Version.
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early-stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
Jacqueline N. Lane, Eva C. Guinan, Ina Ganguli, Karim R. Lakhani, and Patrick Gaule. Working Paper. “Engineering Serendipity: The Role of Cognitive Similarity in Knowledge Sharing and Knowledge Production.” HBS Working Paper Series. Publisher's Version.
Hannah Mayer, Jin Hyun Paik, Timothy DeStefano, and Jenny Hoffman. Working Paper. “From Craft to Commodity: The Evolution of AI in Pharma and Beyond.”

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by Karim R. Lakhani, HBS professor and co-author of Competing in the Age of AI, the August virtual session featured Reza Olfati-Saber, an experienced academic researcher who manages teams of data scientists and life scientists across the globe for Sanofi. Together, they discussed the evolution of AI in life science experimentation and how it may become the determining factor for R&D success in pharma and other industries.

Jana Gallus, Olivia S. Jung, and Karim R. Lakhani. Working Paper. “Managerial Recognition as an Incentive for Innovation Platform Engagement: A Field Experiment and Interview Study at NASA.” HBS Working Paper Series. Publisher's Version.
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter E. Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. Working Paper. “Quantifying the Immediate Effects of the COVID-19 Pandemic on Scientists.” Publisher's Version.
The COVID-19 pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. Here we report the results of a survey of approximately 4,500 Principal Investigators (PIs) at U.S.- and Europe-based research institutions. Distributed in mid-April 2020, the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. Scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity with a significant share reporting no change or even increases. Some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. However, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare. Reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. Amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond.
2020
Prithwiraj Choudhury, Ryan T. Allen, and Michael G. Endres. 8/9/2020. “Machine learning for pattern discovery in management research.” Strategic Management Journal. Publisher's Version.
Supervised machine learning (ML) methods are a powerful toolkit for discovering robust patterns in quantitative data. The patterns identified by ML could be used for exploratory inductive or abductive research, or for post hoc analysis of regression results to detect patterns that may have gone unnoticed. However, ML models should not be treated as the result of a deductive causal test. To demonstrate the application of ML for pattern discovery, we implement ML algorithms to study employee turnover at a large technology company. We interpret the relationships between variables using partial dependence plots, which uncover surprising nonlinear and interdependent patterns between variables that may have gone unnoticed using traditional methods. To guide readers evaluating ML for pattern discovery, we provide guidance for evaluating model performance, highlight human decisions in the process, and warn of common misinterpretation pitfalls. The Supporting Information section provides code and data to implement the algorithms demonstrated in this article.
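The approach this abstract describes can be illustrated with a short, self-contained sketch using scikit-learn's partial dependence tooling. This is not the authors' code from the article's Supporting Information; the synthetic turnover data, feature names, and the choice of a gradient-boosted classifier are hypothetical stand-ins.

```python
# Minimal sketch of ML pattern discovery with partial dependence plots.
# The features and data-generating process below are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({
    "tenure_years": rng.uniform(0, 15, n),
    "pay_percentile": rng.uniform(0, 100, n),
    "commute_minutes": rng.uniform(5, 90, n),
})
# Hypothetical nonlinear "ground truth": turnover risk falls with tenure
# and pay, and rises with commute time.
logit = (1.5 - 0.4 * X["tenure_years"]
         - 0.01 * X["pay_percentile"]
         + 0.02 * X["commute_minutes"])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Partial dependence plots trace the model's average prediction as one
# feature (or a pair of features) varies, surfacing nonlinear and
# interdependent patterns that a linear specification could miss.
PartialDependenceDisplay.from_estimator(
    model, X, features=["tenure_years", ("tenure_years", "pay_percentile")]
)
plt.show()
```

As the abstract cautions, the resulting curves are descriptive patterns in the fitted model, not causal estimates, so they are best read as prompts for further inductive or abductive investigation.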
Anita Eisenstadt, Meghan Ange-Stark, and Gail Cohen. 8/2020. “The Role of Inducement Prizes.” National Academies of Sciences, Engineering, and Medicine. Publisher's Version.
On May 29, 2019, the National Academies of Sciences, Engineering, and Medicine, in cooperation with the Laboratory for Innovation Science at Harvard (LISH), convened a workshop in Washington, D.C., on the role of inducement prizes in spurring American innovation. Unlike prizes that recognize past achievements, inducement prizes are designed to stimulate innovative activity, whether creating a desired technology, orienting research efforts toward products that can be used at scale by customers, or developing products with wide societal benefits. Workshop participants explored how prizes fit into federal and non-federal support for innovation, the benefits and disadvantages of prizes, and the differences between cash and non-cash prizes. Other discussion topics included the conditions under which prizes are most effective, how to measure the effectiveness of prizes, and the characteristics of prize winners. This publication summarizes the presentations and discussions from the workshop.
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. 7/15/2020. “Unequal effects of the COVID-19 pandemic on scientists.” Nature Human Behaviour. Publisher's Version.
Olivia S. Jung, Andrea Blasco, and Karim R. Lakhani. 7/9/2020. “Innovation contest: Effect of perceived support for learning on participation.” Health Care Management Review, 45, 3, Pp. 255-266. Publisher's Version.
Frontline staff are well positioned to conceive improvement opportunities based on first-hand knowledge of what works and does not work. The innovation contest may be a relevant and useful vehicle to elicit staff ideas. However, the success of the contest likely depends on perceived organizational support for learning; when staff believe that support for learning-oriented culture, practices, and leadership is low, they may be less willing or able to share ideas.
Michael G. Endres, Florian Hillen, Marios Salloumis, Ahmad R. Sedaghat, Stefan M., Olivia Quatela, Henning Hanken, Ralf Smeets, Benedicta Beck-Broichsitter, Carsten Rendenback, Karim R. Lakhani, Max Heiland, and Robert Gaudin. 6/24/2020. “Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs.” Diagnostics, 10, 6, Pp. 430. Publisher's Version.
Periapical radiolucencies, which can be detected on panoramic radiographs, are among the most common radiographic findings in dentistry and have a differential diagnosis including infections, granuloma, cysts, and tumors. In this study, we investigate how accurately 24 oral and maxillofacial (OMF) surgeons assess the presence of periapical lucencies on panoramic radiographs, and we compare these findings to the performance of a predictive deep learning algorithm that we developed using a curated data set of 2902 de-identified panoramic radiographs. The mean diagnostic positive predictive value (PPV) of OMF surgeons based on their assessment of panoramic radiographic images was 0.69 (±0.13), indicating that dentists on average falsely diagnose 31% of cases as radiolucencies. However, the mean diagnostic true positive rate (TPR) was 0.51 (±0.14), indicating that on average 49% of all radiolucencies were missed. We demonstrate that the deep learning algorithm outperforms 14 of the 24 OMF surgeons in the cohort, exhibiting an average precision of 0.60 (±0.04) and an F1 score of 0.58 (±0.04), corresponding to a PPV of 0.67 (±0.05) and a TPR of 0.51 (±0.05). The algorithm, trained on limited data and evaluated on clinically validated ground truth, has the potential to assist OMF surgeons in detecting periapical lucencies on panoramic radiographs.
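As a quick arithmetic check on how the reported metrics fit together, the snippet below (illustrative helper functions, not the study's code) confirms that a PPV of 0.67 and a TPR of 0.51 imply an F1 score of about 0.58, since F1 is the harmonic mean of the two.

```python
# Illustrative definitions of the metrics used in the abstract; the
# functions and the check below are a sketch, not the study's code.
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value (precision): TP / (TP + FP)."""
    return tp / (tp + fp)

def tpr(tp: int, fn: int) -> float:
    """True positive rate (sensitivity/recall): TP / (TP + FN)."""
    return tp / (tp + fn)

def f1(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The algorithm's reported PPV (0.67) and TPR (0.51) imply the
# reported F1 of roughly 0.58.
print(round(f1(0.67, 0.51), 2))  # -> 0.58
```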
Luke Boosey, Philip Brookins, and Dmitry Ryvkin. 5/13/2020. “Information Disclosure in Contests with Endogenous Entry: An Experiment.” Management Science. Publisher's Version.
We use a laboratory experiment to study the effects of disclosing the number of active participants in contests with endogenous entry. At the first stage, potential participants decide whether to enter competition, and at the second stage, entrants choose their investments. In a 2×2 design, we manipulate the size of the outside option, w, and whether the number of entrants is disclosed between the stages. Theory predicts more entry for lower w and the levels of entry and aggregate investment to be independent of disclosure in all cases. We find empirical entry frequencies decreasing with w. For aggregate investment, we find no effect of disclosure when w is low but a strong positive effect of disclosure when w is high. The difference is driven by substantial overinvestment in contests with a small, publicly known number of players contrasted by more restrained investment in contests in which the number of players is uncertain and may be small. The behavior under disclosure is explained by a combination of joy of winning and entry regret.
Timothy DeStefano, Richard Kneller, and Jonathan Timmis. 5/6/2020. “Cloud computing and firm growth.” VOX. Publisher's Version.
The last decade has seen a fundamental shift in the way firms access technology, from physical hardware toward cloud computing. This shift not only significantly reduces the cost of such technologies but also allows for remote and simultaneous access. This column presents evidence on the impact of cloud adoption using firm-level data from the UK. There are marked differences between young and incumbent firms: cloud adoption largely affects the growth of young firms, whereas for incumbents it affects their geography.
Roberto Verganti, Luca Vendraminelli, and Marco Iansiti. 3/19/2020. “Innovation and Design in the Age of Artificial Intelligence.” Publisher's Version.

At the heart of any innovation process lies a fundamental practice: the way people create ideas and solve problems. This “decision making” side of innovation is what scholars and practitioners refer to as “design.” Decisions in innovation processes have so far been made by humans. What happens when machines can make them instead? Artificial Intelligence (AI) brings data and algorithms to the core of innovation processes. What are the implications of this diffusion of AI for our understanding of design and innovation? Is AI just another digital technology that, akin to many others, will not significantly question what we know about design? Or will it create transformations in design that current theoretical frameworks cannot capture?

This article proposes a framework for understanding design and innovation in the age of AI. We discuss the implications for design and innovation theory. Specifically, we observe that, as creative problem solving is significantly conducted by algorithms, human design increasingly becomes an activity of sense making, i.e., understanding which problems should or could be addressed. This shift in focus calls for new theories and brings design closer to leadership, which is, inherently, an activity of sense making.

Our insights are derived from and illustrated with two cases at the frontier of AI, Netflix and AirBnB (complemented with analyses of Microsoft and Tesla), which point to two directions for the evolution of design and innovation in firms. First, AI enables an organization to overcome many past limitations of human-intensive design processes by improving the scalability of the process, broadening its scope across traditional boundaries, and enhancing its ability to learn and adapt on the fly. Second, and perhaps more surprising, while removing these limitations, AI also appears to deeply enact several popular design principles. AI thus reinforces the principles of Design Thinking, namely being people-centered, abductive, and iterative. In fact, AI enables the creation of solutions that are more highly user-centered than human-based approaches (i.e., to an extreme level of granularity, designed for every single person); that are potentially more creative; and that are continuously updated through learning iterations across the entire product life cycle.

In sum, while AI does not undermine the basic principles of design, it profoundly changes the practice of design. Problem-solving tasks, traditionally carried out by designers, are now automated into learning loops that operate without limitations of volume and speed. The algorithms embedded in these loops think in a radically different way from a designer, who handles complex problems holistically with a systemic perspective. Algorithms instead handle complexity through very simple tasks, which are iterated continuously. This article discusses the implications of these insights for design and innovation management scholars and practitioners.

Marco Iansiti and Karim R. Lakhani. 3/3/2020. “From Disruption to Collision: The New Competitive Dynamics.” MIT Sloan Management Review.
In the age of AI, traditional businesses across the economy are being attacked by highly scalable data-driven companies whose operating models leverage network effects to deliver value.
Jin Paik, Martin Schöll, Rinat Sergeev, Steven Randazzo, and Karim R. Lakhani. 2/26/2020. “Innovation Contests for High-Tech Procurement.” Research-Technology Management, 63, 2, Pp. 36-45. Publisher's Version.
Innovation managers rarely use crowdsourcing as an innovation instrument, despite extensive academic and theoretical research. One barrier to adoption is the lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case studies, with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy, demonstrate potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests, developed from a high-tech industry perspective, that is applicable to other industries.
Christopher Stanton, Karim R. Lakhani, Jennifer L. Hoffman, Jin Hyun Paik, and Nina Cohodes. 1/13/2020. Freelancer, Ltd. Harvard Business School Case. Harvard Business School.
Over the course of the 2010s, the rapid advancement of mobile technologies and the rise of online freelancing platforms seemed to portend a radical transformation of labor markets into on-demand, flexible talent pools. Even though several Fortune 500 companies, including Microsoft, Samsung, and General Electric, embraced digital labor solutions, enterprise adoption lagged far behind that of smaller businesses and startups. Despite the promising potential benefits, concerns persisted about navigating labor regulations, ensuring appropriate vetting, and guaranteeing the quality of work. Sarah Tang, the newly appointed Vice President of Enterprise at Freelancer, Ltd., took on the challenge of crafting the growth strategy, operations, and sales of Freelancer's services to Fortune 500 companies. What would it take to convince more enterprises of the potential of on-demand freelance labor, which could help them hire skilled freelancers in volume or in multiple countries simultaneously? What did the future hold for open work practices between enterprises and digital labor markets?
Karim R. Lakhani, Hong Luo, and Laura Katsnelson. 1/2020. Market for Judgement: Creative Destruction Lab. Harvard Business School Case. Harvard Business School.
