Publications

Working Paper
Hannah Mayer. Working Paper. “AI in Enterprise: AI Product Management.” Edited by Jin H. Paik, Jenny Hoffman, and Steven Randazzo. Abstract

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by HBS Professor and co-author of Competing in the Age of AI, Karim R. Lakhani, the July virtual session featured Peter Skomoroch of DataWrangling, formerly of LinkedIn. Together, they discussed what differentiates AI product management from managing other tech products and how to adapt to the uncertainty in the AI product lifecycle.

AI in Enterprise - AI Product Management (P Skomoroch).pdf
Jerry Thursby, Marie Thursby, Karim R. Lakhani, Kyle R. Myers, Nina Cohodes, Sarah Bratt, Dennis Byrski, Hannah Cohoon, and M. P. Roche. Working Paper. “Scientific Production: An Exploration into Organization, Resource Allocation, and Funding”.
Hannah Mayer. Working Paper. “AI in Enterprise: In Tech We Trust... Maybe Too Much?” Edited by Jin H. Paik and Jenny Hoffman. Abstract

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

In the September session of the AI in Enterprise series, HBS Professor Karim R. Lakhani, co-author of Competing in the Age of AI, spoke with Latanya Sweeney about algorithmic bias, data privacy, and the way forward for enterprises adopting AI. They explored how AI and ML can impact society in unexpected ways and what senior enterprise leaders can do to avoid negative externalities. Sweeney, Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences, director and founder of the Data Privacy Lab, and former Chief Technology Officer at the U.S. Federal Trade Commission, pioneered the field known as data privacy and launched the emerging area known as algorithmic fairness.

AI in Enterprise - In Tech We Trust - Maybe Too Much (L Sweeney)
Jin H. Paik, Steven Randazzo, and Jenny Hoffman. Working Paper. “AI in the Enterprise: How Do I Get Started?”. Abstract

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by HBS Professor and co-author of Competing in the Age of AI, Karim R. Lakhani, the most recent virtual session with over 240 attendees featured Rob May, General Partner at PJC, an early-stage venture capital firm, and founder of Inside AI, a premier source for information on AI, robotics and neurotechnology. Together, they discussed why we have seen a rise in interest in AI, what managers should consider when wading into the AI waters, and what steps they can take when it is time to do so. 

AI in Enterprise - How Do I Get Started (R May).pdf
Hannah Mayer. Working Paper. “Data Science is the New Accounting.” Edited by Jin H. Paik and Jenny Hoffman. Abstract

In the October session of the AI in Enterprise series, HBS Professor Karim R. Lakhani, co-author of Competing in the Age of AI, and Roger Magoulas (Data Science Advisor) delved into O'Reilly's most recent survey of AI adoption in larger companies. The discussion explored common risk factors, techniques, and tools, as well as the data governance and data conditioning practices that large companies are using to build and scale their AI efforts.

Read Hannah Mayer's recap of the event to learn more about what senior managers in enterprises need to know about AI, particularly if they want to adopt it at scale.

AI in Enterprise - Data is the New Accounting (R Magoulas)
Misha Teplitskiy, Hardeep Ranu, Gary Gray, Michael Menietti, Eva Guinan, and Karim Lakhani. Working Paper. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.” HBS Working Paper Series. Publisher's Version. Abstract
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
Hannah Mayer, Jin H. Paik, Timothy DeStefano, and Jenny Hoffman. Working Paper. “From Craft to Commodity: The Evolution of AI in Pharma and Beyond”. Abstract

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and knowledge sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by HBS Professor and co-author of Competing in the Age of AI, Karim R. Lakhani, the August virtual session featured Reza Olfati-Saber, an experienced academic researcher currently managing teams of data scientists and life scientists across the globe for Sanofi. Together, they discussed the evolution of AI in life science experimentation and how it may become the determining factor for R&D success in pharma and other industries.

AI in Enterprise - From Craft to Commodity (R Olfati-Saber).pdf
Jana Gallus, Olivia S. Jung, and Karim R. Lakhani. Working Paper. “Managerial Recognition as an Incentive for Innovation Platform Engagement: A Field Experiment and Interview Study at NASA.” HBS Working Paper Series. Publisher's Version
20-059.pdf
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter E. Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. Working Paper. “Quantifying the Immediate Effects of the COVID-19 Pandemic on Scientists”. Publisher's Version. Abstract
The COVID-19 pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. Here we report the results of a survey of approximately 4,500 Principal Investigators (PIs) at U.S.- and Europe-based research institutions. Distributed in mid-April 2020, the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. Scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity with a significant share reporting no change or even increases. Some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. However, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare. Reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. Amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond.
Misha Teplitskiy, Eamon Duede, Michael Menietti, and Karim R. Lakhani. Working Paper. “Status drives how we cite: Evidence from thousands of authors”. Publisher's Version. Abstract
Researchers cite works for a variety of reasons, including some having nothing to do with acknowledging influence. The distribution of different citation types in the literature, and which papers attract which types, is poorly understood. We investigate high-influence and low-influence citations and the mechanisms producing them using 17,154 ground-truth citation types provided via survey by 9,380 authors systematically sampled across academic fields. Overall, 54% of citations denote little-to-no influence and these citations are concentrated among low status (lightly cited) papers. In contrast, high-influence citations are concentrated among high status (highly cited) papers through a number of steps that resemble a pipeline. Authors discover highly cited papers earlier in their projects, more often through social contacts, and read them more closely. Papers' status, above and beyond any quality differences, directly helps determine their pipeline: experimentally revealing or hiding citation counts during the survey shows that low counts cause lowered perceptions of quality. Accounting for citation types thus reveals a "double status effect": in addition to affecting how often a work is cited, status affects how meaningfully it is cited. Consequently, highly cited papers are even more influential than their raw citation counts suggest.
Forthcoming
Jacqueline N. Lane, Ina Ganguli, Patrick Gaule, Eva C. Guinan, and Karim R. Lakhani. Forthcoming. “Engineering Serendipity: When Does Knowledge Sharing Lead to Knowledge Production?” Strategic Management Journal. Publisher's Version. Abstract
We investigate how knowledge similarity between two individuals is systematically related to the likelihood that a serendipitous encounter results in knowledge production. We conduct a field experiment at a medical research symposium, where we exogenously varied opportunities for face‐to‐face encounters among 15,817 scientist‐pairs. Our data include direct observations of interaction patterns collected using sociometric badges, and detailed, longitudinal data of the scientists' postsymposium publication records over 6 years. We find that interacting scientists acquire more knowledge and coauthor 1.2 more papers when they share some overlapping interests, but cite each other's work between three and seven times less when they are from the same field. Our findings reveal both collaborative and competitive effects of knowledge similarity on knowledge production outcomes.
Engineering_serendipity.pdf
2020
Karim R. Lakhani, Anne-Laure Fayard, Manos Gkeredakis, and Jin Hyun Paik. 10/5/2020. “OpenIDEO (B)”. Publisher's Version. Abstract
In the midst of 2020, as the coronavirus pandemic was unfolding, OpenIDEO - an online open innovation platform focused on design-driven solutions to social issues - rapidly launched a new challenge to improve access to health information, empower communities to stay safe during the COVID-19 crisis, and inspire global leaders to communicate effectively. OpenIDEO was particularly suited to challenges which required cross-system or sector-wide collaboration due to its focus on social impact and ecosystem design, but its leadership pondered how they could continue to improve virtual collaboration and to share their insights from nearly a decade of running online challenges. Conceived as an exercise of disruptive digital innovation, OpenIDEO successfully created a strong open innovation community, but how could they sustain - or even improve - their support to community members and increase the social impact of their online challenges in the coming years?
Marco Iansiti, Karim R. Lakhani, Hannah Mayer, and Kerry Herman. 9/15/2020. “Moderna (A)”. Publisher's Version. Abstract
In summer 2020, Stephane Bancel, CEO of biotech firm Moderna, faces several challenges as his company races to develop a vaccine for COVID-19. The case explores how a company builds a digital organization, and leverages artificial intelligence and other digital resources to speed its operations, manage its processes and ensure quality across research, testing and manufacturing. Built from the ground up as such a digital organization, Moderna was able to respond to the challenge of developing a vaccine as soon as the gene sequence for the virus was posted to the Web on January 11, 2020. As the vaccine enters Phase III clinical trials, Bancel considers several issues: How should Bancel and his team balance the demands of developing a vaccine for a virus creating a global pandemic alongside the other important vaccines and therapies in Moderna's pipeline? How should Moderna communicate its goals and vision to investors in this unprecedented time? Should Moderna be concerned it will be pegged as "a COVID-19 company?"
Prithwiraj Choudhury, Ryan T. Allen, and Michael G. Endres. 8/9/2020. “Machine learning for pattern discovery in management research.” Strategic Management Journal. Publisher's Version. Abstract
Supervised machine learning (ML) methods are a powerful toolkit for discovering robust patterns in quantitative data. The patterns identified by ML could be used for exploratory inductive or abductive research, or for post hoc analysis of regression results to detect patterns that may have gone unnoticed. However, ML models should not be treated as the result of a deductive causal test. To demonstrate the application of ML for pattern discovery, we implement ML algorithms to study employee turnover at a large technology company. We interpret the relationships between variables using partial dependence plots, which uncover surprising nonlinear and interdependent patterns between variables that may have gone unnoticed using traditional methods. To guide readers evaluating ML for pattern discovery, we provide guidance for evaluating model performance, highlight human decisions in the process, and warn of common misinterpretation pitfalls. The Supporting Information section provides code and data to implement the algorithms demonstrated in this article.
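As a minimal illustration of the interpretation step described in this abstract (this is not the authors' code or data; the synthetic dataset, the gradient-boosting model, and the feature indices below are assumptions made for the sketch), a partial dependence analysis with scikit-learn might look like this:

# Illustrative sketch only - not the authors' code or data. It fits a
# flexible supervised learner on a synthetic stand-in dataset and draws
# partial dependence plots, the interpretation tool described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Synthetic binary-outcome data standing in for employee-level features
# and a turnover label (the real study uses proprietary company data).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# Any well-validated supervised model could be used here.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Partial dependence of the prediction on two single features and on their
# pair, where nonlinear and interdependent patterns become visible.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, (0, 1)])

The plots are read as the abstract suggests: curvature in a single-feature panel signals a nonlinear relationship, and structure in the two-feature panel signals interdependence between variables.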
Anita Eisenstadt, Meghan Ange-Stark, and Gail Cohen. 8/2020. “The Role of Inducement Prizes.” National Academies of Sciences, Engineering, and Medicine. Publisher's Version. Abstract
On May 29, 2019, the National Academies of Sciences, Engineering, and Medicine, in cooperation with the Laboratory for Innovation Science at Harvard (LISH), convened a workshop in Washington, D.C. on the role of inducement prizes to spur American innovation. Unlike prizes that recognize past achievements, these inducement prizes are designed to stimulate innovative activity, whether it be the creation of a desired technology, orienting research efforts toward designing products that are capable of being used at scale by customers, or developing products with wide societal benefits. Workshop participants explored how prizes fit into federal and non-federal support for innovation, the benefits and disadvantages of prizes, and the differences between cash and non-cash prizes. Other discussion topics included the conditions under which prizes are most effective, how to measure the effectiveness of prizes, and the characteristics of prize winners. This publication summarizes the presentations and discussions from the workshop.
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. 7/15/2020. “Unequal effects of the COVID-19 pandemic on scientists.” Nature Human Behaviour. Publisher's Version
Olivia S. Jung, Andrea Blasco, and Karim R. Lakhani. 7/9/2020. “Innovation contest: Effect of perceived support for learning on participation.” Health Care Management Review, 45, 3, Pp. 255-266. Publisher's Version. Abstract
Frontline staff are well positioned to conceive improvement opportunities based on first-hand knowledge of what works and does not work. The innovation contest may be a relevant and useful vehicle to elicit staff ideas. However, the success of the contest likely depends on perceived organizational support for learning; when staff believe that support for learning-oriented culture, practices, and leadership is low, they may be less willing or able to share ideas.
Michael G. Endres, Florian Hillen, Marios Salloumis, Ahmad R. Sedaghat, Stefan M. Niehues, Olivia Quatela, Henning Hanken, Ralf Smeets, Benedicta Beck-Broichsitter, Carsten Rendenbach, Karim R. Lakhani, Max Heiland, and Robert Gaudin. 6/24/2020. “Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs.” Diagnostics, 10, 6, Pp. 430. Publisher's Version. Abstract
Periapical radiolucencies, which can be detected on panoramic radiographs, are one of the most common radiographic findings in dentistry and have a differential diagnosis including infections, granuloma, cysts and tumors. In this study, we seek to investigate the ability with which 24 oral and maxillofacial (OMF) surgeons assess the presence of periapical lucencies on panoramic radiographs, and we compare these findings to the performance of a predictive deep learning algorithm that we have developed using a curated data set of 2902 de-identified panoramic radiographs. The mean diagnostic positive predictive value (PPV) of OMF surgeons based on their assessment of panoramic radiographic images was 0.69 (±0.13), indicating that dentists on average falsely diagnose 31% of cases as radiolucencies. However, the mean diagnostic true positive rate (TPR) was 0.51 (±0.14), indicating that on average 49% of all radiolucencies were missed. We demonstrate that the deep learning algorithm achieves a better performance than 14 of 24 OMF surgeons within the cohort, exhibiting an average precision of 0.60 (±0.04), and an F1 score of 0.58 (±0.04) corresponding to a PPV of 0.67 (±0.05) and TPR of 0.51 (±0.05). The algorithm, trained on limited data and evaluated on clinically validated ground truth, has potential to assist OMF surgeons in detecting periapical lucencies on panoramic radiographs. 
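For readers unfamiliar with the two metrics, the percentages quoted in this abstract follow from the standard confusion-matrix definitions (these are general definitions, not formulas taken from the paper):

\[
\mathrm{PPV} = \frac{TP}{TP + FP}, \qquad \mathrm{TPR} = \frac{TP}{TP + FN}
\]

So a mean PPV of 0.69 implies that about 1 - 0.69 = 31% of positive calls were false positives, and a mean TPR of 0.51 implies that about 1 - 0.51 = 49% of true radiolucencies were missed, which is exactly the reading given above.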
Luke Boosey, Philip Brookins, and Dmitry Ryvkin. 5/13/2020. “Information Disclosure in Contests with Endogenous Entry: An Experiment.” Management Science. Publisher's Version. Abstract
We use a laboratory experiment to study the effects of disclosing the number of active participants in contests with endogenous entry. At the first stage, potential participants decide whether to enter competition, and at the second stage, entrants choose their investments. In a 2×2 design, we manipulate the size of the outside option, w, and whether the number of entrants is disclosed between the stages. Theory predicts more entry for lower w and the levels of entry and aggregate investment to be independent of disclosure in all cases. We find empirical entry frequencies decreasing with w. For aggregate investment, we find no effect of disclosure when w is low but a strong positive effect of disclosure when w is high. The difference is driven by substantial overinvestment in contests with a small, publicly known number of players contrasted by more restrained investment in contests in which the number of players is uncertain and may be small. The behavior under disclosure is explained by a combination of joy of winning and entry regret.
Timothy DeStefano, Richard Kneller, and Jonathan Timmis. 5/6/2020. “Cloud computing and firm growth.” VOX. Publisher's Version. Abstract
The last decade has seen a fundamental shift in the way firms access technology, from physical hardware towards cloud computing. This shift not only significantly reduces the cost of such technologies but also allows for remote and simultaneous access. This column presents evidence on the impact of cloud adoption using firm-level data from the UK. There are marked differences in the effects on young and incumbent firms: cloud adoption largely influences the growth of young firms, while it affects the geography of incumbent firms.
