Publications

Working Paper
Misha Teplitskiy, Hardeep Ranu, Gary Gray, Michael Menietti, Eva Guinan, and Karim Lakhani. Working Paper. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.” HBS Working Paper Series. Publisher's Version
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early-stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
Jana Gallus, Olivia S. Jung, and Karim R. Lakhani. Working Paper. “Managerial Recognition as an Incentive for Innovation Platform Engagement: A Field Experiment and Interview Study at NASA.” HBS Working Paper Series. Publisher's Version
Kyle R. Myers, Wei Yang Tham, Yian Yin, Nina Cohodes, Jerry G. Thursby, Marie C. Thursby, Peter E. Schiffer, Joseph T. Walsh, Karim R. Lakhani, and Dashun Wang. Working Paper. “Quantifying the Immediate Effects of the COVID-19 Pandemic on Scientists.” Publisher's Version
The COVID-19 pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. Here we report the results of a survey of approximately 4,500 Principal Investigators (PIs) at U.S.- and Europe-based research institutions. Distributed in mid-April 2020, the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. Scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity with a significant share reporting no change or even increases. Some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. However, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare. Reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. Amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond.
Jerry Thursby, Marie Thursby, Karim R. Lakhani, Kyle R. Myers, Nina Cohodes, Sarah Bratt, Dennis Byrski, Johanna Cohoon, and Maria Roche. Working Paper. “Scientific Production: An Exploration into Organization, Resource Allocation, and Funding.”
Misha Teplitskiy, Eamon Duede, Michael Menietti, and Karim R. Lakhani. Working Paper. “Status drives how we cite: Evidence from thousands of authors.” Publisher's Version
Researchers cite works for a variety of reasons, including some having nothing to do with acknowledging influence. The distribution of different citation types in the literature, and which papers attract which types, is poorly understood. We investigate high-influence and low-influence citations and the mechanisms producing them using 17,154 ground-truth citation types provided via survey by 9,380 authors systematically sampled across academic fields. Overall, 54% of citations denote little-to-no influence and these citations are concentrated among low status (lightly cited) papers. In contrast, high-influence citations are concentrated among high status (highly cited) papers through a number of steps that resemble a pipeline. Authors discover highly cited papers earlier in their projects, more often through social contacts, and read them more closely. Papers' status, above and beyond any quality differences, directly helps determine their pipeline: experimentally revealing or hiding citation counts during the survey shows that low counts cause lowered perceptions of quality. Accounting for citation types thus reveals a "double status effect": in addition to affecting how often a work is cited, status affects how meaningfully it is cited. Consequently, highly cited papers are even more influential than their raw citation counts suggest.
Iavor Bojinov, Prithwiraj Choudhury, and Jacqueline N. Lane. Working Paper. “Virtual Watercoolers: A Field Experiment on Virtual Synchronous Interactions and Performance of Organizational Newcomers.” SSRN, Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 21-125. Publisher's Version
Do virtual, yet informal and synchronous, interactions affect individual performance outcomes of organizational newcomers? We report results from a randomized field experiment conducted at a large global organization that estimates the performance effects of “virtual water coolers” for remote interns participating in the firm’s flagship summer internship program. Findings indicate that interns who had randomized opportunities to interact synchronously and informally with senior managers were significantly more likely to receive offers for full-time employment, achieved higher weekly performance ratings, and had more positive attitudes toward their remote internships. Further, we observed stronger results when the interns and senior managers were demographically similar. Secondary results also hint at a possible abductive explanation of the performance effects: virtual watercoolers between interns and senior managers may have facilitated knowledge and advice sharing. This study demonstrates that hosting brief virtual water cooler sessions with senior managers might have job and career benefits for organizational newcomers working in remote workplaces, an insight with immediate managerial relevance.
Forthcoming
Jacqueline N. Lane, Ina Ganguli, Patrick Gaule, Eva C. Guinan, and Karim R. Lakhani. Forthcoming. “Engineering Serendipity: When Does Knowledge Sharing Lead to Knowledge Production?” Strategic Management Journal. Publisher's Version

We investigate how knowledge similarity between two individuals is systematically related to the likelihood that a serendipitous encounter results in knowledge production. We conduct a field experiment at a medical research symposium, where we exogenously varied opportunities for face‐to‐face encounters among 15,817 scientist‐pairs. Our data include direct observations of interaction patterns collected using sociometric badges, and detailed, longitudinal data of the scientists' postsymposium publication records over 6 years. We find that interacting scientists acquire more knowledge and coauthor 1.2 more papers when they share some overlapping interests, but cite each other's work between three and seven times less when they are from the same field. Our findings reveal both collaborative and competitive effects of knowledge similarity on knowledge production outcomes.
2021
Andrea Blasco, Ted Natoli, Michael G. Endres, Rinat A. Sergeev, Steven Randazzo, Jin Paik, Max Macaluso, Rajiv Narayan, Karim R. Lakhani, and Aravind Subramanian. 4/6/2021. “Improving Deconvolution Methods in Biology through Open Innovation Competitions: An Application to the Connectivity Map.” Bioinformatics. Publisher's Version
Do machine learning methods improve standard deconvolution techniques for gene expression data? This article uses a unique new dataset combined with an open innovation competition to evaluate a wide range of approaches developed by 294 competitors from 20 countries. The competition’s objective was to address a deconvolution problem critical to analyzing genetic perturbations from the Connectivity Map. The issue consists of separating gene expression of individual genes from raw measurements obtained from gene pairs. We evaluated the outcomes using ground-truth data (direct measurements for single genes) obtained from the same samples.
Henry Eyring, Patrick J. Ferguson, and Sebastian Koppers. 3/30/2021. “Less Information, More Comparison, and Better Performance: Evidence from a Field Experiment.” Journal of Accounting Research, 59, 2, Pp. 657-711. Publisher's Version
We use a field experiment in professional sports to compare effects of providing absolute, relative, or both absolute and relative measures in performance reports for employees. Although studies have documented that the provision of these types of measures can benefit performance, theory from economic and accounting literature suggests that it may be optimal for firms to direct employees’ attention to some types of measures by omitting others. In line with this theory, we find that relative performance information alone yields the best performance effects in our setting—that is, that a subset of information (relative performance information) dominates the full information set (absolute and relative performance information together) in boosting performance. In cross-sectional and survey-data analyses, we do not find that restricting the number of measures shown per se benefits performance. Rather, we find that restricting the type of measures shown to convey only relative information increases involvement in peer-performance comparison, benefitting performance. Our findings extend research on weighting of and responses to measures in performance reports.
Philip Brookins, Dmitry Ryvkin, and Andrew Smyth. 3/8/2021. “Indefinitely repeated contests: An experimental study.” Experimental Economics. Publisher's Version
We experimentally explore indefinitely repeated contests. Theory predicts more cooperation, in the form of lower expenditures, in indefinitely repeated contests with a longer expected time horizon. Our data support this prediction, although this result attenuates with contest experience. Theory also predicts more cooperation in indefinitely repeated contests compared to finitely repeated contests of the same expected length, and we find empirical support for this. Finally, theory predicts no difference in cooperation across indefinitely repeated winner-take-all and proportional-prize contests, yet we find evidence of less cooperation in the latter, though only in longer treatments with more contests played. Our paper extends the experimental literature on indefinitely repeated games to contests and, more generally, contributes to an infant empirical literature on behavior in indefinitely repeated games with “large” strategy spaces.
Philip Brookins and Paan Jindapon. 2/20/2021. “Risk preference heterogeneity in group contests.” Journal of Mathematical Economics. Publisher's Version
We analyze the first model of a group contest with players that are heterogeneous in their risk preferences. In our model, individuals’ preferences are represented by a utility function exhibiting a generalized form of constant absolute risk aversion, allowing us to consider any combination of risk-averse, risk-neutral, and risk-loving players. We begin by proving equilibrium existence and uniqueness under both linear and convex investment costs. Then, we explore how the sorting of a compatible set of players by their risk attitudes into competing groups affects aggregate investment. With linear costs, a balanced sorting (i.e., minimizing the variance in risk attitudes across groups) always produces an aggregate investment level that is at least as high as an unbalanced sorting (i.e., maximizing the variance in risk attitudes across groups). Under convex costs, however, identifying which sorting is optimal is more nuanced and depends on preference and cost parameters.
2020
Hannah Mayer. 7/2020. “AI in Enterprise: AI Product Management.” Edited by Jin H. Paik, Jenny Hoffman, and Steven Randazzo.

While there are dispersed resources for learning about artificial intelligence, there remains a need to cultivate a community of practitioners for regular exposure to, and knowledge sharing of, best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by Karim R. Lakhani, HBS Professor and co-author of Competing in the Age of AI, the July virtual session featured Peter Skomoroch of DataWrangling, formerly of LinkedIn. Together, they discussed what differentiates AI product management from managing other tech products and how to adapt to the uncertainty of the AI product lifecycle.

Gerard George, Karim R. Lakhani, and Phanish Puranam. 12/2020. “What Has Changed? The Impact of COVID Pandemic on the Technology and Innovation Management Research Agenda.” Journal of Management Studies, 57, 8, Pp. 1754-1758. Publisher's Version
While the pandemic has tested the agility and resilience of organizations, it also forces a deeper look at the assumptions underlying the theoretical frameworks that guide managerial decisions and organizational practices. In this commentary, we explore the impact of the COVID-19 pandemic on technology and innovation management research. We identify key assumptions and then discuss how new areas of investigation emerge based on the changed reality.
Karim R. Lakhani, Anne-Laure Fayard, Manos Gkeredakis, and Jin Hyun Paik. 10/5/2020. “OpenIDEO (B).” Publisher's Version
In the midst of 2020, as the coronavirus pandemic was unfolding, OpenIDEO - an online open innovation platform focused on design-driven solutions to social issues - rapidly launched a new challenge to improve access to health information, empower communities to stay safe during the COVID-19 crisis, and inspire global leaders to communicate effectively. OpenIDEO was particularly suited to challenges which required cross-system or sector-wide collaboration due to its focus on social impact and ecosystem design, but its leadership pondered how they could continue to improve virtual collaboration and to share their insights from nearly a decade of running online challenges. Conceived as an exercise of disruptive digital innovation, OpenIDEO successfully created a strong open innovation community, but how could they sustain - or even improve - their support to community members and increase the social impact of their online challenges in the coming years?
Hannah Mayer. 10/2020. “Data Science is the New Accounting.” Edited by Jin H. Paik and Jenny Hoffman.

In the October session of the AI in Enterprise series, HBS Professor and co-author of Competing in the Age of AI, Karim R. Lakhani, and Roger Magoulas (Data Science Advisor) delved into O'Reilly's most recent survey of AI adoption in larger companies. The discussion explored common risk factors, techniques, and tools, as well as the data governance and data conditioning that large companies are using to build and scale their AI practices.

Read Hannah Mayer's recap of the event to learn more about what senior managers in enterprises need to know about AI, particularly if they want to adopt at scale.
Marco Iansiti, Karim R. Lakhani, Hannah Mayer, and Kerry Herman. 9/15/2020. “Moderna (A).” Publisher's Version
In summer 2020, Stephane Bancel, CEO of biotech firm Moderna, faces several challenges as his company races to develop a vaccine for COVID-19. The case explores how a company builds a digital organization, and leverages artificial intelligence and other digital resources to speed its operations, manage its processes and ensure quality across research, testing and manufacturing. Built from the ground up as such a digital organization, Moderna was able to respond to the challenge of developing a vaccine as soon as the gene sequence for the virus was posted to the Web on January 11, 2020. As the vaccine enters Phase III clinical trials, Bancel considers several issues: How should Bancel and his team balance the demands of developing a vaccine for a virus creating a global pandemic alongside the other important vaccines and therapies in Moderna's pipeline? How should Moderna communicate its goals and vision to investors in this unprecedented time? Should Moderna be concerned it will be pegged as "a COVID-19 company?"
Hannah Mayer. 9/2020. “AI in Enterprise: In Tech We Trust... Maybe Too Much?” Edited by Jin H. Paik and Jenny Hoffman.

In the September session of the AI in Enterprise series, HBS Professor and co-author of Competing in the Age of AI, Karim R. Lakhani spoke with Latanya Sweeney about algorithmic bias, data privacy, and the way forward for enterprises adopting AI. They explored how AI and ML can impact society in unexpected ways and what senior enterprise leaders can do to avoid negative externalities. Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences, director and founder of the Data Privacy Lab, and former Chief Technology Officer at the U.S. Federal Trade Commission, Latanya Sweeney pioneered the field known as data privacy and launched the emerging area known as algorithmic fairness.

Prithwiraj Choudhury, Ryan T. Allen, and Michael G. Endres. 8/9/2020. “Machine learning for pattern discovery in management research.” Strategic Management Journal. Publisher's Version
Supervised machine learning (ML) methods are a powerful toolkit for discovering robust patterns in quantitative data. The patterns identified by ML could be used for exploratory inductive or abductive research, or for post hoc analysis of regression results to detect patterns that may have gone unnoticed. However, ML models should not be treated as the result of a deductive causal test. To demonstrate the application of ML for pattern discovery, we implement ML algorithms to study employee turnover at a large technology company. We interpret the relationships between variables using partial dependence plots, which uncover surprising nonlinear and interdependent patterns between variables that may have gone unnoticed using traditional methods. To guide readers evaluating ML for pattern discovery, we provide guidance for evaluating model performance, highlight human decisions in the process, and warn of common misinterpretation pitfalls. The Supporting Information section provides code and data to implement the algorithms demonstrated in this article.
Hannah Mayer, Jin H. Paik, Timothy DeStefano, and Jenny Hoffman. 8/2020. “From Craft to Commodity: The Evolution of AI in Pharma and Beyond.”

Moderated by Karim R. Lakhani, HBS Professor and co-author of Competing in the Age of AI, the August virtual session featured Reza Olfati-Saber, an experienced academic researcher currently managing teams of data scientists and life scientists across the globe for Sanofi. Together, they discussed the evolution of AI in life science experimentation and how it may become the determining factor for R&D success in pharma and other industries.

Anita Eisenstadt, Meghan Ange-Stark, and Gail Cohen. 8/2020. “The Role of Inducement Prizes.” National Academies of Sciences, Engineering, and Medicine. Publisher's Version
On May 29, 2019, the National Academies of Sciences, Engineering, and Medicine, in cooperation with the Laboratory for Innovation Science at Harvard (LISH), convened a workshop in Washington, D.C. on the role of inducement prizes to spur American innovation. Unlike prizes that recognize past achievements, these inducement prizes are designed to stimulate innovative activity, whether it be the creation of a desired technology, orienting research efforts toward designing products that are capable of being used at scale by customers, or developing products with wide societal benefits. Workshop participants explored how prizes fit into federal and non-federal support for innovation, the benefits and disadvantages of prizes, and the differences between cash and non-cash prizes. Other discussion topics included the conditions under which prizes are most effective, how to measure the effectiveness of prizes, and the characteristics of prize winners. This publication summarizes the presentations and discussions from the workshop.
