Organization & Processes

The Laboratory for Innovation Science at Harvard (LISH) conducts research on how labs operate, including the processes researchers use to develop new products and ideas, how best to capitalize on successes, and how to bring solutions out of the lab and into commercial use.

Key Questions

What are the drivers of productivity in science and engineering laboratories?

How can crowds be integrated with traditional R&D functions in companies and academic labs?

What are the biases in the processes of evaluating innovative ideas? How can they be overcome?

What are the predictors of breakthrough success for innovative scientific ideas?

How can technology commercialization be accelerated from academic and government labs?

Projects in this research track are most directly associated with the Managing R&D Labs & Organizations and Technology Translation areas of application. They include experiments around grant applications and scientific awards, the development of a massive open online course on technology translation, and the integration of crowds into academic labs. See below for more information on each of the individual projects in this research track.

Related Publications

Misha Teplitskiy, Karim Lakhani, Hardeep Ranu, Gary Gray, Michael Menietti, and Eva Guinan. 2019. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.”
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early-stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
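
The “13% more often” and “24% less often” figures above are relative differences in update rates, not percentage-point gaps. The short sketch below illustrates only that arithmetic; the baseline rate is a hypothetical number, not a result from the paper.

    # Hypothetical baseline, used only to show that the reported differences are relative.
    baseline_update_rate = 0.40                      # assumed update rate for the comparison group
    more_often = baseline_update_rate * 1.13         # "updated 13% more often"
    less_often = baseline_update_rate * (1 - 0.24)   # "updated 24% less often"
    print(round(more_often, 3), round(less_often, 3))  # 0.452 0.304
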
Olivia Jung, Andrea Blasco, and Karim R. Lakhani. 2017. “Perceived Organizational Support For Learning and Contribution to Improvement by Frontline Staff.” Academy of Management Proceedings, 2017, 1.

Utilizing suggestions from clinicians and administrative staff is associated with process and quality improvement, an organizational climate that promotes patient safety, and added capacity for learning. However, realizing improvement through innovative ideas from staff depends on their ability and decision to contribute. We hypothesized that staff perception of whether the organization promotes learning is positively associated with their likelihood of engaging in problem solving and speaking up. We conducted our study in a cardiology unit of an academic hospital that hosted an ideation contest soliciting frontline staff for ideas to resolve issues encountered at work. Our primary dependent variable was staff participation in ideation. The independent variables measuring perception of support for learning were collected using the validated 27-item Learning Organization Survey (LOS). To examine the relationships between these variables, we used analysis of variance, logistic regression, and predicted probabilities. We also interviewed 16 contest participants to help explain our quantitative results. The study sample consisted of 30% of cardiology unit staff (n=354) who completed the LOS. In total, 72 staff submitted 138 ideas addressing a range of issues, including patient experience, cost of care, workflow, utilization, and access. Figuring out the cost of procedures in the catheterization laboratory and creating a smartphone application that helps patients navigate appointments and connect with providers were two of the ideas that won the most votes and received funding to be implemented in the following year. Participation in ideation was positively associated with staff perception of a supportive learning environment. For example, a one-standard-deviation increase in perceived welcome for differences in opinions was associated with a 43% increase in the odds of participating in ideation (OR=1.43, p=0.04) and a 55% increase in the odds of suggesting more than one idea (OR=1.55, p=0.09). Experimentation, a practice that supports learning, was negatively associated with ideation (OR=0.36, p=0.02), and leadership that reinforces learning was not associated with ideation. The perception that new ideas are not sufficiently considered or experimented with could have motivated staff to participate, as the ideation contest itself enables experimentation and learning. Interviews with ideation participants revealed that the contest enabled systematic bottom-up contribution to quality improvement, promoted a sense of community, facilitated organizational exchange of ideas, and spread a problem-solving oriented mindset. Enabling frontline staff to feel that their ideas are welcome and that making mistakes is permissible may increase their likelihood of engaging in problem solving and speaking up, contributing to organizational improvement.
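
As a minimal sketch of how odds ratios like those reported above arise from a logistic regression (this is not the authors' code; the variable names, simulated data, and model are assumptions for illustration), the following snippet recovers an odds ratio near 1.43 from simulated participation data: exp(beta) is the multiplicative change in the odds per one-standard-deviation increase in the survey dimension, which is what the reported 43% figure corresponds to.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 354                                   # staff count mentioned in the abstract
    los_score = rng.standard_normal(n)        # standardized (z-scored) LOS dimension
    beta = np.log(1.43)                       # slope implied by an odds ratio of 1.43
    p = 1 / (1 + np.exp(-(-1.5 + beta * los_score)))
    participated = rng.binomial(1, p)         # 1 = submitted at least one idea

    X = sm.add_constant(los_score)
    result = sm.Logit(participated, X).fit(disp=False)
    print(np.exp(result.params[1]))           # odds ratio, about 1.4 up to sampling noise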

Kevin Boudreau, Tom Brady, Ina Ganguli, Patrick Gaule, Eva Guinan, Tony Hollenberg, and Karim R. Lakhani. 2017. “A Field Experiment on Search Costs and the Formation of Scientific Collaborations.” The Review of Economics and Statistics, 99, 4, pp. 565-576.

Scientists typically self-organize into teams, matching with others to collaborate in the production of new knowledge. We present the results of a field experiment conducted at Harvard Medical School to understand the extent to which search costs affect matching among scientific collaborators. We generated exogenous variation in search costs for pairs of potential collaborators by randomly assigning individuals to 90-minute structured information-sharing sessions as part of a grant funding opportunity for biomedical researchers. We estimate that the treatment increases the baseline probability of grant co-application of a given pair of researchers by 75% (increasing the likelihood of a pair collaborating from 0.16 percent to 0.28 percent), with effects higher among those in the same specialization. The findings indicate that matching between scientists is subject to considerable frictions, even in the case of geographically-proximate scientists working in the same institutional context with ample access to common information and funding opportunities.
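
As a quick back-of-the-envelope check of how the effect sizes quoted above fit together (this arithmetic is not from the paper), the move from a 0.16 percent to a 0.28 percent co-application probability is the reported 75 percent relative increase over baseline:

    baseline = 0.0016   # probability that a given pair co-applies without the treatment
    treated = 0.0028    # probability under the structured information-sharing sessions
    print(f"{(treated - baseline) / baseline:.0%}")   # -> 75%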

Kevin J. Boudreau and Karim R. Lakhani. 2016. “Innovation Experiments: Researching Technical Advance, Knowledge Production, and the Design of Supporting Institutions.” In Innovation Policy and the Economy, 16, pp. 135-167. Chicago, IL.

This paper discusses several challenges in designing field experiments to better understand how organizational and institutional design shapes innovation outcomes and the production of knowledge. We then describe the field experimental research program carried out by our Crowd Innovation Laboratory at Harvard University to clarify how we have attempted to address these research design challenges. This program has solved important practical innovation problems for partner organizations, such as NASA and Harvard Medical School (HMS), while contributing research advances, particularly in relation to innovation contests and tournaments. We conclude by highlighting the opportunity for the scholarly community to develop a “science of innovation” that utilizes field experiments as a means to generate knowledge.

Kevin J. Boudreau, Eva C. Guinan, Karim R. Lakhani, and Christoph Riedl. 2016. “Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science.” Management Science, 62, 10, pp. 2765-2783.

Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the “intellectual distance” between the knowledge embodied in research proposals and an evaluator’s own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator–proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing “noise” or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
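
A minimal sketch (assumed counts and a simplified assignment rule, not the authors' procedure) of how randomly assigning proposals to evaluators yields a set of evaluator-proposal pairs on the order of the 2,130 analyzed above:

    import random

    random.seed(0)
    evaluators = [f"evaluator_{i}" for i in range(142)]  # hypothetical number of evaluators
    proposals = [f"proposal_{j}" for j in range(150)]    # hypothetical number of proposals
    per_evaluator = 15                                   # hypothetical reviewing load

    # Each evaluator is independently assigned a random subset of proposals.
    assignments = {e: random.sample(proposals, per_evaluator) for e in evaluators}
    print(sum(len(p) for p in assignments.values()))     # 2130 evaluator-proposal pairs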
