Organization & Processes

The Laboratory for Innovation Science at Harvard (LISH) conducts research on how labs operate, including the processes researchers use to develop new products and ideas, how best to capitalize on successes, and how to bring solutions out of the lab and into commercial use.

Key Questions

What are the drivers of productivity in science and engineering laboratories?

How can crowds be integrated with traditional R&D functions in companies and academic labs?

What are the biases in the processes of evaluating innovative ideas? How can they be overcome?

What are the predictors of breakthrough success for innovative scientific ideas?

How can technology commercialization be accelerated from academic and government labs?

Projects in this research track are most directly associated with the Managing R&D Labs & Organizations and Technology Translation areas of application. They include experiments around grant applications and scientific awards, the development of a massive open online course on technology translation, and the integration of crowds into academic labs. See below for more information on each of the individual projects in this research track.

Related Publications

Hannah Mayer. 7/2020. “AI in Enterprise: AI Product Management.” Edited by Jin H. Paik, Jenny Hoffman, and Steven Randazzo.

While there are dispersed resources to learn more about artificial intelligence, there remains a need to cultivate a community of practitioners for recurring exposure to and sharing of best practices in the enterprise. That is why the Laboratory for Innovation Science at Harvard launched the AI in the Enterprise series, which exposes managers and executives to interesting applications of AI and the decisions behind developing such tools.

Moderated by Karim R. Lakhani, HBS Professor and co-author of Competing in the Age of AI, the July virtual session featured Peter Skomoroch of DataWrangling, formerly of LinkedIn. Together, they discussed what differentiates AI product management from managing other tech products and how to adapt to uncertainty in the AI product lifecycle.

Misha Teplitskiy, Hardeep Ranu, Gary Gray, Michael Menietti, Eva Guinan, and Karim Lakhani. Working Paper. “Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review.” HBS Working Paper Series.
Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other’s opinions and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts’ susceptibility to the opinions of others. We recruited 277 faculty members at seven U.S. medical schools to evaluate 47 early stage research proposals in biomedicine. In our experiment, evaluators (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous “other reviewers” from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited “superstar” reviewers updated 24% less often than others. Women in male-dominated subfields were particularly likely to update, updating 8% more for every 10% decrease in subfield representation. Very low scores were particularly “sticky” and seldom updated upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can lead to substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.
Olivia S. Jung, Fahima Begum, Andrea Dorbu, Sara J. Singer, and Patricia Satterstrom. 7/17/2023. “Ideas from the Frontline: Improvement Opportunities in Federally Qualified Health Centers.” Journal of General Internal Medicine.

Background

Engaging frontline clinicians and staff in quality improvement is a promising bottom-up approach to transforming primary care practices. This may be especially true in federally qualified health centers (FQHCs) and similar safety-net settings where large-scale, top-down transformation efforts are often associated with declining worker morale and increasing burnout. Innovation contests, which decentralize problem-solving, can be used to involve frontline workers in idea generation and selection.

Objective

We aimed to describe the ideas that frontline clinicians and staff suggested via organizational innovation contests in a national sample of 54 FQHCs.

Interventions

Innovation contests solicited ideas for improving care from all frontline workers—regardless of professional expertise, job title, and organizational tenure, and excluding those in senior management—and offered opportunities to vote on ideas.

Participants

A total of 1,417 frontline workers across all participating FQHCs generated 2,271 improvement opportunities.

Approaches

We performed a content analysis and organized the ideas into codes (e.g., standardization, workplace perks, new service, staff relationships, community development) and categories (e.g., operations, employees, patients).

Key Results

Ideas from frontline workers in participating FQHCs called attention to standardization (n = 386, 17%), staffing (n = 244, 11%), patient experience (n = 223, 10%), staff training (n = 145, 6%), workplace perks (n = 142, 6%), compensation (n = 101, 5%), new service (n = 92, 4%), management-staff relationships (n = 82, 4%), and others. Voting results suggested that staffing resources, standardization, and patient communication were key issues among workers.

Conclusions

Innovation contests generated numerous ideas for improvement from the frontline. It is likely that the issues described in this study have become even more salient today, as the COVID-19 pandemic has had devastating impacts on work environments and health/social needs of patients living in low-resourced communities. Continued work is needed to promote learning and information exchange about opportunities to improve and transform practices between policymakers, managers, and providers and staff at the frontlines.

Karim R. Lakhani, Yael Grushka-Cockayne, Jin H. Paik, and Steven Randazzo. 10/2021. “Customer-Centric Design with Artificial Intelligence: Commonwealth Bank.”
As Commonwealth Bank (CommBank) CEO Matt Comyn delivered the full financial year results in August 2021 over videoconference, it took less than two minutes for him to make his first mention of the organization's Customer Engagement Engine (CEE), the AI-driven customer experience platform. With full cross-channel integration, CEE operated using 450 machine learning models that learned from a total of 157 billion data points. Against the backdrop of a once-in-a-century global pandemic, CEE had helped the Group deliver a strong financial performance while also supporting customers with assistance packages designed in response to the coronavirus outbreak. Six years earlier, in 2015, financial services were embarking on a transformation driven by the increased availability and standardization of data and artificial intelligence (AI). Speed, access, and price, once key differentiators for attracting and retaining customers, had been commoditized by AI, and new differentiators such as customization and enhanced interactions were expected. Seeking to create value for customers through an efficient, data-driven practice, CommBank leveraged existing channels of operations. Angus Sullivan, Group Executive of Retail Banking, remarked, "How do we, over thousands of interactions, try and generate the same outcomes as from a really in-depth, one-to-one conversation?" The leadership team began to make key investments in data and infrastructure. While some headway had been made, newly appointed Chief Data and Analytics Officer, Andrew McMullan, was brought in to catalyze the process and progress of the leadership's vision for a new customer experience. Success would depend on continued drive from leadership, buy-in from frontline staff, and a reliable team of passionate and knowledgeable data professionals. How did Comyn and McMullan bring their vision to life: to deliver better outcomes through a new approach to customer-centricity? How did they overcome internal resistance, data sharing barriers, and requirements for technical capabilities?
Karim R. Lakhani, Anne-Laure Fayard, Manos Gkeredakis, and Jin Hyun Paik. 10/5/2020. “OpenIDEO (B).”
In the midst of 2020, as the coronavirus pandemic was unfolding, OpenIDEO - an online open innovation platform focused on design-driven solutions to social issues - rapidly launched a new challenge to improve access to health information, empower communities to stay safe during the COVID-19 crisis, and inspire global leaders to communicate effectively. OpenIDEO was particularly suited to challenges which required cross-system or sector-wide collaboration due to its focus on social impact and ecosystem design, but its leadership pondered how they could continue to improve virtual collaboration and to share their insights from nearly a decade of running online challenges. Conceived as an exercise of disruptive digital innovation, OpenIDEO successfully created a strong open innovation community, but how could they sustain - or even improve - their support to community members and increase the social impact of their online challenges in the coming years?
Hannah Mayer. 10/2020. “Data Science is the New Accounting.” Edited by Jin H. Paik and Jenny Hoffman.

In the October session of the AI in the Enterprise series, Karim R. Lakhani, HBS Professor and co-author of Competing in the Age of AI, and data science advisor Roger Magoulas delved into O'Reilly's most recent survey of AI adoption in larger companies. The discussion explored common risk factors, techniques, and tools, as well as the data governance and data conditioning that large companies are using to build and scale their AI practices.


Read Hannah Mayer's recap of the event to learn more about what senior managers in enterprises need to know about AI, particularly if they want to adopt it at scale.

