Contest

Kevin J. Boudreau and Karim R. Lakhani. 2016. “Innovation Experiments: Researching Technical Advance, Knowledge Production, and the Design of Supporting Institutions.” In Innovation Policy and the Economy, 16, Pp. 135-167. Chicago, IL: University of Chicago Press.

This paper discusses several challenges in designing field experiments to better understand how organizational and institutional design shapes innovation outcomes and the production of knowledge. We then describe the field experimental research program carried out by our Crowd Innovation Laboratory at Harvard University to clarify how we have attempted to address these research design challenges. This program has simultaneously solved important practical innovation problems for partner organizations, like NASA and Harvard Medical School (HMS), while contributing research advances, particularly in relation to innovation contests and tournaments. We conclude by highlighting the opportunity for the scholarly community to develop a “science of innovation” that utilizes field experiments as a means to generate knowledge.

Kevin J. Boudreau and Karim R. Lakhani. 2012. “The Confederacy of Heterogeneous Software Organizations and Heterogeneous Developers: Field Experimental Evidence on Sorting and Worker Effort.” In The Rate and Direction of Inventive Activity Revisited, edited by Scott Stern and Josh Lerner. Chicago, IL: University of Chicago Press.

This chapter reports on an actual field experiment that tests for the influence of “sorting” on innovator effort. The focus is on the potential heterogeneity among innovators and whether they prefer a more cooperative versus competitive research environment. The focus of the field experiment is a real-world multiday software coding exercise in which participants are able to express a preference for being sorted into a cooperative or competitive environment—that is, incentives in the cooperative environment are team based, while those in the competitive environment are individualized and depend on relative performance. Half of the participants are indeed sorted on the basis of their preferences, while the other half are assigned to the two modes on a random basis.

Karim R. Lakhani, Wesley M. Cohen, Kynon Ingram, Tushar Kothalkar, Maxim Kuzemchenko, Santosh Malik, Cynthia Meyn, Greta Friar, and Stephanie Healy Pokrywa. 2014. Netflix: Designing the Netflix Prize (A). Harvard Business School Case. Harvard Business School.
In 2006, Reed Hastings, CEO of Netflix, was looking for a way to solve Netflix's customer churn problem. Netflix used Cinematch, its proprietary movie recommendation software, to promote individually determined best-fit movies to customers. Hastings determined that a 10% improvement to the Cinematch algorithm would decrease customer churn and increase annual revenue by up to $89 million. However, traditional options for improving the algorithm, such as hiring and training new employees, were time intensive and costly. Hastings decided to improve Netflix's software by crowdsourcing, and began planning the Netflix Prize, an open contest searching for a 10% improvement on Cinematch. The case examines the dilemmas Hastings faced as he planned the contest, such as whether to use an existing crowdsourcing platform or create his own, what company information to expose, how to protect customer privacy while making internal datasets public, how to allocate IP, and how to manage the crowd.
Karim R. Lakhani, Katja Hutter, Stephanie Healy Pokrywa, and Johann Fuller. 2013. Open Innovation at Siemens. Harvard Business School Case. Harvard Business School.

The case describes Siemens, a worldwide innovator in the Energy, Healthcare, Industry, and Infrastructure & Cities sectors, and its efforts to develop and commercialize new R&D through open innovation, including internal and external crowdsourcing contests. Emphasis is placed on exploring actual open innovation initiatives within Siemens and their outcomes. These include creating internal social- and knowledge-sharing networks and utilizing third-party platforms to host internal and external contests. Industries discussed include energy, green technology, infrastructure and cities, and sustainability. In addition, the importance of fostering a collaborative online environment and protecting intellectual property is explored.

Olivia Jung, Andrea Blasco, and Karim R. Lakhani. 2017. “Perceived Organizational Support For Learning and Contribution to Improvement by Frontline Staff.” Academy of Management Proceedings, 2017, 1.

Utilizing suggestions from clinicians and administrative staff is associated with process and quality improvement, an organizational climate that promotes patient safety, and added capacity for learning. However, realizing improvement through innovative ideas from staff depends on their ability and decision to contribute. We hypothesized that staff perception of whether the organization promotes learning is positively associated with their likelihood of engaging in problem solving and speaking up. We conducted our study in a cardiology unit in an academic hospital that hosted an ideation contest that solicited frontline staff to suggest ideas to resolve issues encountered at work. Our primary dependent variable was staff participation in ideation. The independent variables measuring perception of support for learning were collected using the validated 27-item Learning Organization Survey (LOS). To examine the relationships between these variables, we used analysis of variance, logistic regression, and predicted probabilities. We also interviewed 16 contest participants to explain our quantitative results. The study sample consisted of the 30% of cardiology unit staff (n=354) who completed the LOS. In total, 72 staff submitted 138 ideas, addressing a range of issues including patient experience, cost of care, workflow, utilization, and access. Figuring out the cost of procedures in the catheterization laboratory and creating a smartphone application that helps patients navigate appointments and connect with providers were two of the ideas that won the most votes and funding to be implemented in the following year. Participation in ideation was positively associated with staff perception of a supportive learning environment. For example, a one standard deviation increase in perceived welcome for differences in opinions was associated with a 43% increase in the odds of participating in ideation (OR=1.43, p=0.04) and a 55% increase in the odds of suggesting more than one idea (OR=1.55, p=0.09). Experimentation, a practice that supports learning, was negatively associated with ideation (OR=0.36, p=0.02), and leadership that reinforces learning was not associated with ideation. The perception that new ideas are not sufficiently considered or experimented with may have motivated staff to participate, as the ideation contest enables experimentation and learning. Interviews with ideation participants revealed that the contest enabled systematic bottom-up contribution to quality improvement, promoted a sense of community, facilitated organizational exchange of ideas, and spread a problem-solving-oriented mindset. Enabling frontline staff to feel that their ideas are welcome and that making mistakes is permissible may increase their likelihood of engaging in problem solving and speaking up, contributing to organizational improvement.
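As a quick illustration of how odds ratios like those reported above translate into changes in predicted participation probability, the short sketch below applies them to a baseline participation rate. The 20% baseline and the shift_probability helper are assumptions for illustration only, not figures or code from the study.

```python
# Hypothetical illustration (not the study's data or code): how a logistic
# regression odds ratio, e.g. OR = 1.43 per one-standard-deviation increase
# in a perceived-support measure, changes the predicted probability of
# participating, starting from an assumed 20% baseline.

def shift_probability(baseline_p: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability; return the new probability."""
    baseline_odds = baseline_p / (1.0 - baseline_p)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1.0 + new_odds)

baseline = 0.20  # assumed baseline participation rate (illustrative only)
for label, oratio in [("participation (OR=1.43)", 1.43),
                      ("more than one idea (OR=1.55)", 1.55),
                      ("experimentation (OR=0.36)", 0.36)]:
    print(f"{label}: {baseline:.0%} baseline -> {shift_probability(baseline, oratio):.1%}")
```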

Karim R. Lakhani, Kevin J. Boudreau, Po-Ru Loh, Lars Backstrom, Carliss Y. Baldwin, Eric Lonstein, Mike Lydon, Alan MacCormack, Ramy A. Arnaout, and Eva C. Guinan. 2013. “Prize-based Contests Can Provide Solutions to Computational Biology Problems.” Nature Biotechnology, 31, 2, Pp. 108-111.

In summary, we show that a prize-based contest on a commercial platform can effectively recruit skilled individuals to apply their knowledge to a big-data biomedical problem. Deconstruction and transformation of problems for a heterogeneous solver community, coupled with adequate data to produce and validate results, can support solution diversity and minimize the risk of sub-optimal solutions that may arise from limited searches. In addition to the benefits of generating new knowledge, this strategy may be particularly useful in situations where the computational or algorithmic problem, or potentially any science problem, represents a barrier to rapid progress but where finding the solution is not itself the major thrust of the investigator’s scientific effort. The America COMPETES Act passed by the US Congress provides funding agencies with the authority to administer their own prize-based contests and paves the way for establishing how grant recipients might access commercial prize platforms to accelerate their own research.

Karim R. Lakhani. 2015. Innovating with the Crowd. Harvard Business School Case. Harvard Business School.

This note outlines the structure and content of a seven-session module designed to introduce students to the fundamentals of innovating with the "crowd." The module has been taught in a second-year elective course at the Harvard Business School on "Digital Innovation and Transformation" and is aimed at students who already have an understanding of how to structure an innovation process inside a company. The module expands the students' innovation toolkit by exposing them to the theory and practice of extending the innovation process to external participants.

Andrea Blasco, Olivia S. Jung, Karim R. Lakhani, and Michael E. Menietti. 2019. “Incentives for Public Goods Inside Organizations: Field Experimental Evidence.” Journal of Economic Behavior & Organization, 160, Pp. 214-229.

We report results of a natural field experiment conducted at a medical organization that sought contribution of public goods (i.e., projects for organizational improvement) from its 1200 employees. Offering a prize for winning submissions boosted participation by 85 percent without affecting the quality of the submissions. The effect was consistent across gender and job type. We posit that the allure of a prize, in combination with mission-oriented preferences, drove participation. Using a simple model, we estimate that these preferences explain about a third of the magnitude of the effect. We also find that these results were sensitive to the solicited person’s gender.

Kevin J. Boudreau, Karim R. Lakhani, and Michael Menietti. 2016. “Performance Responses to Competition Across Skill-Levels in Rank Order Tournaments: Field Evidence and Implications for Tournament Design.” The RAND Journal of Economics, 47, 1, Pp. 140-165.

Tournaments are widely used in the economy to organize production and innovation. We study individual data on 2775 contestants in 755 software algorithm development contests with random assignment. The performance response to added contestants varies nonmonotonically across contestants of different abilities, precisely conforming to theoretical predictions. Most participants respond negatively, whereas the highest-skilled contestants respond positively. In counterfactual simulations, we interpret a number of tournament design policies (number of competitors, prize allocation and structure, number of divisions, open entry) and assess their effectiveness in shaping optimal tournament outcomes for a designer.

Kevin J. Boudreau, Nicola Lacetera, and Karim R. Lakhani. 2011. “Incentives and Problem Uncertainty in Innovation Contests: An Empirical Analysis.” Management Science, 57, 5, Pp. 843-863.

Contests are a historically important and increasingly popular mechanism for encouraging innovation. A central concern in designing innovation contests is how many competitors to admit. Using a unique data set of 9,661 software contests, we provide evidence of two coexisting and opposing forces that operate when the number of competitors increases. Greater rivalry reduces the incentives of all competitors in a contest to exert effort and make investments. At the same time, adding competitors increases the likelihood that at least one competitor will find an extreme-value solution. We show that the effort-reducing effect of greater rivalry dominates for less uncertain problems, whereas the effect on the extreme value prevails for more uncertain problems. Adding competitors thus systematically increases overall contest performance for high-uncertainty problems. We also find that higher uncertainty reduces the negative effect of added competitors on incentives. Thus, uncertainty and the nature of the problem should be explicitly considered in the design of innovation tournaments. We explore the implications of our findings for the theory and practice of innovation contests.
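The interplay of the two opposing forces described above can be made concrete with a small simulation. The sketch below is a stylized illustration, not the paper's model or data: it assumes individual effort shrinks as 1/N with the number of competitors, treats problem uncertainty as Gaussian noise around each competitor's output, and uses arbitrary noise scales (0.05 and 1.0) and trial counts chosen only to make the trade-off visible.

```python
# Stylized simulation (illustrative assumptions, not the paper's model):
# adding competitors lowers each contestant's effort, but raises the
# expected maximum of the noisy outcomes, so the net effect on the best
# submission depends on how uncertain the problem is.
import random

def expected_best(n_competitors: int, uncertainty: float,
                  trials: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        effort = 1.0 / n_competitors  # assumed effort-reducing effect of rivalry
        outcomes = [effort + rng.gauss(0.0, uncertainty) for _ in range(n_competitors)]
        total += max(outcomes)        # the contest keeps only the best submission
    return total / trials

for sigma in (0.05, 1.0):             # low- vs. high-uncertainty problems
    best = {n: expected_best(n, sigma) for n in (2, 5, 10, 20)}
    print(f"uncertainty={sigma}: " +
          ", ".join(f"N={n}: {v:.3f}" for n, v in best.items()))
```

Under these assumptions, the expected best outcome falls with the number of competitors when uncertainty is low and rises with it when uncertainty is high, mirroring the finding that adding competitors pays off mainly for high-uncertainty problems.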

Karim R. Lakhani, Eric Lonstein, and Stephanie Pokrywa. 2011. TopCoder (B). Harvard Business School Case Supplement. Harvard Business School.

Metrology plays a key role in the manufacture of mechanical components. Traditionally, it is used extensively in a pre-process stage, where a manufacturer does process planning, design, and ramp-up, and in post-process off-line inspection to establish proof of quality. The area seeing significant growth is the in-process stage of volume manufacturing, where feedback control can help ensure that parts are made to specification. The Industrial Metrology Group at Carl Zeiss AG had its traditional strength in high-precision coordinate measuring machines, a universal measuring tool that had been widely used since its introduction in the mid-1970s. The market faced a complex diversification of competition as metrology manufacturers introduced new sensor and measurement technologies, and as some of their customers moved towards a different style of measurement mandating speed and integration with production systems. The case discusses the threat of new in-line metrology systems to the core business as well as the new opportunities that arise.

Karim R. Lakhani, David A. Garvin, and Eric Lonstein. 2010. TopCoder (A): Developing Software through Crowdsourcing. Harvard Business School Case. Harvard Business School.

The case presents TopCoder's crowdsourcing-based business model, in which software is developed through online tournaments. It highlights how TopCoder has created a unique two-sided innovation platform consisting of a global community of over 225,000 developers who compete to write software modules for its more than 40 clients, and provides details of how complex software is developed through ongoing online competitions. By outlining the company's evolution, the case illustrates the challenges of building a community and refining a web-based competition platform. Experiences and perspectives from TopCoder community members and clients help show what it means to work from within or in cooperation with an online community. The case also discusses the use of distributed innovation and its potential merits as a corporate problem-solving mechanism, and explores issues related to TopCoder's scalability, profitability, and growth.

Andrea Blasco, Olivia S. Jung, Karim R. Lakhani, and Michael Menietti. 2016. Motivating Effort in Contributing to Public Goods Inside Organizations: Field Experimental Evidence. National Bureau of Economic Research.

We investigate the factors driving workers’ decisions to generate public goods inside an organization through a randomized solicitation of workplace improvement proposals in a medical center with 1200 employees. We find that pecuniary incentives, such as winning a prize, generate a threefold increase in participation compared to non-pecuniary incentives alone, such as prestige or recognition. Participation is also increased by a solicitation appealing to improving the workplace. However, emphasizing the patient mission of the organization led to countervailing effects on participation. Overall, these results are consistent with workers having multiple underlying motivations to contribute to public goods inside the organization: a combination of pecuniary incentives and altruistic incentives associated with the mission of the organization.

Kevin J. Boudreau and Karim R. Lakhani. 2013. “Using the Crowd as an Innovation Partner.” Harvard Business Review, 91, 4, Pp. 61-69.

From Apple to Merck to Wikipedia, more and more organizations are turning to crowds for help in solving their most vexing innovation and research questions, but managers remain understandably cautious. It seems risky and even unnatural to push problems out to vast groups of strangers distributed around the world, particularly for companies built on a history of internal innovation. How can intellectual property be protected? How can a crowdsourced solution be integrated into corporate operations? What about the costs? These concerns are all reasonable, the authors write, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. After a decade of study, they have identified when crowds tend to outperform internal organizations (or not). They outline four ways to tap into crowd-powered problem solving — contests, collaborative communities, complementors, and labor markets — and offer a system for picking the best one in a given situation. Contests, for example, are suited to highly challenging technical, analytical, and scientific problems; design problems; and creative or aesthetic projects. They are akin to running a series of independent experiments that generate multiple solutions—and if those solutions cluster at some extreme, a company can gain insight into where a problem’s “technical frontier” lies. (Internal R&D may generate far less information.)

Karim R. Lakhani, Wesley M. Cohen, Kynon Ingram, and Tushar Kothalkar. 2014. Netflix: Designing the Netflix Prize (B). Harvard Business School Case Supplement. Harvard Business School.

This supplemental case follows up on the Netflix Prize contest described in Netflix: Designing the Netflix Prize (A). In the A case, Netflix CEO Reed Hastings must decide how to organize a crowdsourcing contest to improve the algorithms for Netflix's movie recommendation software. The B case follows the contest from the building of the platform in 2006 to the awarding of the grand prize in 2009. The B case also considers the aftermath of the contest and the issues of successfully implementing a winning idea from a contest.
