This paper discusses several challenges in designing field experiments to better understand how organizational and institutional design shapes innovation outcomes and the production of knowledge. We then describe the field experimental research program carried out by our Crowd Innovation Laboratory at Harvard University to clarify how we have attempted to address these research design challenges. This program has simultaneously solved important practical innovation problems for partner organizations, such as NASA and Harvard Medical School (HMS), while contributing research advances, particularly in relation to innovation contests and tournaments. We conclude by highlighting the opportunity for the scholarly community to develop a “science of innovation” that utilizes field experiments as a means to generate knowledge.
This chapter reports on a field experiment that tests for the influence of “sorting” on innovator effort, focusing on potential heterogeneity among innovators and whether they prefer a more cooperative or competitive research environment. The experiment centers on a real-world multiday software coding exercise in which participants are able to express a preference for being sorted into a cooperative or competitive environment—that is, incentives in the cooperative environment are team based, while those in the competitive environment are individualized and depend on relative performance. Half of the participants are indeed sorted on the basis of their preferences, while the other half are assigned to the two modes at random.
The case describes Siemens, a worldwide innovator in the Energy, Healthcare, Industry, and Infrastructure & Cities sectors, and its efforts to develop and commercialize new R&D through open innovation, including internal and external crowdsourcing contests. Emphasis is placed on exploring actual open innovation initiatives within Siemens and their outcomes. These include creating internal social- and knowledge-sharing networks and utilizing third-party platforms to host internal and external contests. Industries discussed include energy, green technology, infrastructure and cities, and sustainability. In addition, the importance of fostering a collaborative online environment and protecting intellectual property is explored.
Utilizing suggestions from clinicians and administrative staff is associated with process and quality improvement, an organizational climate that promotes patient safety, and added capacity for learning. However, realizing improvement through innovative ideas from staff depends on their ability and decision to contribute. We hypothesized that staff perception of whether the organization promotes learning is positively associated with their likelihood to engage in problem solving and speaking up. We conducted our study in a cardiology unit in an academic hospital that hosted an ideation contest soliciting frontline staff to suggest ideas to resolve issues encountered at work. Our primary dependent variable was staff participation in ideation. The independent variables measuring perception of support for learning were collected using the validated 27-item Learning Organization Survey (LOS). To examine the relationships between these variables, we used analysis of variance, logistic regression, and predicted probabilities. We also interviewed 16 contest participants to explain our quantitative results. The study sample consisted of the 30% of cardiology unit staff (n=354) who completed the LOS. In total, 72 staff submitted 138 ideas addressing a range of issues, including patient experience, cost of care, workflow, utilization, and access. Determining the cost of procedures in the catheterization laboratory and creating a smartphone application that helps patients navigate appointments and connect with providers were two of the ideas that won the most votes and received funding for implementation the following year. Participation in ideation was positively associated with staff perception of a supportive learning environment.
For example, a one-standard-deviation increase in perceived welcome for differences in opinion was associated with a 43% increase in the odds of participating in ideation (OR=1.43, p=0.04) and a 55% increase in the odds of suggesting more than one idea (OR=1.55, p=0.09). Experimentation, a practice that supports learning, was negatively associated with ideation (OR=0.36, p=0.02), and leadership that reinforces learning was not associated with ideation. The perception that new ideas are not sufficiently considered or experimented with could have motivated staff to participate, as the ideation contest enables experimentation and learning. Interviews with ideation participants revealed that the contest enabled systematic bottom-up contribution to quality improvement, promoted a sense of community, facilitated organizational exchange of ideas, and spread a problem-solving-oriented mindset. Enabling frontline staff to feel that their ideas are welcome and that making mistakes is permissible may increase their likelihood of engaging in problem solving and speaking up, contributing to organizational improvement.
In summary, we show that a prize-based contest on a commercial platform can effectively recruit skilled individuals to apply their knowledge to a big-data biomedical problem. Deconstruction and transformation of problems for a heterogeneous solver community, coupled with adequate data to produce and validate results, can support solution diversity and minimize the risk of sub-optimal solutions that may arise from limited searches. In addition to the benefits of generating new knowledge, this strategy may be particularly useful in situations where the computational or algorithmic problem, or potentially any science problem, represents a barrier to rapid progress but where finding the solution is not itself the major thrust of the investigator’s scientific effort. The America COMPETES Act passed by the US Congress provides funding agencies with the authority to administer their own prize-based contests and paves the way for establishing how grant recipients might access commercial prize platforms to accelerate their own research.
This note outlines the structure and content of a seven-session module designed to introduce students to the fundamentals of innovating with the "crowd." The module has been taught in a second-year elective course at the Harvard Business School on "Digital Innovation and Transformation" and is aimed at students who already have an understanding of how to structure an innovation process inside a company. The module expands the students' innovation toolkit by exposing them to the theory and practice of extending the innovation process to external participants.
We report results of a natural field experiment conducted at a medical organization that solicited contributions of public goods (i.e., projects for organizational improvement) from its 1,200 employees. Offering a prize for winning submissions boosted participation by 85 percent without affecting the quality of the submissions. The effect was consistent across gender and job type. We posit that the allure of a prize, in combination with mission-oriented preferences, drove participation. Using a simple model, we estimate that these preferences explain about a third of the magnitude of the effect. We also find that these results were sensitive to the solicited person’s gender.
Tournaments are widely used in the economy to organize production and innovation. We study individual data on 2,775 contestants in 755 software algorithm development contests with random assignment. The performance response to added contestants varies nonmonotonically across contestants of different abilities, precisely conforming to theoretical predictions. Most participants respond negatively, whereas the highest-skilled contestants respond positively. In counterfactual simulations, we interpret a number of tournament design policies (number of competitors, prize allocation and structure, number of divisions, open entry) and assess their effectiveness in shaping optimal tournament outcomes for a designer.
Contests are a historically important and increasingly popular mechanism for encouraging innovation. A central concern in designing innovation contests is how many competitors to admit. Using a unique data set of 9,661 software contests, we provide evidence of two coexisting and opposing forces that operate when the number of competitors increases. Greater rivalry reduces the incentives of all competitors in a contest to exert effort and make investments. At the same time, adding competitors increases the likelihood that at least one competitor will find an extreme-value solution. We show that the effort-reducing effect of greater rivalry dominates for less uncertain problems, whereas the effect on the extreme value prevails for more uncertain problems. Adding competitors thus systematically increases overall contest performance for high-uncertainty problems. We also find that higher uncertainty reduces the negative effect of added competitors on incentives. Thus, uncertainty and the nature of the problem should be explicitly considered in the design of innovation tournaments. We explore the implications of our findings for the theory and practice of innovation contests.
Metrology plays a key role in the manufacture of mechanical components. Traditionally, it is used extensively in a pre-process stage, where a manufacturer does process planning, design, and ramp-up, and in post-process off-line inspection to establish proof of quality. The area seeing significant growth is the in-process stage of volume manufacturing, where feedback control can help ensure that parts are made to specification. The Industrial Metrology Group at Carl Zeiss AG had its traditional strength in high-precision coordinate measuring machines, a universal measuring tool that had been widely used since its introduction in the mid-1970s. The market faced a complex diversification of competition as metrology manufacturers introduced new sensor and measurement technologies, and as some of their customers moved toward a different style of measurement mandating speed and integration with production systems. The case discusses the threat of new in-line metrology systems to the core business as well as the new opportunities that arise.
TopCoder's crowdsourcing-based business model, in which software is developed through online tournaments, is presented. The case highlights how TopCoder has created a unique two-sided innovation platform consisting of a global community of over 225,000 developers who compete to write software modules for its over 40 clients, and provides details of how complex software is developed through ongoing online competitions. By outlining the company's evolution, the case illustrates the challenges of building a community and refining a web-based competition platform. Experiences and perspectives from TopCoder community members and clients help show what it means to work from within or in cooperation with an online community. The case also discusses the use of distributed innovation and its potential merits as a corporate problem-solving mechanism, and explores issues related to TopCoder's scalability, profitability, and growth.
We investigate the factors driving workers’ decisions to generate public goods inside an organization through a randomized solicitation of workplace improvement proposals in a medical center with 1,200 employees. We find that pecuniary incentives, such as winning a prize, generate a threefold increase in participation compared to non-pecuniary incentives alone, such as prestige or recognition. Participation is also increased by a solicitation appealing to improving the workplace. However, emphasizing the patient mission of the organization led to countervailing effects on participation. Overall, these results are consistent with workers having multiple underlying motivations to contribute to public goods inside the organization, consisting of a combination of pecuniary and altruistic incentives associated with the mission of the organization.
From Apple to Merck to Wikipedia, more and more organizations are turning to crowds for help in solving their most vexing innovation and research questions, but managers remain understandably cautious. It seems risky and even unnatural to push problems out to vast groups of strangers distributed around the world, particularly for companies built on a history of internal innovation. How can intellectual property be protected? How can a crowdsourced solution be integrated into corporate operations? What about the costs? These concerns are all reasonable, the authors write, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. After a decade of study, they have identified when crowds tend to outperform internal organizations (or not). They outline four ways to tap into crowd-powered problem solving — contests, collaborative communities, complementors, and labor markets — and offer a system for picking the best one in a given situation. Contests, for example, are suited to highly challenging technical, analytical, and scientific problems; design problems; and creative or aesthetic projects. They are akin to running a series of independent experiments that generate multiple solutions—and if those solutions cluster at some extreme, a company can gain insight into where a problem’s “technical frontier” lies. (Internal R&D may generate far less information.)
This supplemental case follows up on the Netflix Prize contest described in Netflix: Designing the Netflix Prize (A). In the A case, Netflix CEO Reed Hastings must decide how to organize a crowdsourcing contest to improve the algorithms for Netflix's movie recommendation software. The B case follows the contest from the building of the platform in 2006 to the awarding of the grand prize in 2009. The B case also considers the aftermath of the contest and the issues involved in successfully implementing a winning idea from a contest.