Innovation: Organization & Management
Deadline: 15 September 2020
Experimentation for innovation: What’s now and what’s next
Innovation management is a fast-moving field, particularly in recent times, as “new” frameworks to manage R&D processes, e.g., set-based concurrent engineering, agile, lean start-up, design thinking, have emerged as ways to improve the consistently low performance levels of innovation projects (Markham and Lee, 2013). The different origins of these frameworks (e.g., in systems engineering, software development, product design) partly explain the genesis and evolution of separate research streams investigating their use and effectiveness, with siloed approaches and little cross-fertilization. Yet these frameworks can be seen as diverse “recipes” made of very similar ingredients, i.e., underlying principles of what constitutes sound innovation management in increasingly uncertain and volatile environments: user centeredness and early involvement, rapid iterative process design, front-loaded problem solving, minimal up-front planning, and flexibility to adapt the concept to changing requirements until late in the R&D process (Thomke and Fujimoto, 2000; MacCormack et al., 2001; Terwiesch and Loch, 2004; Bianchi et al., 2018; Pisano, 2019; Felin et al., 2019; Micheli et al., 2019).
Above all, at the heart of these frameworks is a focus on experimentation: on conceptualizing innovation work as the design and execution of a sequence of experiments that push developers to test the merit of their ideas in real settings from the very beginning of a project (Thomke, 1998; Hampel et al., 2019). Naturally, experimentation is not a new “thing” advocated by these frameworks. Even traditional models such as Stage-Gate® and Waterfall included prototyping and testing as key steps, yet the logic was significantly different: experiments were characterized by high fidelity and integration (vs. partial, rough and rapid experiments), served to validate the final design in late phases (vs. to try out multiple alternatives and discover innovation opportunities early in the process), and so were optimized to succeed and launch (vs. optimized to fail and learn) (Garvin, 2000).
As market and technological environments become more turbulent and unpredictable, experimentation represents a decision-making approach superior to analytics and desk research, which explains its growing use by innovation managers and entrepreneurs (Hampel et al., 2019). Active, concrete intervention and selective manipulation of reality through prototypes trigger a realistic user reaction by creating a vivid experience of the future. This rich and frequent feedback from users is a fundamental input that critically supports development decisions as innovation projects unfold. Indeed, today’s champions, e.g., Amazon, Intuit, Procter & Gamble, 3M, eBay, Toyota, attribute their innovation leadership to a strong experimentation orientation. Some of them have even introduced the role of chief experimentation officer! Yet the proficient pursuit of experimentation goes beyond establishing new roles: it requires addressing recurring individual biases (e.g., wishful thinking, confirmation bias, escalation of commitment), managing paradoxical tensions in the organization (e.g., between creativity and discipline), and acting at the level of corporate culture (Pisano, 2019).
We believe that more high-quality innovation management research is needed to advance the discipline of experimentation, both in theory and in its practical application. It is true that evidence exists in the literature about the constituent elements of experiments in innovation processes, i.e., the surfacing of assumptions underlying a novel idea and the formulation of hypotheses; the virtual or concrete representation of the idea (and/or of specific assumptions) through prototypes; the trial of prototypes through tests; and the analysis of test results in relation to hypotheses and the consequent decision making. However, these insights are scattered across multiple fields and research traditions: new product development (MacCormack et al., 2001), marketing (Lynn et al., 1996), design (Liedtka, 2015), entrepreneurship (Kerr et al., 2014), information systems (Petersen and Wohlin, 2010), engineering (Sobek II et al., 1999) and scientific methods (Cook et al., 2002). Confusion arises when scholars in different fields define the same or similar concepts differently or use the same term for different concepts. More clarity and consistency are necessary. In addition, limited research exists on the antecedents and outcomes of experimentation in innovation processes (Hampel et al., 2019).
There is little doubt that the growing trend of experimentation in innovation will continue in the future. Advances in simulation tools and prototyping technologies, e.g., 3D printing, are certainly enabling earlier, cheaper yet higher-fidelity experiments, as well as the transfer of best practices from software development, where the digital nature of the product facilitates experimentation, to the hardware domain (Candi and Beltagui, 2019). But it is much more than that. The fundamental trigger of the wider diffusion of experimentation practice is an evolution in management thinking, e.g., the realization that building the “real thing”, with its associated costs and times, is often unnecessary to stimulate a realistic reaction from users. A façade might be enough to obtain valuable feedback by flash-forwarding the user into a future that does not yet exist. Proponents of this “fake it before you make it” approach have thus introduced the concepts of pretotypes or pretendo-types (Furr and Dyer, 2014), such as web landing pages and design fiction videos, which have generated performance gains but also raised ethical issues (Luca, 2014). Creativity and rigor in designing and running experiments for optimal information generation will be a key skill distinguishing successful innovators from less successful ones, and scholars should support this competence by producing relevant scientific research.
This special issue of Innovation: Organization & Management (IOM) aims to provide a comprehensive and systematic assessment of the current state of the art of experimentation principles, methods and tools in innovation processes. We intend to offer researchers a clear, up-to-date picture of available concepts and instruments, of their relations, of the conditions in which their use is appropriate, and of the antecedents and outcomes of an experimentation approach to innovation. We believe that such an experimentation “palette” can be created only by cutting across various research streams and digging into the nature and potential of specific principles and tools, independently of the management frameworks (e.g., agile or design thinking) with which they are associated.
- Mattia Bianchi: Stockholm School of Economics, Stockholm, Sweden
- Alberto Di Minin: BRIE – U.C. Berkeley, USA & Istituto di Management, Sant’Anna School of Advanced Studies, Pisa, Italy
- Gary Pisano: Harvard Business School, Cambridge (MA), USA
Possible research questions that could be addressed by studies in this special issue are manifold and include, but are not limited to, the following:
- What is the impact of experimentation on performance (both innovative and corporate, short-term and long-term)? A recent study by Camuffo et al. (2019) finds significant benefits from disciplined, experimentation-based decision making in innovation processes, while Felin et al. (2019) and Contigiani et al. (2019) highlight the risks of incrementalism and of imitation, respectively, from running lean start-up experiments. What are the performance trade-offs faced when experimenting (e.g., the cost and time required by an experiment vs. its fidelity; learning vs. revenue) and what are the key criteria to consider when choosing among conflicting objectives? The type of experimentation advocated in lean and agile frameworks, at least in the fuzzy front-end, is called quasi-experimentation, owing to, e.g., the typically low fidelity of rapid prototypes and the non-randomized selection of test subjects. This contrasts with the rigor of scientific experiments, such as those run by pharma companies to test the efficacy and safety of drugs under development, characterized by randomized trials, control groups and blind studies (Cook et al., 2002).
- What is the relation between an experiment’s scientific rigor and its frugality? What is the learning potential, and what are the risks, of running “quick and dirty” experiments?
- Among the different factors and practices used to design, conduct and analyze experiments, which enhance the effectiveness of experimentation on performance and which increase risks? The literature describes several approaches to surface assumptions, e.g., discovery-driven planning and pre-mortems (McGrath and MacMillan, 1995; Kahneman et al., 2011), to formulate hypotheses (Camuffo et al., 2019), to sequence experiments (Thomke and Bell, 2001), to prototype, e.g., minimum viable products, betas, pretendo-types (Eisenmann et al., 2013), to test, e.g., A/B tests, smoke tests, learning launches (Liedtka, 2015), and to measure the success or failure of a test, e.g., using metrics such as levels of interest, activation and return rates, and promotion scores (Furr and Dyer, 2014). However, evidence on their effectiveness, both as separate practices and as elements of a systemic experimentation approach, is scarce. Taxonomies that help categorize the different experimentation techniques and clarify their nature could greatly benefit future research as well as practice.
- How does the governance of new product development projects change as innovation becomes the management of a portfolio of experiments? What is the influence of an experimentation approach on activities such as budgeting, resource allocation, financing and planning? Significantly more knowledge is needed on the practices and infrastructure surrounding the experimentation core of innovation projects.
- What are the contingencies that influence when to use certain experimental approaches and how they affect performance? Existing research identifies contingent factors such as the size and age of the company, e.g., start-ups vs. large established firms; the nature of the innovation being developed, e.g., its digital content and/or service component; the sector, e.g., B2B vs. B2C; the strength of the intellectual property regime; and the nature of the hypothesis to be tested, i.e., whether it concerns the idea’s feasibility, desirability or viability (Hampel et al., 2019; Contigiani et al., 2019). While experimenting with purely digital software products can typically be done in rapid cycles and with little investment, the nature of hardware systems, with many interacting components often belonging to several technical domains, may restrict and/or complicate the adoption of an early experimentation approach and reduce its benefits. While the engineering and marketing literatures offer rich insights on how to run experiments to test technical and market hypotheses, little is known about how to probe business model hypotheses (Blank, 2013; McGrath, 2010), e.g., a new venture’s revenue generation scheme, pricing strategy and value chain design. A contingent approach to experimentation is fundamental and deserves further attention from scholars.
- What are the antecedents that drive and support the adoption of experimentation in innovation? What are the hindrances? These factors originate at the individual (e.g., cognitive traits, behavioral fallacies), team (diversity, incentives) and organizational (psychological safety, tolerance for failure and ambiguity) levels. What managerial practices and organizational designs help establish and develop the drivers and overcome the hindrances? Pisano (2019) argues that an experimentation culture requires balancing seemingly contradictory behaviors, e.g., creativity and discipline, safety and criticism, collaboration and individual accountability, strong but flat leadership. While agile organizations are natural lead users of experimental practices, Cooper and Sommer (2016) show that hybrid Agile-Stage-Gate methods help companies transition from linear, plan-based innovation approaches to adaptive, experimental ones.
- How do experimentation-related research themes link to other currently relevant topics in innovation management research, e.g., open innovation, platforms, business models, crowdfunding?
Above we highlighted some of the relevant questions and challenges that arise when experimenting in innovation processes. The purpose of this special issue is to stimulate and present rigorous studies that focus on these questions and challenges and suggest ways to advance the discipline of experimentation by catalyzing insights from several theoretical traditions and perspectives (e.g., real options, search theory, effectuation, organizational learning, appropriability, creative problem solving, evidence-based management) and methodological approaches. We envision a special issue composed of conceptual and empirical pieces, using the full range of research methods to offer qualitative and/or quantitative evidence. With this special issue, we aim to start building a community of scholars from multiple thematic domains.
If you are interested, please submit your manuscript through the IOM system no later than September 15th, 2020, to be peer reviewed. When completing your submission, please make sure to select the correct special issue track. The paper should reflect the editorial guidelines of Innovation: Organization & Management. You are welcome to contact any or all of the Guest Editors for further information. Authors whose papers receive a revise-and-resubmit decision will be invited to a special workshop organized by the Guest Editors. We anticipate that this issue will be published in IOM sometime in 2022. However, accepted papers will be published online on the IOM website as soon as the review process is completed, and may therefore appear earlier.
Bianchi, M., Marzi, G., & Guerini, M. (2018). Agile, Stage-Gate and their combination: Exploring how they relate to performance in software development. Journal of Business Research. https://doi.org/10.1016/j.jbusres.2018.05.003.
Blank, S. (2013). Why the lean start-up changes everything. Harvard Business Review, 91(5), 63-72.
Camuffo, A., Cordova, A., Gambardella, A., & Spina, C. (2019). Scientific approach to entrepreneurial decision making: Evidence from a randomized control trial. Management Science, https://doi.org/10.1287/mnsc.2018.3249.
Candi, M., & Beltagui, A. (2019). Effective use of 3D printing in the innovation process. Technovation, 80, 63-73.
Contigiani, A. (2018). Experimentation, Learning, and Appropriability in Early-Stage Ventures. Available at SSRN: doi:10.2139/ssrn.3282261.
Cook, T. D., Campbell, D. T., & Shadish, W. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Cooper, R. G., & Sommer, A. F. (2016). The Agile–Stage-Gate hybrid model: A promising new approach and a new research opportunity. Journal of Product Innovation Management, 33(5), 513–526.
Eisenmann, T.R., Ries, E., & Dillard, S. (2013). Hypothesis-driven Entrepreneurship: the Lean Startup. Harvard Business School Case 9-812-095.
Felin, T., Gambardella, A., Stern, S., & Zenger, T. (2019). Lean startup and the business model: Experimentation revisited. Long Range Planning, forthcoming. Available at SSRN: https://ssrn.com/abstract=3427084.
Furr, N. R., & Dyer, J. (2014). The Innovator's Method: Bringing the Lean Start-Up Into Your Organization. Harvard Business Press.
Garvin, D. A. (2000). Learning in action: A guide to putting the learning organization to work. Boston, MA: Harvard Business School Press.
Hampel, C., Perkmann, M., & Phillips, N. (2019). Beyond the lean start-up: Experimentation in corporate entrepreneurship and innovation. Innovation: Organization & Management (in press).
Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make that big decision. Harvard Business Review, 89(6), 50-60.
Kerr, W. R., Nanda, R., & Rhodes-Kropf, M. (2014). Entrepreneurship as experimentation. Journal of Economic Perspectives, 28(3), 25-48.
Liedtka, J. (2015). Perspective: Linking design thinking with innovation outcomes through cognitive bias reduction. Journal of Product Innovation Management, 32(6), 925-938.
Luca, M. (2014). Were OkCupid’s and Facebook’s Experiments Unethical? Harvard Business Review, online edition, available at: https://hbr.org/2014/07/were-okcupids-and-facebooks-experiments-unethical.
Lynn, G. S., Morone, J. G., & Paulson, A. S. (1996). Marketing and discontinuous innovation: the probe and learn process. California Management Review, 38(3), 8-37.
MacCormack, A., Verganti, R., & Iansiti, M. (2001). Developing products on “Internet time”: The anatomy of a flexible development process. Management Science, 47(1), 133-150.
Markham, S. K., & Lee, H. (2013). Product Development and Management Association's 2012 Comparative Performance Assessment Study. Journal of Product Innovation Management, 30(3), 408-429.
McGrath, R. G. (2010). Business models: A discovery driven approach. Long Range Planning, 43(2-3), 247-261.
McGrath, R. G., & MacMillan, I. C. (1995). Discovery driven planning. Harvard Business Review. July-August, 44-54.
Micheli, P., Wilner, S. J., Bhatti, S. H., Mura, M., & Beverland, M. B. (2019). Doing Design Thinking: conceptual review, synthesis, and research agenda. Journal of Product Innovation Management, 36(2), 124-148.
Petersen, K., & Wohlin, C. (2010). The effect of moving from a plan-driven to an incremental software development approach with agile practices. Empirical Software Engineering, 15(6), 654-693.
Pisano, G. P. (2019). The hard truth about innovative cultures. Harvard Business Review, 97(1), 62-71.
Sobek II, D. K., Ward, A. C., & Liker, J. K. (1999). Toyota's principles of set-based concurrent engineering. MIT Sloan Management Review, 40(2), 67-83.
Terwiesch, C., & Loch, C. H. (2004). Collaborative prototyping and the pricing of custom-designed products. Management Science, 50(2), 145-158.
Thomke, S. H. (1998). Managing experimentation in the design of new products. Management Science, 44(6), 743-762.
Thomke, S., & Bell, D. E. (2001). Sequential testing in product development. Management Science, 47(2), 308-323.
Thomke, S., & Fujimoto, T. (2000). The effect of “front‐loading” problem‐solving on product development performance. Journal of Product Innovation Management, 17, 128-142.