An approach to combining analytical model outputs with an expert’s intuitive judgment that proves successful in one forecasting context may very well prove inefficient in another
Today, one of the most common problems businesses face when launching new products is how to reliably predict future success under dynamically evolving market conditions. Even though information technologies are becoming more and more advanced, allowing us to rely on a wide range of decision support tools, single-point predictions about the future always contain an element of uncertainty. For instance, when launching the new multi-touch iPad series in January 2010, Apple announced that up to 5 million units could be sold during the first business year. In comparison, sales estimates provided by analysts from investment bank Oppenheimer turned out to be much more conservative, predicting 1.1 million units for the same period. Finally, BroadPoint AmTech, a broker-dealer subsidiary of Broadpoint Gleacher Securities Group, Inc., forecast sales of 2.2 million iPads during 2010. So, how do we know which figures are right? Apple’s iPad is only one example of the problem of generating accurate forecasts when launching new products. In fact, substantial forecast deviations are quite common, particularly when insufficient contextual knowledge exists – knowledge that could help us understand the causalities underlying a product’s success in a more meaningful way.
While in the common business press the process by which a forecast is generated all too often remains a “black box” to the reader, managerial predictions likely rely on a mixture of a statistical model prediction and an expert judgment, whereby the optimal mix crucially depends on the nature of the task context. While traders in the financial sector can often comfortably rely on complex computational models based on historical time-series data to predict stock market volatility, in other environments managers fare better when trusting their own intuition. For instance, when forecasting box office results for the first week of the recently released 3-D movie Avatar, leading industry experts made astonishingly accurate sales estimates, deviating only slightly from the actual results. When asked about their approach, experts stated that quantitative, historical information was only available to a limited extent, meaning the predictions ultimately came down to what they referred to as their “gut instinct.”
Thus, it seems that an approach to combining analytical model outputs with an expert’s intuitive judgment that proves successful in one forecasting context may very well prove inefficient in another. This is because analytic models and experts have complementary characteristics, in the sense that the strengths of one input compensate for the weaknesses of the other. For example, while experts tend to suffer from decision bias, overconfidence, and organizational politics, models are immune to such pressures and consistently take base rates into account, optimally weighting the available task information. However, models can be rigid and incapable of interpreting new information, whereas experts proficiently rely on highly organized, domain-specific knowledge to subjectively evaluate new variables.
To explore how organizations differ in their approach to combining models and expert judgments, I collected field data from 54 senior executives at 36 leading pharmaceutical, petroleum, defense technology, and entertainment companies. In a series of in-depth interviews, participants described typical instances in which they generated new product forecasts in their professional environment and rated their perceived reliance on analytic versus intuitive information processes. Interestingly, managers across all industries believed their forecasts to be primarily “rational,” in the sense that they relied heavily on statistical model outputs, evaluated critical pieces of information in an objective, systematic manner, and ensured a high degree of process transparency and consistency. At the same time, I detected important differences from one organization to another. In particular, the degree of process rationality did not simply result from uncertainty associated with the general industry context, as commonly suggested by leading strategic management experts. Rather, the degree to which forecasts were perceived to rely on analytical models versus managerial intuition was more affected by two micro-level characteristics inherent in the task itself: ambiguity and complexity. What does this mean? The ability of managers to build and utilize sophisticated forecasting models critically hinges (a) on how complex the task is in terms of the amount and type of data that needs to be processed and (b) on the ambiguity of the task information available prior to making a judgment. Moreover, task ambiguity mediated the negative relationship between complexity and forecasting rationality, so that when information was highly ambiguous, task complexity increased reliance on expert intuition to a far greater extent than when ambiguity was low. This research suggests that expert intuition is believed superior to analytical forecasting models when task information is highly unreliable.
Driven by these initial observations from studying managerial forecasting perceptions, further investigation was required to understand how managers actually combine their intuitive expertise with decision support models. Together with Allègre Hadida, Lecturer at the University of Cambridge, I conducted a comprehensive field study in one specific, highly uncertain industry environment: the music industry. Over a period of 12 weeks, we recorded forecasting judgments from 92 managers at global record companies and 88 graduate students about the Top 100 chart positions of upcoming pop-music singles. Each week, participants were presented with a set of key information about songs scheduled for release during the subsequent week and asked to predict the position at which each song would enter the Top 100 charts. By comparing the performance of both participant groups against a statistical forecasting model, it was possible to analyze the circumstances in which experts utilize their intuition to add predictive power beyond the statistical model. Our findings not only confirmed our initial conjectures with regard to the roles of task complexity and ambiguity, but also allowed us to compute exact weights to be assigned when optimizing model-manager aggregations. Specifically, in low-ambiguity conditions, we calculated an 80:20 split between statistical model and managerial intuition, whereas in high-ambiguity conditions the optimal split shifted to 40:60 if the participant had previously acquired highly specific domain expertise. Of course, while the optimal splits were contingent on our data sample, our study delivered one clear message: task factors substantially impact the degree to which organizations should rely on models versus expert intuition.
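The weighting scheme described above amounts to a simple linear blend of the two forecast inputs. The sketch below illustrates the idea, using the splits reported for our sample (80:20 model-to-expert under low ambiguity, 40:60 under high ambiguity with specific domain expertise); the function name and interface are illustrative, not part of the study, and the weights would need to be re-estimated for any other data set.

```python
def combine_forecast(model_pred: float, expert_pred: float,
                     high_ambiguity: bool, domain_expert: bool) -> float:
    """Linearly blend a statistical model's forecast with an expert's judgment.

    Illustrative weights only: 80:20 (model:expert) under low ambiguity,
    shifting to 40:60 under high ambiguity when the judge has specific
    domain expertise, as estimated in the study's sample.
    """
    if high_ambiguity and domain_expert:
        w_model = 0.4  # high ambiguity + expertise: lean on the expert
    else:
        w_model = 0.8  # low ambiguity: lean on the statistical model
    return w_model * model_pred + (1.0 - w_model) * expert_pred

# Hypothetical example: the model predicts a chart entry at position 30,
# a domain expert predicts position 10, and task information is ambiguous.
print(combine_forecast(30, 10, high_ambiguity=True, domain_expert=True))   # 18.0
print(combine_forecast(30, 10, high_ambiguity=False, domain_expert=True))  # 26.0
```

In practice, the weights would be fit by regressing actual outcomes on both forecast inputs within each ambiguity condition, rather than set by hand.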
With this in mind, it is important to consider the level of ambiguity and complexity of a task when configuring new product forecasting processes. For instance, doing so may help managers in the pharmaceutical industry better weigh statistical model predictions against expert judgments when deciding whether to move a drug to the next development stage or halt development altogether. Similarly, venture capitalists may benefit from these insights when evaluating the success potential of investment opportunities across a variety of industry settings. Ultimately, a better understanding of how to adapt model-manager forecasts to the specific requirements of the task environment will enable us to implement forecasting processes in a more sustainable way, dealing with new product uncertainty in a systematic manner.
[This research paper has been reproduced with permission of the authors, professors of IE Business School, Spain http://www.ie.edu/]