Foresight’s Great Myth – “It is Rarely Worth Doing as It Is So Often Wrong”
Recall that foresight has three core practice types. The first, probable futures, involves forecasting and prediction, and protection from predictable risks (intelligence, risk management). The second involves uncovering possible futures (alternatives, scenarios, experiments, wildcards). The third involves exploring preferable futures (strategies, plans, surveys, visions, agendas, competitive intelligence).
Each of us will be quite wrong at times when we try to explore probable, possible, or preferable futures. For some good histories of prediction and forecasting failures, see Steven Schnaars's Megamistakes (1989), Laura Lee's Bad Predictions (2000), Nassim Nicholas Taleb's Fooled by Randomness (2005), Bob Seidensticker's Future Hype (2006), Adam Gordon's Future Savvy (2008), and Doug Hubbard's The Failure of Risk Management (2008). For more on our poor record of exploring the scope of possible futures, see William Sherden's excellent The Fortune Sellers (1998) and Taleb's The Black Swan (2010). Our very weak ability to uncover and plan usefully for preferable futures is also well documented. See Henry Mintzberg's The Rise and Fall of Strategic Planning (2000), Clay Christensen's The Innovator's Dilemma (2013), and Walter Kiechel's The Lords of Strategy (2010) for three great accounts.
It is alternately amusing, enlightening, and sobering to read such books. Amusing to see how wrong we can be, enlightening to realize how much we can improve, and sobering to realize how much suffering bad prediction and risk management, poor exploration of possibilities, and limited understanding of preferences (our teams', our customers', and our competitors') have caused individuals, organizations, and societies over the centuries.
Then there is the sobering topic of mental bias. There are many ways we humans are fallible in our feeling and thinking. We'll explore a long, though still incomplete, list of emotional-cognitive biases in Chapter 2. Our history of mistakes and biases when thinking about the future has allowed a great myth to arise concerning foresight work.
The myth, which is unfortunately common in some organizations, is that foresight work is rarely worth doing, as it is so often wrong. Let’s call this the “myth of ineffectiveness”. This myth is sometimes voiced as an explicit belief, but more often it is implicitly held, assumed but not discussed, both by many of our clients and by lay observers of our profession.
All three practice types deserve a vigorous defense against the myth of ineffectiveness, and we'll offer one in this Guide. Because human beings fail most often and most glaringly at probable foresight, this is the area of our practice that is most deeply and commonly devalued as a result. But all three practice types are commonly discriminated against due to this myth.
In any complex endeavor, such as exploring the future, there are myriad ways to fail, and often only a few special ways to reliably succeed. This observation has been called the Anna Karenina principle. Leo Tolstoy opens Anna Karenina with the observation that happy families are all alike, while each unhappy family is unhappy "in its own way." In biology, we see this in the (as far as I know unnamed) principle that "most mutations are deleterious (harmful) to the organism." Because evolution has created a complex, interdependent system, there are many ways to fail with random gene changes, and only a few changes that, at any point in time, will further improve adaptiveness.
In foresight practice too, there are a large number of unique ways to fail, and only a few proven and well-balanced ways to reliably succeed. For example, with respect to probable foresight, we can fail by forecasting or predicting without good knowledge or proper models of the system in question. We can fail by straying into fields where we have developed inadequate conceptual expertise. We can fail by predicting from a position of known or unconscious bias. We can fail by neglecting to subject our forecasts and predictions to intense criticism from an appropriately skilled and cognitively diverse crowd. We can fail by forecasting or predicting only infrequently, thus never learning accuracy and conservatism. We'll discuss these and other practice traps throughout this Guide, and in more depth in Chapter 12.
Yet despite all these ways to fail, the modern foresight field has been paid to engage in such efforts for over seventy years, and we've gotten steadily better at them in every domain. Never have there been more specialists with more models, implicit and explicit, doing forecasting and prediction in the world; never has there been a richer environment for exploring possible and preferable futures; and never has more economic value been created by the individuals and organizations who have been best and fastest at seeing, and getting strategic about, what comes next. We'll meet some of these folks in the pages to come.