Challenge 2 – Being Explicit, Quantitative and Scientific
Once we decide to value probable foresight, we need to practice it well. Here are three common ways we can fail in that work: we can fail to be sufficiently explicit in our assumptions and models, we can fail to look for quantitative data that could help improve our models, and we can fail to take a sufficiently scientific approach to our work. All three of these strategies work well together, so let’s discuss them now, in reverse order. Recall the Five Steps of the Scientific Method:
- Make some Observations, preferably including quantitative data
- Formulate a Hypothesis (causal or correlative model)
- Generate a Prediction from that Hypothesis
- Create an Experiment to test the Prediction
- Analyze your Results. If your Hypothesis isn’t sufficiently Predictive, return to Step 1. If it is, say “Eureka,” celebrate, throw a party, etc.
These Five Steps form the acronym OHPER. On the TV series Stargate, Ohper was a 400-plus-year-old wise man with a deep understanding of how ecosystems and biology work. He thought humanity, which is still learning what biology is, had a lot of growing up to do. Remember Ohper and his crazy hair, and you’ll easily remember the steps of the scientific method and be able to use them consciously in your work and life.
Many futurists enjoy telling entertaining stories. Some are masters of using words to influence others. But without also taking a quantitative and scientific approach, our words remain subjective and our arguments rarely generate hypotheses that we can test. Some futurists who ignore quantitative and scientific approaches even think the world of words, argument, and qualitative descriptions of the future is somehow superior to attempts to count, model, predict, and test our predictions.
That attitude is a particularly common problem among postmodernists, who think the scientific approach is just one model, one worldview. They even have a word they like to misuse, “scientism,” to argue that we can overapply the scientific approach. We can’t. The appropriate definition of scientism is the championing of the scientific method in a way that reduces or denies the use of creative, intuitive, and normative approaches to foresight and action. The Three Ps model reminds us that all three approaches are fundamental to understanding and adaptation.
We foresighters do ourselves no good by stating things like “the purpose of forecasting is not to get the future ‘right’,” found on p. 127 of Bishop and Hines’ Thinking About the Future (2013), an otherwise excellent guide to foresight practice. In reality, getting the probable aspects of the future right, in a quantitative and probabilistic way, is one of the three main types of foresight work, and the central motivation of both forecasting and prediction. We must recognize and help those in our profession who seek to discover the quantitatively predictable aspects of the future, at whatever level of granularity. We must welcome all those who seek to bring scientific knowledge and methods, formally and informally, to bear on our foresight problems. These kinds of individuals are as critical to the field as those who are interested in creatively exploring the possibility space, and in finding preferred futures.
Using explicit models and assumptions, seeking out quantitative data, and offering predictive and probabilistic answers, even crudely derived ones, brings an accountability to our work that is rarely seen with qualitative approaches. Ideally, your predictions will express a confidence interval, a predicted range of future results, and they will be based on some hypothetical model, however simple, of the causal factors involved, and of the experiments (tests) and evidence or outcomes (results) that might improve or change your predictions.
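To make this concrete, here is a minimal sketch of what an explicit, quantitative prediction might look like. The data series and the linear-growth hypothesis are entirely hypothetical, invented for illustration; the point is only that the model, the forecast, and the range are all written down where they can be critiqued.

```python
import statistics

# Hypothetical observations: five years of made-up adoption figures
# (e.g., percent adoption of some technology). Illustrative only.
years = [2018, 2019, 2020, 2021, 2022]
values = [10.0, 12.1, 13.9, 16.2, 18.0]

# Explicit hypothesis (causal model): adoption grows roughly linearly.
# Fit a least-squares line by hand (slope and intercept).
n = len(years)
mean_x = sum(years) / n
mean_y = sum(values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Prediction for 2025, with a crude range derived from residual spread.
residuals = [y - (slope * x + intercept) for x, y in zip(years, values)]
spread = statistics.stdev(residuals)
forecast = slope * 2025 + intercept
low, high = forecast - 2 * spread, forecast + 2 * spread  # rough ~95% band

print(f"2025 forecast: {forecast:.1f} (range {low:.1f} to {high:.1f})")
```

A forecast stated this way is testable: when 2025 arrives, the prediction either falls inside the stated range or it doesn’t, and either outcome tells us something about the underlying model.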
An explicit, quantitative, and scientific approach is also critical to building an evidence-based consensus for action, as the phrase “care, count, and act” in social activism attests. First we decide to care about a problem (make an assumption), then we try to count how bad it is, then we report that data and seek to generate more effective action. We can also make specific and quantitative predictions, and publicly review those predictions annually.
The Economist does an annual prediction analysis in its The World in [Year] publication, dedicating a page to reviewing the prior year’s predictions and exposing its biggest wins and losses. Such accountability helps expose our hidden biases and poor mental models, keeps us honest, and tends to make our predictions more qualified over time, and more conservative as well in most ways (though not, as we’ll see, in those special areas most impacted by accelerating change).
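One standard way to score such an annual review of probabilistic predictions is the Brier score: the mean squared difference between the probability you stated and the 0/1 outcome. Lower is better, and always answering 0.5 (a coin flip) scores 0.25. The predictions below are made-up examples, not claims from the source:

```python
# Hypothetical annual prediction review, scored with the Brier score.
# Each entry: (claim, probability assigned, did it actually happen?)
predictions = [
    ("EdTech startup funding grows year over year", 0.70, True),
    ("A major MOOC platform is acquired",           0.40, False),
    ("Competency-based hiring becomes mainstream",  0.20, False),
]

def brier_score(preds):
    """Mean squared error between stated probability and 0/1 outcome."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for _, p, happened in preds) / len(preds)

score = brier_score(predictions)
print(f"Brier score: {score:.3f}")  # lower is better; 0.25 = coin-flipping
```

Tracking this score year over year is a simple way to see whether your mental models are actually improving, in the spirit of The Economist’s annual self-review.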
As we’ll see throughout the Guide, one common mistake in foresight is to mistake a clear view for a short distance. We imagine that certain aspects of our future will arrive much sooner than they actually will. This happens because we make faulty assumptions or models about how easy the necessary R&D will be, how easy it will be to commercialize, how soon and how many others will arrive at the vision we see, and how far others want to move science and tech, business and society toward that vision. Exposing our underlying assumptions and models, observing how they differ from those used by others, and asking how we might test them is often the best way to avoid this “clear view” foresight mistake.
Being Explicit in Our Assumptions and Models – An Educational Foresight Example
Let’s look briefly now at the challenge of being explicit with our assumptions and models. Consider the following educational foresight example. The book School’s Out (1992), by the policy analyst Lewis J. Perelman, is a generally excellent work of 20th century predictive and normative foresight. See his 1993 Wired article, also called School’s Out, for a synopsis of the book. In both his book and article, Perelman offers a prescient and articulate vision for lifelong digital and AI-enhanced education, continual microcertifications, and competency-based hiring. Though it was written at the dawn of the web, it broadly anticipated the continued advance of technological unemployment, the rise of the gig economy, and progress in educational software, MOOCs, certifications, adaptive learning, and many other features of our modern EdTech startup landscape. It is still one of the best aspirational visions I’ve seen for the future of our educational industries over the next fifty years. But just because these are noble aspirations doesn’t mean we’ll see them materialize as fast or as fully as we might like.
As a minor mistaken assumption, Perelman assumed that companies would move to competency-based testing and hiring far more rapidly than they have. He also assumed EdTech companies would be much more focused on building measurable, predictive, prioritized, and learning-optimized platforms than they have been to date. I think his end vision in each case was correct, but his timing for each was far too optimistic. Improving educational software and solutions is technically harder than he imagined, and there has also been less motivation to build efficient learning systems than he imagined. Thus his vision of digital education has been much slower to emerge than anyone who cares about individual performance, personal responsibility, and competitiveness would have liked.
But Perelman’s most significant mistaken assumptions are political, in my view. Perelman is a libertarian, which isn’t obvious in his writings at first, and his implicit political views led him to predict that entrepreneurs would increasingly both disrupt and replace our public educational system as digital technologies matured. Nothing of the kind will happen, in my view. He explores how expensive, top-down, change-averse, ineffective, and unadaptive our current educational systems are in our advanced democracies. But showing how bad a system is doesn’t tell us what comes next. America’s liberal democracy long ago made a commitment to public education, and I expect we’ll slowly reform it, not replace it.
Let me offer a different, let’s call it a technoliberal, rather than libertarian, model for the future of education. I’ll list some of my assumptions explicitly in this model, so they can be more easily critiqued as well. Some counter assumptions I’d offer are the following:
- Science and technology will continue to accelerate, assuming no extinction-level events. This in turn will cause accelerating wealth production in our industrial democracies.
- Libertarian values (greater individual freedom, competition, shrinking the state) will never be a political majority in any industrial democracy with growing wealth.
- We’ll get more plutocratic, neoconservative values in government as wealth and tech grows. Those values actually seek to limit competition and freedom, and consolidate power.
- We’ll also get more liberal and egalitarian values in our citizens as wealth and tech grows. Issues of social justice, fairness, and inequality will grow ever more important.
- Issues of personal productivity and competition will grow less important as wealth and tech accelerate, not more. What we’ll care about instead is fair rules and wealth distribution.
- Public education will continue to get most of our education budget in this environment, and the size of the state will continue to grow.
Note that other than Assumption 1, the rest of my assumptions are based on a different set of hypotheses (causal models) than those of Perelman about social and political change in modern democracies as their wealth and technology continue to grow. What we want most in liberal democracies today is more social justice and egalitarianism, not more personal productivity. Ideally, we all want to become more accomplished than our parents, but we also want to do less work than they did, not more, and to do more meaningful work, in more flexible ways. As we’ll discuss in Chapter 8, I expect the average citizen will increasingly use learning machines like Personal AIs, as they advise us on how to live, buy, and vote, to help us create a much more egalitarian world, with much more extensive social safety nets, including universal basic income, in coming decades.
Looking to the future, I think we’ll need more mature AI, digital currencies, and crowd-benefiting collaboration technologies before EdTech begins to greatly improve learning outcomes in large segments of the private sector, in areas like MOOCs, online education, test preparation, and corporate training. We can also expect earlier disruption in small leading communities in defense and intelligence, which are often the first to employ powerful new technologies. But in public education, where the content and priorities will continue to be dominated by large committees and politically connected, change-averse corporations like McGraw Hill, Pearson, and the like, educational change and improved learning outcomes will likely come much more slowly than we’d like. I’d love to be proven wrong in this last prediction, and there will always be local exceptions, so I don’t wish to dissuade social entrepreneurs from entering public education reform. But you should know what you are up against, and understand where disruption will be easier and where it will be harder in coming years. If you want to see where we are in EdTech today, the EdSurge newsletter offers a great overview of the field. It is recommended daily reading for anyone in educational foresight.
In sum, our predictions are always based on our models. We need to expose, critique, and test those models as best we can. Besides keeping up on the literature and news in your field, listing your assumptions as you see them, and asking others to critique them, is a small but important step toward being more scientific. So are quantitative estimates, wherever you can find or make them. Let’s conclude this section with a few questions to ask yourself and your teams: How explicit are you with your assumptions and models? Do you try to find quantitative data and trends to support or contradict them? How often do you and your team use a roughly scientific approach (OHPER) in your foresight work? How often do you generate hypotheses (models), predictions, and confidence intervals? Do you ask what experiments or results might change your predictions?