It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. — Carl Sagan
III. Universal Accelerating Change
Accelerating change is one of the most future-important, pervasive, and puzzling features of our universe. We can see it in the way galaxies have created ever more complex suns, leading to life, and we see it across the nearly four billion year history of life on Earth. Most obviously, we see it in the twenty-thousand year history of human civilization, in ways yet to be fully understood or appreciated by science.
This section will largely be about how acceleration happens, at various levels in our universe and society, and where it generally seems to be taking us, when we look at change and complexity from a universal perspective. We’ll broaden our universal perspective in Chapter 11, when we explore evolutionary development, a way to understand why acceleration exists, and the various processes that regulate it.
We got an introduction to evolutionary development in our last section, Five Global Goals and Fifteen Trends, which gave us some ways to better see and define social progress, and to use acceleration for social good.
The natural processes that drive the world are the domain of universal foresight, our science and philosophy of change. Universal foresight is often neglected in foresight books and education, but it is a domain every future-oriented leader needs to think carefully about today. In the last century, science and systems theory have helped us see some of the ways that our universe itself, using human civilizations as catalysts, here on Earth and presumably in many other places, is evolving and developing certain types of social change at an ever faster pace.
Universal foresight reminds us that whatever our personal or organizational visions of the future may be, there are larger systems of human civilization, life on Earth, and our universe itself that we are embedded within, systems that are busily engaged in going somewhere too, regardless of what we may individually want. If we don’t understand the natural trajectories of the larger and more powerful systems in which we are embedded, we can easily waste our precious time, energy, and lives on strategies that won’t get us as far as they should, because they go against the natural processes of the world around us, either in whole or in part.
Universal foresight is the domain of our world view, the Big Picture models and assumptions we use to understand the world and our ideal role in it. Our world view guides the way we approach the other fundamental foresight domains (Personal, Organizational, and Global foresight), which is why it is such a critical, yet often-neglected element of foresight education. Some of our world view is conscious, but parts of it are also unconscious. We implicitly accept many of the world models we get from our families, our culture, and our religion.
When we talk about those world models, as we will do in this chapter and in Chapter 11, we can come to see a few things about our universe, and ourselves, that weren’t previously obvious. Some of the things we will discuss may change your view of where civilization is going, of why we are here, and of what goals, values, visions, and strategies are universal, and thus worth adopting for yourself, your family, and your organization. As we’ll see, you can use a universal perspective to develop a deeper and more useful understanding of progress itself, and the values and goals that are most likely to generate measurably more progress in our and our children’s lifetimes. That is a pretty big payoff! Hopefully the possibility of that kind of reward will motivate you to think through some of the more abstract ideas, and to adopt a longer and wider view of change than you may have found valuable to date. So let’s begin.
As we’ve said, perhaps the most future-important, surprising, and puzzling universal process we can identify is the phenomenon of accelerating change. As I mention in my 1999 online article, A Brief History of Intellectual Discussion of Accelerating Change, most future thinkers have ignored this phenomenon for decades, but we are now finally beginning to see it, and ask where it is taking us, and why it exists.
A Perception Problem
Most growth processes in living systems are on S-curves. They have periods of exponential growth, followed by periods of declining growth (a negative exponential). Growth eventually levels off (an S-curve) and in some cases, moves steadily or suddenly into decline and recycling, creating a life cycle curve. That kind of curve describes almost all living things, including human beings, which eventually decline and get recycled. For more such curves, see our section, Change Curves and Other Cycle Models in Chapter 4 (Models).
Again, the first phase of an S-curve is exponential. Exponential growth can create problems for us because, as the futurist Ray Kurzweil argued well in books like The Singularity is Near (2005), we are mentally wired to treat exponential change as linear change. This claim is worth substantiating, so we can understand its impact. The first scientist to discover this effect, as far as I can tell, was the psychologist Gustav Fechner. In 1860, he showed that our senses convert geometric progressions in the intensity of sound, sight, and feeling into arithmetic progressions. This is now called Fechner’s law. It is the best evidence I know that humans are mentally wired to sense exponentials as linear change. We also tend to think linearly about all other nonlinear relationships. See the great HBR article, Linear Thinking in a Nonlinear World (2017), for more on that mental limitation of ours.
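Fechner's law is simple enough to state in one line: perceived intensity S = k·log(I/I0), where I0 is the threshold stimulus and k a scaling constant. Here is a minimal sketch in Python, with purely illustrative constants, showing how a geometrically growing stimulus produces only arithmetic growth in perception:

```python
import math

def perceived_intensity(stimulus, threshold=1.0, k=1.0):
    """Fechner's law: perception scales with the logarithm of the
    physical stimulus, so geometric input becomes arithmetic output."""
    return k * math.log(stimulus / threshold)

# Each doubling of the physical stimulus adds the same fixed increment
# (ln 2) to perceived intensity, so an exponential ramp "feels" linear.
increments = [perceived_intensity(2 ** n) - perceived_intensity(2 ** (n - 1))
              for n in range(1, 6)]
print([round(x, 6) for x in increments])
# [0.693147, 0.693147, 0.693147, 0.693147, 0.693147]
```

A stimulus that doubles every step is growing exponentially, yet the perceptual increments are constant, which is exactly the "exponential sensed as linear" effect discussed above.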
Our conversion of exponentials to linearity can easily cause perception and reaction problems for us. As the professor of physics Albert Bartlett famously said, “The greatest shortcoming of the human race is our inability to understand the exponential function.” It is worth watching Bartlett’s 80 minute lecture, Arithmetic, Population, and Energy: Sustainability 101 (2002), which begins with this phrase. In this presentation, a version of which he gave to college students from 1969 onward, he made seriously flawed predictions regarding Peak Oil, as he ignored exponential and power law progress in exploration and recovery technology, and he didn’t seem to recognize that global population growth had already begun a great deceleration (its growth rate in negative exponential decline since the 1960s), except in those countries whose development has been neglected by the wealthy. He also did not seem to recognize long term trends in ethical progress, and how and why people have fewer kids and tolerate less violence as they get more developed. But his basic message, that we don’t think smartly enough about exponentials, is well worth recognizing.
One kind of problem comes when the doubling time of a problem-creating exponential is very fast, as with bacterial or viral doubling. We often won’t see a problem coming, and are mentally unprepared for it. But even when we are mentally prepared, we may also need to be able to act in an exponential or logarithmic fashion, in order to reduce the intensity of the threat and manage its impact. For example, public health surveillance systems can fail when we don’t anticipate rapid exponential growth in pathogens and remove the conditions that allow that growth early on, before it reaches pandemic proportions. But they can also fail if they aren’t able to mount exponential (or logarithmic) responses as the problem scales. Fortunately, within our own bodies, our immune system mounts such responses to exponentially scaling infections, so even if we are mentally unaware how fast a cold is coming on, and aren’t resting up ahead of time as we should, at least our bodies have been selected to respond in the appropriate fashion. Our immune systems have features like redundancy, decentralization and local autonomy, massive parallelism, exponential scalability, and rapid response times that are constant with mass (largely independent of body size, even across species types). Learning how to bring immune-system-like features to our public health, disaster relief, intelligence, security, and defense systems, and to our increasingly intelligent computer systems, is a major area of research. See Melanie Moses’s work on computational and organizational immunology for one example of that exciting yet still very understudied frontier.
Another perception problem happens when an S-curve runs over a very long span of time. We saw this with human population growth in the 20th century, and a variety of population-dependent environmental problems like pollution and deforestation. All of those had been in exponential growth since the start of human civilization, but we didn’t recognize them as a problem until the late 1960s, with the rise of the environmental movement. We thought linearly about those variables, not exponentially. Fortunately for us, we hit an inflection point in global population in the early 1960s, just before environmentalism emerged, as global population growth entered a negative exponential mode. Unfortunately, it took us another forty years after reaching the inflection point of the S-curve to even agree that we’d entered a new regime, and that the total human population would peak in the mid 21st century, then start declining from there. There is always a perception lag with exponentials, whether they are positive (early phase S-curve) or negative (late phase S-curve) in mode.
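The S-curve itself is usually modeled with the logistic function, L/(1 + e^(−r(t−t0))). A short numerical sketch, with illustrative parameters, shows where the perception lag comes from: growth per unit time keeps rising until the inflection point t0, then falls, and no single period's data makes the turn obvious on its own:

```python
import math

def logistic(t, capacity=10.0, rate=1.0, t_mid=0.0):
    """Classic S-curve: near-exponential growth at first, then a
    'negative exponential' slowdown, leveling off at the capacity."""
    return capacity / (1.0 + math.exp(-rate * (t - t_mid)))

# Growth per unit time rises until the inflection point (t_mid),
# then falls -- the regime change only shows up in the differences.
steps = [logistic(t + 1) - logistic(t) for t in range(-5, 5)]
accelerating = all(steps[i] < steps[i + 1] for i in range(4))     # before t_mid
decelerating = all(steps[i] > steps[i + 1] for i in range(5, 9))  # after t_mid
print(accelerating, decelerating)  # True True
```

Global population growth followed just this pattern: increments kept rising until the 1960s inflection, then began shrinking, yet it took decades of shrinking increments before the new regime was widely acknowledged.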
I think the greatest of our perception problems with exponentials comes at the universal scale. We don’t see that certain universal events are happening ever faster, and we don’t ask why that might be. The field of accelerating change has been ignored for decades. Occasionally deep thinkers like Carl Sagan have pointed out that it is a phenomenon we need to address, but there is no funding for it. Government funding agencies seem to shy away from funding this work, as it leads us inevitably to realize that Abundance thinkers like Julian Simon are much more correct about our future than Malthusians like Paul Ehrlich. It can seem more ethically proper for us to ignore accelerating abundance, as there are so many problems that we still need to solve. But as we argue in this Guide, it is those abundance processes that will get us out of our current plight, and advance human progress better than any other strategy available to us.
As we’ll see in this chapter, in certain cases, as with the rate of production of societal information, including scientific knowledge, the performance per resource inputs in our computers, and certain performance trends in nanotechnology, this exponential growth often gets steeper over time, a very curious phenomenon called superexponential growth (J-curves). As we’ll see, the acceleration of nanotech (and its apparently universal trend, densification) and infotech (and its universal trend, dematerialization) are surely on an S-curve as well, but the inflection points for those universal processes are clearly far beyond anything we human beings can do or conceive. They may hit their inflection points only in a much older and more “terminally developed” universe. Moore’s law scaling hit a limit in 2005, but as I’ve written elsewhere, that just opened up the opportunity for massively parallel exponential growth of increasingly bio-inspired computer chips, and the rise of deep learning computers. In the near term, all these accelerating processes are rapidly taking us to a phase change, to a new kind of life and intelligence on Earth. Not recognizing this transition is simply not paying attention to how nature apparently works, on all Earthlike planets in our universe.
For a special set of superexponential processes, then, the future looks something like the curve at right. Looking ahead, we think we’ll continue to see the same kind of upward sloping but low exponential growth we’ve seen in our recent past, but the universe has something else in mind. In these special areas, new discoveries in science and technology, once applied to business and society, are always shifting us to ever faster modes of growth and improvement. Again, each particular technical system is always on an S-curve. Its abilities always top out. But at the same time, special kinds of information, computation, and complexity continually migrate to increasingly powerful and efficient new domains, and the end result is the J-curve we see above.
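One way to sketch this J-curve is as a relay of S-curve front halves: each technology generation grows exponentially, but also hands off to a successor with a faster growth rate, so the envelope across generations is superexponential. A toy Python model, with all rates chosen purely for illustration:

```python
# Toy model of superexponential (J-curve) growth: capability grows
# exponentially within each generation, and each new generation also
# grows faster than the last, so the growth *rate* itself accelerates.
def capability_series(generations=6, steps_per_gen=10,
                      initial=1.0, initial_rate=0.05, rate_boost=1.5):
    level, rate, series = initial, initial_rate, []
    for _ in range(generations):
        for _ in range(steps_per_gen):
            level *= (1.0 + rate)   # ordinary exponential growth...
            series.append(level)
        rate *= rate_boost          # ...handed off to a faster successor
    return series

series = capability_series()
# Each generation multiplies capability by more than the one before it.
ratios = [series[(i + 1) * 10 - 1] / series[i * 10 - 1] for i in range(1, 6)]
print(all(ratios[i] < ratios[i + 1] for i in range(len(ratios) - 1)))  # True
```

Within any one generation the curve is a plain exponential, and in reality each would eventually top out as an S-curve; it is the migration to faster-growing substrates that bends the envelope upward into a J.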
Exponential and Evo Devo Foresight
This and the next chapter will help you develop exponential foresight, the ability to see and improve a special set of natural processes that appear to continuously accelerate. We will focus in this Guide on the five I4S processes: innovation, intelligence, interdependence, immunity, and sustainability production. If we live in an evo devo universe, as we will propose, these five processes seem particularly central to how complex systems adapt and improve. We’ll ask ourselves some questions about these processes, how they appear to drive our future, where they are taking us, and what our relationship to them should be, once we admit that they are pervasive in our environment.
We’ll see why the continued exponential and superexponential growth of these special activities in human civilization isn’t an illusion of human psychology, or a self-serving ideology or belief, as some scholars and futurists still claim today. Rather, it is highly probable that these forms of social and technological acceleration will continue, because they are each being employed by all our various leading complex systems, each in their own unique ways. By paying careful attention to current exponential and evo devo processes, and understanding how they work, we can make better bets about our probable futures, better explore our possible futures, and better chart our preferable futures, over the next few generations.
By the end of this chapter, I hope you’ll feel a lot more “accelaware”, or aware of and motivated to see, use, and lead these accelerating processes in your own work, and try to harness them to manage and solve some of our longstanding human problems. I believe it is our moral duty, as foresight leaders and as citizens, to learn to see and guide these accelerating changes better every year, because the faster they get, the more powerful, transformative, and disruptive some of them will be to certain aspects of our societies, our organizations, and even our own self-conceptions. While many of these transformations will be positive, and their arrival should be hastened, some will be damaging, and their negative effects should be minimized with good foresight and action. To be better stewards of our future, we need to cultivate what Singularity University co-founder Peter Diamandis calls “Exponential Wisdom” in his podcast series.
As we’ll see, those of us lucky enough to be alive today are living in an era we might call the Great Speedup, a time in which exponential change is now happening so fast that it is noticeable to each of us, a time when a special subset of our planetary processes, including information production, computing, communications, nanotechnologies, and scientific knowledge and technical abilities, are running faster every year. These exponential changes are in turn accelerating global wealth creation and certain social and political changes, and decelerating other changes, such as human population growth, most forms of pollution, and violence, as we’ll see.
A sizeable fraction of futurists still ignore the topic of accelerating change, and see nothing special in processes of exponential and superexponential growth. One well-respected foresight professional said that to me in private at a leading futurist conference, as late as 2007, ironically the year that the first generation iPhone was released. That was a significant cultural event, as Tom Friedman argues below, but in truth, we haven’t really seen anything yet. Today’s digital technologies are going to get smarter, more powerful, more affordable, and more ubiquitous every year, at an accelerating pace. The digital change we see in the next ten years will make our last decade of improvements look nearly flat by comparison.
My Own Journey to the Study of Acceleration
I’ve been thinking about accelerating change since 1972, when as a twelve year old middle school student, I came to two valuable sets of hypotheses about it, STEM compression and the transcension hypothesis. We’ll briefly consider both of these later in this chapter. You can judge for yourself whether you think either of these insights helps us better understand accelerating change. Whether you agree with my hypotheses or not, I hope you’ll agree that taking acceleration seriously is something we all should do. That’s the main takeaway I hope you will gain from this chapter.
After twenty-seven years of reading and thinking about acceleration privately, I began writing about it publicly in 1999, on my personal website, originally called SingularityWatch, and now called AccelerationWatch. I started that website because I could find no one else on the web at the time that was talking about acceleration from both a social and a universal perspective. I argued the need for the emergence of an academic field I called acceleration studies, studying universal and social acceleration from all perspectives, if we were to develop the serious foresight this topic deserves.
In 2003 I co-founded a small nonprofit, the Acceleration Studies Foundation, to try to popularize that goal. At some point, study of the mechanisms and probabilities of societal acceleration processes will become commonplace, and academics will finally treat them with the seriousness they deserve. Unfortunately that hasn’t happened yet, so until then, we each can offer our own hypotheses and their early evidence, and I will do that here.
Fortunately, every year since starting our small nonprofit, the probability of such a field emerging has grown. A growing group of influential people are writing good works that illuminate various aspects of accelerating change. For a recent example, see Thomas Friedman’s latest book, Thank You For Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations (2016). This is a great introduction to the last decade’s digital and social accelerations and a number of their future implications. Such books were quite hard to find just two decades ago, but today they are becoming almost commonplace.
Friedman argues that 2007 was a “special year” in which acceleration began. As we’ll see, that claim is just a journalist’s contrivance, as 2007 is simply a convenient place at which to begin telling one particularly interesting and obvious acceleration story, the explosion of mobile computing, and some of the technologies around it. The futurist Alvin Toffler told a similar acceleration story in 1970, in his seminal book Future Shock, and there were others who saw scientific, technological, and societal acceleration long before him, as we’ll describe in this chapter. The truth is that accelerations of various types have been occurring constantly for the entire history of humanity.
As we’ll see in this chapter, the acceleration of digital, economic, and social technologies appears to be baked into the physics and informatics of our universe. Some of our accelerations are the result of human creativity, things our entrepreneurs, experimenters, and artists bring into being. Others are the result of human scientific and technical discovery, of hidden pathways to faster futures, pathways waiting patiently for us to find them, pathways that will be inevitably discovered on every intelligence-supporting planet in our universe.
Not only does our universe support high levels of emergent complexity and mind here on Earth, it supports an even more improbable condition of continuously accelerating complexification in special environments, an acceleration that seems increasingly self-stabilizing under periodic, and often catalyzing, episodes of selective catastrophe. Processes like large scale structure formation in our early universe, stellar nucleosynthesis, and redox organic chemistry may each be developmental portals (unique gateways) to our particularly accelerative form of structural and functional complexification and intelligence growth.
Scholars like Eric Chaisson, in his bold and brilliant book, Cosmic Evolution (2001), Robert Aunger (Major transitions in ‘big’ history, TF&SC, 2007) and others have proposed that it is the increasingly intelligent control of energy flow that drives structural-functional acceleration in our universe. Chaisson has estimated exponentially increasing energy flow density (free energy flow per gram or volume) in a special subset of complex adaptive systems over universal time (figure right). We’ll return to this figure later in this Guide. Consider it just one of many useful ways to measure that universal J-curve we’ve been discussing. Life’s accelerating complexification, in turn, has reliably produced a variety of social tool using species, and in humans, accelerating intelligence, immunity, and (though it is often debated) morality in recent millennia. Consider how both the frequency and severity of global social violence have statistically declined over human history, even as our potential for committing acts of violence at scale, via science and technology, has steadily grown, as Steven Pinker so eloquently argues in his masterwork of social foresight, The Better Angels of Our Nature: Why Violence Has Declined (2011).
Curiously, our leading technology, digital computers, has a free energy density control rate (Chaisson’s preferred measure of complexity) that is now at least a millionfold faster than that of our biological neurons. This differential has grown exponentially over our “Moore’s law” era of computing, and may grow by many additional orders of magnitude as we shift to even more miniaturized, dense, and complex future architectures and technologies, including massive parallelism, single electron transistors, and optical and quantum computing. I call this process of accelerating complexification “STEM compression” (Smart 2002), with “compression” referring to growth in physical density and/or informational learning efficiency. I expect that complexity science should even today be able to measure accelerating change via spatial, temporal, energetic, and material/mass (STEM) measures of increasing computational density and/or efficiency at the leading edge of complexification, though such quantitative measurement is beyond my abilities.
Furthermore, now that our leading computers are using biologically-inspired algorithms, and are developing increasingly general forms of intelligence, the adaptive goals they can learn from their environment should be similarly accelerated, particularly if we can intelligently aid this apparently natural process. What are those adaptive goals? We will focus in particular on five goals (abilities, drives, ends, telos), the natural evo devo I4S processes, that seem particularly universally adaptive and self-stabilizing for intelligent complex systems, if those systems are built from both evolutionary and developmental processes. Those five goals, and their associated values, seem to be the most important ways we can measure and manage accelerating change toward futures that will be broadly successful, futures that will help us survive and thrive, in the most universal terms.
Again, when we view our universe from a big picture, long-term, systems perspective, the continual acceleration of certain processes – including information production, complexity, intelligence, energy flow density, knowledge and mastery of the physical world, and since the 1800’s, economic wealth production – becomes apparent. For more on this perspective, you may enjoy my academic paper, Evolutionary Development: A Universal View (2018).
When acceleration studies, or whatever we call it, becomes an accepted academic discipline, and folks begin to publish lots of papers and build lots of models to better understand how and why accelerating change occurs, we’ll give this topic the respect it deserves. Until such an academic community exists, whatever its eventual name, we’ll still be ignorant, stuck in the first phase of the IDABDAK stages we’ll discuss later in this chapter, unaware of the magnitude and kinds of accelerating change ahead. I hope this chapter helps you, dear reader, see a bit more of where we need to go next.
Seeing and Regulating Accelerating Change is Our Moral Responsibility
We will argue in this chapter that humanity couldn’t stop the continued acceleration of technology in coming decades even if we wanted to, because there are just too many independent parallel ways in which it is occurring, and too many societal benefits to the use and improvement of ever faster, more miniaturized, more intelligent, and more efficient technology. But just because this acceleration appears to be statistically inevitable, that doesn’t take away our moral responsibility to guide and regulate it.
The better we understand it, the better we can lead and guide science and technology’s evolutionary development. There are risks and problems that stem from various technological accelerations which we must learn to better see and manage. We can fund the “good” sciences and technologies, slow down work on the “bad” ones, and regulate their use. There are plenty of obviously bad ones, and plenty more whose effects are mixed, and require careful moral and policy evaluation.
For one example of a mostly bad technology, consider nuclear weapons. After initial decades of our leading nations sadly accelerating nuclear weapons design and production, those same countries, in a long-running moral, policy, and security effort, increasingly turned away from the design and production of these weapons. As William Broad chronicles in the little-known but excellent Star Warriors (1986), the height of our planet’s nuclear weapons extremism was in the early 1980s, when American and Soviet engineers were trying to make nuclear weapons small enough to be backpack and briefcase devices.
At a certain point, just as during the Cuban Missile Crisis of 1962, an earlier peak of nuclear extremism, both sides in the escalation gained “exponential wisdom”. It took just days for us to wise up during the Cuban Missile Crisis, and a few decades for us to wise up to the challenges of nonproliferation, but we were successful in both cases. As we peered over the precipice, we realized we didn’t want to live in the kind of world that our political and technological systems were biasing us to create. So we began dismantling our warheads, and we’ve been increasingly creating a different kind of world with respect to nuclear proliferation ever since. There have been setbacks, as with the nuclear secrets that were exported from Pakistan to various countries via the nuclear spy A.Q. Khan beginning in the 1980s, and with the experiments going on in one of our few remaining fascist states, North Korea, but by and large, we’ve been progressively eliminating this scourge from Earth. Human beings, in the US and the Soviet Union, made the conscious decision in the second half of the 20th century that these accelerations weren’t worth continuing. That decision greatly postpones the problem of nuclear weapons proliferation, but as any defense foresight professional knows, it doesn’t eliminate it.
Contemplating the far future of nuclear terrorism, it is obvious that well-funded small groups, lone individuals, and eventually, even bright high school students, will one day gain the ability, in principle, to develop nuclear weapons in secret, given how increasingly advanced, inexpensive, and democratized (ubiquitous) the enabling technologies (robotics, artificial intelligence, even uranium prospecting and enrichment) actually are. We even know how to extract uranium from seawater, which is available everywhere, and to enrich it secretly with lasers. I will spare you the details.
What then is the solution to the problem of small groups eventually being able to use advanced and miniaturized technology to build nuclear weapons in secret? What we can do today, and have successfully done in recent decades, is to greatly lower the probability of such futures every year forward, by keeping nuclear weapons production techniques uncommon knowledge, by turning away from them in our defense industries, and by accelerating the advance of a host of socially stabilizing technologies, including bottom up and top down transparency in our societies. We are learning how to create an accelerating technological immunity, and social morality, to help us guide our accelerating technological intelligence.
The social theorist Jeremy Bentham was one of the earliest to describe pervasive societal observation, in a concept he called the panopticon (a design in which the few could watch the many) in the late 18th century. Futurist David Brin updated and democratized Bentham’s vision in The Transparent Society (1998), a modern classic on how we are all working together, whether we recognize it or not, to turn Earth into a global digital fishbowl, via a series of technologies and policy decisions that allow both us and our increasingly intelligent machines to watch each other’s behavior in ever more granular and realtime detail. Brin noted that democracies will accept growing transparency as long as that transparency comes with new organizational systems that help us moderate social problems like violence and crime, and as long as, for every top-down surveillance mechanism that powerful actors like states and corporations deploy, like the NSA, the Patriot Act, or Google, Amazon, or Alibaba AI, we can identify twenty times as many bottom-up “sousveillance” processes and technologies (think of cellphones, the internet, CCTV cams, social media, and whistleblowing) that citizens can potentially use, in individual or mass actions, to ensure that our democratic values are respected. In other words, if we build our panopticons right, by the time small extremist groups can build nukes in their basement (2060? 2160?) we will all live in societies where the probability of that happening, in any city, will be very, very low, and the ability for it to happen frequently will be effectively zero, as we’d greatly increase our technological immunity after any such local catastrophe.
We’ve considered the problem of secret nuclear weapons production a few generations hence not just because it is a dramatic social issue, but because it is especially illustrative of the general problems that accelerating science and technology create for humanity. Engineers know that any science, technology, or policy that is permitted by the laws of physics and by human behavior may eventually be produced by someone on Earth, at some time in the future. But defense leaders, politicians, and managers know that just because something is technically possible doesn’t mean it will ever be commonplace. Probable and possible futures are two very different things. As a society, we can make it very improbable that certain scientific and technological activities will occur. Many advances in science, technology, economics, politics, and social norms are creating accelerating safety, truth, and morality that stabilize all our accelerating power, freedom, and creativity. In short, our exponentially emerging foresight and wisdom can keep us from doing foolish things, even as our technical abilities continue to grow.
As we’ll argue in this Guide, slowing down and regulating the potentially bad technologies, while accelerating the good ones, is often enough to solve not just the problem of future nuclear terrorism, but just about every big current and future social problem we can foresee. Since it is the misuse of accelerating science and technology in our economies and societies that creates most of the problems of the modern world, it is predictable that only science and technology will be powerful enough to solve those same problems. That’s the world we live in, and the bargain we have made with technology in order to gain its benefits, whether we understand that bargain or not.
For a few more examples, consider some of the other harmful 20th century accelerations we’ve eventually turned around as well. Think of human population growth, which started decelerating in the 1970s, just as it was reaching alarming rates of growth. Consider energy use per capita, which has also been decelerating since the 1970s. Consider deforestation, which has been decelerating in developed countries since the 1980s, has fallen by half globally since 1990, and has finally also been decelerating in recent years (albeit with a recent unfortunate reversal) in the Amazon. Consider the many forms of pollution (like air and water pollution) that have been decelerating in developed nations since the 1980s. Even soil pollution, still the easiest to hide in our semi-transparent society, is also finally decelerating in most countries. After air and water cleanup, we can expect it to reverse globally, once our soil remediation techniques reach the next level of Benefit to Cost Ratio (BCR) performance. BCRs are the social benefits per cost of deployment of particular technological or policy solutions. BCRs grow the fastest for all our most nanotechnologically advanced and intelligent technologies.
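The BCR idea above is just a ratio, but a quick sketch makes the comparison concrete. The remediation methods, benefit figures, and cost figures below are entirely hypothetical, chosen only to illustrate how a more advanced technique can dominate on a BCR basis:

```python
# Illustrative benefit-to-cost ratio (BCR) comparison for two hypothetical
# soil-remediation approaches. All dollar figures are invented for illustration.

def bcr(social_benefit: float, deployment_cost: float) -> float:
    """Return social benefit produced per unit of deployment cost."""
    return social_benefit / deployment_cost

# Hypothetical: conventional excavate-and-haul vs. an advanced bioremediation
# method that is cheaper to deploy and remediates more land.
conventional = bcr(social_benefit=50_000_000, deployment_cost=40_000_000)
advanced = bcr(social_benefit=80_000_000, deployment_cost=20_000_000)

print(f"Conventional BCR: {conventional:.2f}")  # 1.25
print(f"Advanced BCR:     {advanced:.2f}")      # 4.00
```

On these assumed numbers, the advanced method delivers roughly three times the social benefit per dollar, which is the kind of threshold-crossing improvement the text suggests could make global soil cleanup take off.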
Thanks to the growing science and visibility of climate change, we can now see that even CO2 pollution will soon join the category of harmful things we have successfully stopped accelerating. We’ve recently figured out, through voluntary initiatives like the Green Climate Fund, how to make it advantageous for big businesses to progressively decarbonize. Divesting from and turning away from fossil fuels, first coal, then oil, then finally gas, while leaving large reserves of each of those commodities still in the ground, unused, is our obvious future, though some energy analysts are still slow to acknowledge it.
We are accelerating our move to renewable energy, as we’re seeing now in leading countries like the Nordic democracies, Germany, and the US. China is decarbonizing so aggressively right now, by moving out of coal use and into nuclear power and renewables, with natural gas as a temporary bridge, that they are on track to reach peak CO2 production in 2025, five years earlier than they predicted in 2015. Due to our size, America still achieves more absolute CO2 emissions reduction than China each year, but because China’s rate of decarbonization is growing so much faster, they may pass us in absolute decarbonization per year within a few more years. The fact that they developed so recently, and remain at a much lower GDP per capita than the USA, makes this feat particularly admirable.
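The point about absolute reductions versus growth rates is simple compounding arithmetic: a smaller effort growing faster eventually overtakes a larger one growing slowly. The starting values and growth rates below are hypothetical, chosen only to show the mechanism, not to model actual US or Chinese emissions data:

```python
# Illustrative only: a smaller absolute decarbonization effort (B) with a
# faster growth rate eventually overtakes a larger, slower-growing one (A).
# All starting values and annual growth rates are hypothetical.

def years_to_overtake(a0: float, a_rate: float, b0: float, b_rate: float) -> int:
    """Count the years until effort B first exceeds effort A,
    with each effort compounding at its own annual growth rate."""
    a, b = a0, b0
    year = 0
    while b <= a:
        a *= 1 + a_rate  # A grows slowly
        b *= 1 + b_rate  # B grows quickly
        year += 1
    return year

# Hypothetical: A cuts 500 units/yr growing 2%/yr; B cuts 300 units/yr
# growing 12%/yr. B overtakes A in year 6.
print(years_to_overtake(500, 0.02, 300, 0.12))  # 6
```

Even a 40% head start in absolute terms disappears within a few years when the growth rates differ by ten percentage points, which is why rate-of-change comparisons matter more than snapshot comparisons in foresight work.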
So yes, accelerating science, technology, and economic globalization have created a host of problems for us in the last two and a half centuries. But as Gaia Vince, editor at Nature, reminds us in Adventures in the Anthropocene (2016), these same processes are also where we find our best solutions to those problems. Helping our clients to cope with accelerating change, and to avoid or mitigate its negative effects while harnessing its positive effects, has become the foresight practitioner’s primary challenge. Better educating, connecting, inspiring, empowering, and protecting humanity from the negative effects of all this exponential change, while enhancing its positive effects, has become our primary moral and social imperative.
As we’ll see, information technology (including computing, communications, sensing, and robotics technology), nanotechnology (by which we mean moving our critical processes to the smallest scales and the densest configurations that we can), and their use in free and fair economic markets to create products and services, are the most powerful systems driving accelerating change. Today, most experts expect stunning changes ahead, and some foresee a coming “technological singularity,” a period of time in the next few generations when our machines reach and then rapidly surpass human intelligence.
While many of us may not want that kind of future, this chapter will argue it is coming nonetheless, as smarter and more capable machines serve so many useful purposes in society. Unlike a number of the social accelerations we’ve just mentioned, like pollution, deforestation, or species loss, we can’t turn this one around. We could slow it down with concerted global policy (very unlikely), or a truly massive global war, plague, or meteor strike (even more unlikely), but as the theory of warfare and of catalytic catastrophe tells us, even small versions of those catastrophes would likely accelerate further technological development instead.
But again, just because a world of smarter-than-human machines and stronger-than-human robots appears to be an inevitable developmental destination, that doesn’t mean the evolutionary path we take toward that future, as individuals, organizations, or societies, will be the most humanizing, positive-sum, enlightened, or progressive. If accelerating intelligence in human society, whether human or machine, is not also accompanied by things like accelerating safety (immunity), sustainability (truth, order), and morality, terrible things can happen, as we’ve seen in amoral 20th century governments, like Nazi Germany, or in the dehumanization, inequities, and environmental catastrophes caused by accelerating globalization today.
So leaders need to recognize ways that all of this acceleration can and does go wrong. The moral responsibility for the path always remains with us, and we bear the consequences of our free choices. There are many better and worse paths available to a smarter and faster future, many experiments to be done, and many failure states to be avoided along the way. With good foresight, we can learn to recognize progressive scientific research agendas, technologies and socioeconomic policies, whose development we can selectively catalyze (speed up), and a range of bad (antiprogressive) scientific agendas, technologies and socioeconomic policies that we can inhibit (slow down), while we develop countermeasures from the “good side” of accelerating scientific, technological, and economic change.
We must also mention that just because certain special processes on our planet are continually accelerating, there is no need for us to keep accelerating ourselves. Very often we and our clients seek ways to slow down and simplify our lives, technologies, and strategies, even as Earth’s information technologies and nanotechnologies continue to speed up and complexify all around us. It’s high time for us to take some slowing-down time as well. For centuries we worked just to stay alive, and now most of us are treated like disposable cogs in the great machine.
The increasing number of Basic Income proposals we see in countries around the world today, and their increasing likelihood of passage in coming years, is a cause for celebration, as a developmental (predictable) event. But there are many evolutionary (unpredictable, experimental) ways that societies will implement that inevitable development, and if we don’t include good incentives to improve ourselves, and design our educational systems to make us self-actualizing and self-improving, we can easily create a more dependent, unproductive, and ignorant citizenry with those government subsidies. Policy structure, culture and education matter greatly to any social outcome.
As soon as we get a basic income, many folks will take a big vacation, and most folks may never work as hard as they did before that world arrives. Is that bad? I don’t think so. Increasingly, work is being handled by our ever more intelligent machines, not us, and most of us are ready for less pressure to survive, and many more opportunities to thrive, at a slower and more considered social pace. But the evolutionary details of basic income policy, the particular path we take, in any country, are hugely important as well. There are plenty of disempowering and dehumanizing ways to implement Basic Incomes, ways that make societies dependent, addicted, weaker, and less humanized as a result.
Consider the dependency economies we have seen emerge in some of the Gulf States, where government handouts have been too generous since the 1950s, and which are now trying to dig themselves out from low levels of individual motivation and high feelings of entitlement, or those we have seen in indigenous communities where we’ve simply provided money, but done little to strengthen families, educate youth, or offer incentives for self-development. Consider also many of the responsibility-free entitlement and welfare schemes we’ve seen in western democracies.
Progressive basic income policy will need smart incentives that nudge people toward self-growth, personal responsibility, and a wide variety of recognized social goods in order to receive the highest benefits, along with education systems and cultures that reward personal responsibility. Societies that don’t pay attention to these issues will suffer the consequences of a poor evolutionary path to an obvious developmental future.
We also need to keep in mind that not every acceleration is likely to be within our full control. As we will see, we couldn’t stop the accelerating growth of information, computation, and robotics on Earth even if we wanted to. We can’t stop the arrival of human-surpassing machine intelligence in coming decades. But we can certainly influence the nature and timing of AI’s arrival, the kinds of machine intelligences we get, and the uses to which we put our machine systems. We have great control over how positive or negative technology’s effects are on our societies.
So if our future visions are to be adaptive, they must begin with a good understanding of the larger accelerations occurring around us, both the processes we are continuing to decelerate, like deforestation and human population growth, and the ones we are speeding up, like our computing and communications technologies. The universe appears to be going, at an ever-accelerating pace, to a set of destinations we are only just now beginning to recognize, admit, and understand. When we are aware of where the universe is going, and why, we can better make our own journeys, and pursue paths and destinations that are in better harmony with the larger and far more powerful systems operating around us.
A Deeper Understanding of Acceleration – The Model of Evolutionary Development
The last thing we should say, in this general introduction to accelerating change, is that many of the books at the end of this Guide offer more useful insights into how and why all this accelerating change occurs. The best of these books acknowledge that humanity is both increasing our evolutionary freedoms and uncertainties and, at the same time, increasing our developmental constraints and predictability. Both evolutionary and developmental ways of viewing our future seem equally fundamental, and if we don’t see both of those processes at work, we are missing the deeper story of how change, including accelerating change, actually occurs.
Our next chapter on universal foresight, Chapter 11, evo devo foresight, will deal with another truly fundamental issue in foresight, the question of how to best distinguish between unpredictable (evolutionary) and predictable (developmental) processes of change, in the universe and all its subsystems, including humanity, and the unpredictable and predictable features of accelerating change. It explores a number of apparently predictable processes and drivers of complex adaptive systems, whether those systems are chemical, biological, social, technological, or the universe itself.
Readers will gain a better understanding of how unpredictable, divergent, and bottom-up (evolutionary) processes interact with predictable, convergent, and top-down (developmental) processes to create value in all complex adaptive systems, including human minds, teams, organizations, and society. Evo devo foresight is of particular benefit to global foresight practitioners, and to those interested in long range and planetary futures. But it is also helpful to any foresight practitioners seeking to understand how preferable futures are created in the minds of individuals and their organizations, and to better balance their strategic use of bottom-up and top-down processes of change.
The stories we’ll tell in this chapter and in Chapter 11 are even spiritual, if we define that word as anything that helps us discover our higher meaning and purpose, and arrive at more useful thinking, emotion, and consciousness. We all love to tell each other stories, and because our intelligences are primarily evolutionary and only secondarily developmental, as we will see, most of our stories are entertaining fictions, not reality. But there are a few developmental ones that are critical to see. As Jon Beach notes in The Dawn of Symbolic Life (2010), after the amazing complexity and elegance of the universe itself, the story of universal acceleration, and, I would add, its evolutionary and developmental nature, is the most future-important and surprising universe story available to us today.
We seem to be going somewhere specific, faster every year, and we’re just getting perceptive enough to see that destination, and start steering better toward it, rather than randomly flailing about. Fiction is fun, but nothing in our world of fiction can match the breathtaking majesty, elegance, and complexity of the natural world in which we find ourselves embedded. The more we see the realities of the natural world, the more amazing our universe seems, and the more clearly we see its own internal purposes, grand cosmic processes in which we can play our own small parts.