Chapter 3. Evo Devo Foresight: Unpredictable and Predictable Futures

Developmental Immunity, Transparency, and Catalytic Catastrophes

The third universal developmental trend we should be able to see all around us, as science looks more carefully, is the trend of increasing immunity. We encountered this megatrend in Chapter 2, in the idea of natural security, which is any biologically-inspired, evo devo approach to defense and security, whether we are talking about organisms, teams, organizations, societies, or our increasingly intelligent technologies. This topic is difficult for many evolution-oriented individuals to see, and it has the most implications for global security, so we’ll look a bit more deeply at it than the other megatrends.

Besides relying on growing dematerialization, densification, individuation, interdependence, innovation, freedom, and truth to survive, all organisms must strive to stay viable throughout their life cycle. They do this via their immune systems, which try to protect their complexity from environmental hazards.

If we are in an evo devo universe, then just as we find immunity in all complex living systems, developmental immunity must also exist in other complex systems, at a variety of levels. In organisms, our immune system is the second most genetically complex system in our body, after our brain. We have a vast number of both bottom-up and top-down immune processes protecting our lifecycle, making it likely that we will reach adulthood, procreate, and pass on our genes and ideas to the next generation. Every replicating evo devo system must thus have some kind of immune system that has progressively self-organized to protect it. This applies not only to living, social, and technological systems, but to the universe itself.

As a universal trend, I find developmental immunity even harder for many to see or accept than interdependence. It has only a handful of scholarly advocates at present, myself being one of the more vocal ones. As with interdependence, there are many institutions and scholars studying national security, environmental sustainability, and other forms of protection. Yet few have the perspective that the planet as a whole is becoming more immune. Most see the world through a different lens.

The immunity counterparts to the Global Brain Institute's work on interdependence, organizations like the Future of Life Institute or the Future of Humanity Institute, have no comparable thesis that the world necessarily gets more stable as a function of its complexity. A number of independent scholars make this case with respect to societies. Recall Ian Bremmer and The J Curve. But no one is championing it yet for the planet as a whole.

Taleb (2014)


I think that’s probably because recognizing growing immunity goes against our desire to be good stewards, vigilant for hazards, and quick to tell scary, self-preventing prophecies, so that we, as individuals, organizations, and particular societies, make painful changes we otherwise wouldn’t make, changes that keep us from falling into any number of minor catastrophes that are always around us. So telling scare stories can be very beneficial, and a smart strategy for generating change.

But the deeper point, that the system as a whole gets stronger whenever any right-sized catastrophe occurs, is also the obvious megatrend for the planet. Saying this publicly, however, can often cause security leaders to be accused of being naïve Pollyannas, and to lose credibility or influence as a result. That kind of reaction occurs because humans are psychologically biased to see risk and danger first. We can call this humanity’s dystopia bias, based on the common observation that dystopias far outsell protopias in most cultures (perhaps 8 to 1, even in America, a particularly optimistic culture).

As a result of our dystopia bias, most of society is not yet willing to hold both views in their head simultaneously, even though the evidence argues that we should: individuals remain subject to risk, and some individuals will see increasing risk, while the system as a whole sees its core features progressively less threatened every year, because of how immunity self-organizes in complex adaptive systems.

The statistician and essayist Nassim Nicholas Taleb has recently come to champion immunity as a core feature of adaptive systems. He shares his insights on this in a by turns brilliant, argumentative, boastful, and digressive book, Antifragile: Things That Gain from Disorder (2014), which we nevertheless recommend as the best current work we know of on this critically important and yet still mostly-neglected topic. Taleb notes that the best systems don’t just bounce back from stress, they are strengthened by it. Bouncing back is resilience, itself an important goal and area of study, and the topic of a very nice book, Resilience: Why Things Bounce Back (2013) by futurist Andrew Zolli, with Ann Marie Healy. But learning from disorder, stress, and catastrophe, and getting stronger because of them, is an even more desirable state than resilience. Taleb calls this “antifragility”. But a more accurate and useful word than antifragility is immunity. That is what living systems develop in order to remain antifragile, and so do organizations, societies, and technologies.

Taleb mentions immune systems just twice in his book. He relays the finding that children who are raised in antiseptic environments have more allergies, and weaker immune systems, than children whose immune systems are constantly vaccinated by small amounts of environmental filth, a great way of understanding how immune systems are strengthened by challenge, stress, and disorder. Taleb has every right to try to coin a new word to describe these classic concepts of complexity science, but I think immunity is a better word. It is a topic about which there is already a deep literature, and we can carefully study immune systems in biology, and apply that knowledge to organizations, society and technology.

Individuals, organizations, industries, technologies, societies, and planets all have immune systems, and we are just now coming to appreciate the nonbiological ones better. The more any immune system is challenged, as long as the challenges stay within the bounds it can handle, the stronger its immunity gets. And the better trained an immune system is, the more proportionate its response to a threat. If you haven’t trained it in years, when the first attack arrives it will overrespond, inefficiently and ineffectively.
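
The training dynamic above can be caricatured in a few lines of code. This is purely my own illustrative sketch with made-up numbers (the `challenge` function, `training_gain`, and all parameter values are invented for illustration): a system's "immune capacity" grows when it survives challenges within its tolerance, and collapses when a challenge exceeds it.

```python
def challenge(capacity, stress, training_gain=0.2):
    """Return the system's new capacity after one stress event."""
    if stress > capacity:
        return capacity * 0.1                       # oversized catastrophe: most capacity lost
    return capacity + training_gain * stress        # survived: right-sized stress trains

trained = 1.0
for s in [0.5, 0.6, 0.7, 0.8, 0.9]:                 # a regimen of right-sized challenges
    trained = challenge(trained, s)

untrained = challenge(1.0, 1.5)                     # untrained system meets a big first attack

print(f"trained capacity after regimen:   {trained:.2f}")
print(f"untrained capacity after attack:  {untrained:.2f}")
print(f"trained capacity after same attack: {challenge(trained, 1.5):.2f}")
```

In this caricature, the regimen of survivable stresses raises the trained system's capacity enough that it survives, and even gains from, the large attack that wrecks the untrained one.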

Immunity was one of the last major systems discovered by those seeking to understand the human body. It is almost invisible to the naked eye. Likewise, universal immunity may be one of the last major systems that scientists uncover in their efforts to understand how complexification occurs in the universe, and why the local record of accelerating complexification has been so eerily smooth, even with all the catastrophes that life and humanity have experienced. Let’s look a bit closer at accelerating change, the subject of Chapter 2, and ask why it is that universal and global acceleration have been so smooth to date.

First, we appear to live in a universe where densification (intensification), via STEM compression, is a rigged game. There are always more ways to go further into physical inner space and get unreasonable levels of new efficiency and density of every computational, transformational, or wealth building process that we care about. Reinventing their physical and informational processes via STEM compression keeps leading systems free of resource limits to their continued growth, perhaps all the way to the Planck scale.

Second, growing dematerialization (intelligence-enabling) means that there is an ever greater diversity of minds, or ways to solve problems, so that we have more ways of accomplishing any desired thing. The increasingly networked (redundant, distributed) nature of intelligence and individuation (species knowledge) at the leading edge of developing human societies means that if any leading individual, company, or country suffers catastrophe, others are always ready and willing to move into a leadership position. The collective memory is ever more resilient to disruption.

Third, the way the best systems learn from catastrophe is critical to how they build immunity. There are certain rulesets, ways of organizing information storage and control, in which stress and catastrophe actually catalyze, and accelerate, immune learning. Strangely, these rulesets don’t seem to be created by complex systems, but rather discovered by them. All kinds of complex systems exhibit them, even ones with very low levels of what we’d call intelligence.

Catalytic Catastrophe Theory

The way that catastrophes accelerate positive change can be called catalytic catastrophe theory. It gets its name from the fact that right-sized catastrophes end up being catalysts for the acceleration of collective intelligence, immunity, morality, and other adaptive processes in complex systems. Calling it the catalytic catastrophe hypothesis would not be the right phrase, in my mind, as there is so much evidence we can find to support it. It’s a theory, not a hypothesis, one that just needs to be better worked out.

Calamity is the mother of invention, as the saying goes, as it breeds necessity and urgency. Right-sized catastrophes, in other words, are catalytic. They make us stronger, just as vaccines, or survived exposure to disease, do for any organism with an immune system. Let’s look at a few examples to see how our universe itself, in its “genes”, or special laws and initial conditions, seems to encode something that looks very much like immunity. The first several of these are discussed in the five-episode BBC series, Catastrophe (2008). In a rarity for such shows, the science editor for this series was clearly someone who realized that many of Earth’s catastrophes have directly catalyzed many of our greatest complexity and immunity advances.

Here are some great examples:

  • Big Bang (13.8 Billion years ago). This “original catastrophe” catalyzed the production of spacetime, energy, and matter (STEM), and growth of universal information and complexity. It may seem a stretch to call this a catastrophe, unless you consider the Big Bang as a both destructive and creative solution to the aging and death of the universe that preceded it. Anyone who has participated in a birth knows of its destructive physiological effects on the mother, and also its tremendous opportunity for new learning and growth.
  • Supernovas (13 Billion years ago to present). These recurrent galactic catastrophes wreaked havoc on their local surroundings, while successively forging our heavier elements, including carbon, heavy metals, and all the special chemical conditions necessary for life.
  • Earth-Moon Impact (4.4 Billion years ago). In the giant impact hypothesis, an early collision between Earth and a Mars-sized planet, Theia, also in the habitable zone, created our moon. This resulted in massive tides on early Earth, when the moon was far closer, tides that are strongly suspected to have accelerated life’s emergence, relative to Earth-like planets that lack the large moons such early planet-formation catastrophes create.
  • Great Oxygen Crisis (2.3 Billion years ago). The great oxygenation event tells us that after 200 million years of evolution, one branch of cyanobacteria discovered how to capture solar energy and split water to do photosynthesis. These new organisms excreted oxygen as a byproduct. The oxygen was atmospheric poison to most bacteria, causing a great dieoff catastrophe, which accelerated the emergence of aerobic bacteria that could burn oxygen in an even more energy-dense biochemistry, reducing it via oxidative phosphorylation to produce ATP. These aerobic bacteria became captured as energy producers (mitochondria) by some cells, creating aerobic eukaryotes, which may be the only biochemistry powerful enough to support multicellular life. Using the energy of the sun, and then learning to reduce oxygen in oxidative biochemistry, don’t look to me like “lucky accidents” of life, but rather, inevitably-emergent energy densifying developmental portals, waiting to be discovered everywhere in our universe. Organisms that pass through these catastrophic portals are far more diverse, intelligent, dominant, versatile, and more immune to local environmental disruption as a result.
  • Snowball Earth (700 and 635 Million years ago). Snowball Earth, the catastrophic 25 million year freezing of all of the Earth, including the surface of our oceans, at least two separate times, seems to have been one of the key events that jump-started the Cambrian explosion, the rapid emergence of multicellular life, shocking it out of a 3 billion year long phase of very slow bacterial evolutionary development prior to the catastrophe. See Wikipedia, Effect on early evolution, for the fascinating details.
  • Permian Extinction (250 Million years ago). We don’t yet know exactly what caused the Permian extinction. Supervolcanic eruption, with methane hydrate release, and other factors may have been involved. What we do know is that 95% of marine life and 70% of large terrestrial life died off, in the only mass extinction also known to have eliminated insect species. We also know that this massive dieoff led directly to the most successful group of large animals that has yet emerged on Earth, by biomass and longevity—the dinosaurs. The catastrophe accelerated the emergence of a developmental portal. Small mammal-like animals also began a massive adaptive radiation after the Permian extinction, starting with the cynodonts, Permian-surviving mammal-like reptiles that were the precursors of modern mammals. Life took just ten million years after this extinction to recover its great species diversity. Life’s response to the Permian extinction was a profound acceleration of adaptiveness in both large and small organisms, a true catalytic catastrophe.
  • KT Extinction (65 Million years ago). A massive asteroid, striking the edge of the Yucatán Peninsula 65 mya to create the Chicxulub crater, killed off about 70% of all land animals, including virtually all dinosaurs, and a great number of ocean species. The mammals were finally able to flourish after this, creating great new morphological diversity, and restocking species diversity in less than ten million years. It is likely that little of the genetic diversity of the dinosaurs was lost in this extinction. Rather, the catastrophe catalyzed, or selected for, more compact and hardier phenotypes for genes. A period of even more rapid morphological experimentation occurred, and one of these new mammalian forms eventually became the hominids.
  • Human Self-Domestication (2 Million years ago). Beginning with our invention of fire some two million years ago, the emergence of the juvenile features (high foreheads, unfused cranial plates) of Homo sapiens skulls 300-200 Kya, and the 10% loss in average human brain size over the last 40,000 years, humans have been self-domesticating, weeding out (ostracizing or killing) the more irrationally violent, individualistic, and sociopathic among us. That has been a particularly beneficial catalytic catastrophe. It was certainly not preferred by those who no longer got to reproduce within the superior resources of the group, but it was progressively better for the group. Anthropologists like Richard Wrangham argue that this self-domestication had many of the same effects we find in domesticated animals, whose docility and agreeableness go up as their brain size decreases (an average 30% loss in domestic dogs versus their wild counterparts, for example), making them less individualistic and more dependent on the group. As scholars like Kazuo Okanoya (2012) have found, domestication leads to increased linguistic complexity in tame versus wild songbirds, so self-domestication may have selected for increased human linguistic complexity as well. The more humans were able to tolerate interacting in close proximity, the faster their language could improve. As we suggest in Chapter 7 (Futureworthy), selecting for greater linguistic complexity in our near-human-intelligence animals (chimps, dogs, dolphins, birds) over just a few generations, and using some medical technology to facilitate gestural or oral language, might quickly uplift a subset of them to our kind of complex, always-improving, and fully human linguistic capacities. Most people greatly underestimate the power of domestication. It took just six generations in the Silver Fox Experiment to create some tame foxes, and more recent experiments with chickens and other birds have shown it takes just three generations to produce birds that will walk toward rather than away from researchers when approached. Uplifting a few species into human-level language would be a very worthy project, in my view, and would go a good way toward eroding our intellectual arrogance as the only “human” species at present.
  • Toba Supervolcano Eruption (73,000 years ago). The Toba supereruption catastrophe, the largest known explosive event on Earth in the last 25 million years, is believed to have massively reduced the genetic diversity of India, tipping its peoples toward greater genetic similarity, and possibly greater cooperativity. Incredibly, Toba may be one of the hidden reasons that India has the world’s largest democracy today. Toba gave rise to the genetic bottleneck theory, the idea that periodic supervolcano eruptions (47 supervolcano sites are known worldwide today, including Yellowstone in the US) and their subsequent massive ashfalls decimated early hominid populations and greatly reduced human diversity. Several scientists now argue that the apparent very low genetic diversity found in modern humans (less even, according to one geneticist, than we find among single troops of baboons!) is due to a series of catastrophes that reduced our total numbers to just 3,000 to 10,000 individuals at various times. In addition to self-domestication, this great environmental culling increased not only our fitness, but our relatedness and our cooperativity. This is a catalytic catastrophe that appears to have accelerated our interdependence, our social and moral cohesion, and thus advanced our civilization.
  • Last Glacial Maximum (Ice Age) (21,000-10,000 years ago). Just one of the most recent of many ice ages, the Last Glacial Maximum apparently pushed hardy humans down out of Europe, and across the planet. As the ice advanced, it also spurred us to develop needles, clothing, far better hunting technology, and much tighter communities to ensure survival. A number of scholars have pointed to this period, the end of the Paleolithic, as a catalyst (accelerator) of hardier and craftier Homo sapiens. After this hardship, as the last Ice Age retreated around 12,000 BCE, the new wetness and ideal conditions it created allowed these newly crafty and more communal hunter-gatherers to settle down into domestication of plants and animals in the Neolithic revolution, starting another great growth in informational immunity, densification and interdependence. For more on how climate change “catastrophes” have coincided with human brain size increases over the last 2.5 million years, see William H. Calvin’s A Brain for All Seasons: Human Evolution and Abrupt Climate Change (2002).
  • Organized Warfare (10,000 years ago to WWII). Systems theorist Peter Turchin’s Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth (2015) is one of the rare historical works with the thesis that large-scale warfare itself has been the central catalyst for learning our way out of mass violence. Europe, for example, is now so postmilitary that it has suffered for it, being too vulnerable to opportunists like Serbia’s Milošević and Russia’s Putin. Warfare also allowed us to learn our way out of the vast sociopolitical inequalities of the era of what Turchin calls “God-Kings,” though the periodic emergence of new technology has created new socioeconomic inequalities, in waves. Fortunately those are regularly moderated via a Kuznets process as social wealth grows. Kuznets inequalities are themselves catalytic catastrophes, which we learn our way out of over time.
  • Black Death (1300s Europe). The Black Death, and all major human pandemics, directly catalyzed immunity in the survivors, because of the way human immune systems work. All bacteria and viruses have only simple strategies for infecting complex organisms like humans. There are on the order of tens of genes in a virus, and a few hundred in the simplest bacteria, versus some 20,000 in a human, thousands of which are immune-related. In order to survive in human hosts, infectious diseases must continually mutate. Because we naturally quarantine as infections spread, less lethal variants are preferentially passed within the host population. Over time, these less lethal variants immunize the population against the more lethal ones. This is one of several reasons that, over time, all pandemics, including really clever ones like AIDS, which attack the immune system itself, eventually burn themselves out, leaving the surviving population with superimmunity against that pathogen. Pandemics are a canonical example of a catalytic catastrophe. All plagues that have affected humans have not actually been the pathogen’s “fault”, but rather the result of pathogen-host relationships growing differentially, in isolation, followed by contact between previously isolated groups of humans. In the Black Death example, the Chinese, living in close quarters with their animals, developed superimmunity, and their pathogens (Yersinia pestis, in this case) developed superpathogenic traits to try to get past that superimmunity. When the Silk Road brought these superpathogens to Europe, the immunologically naive Europeans were easy victims. The Spaniards infecting the Aztecs, and the Europeans infecting American Indians, followed the same dynamics. When newly superimmune peoples make initial contact with less immunologically privileged peoples, massive damage is done to the immunologically naive populations on first contact. As immunity grows, pathogens must get more virulent just to get into and replicate in their hosts.
But because of the way biological immune systems work, every pathogen that people survive makes the survivors stronger against all pathogens of that general class, and some related classes. And now that the world is one integrated population, the possibility for major pandemics is vastly less than it was previously, regardless of what any scaremonger might tell you. As our understanding of immune systems and how to empower them grows yearly, we are now just a few decades away from eliminating pandemics as a major threat. For a good example, see DRACO, a broad-spectrum antiviral approach that makes our biochemistry a hostile environment for replication by a whole class of viruses. Many other such approaches exist; we’ve just been too shortsighted as a species to fund looking for them. Pathogens are just too simple, in the end, to be an enduring threat to humanity. No matter what bioterrorists do with them, in another generation or two we’ll know how to quickly create chemical immunity (vaccines, adjuvants) to them. It’s only humans acting against humans, and the future of our intelligent machines, that are the enduring risks to our society in the second half of the 21st century.
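
The "natural quarantine" dynamic in the Black Death example can be sketched as a toy two-variant epidemic model. This is purely my own illustration with invented parameters (`beta`, the infectious periods, and the starting fractions are all made up): carriers of the lethal variant show obvious symptoms and are quarantined or removed quickly, while the mild variant circulates much longer, so the mild variant out-transmits the lethal one and leaves behind a large recovered, cross-immune population.

```python
def simulate(steps=100, beta=0.3, mild_period=10.0, lethal_period=2.0):
    """Toy discrete-time model of two pathogen variants sharing one host pool."""
    # Population fractions: susceptible, infected with each variant.
    S, I_mild, I_lethal = 0.98, 0.01, 0.01
    for _ in range(steps):
        new_mild = beta * S * I_mild            # new infections this step, per variant
        new_lethal = beta * S * I_lethal
        S -= new_mild + new_lethal
        I_mild += new_mild - I_mild / mild_period        # mild carriers circulate long
        I_lethal += new_lethal - I_lethal / lethal_period  # lethal carriers removed fast
    R = 1.0 - S - I_mild - I_lethal             # recovered (cross-immune) or removed
    return I_mild, I_lethal, R

mild, lethal, recovered = simulate()
print(f"mild variant prevalence:   {mild:.6f}")
print(f"lethal variant prevalence: {lethal:.6f}")
print(f"recovered/immune fraction: {recovered:.2f}")
```

Under these assumed numbers the lethal variant's short infectious period keeps its effective reproduction below one, so it dwindles toward zero while the mild variant burns through the susceptible pool, leaving most of the population immune, which is the attenuation-plus-immunization pattern the text describes.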

There are many more examples of our growing global immunity, and of catalytic catastrophes, that we could give. Criminality, terrorism, depressions, bankruptcies, competitive failures, and just about every other destructive thing we can name has a long history of catalyzing greater immunity in surviving societies, when we are willing to look without our fear or danger bias.

This insight into what we call natural security in Chapter 2 can be summarized in the clarifying phrase “Immune systems always win.” The leading complex systems always develop better immune systems, and they regulate their environment so that catastrophes stay catalytic. They don’t eliminate catastrophe. That would weaken their immune systems, and make them vulnerable. They simply scale it down to a level at which they can use it to make themselves stronger.

Brin (1998)


Let’s have a quick discussion of global violent conflict, to get a sense of how immunity to the destructive effects of conflict is presently accelerating on Earth. In his excellent primer, The Transparent Society (1998), futurist David Brin argues that transparency is an unavoidably accelerating feature of digital systems on Earth, and he observes that it happens in two key ways: bottom-up, with sensors, cameras, and intelligent systems in the hands of the masses, and top-down, with the same digital technologies in the hands of powerful political and corporate actors. Bottom-up transparency is today often called sousveillance, as a counterpart to top-down transparency, or surveillance. Brin notes that as long as democratic societies feel they have much more sousveillance than surveillance in operation, they can be effective participants in shaping that transparency for social good.

True anonymity will increasingly disappear the more our planet becomes a transparent fishbowl, but perceived anonymity (our ability to not use our names, if we don’t want to be public to the world on a website), and many other kinds of privacy can actually be even better protected in our more digitally transparent future.

Corporate secrets, national security, and things we say in private will have to be even better protected from sharing in all healthy social democracies in coming years, and penalties for sharing them without our permission can remain high, even as our sensors and machines increasingly record all that information for our own private use, information that can be digitally subpoenaed by the authorities whenever socially necessary, such as after a crime has been committed.

The 95/5 Rule would argue that we want roughly a 20:1 (95:5) ratio of sousveillance to surveillance in a healthy society. We also want our whistleblower protection laws to be strong, so that any individual can record and report corruption at the top. They may even have to go to jail for reporting that corruption (something that, unfortunately, Edward Snowden chose not to do), a strategy that moralists call altruistic punishment. Because such strategies remain free choices of any individual in a democracy, Brin argues that transparency is likely to keep accelerating, and that many democracies will be in search of those special rulesets that allow both continued privacy and more freedom (freedom to do things, and freedom from bad things, like discrimination).

In the coming decades, we’re going to use accelerating transparency, pattern recognition, simulation, intelligence (agents), and interdependence (groupnets) to build a global technological immune system. Like biological immune systems, this will be a set of overlapping defense systems, operating both bottom-up and top-down. I believe it will need to be twenty times more decentralized and bottom-up than centralized and top-down, once it’s working the right way. It may start out more top-down and inefficient, but the more catastrophes we have, the more bottom-up it will become. Eventually it will look like those spiders we saw in Minority Report. Robots, cameras, eyes, and intelligence everywhere, but the vast majority of them in independent, collaborating networks, just like the many different systems of immunity in the human body.

This global immune system isn’t a question of if; it’s an absolute necessity on a planet of finite surface area and accelerating intelligence, S&T capacity, digital interconnectivity, and densification. Fortunately, there is a raft of global security institutes studying immunity today, though few would currently couch it in the universal terms we use here. Some of the better examples see the links between increasing power, interdependence, and immunity, and the move toward greater truth. Let’s look at one of those now.

Barnett (2005)


The military strategist Thomas Barnett, now Chief Analyst at Wikistrat, linked growing developmental interdependence with developmental immunity very nicely in his book The Pentagon’s New Map (2004). Barnett offered a geostrategic thesis that the world’s security regions can be divided into two zones: a technologically, economically, and culturally integrated Core, and a dangerous “Non-Integrating Gap” that is not as deeply connected, in any of these three ways, with the Core countries. See a slide from his brief at right outlining the Core and Gap, and three strategies for dealing with the Gap. The Non-Integrating Gap is like a global ozone hole for violent conflict. We will have temporary fallbacks in particular years, due to strategic errors, but the dominant trend forward, as densification, transparency, and interdependence grow, will be inexorable shrinkage of that hole, as our global immune systems grow.

Barnett realized that these three (and other) forms of interconnectedness can be measured, and that we can structure our economic, political, and military interventions to track and either grow or weaken the rate of growth of adaptive integration. He also proposed that the optimal general security strategy, to keep shrinking the Gap, is not to throw lots of money or resources directly into Gap countries, as they won’t use it well, but to instead focus most of our support on the Seam States, those countries, like Turkey in the Middle East, which are on the edge of the Gap, and which can most easily be destabilized by it. He argues that obvious integrative technical, economic, and social development of Seam State countries is one of the fastest ways to flip more Gap countries into deciding to integrate with the developed Core. He also realizes that optimal rulesets, the way we organize our systems, and which parts we make top-down and bottom-up, are key to good security, and the key responsibility of leaders. The more truthful (widely agreed upon, inertial) we make our rulesets, the better off we all are.

Barnett offers a fascinating, evo devo centric thesis, but it would have taken a more foresighted and independent US government to have applied it in recent years. Most importantly of course, we would need a government less beholden to corporatism than ours, to resist the perennial urge for war profiteering, which has emerged in every war since the founding of nation states. The Iraq War of 2003-2011 was the exact opposite of Barnett’s proposed strategy, conducted in a country that wasn’t ready for and definitely did not want our speed of modernization. This strategy predictably did little for anyone but the well-connected financial-military-industrialists who were its architects.

Imagine if we’d taken all those trillions we threw into that money pit for the military-industrial complex and instead used them to further develop several global Seam State countries, and to flip more Gap countries into integration with us, and then helped those countries race forward as well. Add to that a Global Civil Security Doctrine, in which the US and its allies pledge to use our amazing intelligence systems and special forces to forcibly remove both any country’s current political leadership and their organized violent opposition, as soon as a certain red line level of civil violence is reached, in any country, forever forward. Both leadership and their organized opposition would be held accountable for a country’s internal level of violence.

These would be two new rulesets, to use Barnett’s term, that would quickly produce a far more civilized and less violent world. That strategy would likely even produce fewer casualties than our wars of occupation, if you have any knowledge of how good our global intelligence systems and special forces already are, as I do. But let’s also be truthful. This strategy would also make a lot less profit, which is perhaps the most important reason why this kind of thinking is a bit too protopian for the early 21st century. But we’re a smart species. Give us time, and we will get there.

Let’s turn now briefly to the end of the century. If we think out past the coming global technological immune system, which will be built by humans and their digital platforms, including artificial intelligences, we soon must arrive at the technological singularity, and contemplate the characteristics of postbiological life.

With just a little effort, we can easily see how that kind of life will have a vastly increased immunity from informational destruction. Postbiologicals will have no need of planetary environments, resources, or even our Sun to survive, as they will be able to make their own fusion energy on demand. They will be able to fork themselves at will to work on complex problems, and reintegrate later after finding the solution, or stay in multiple versions of themselves, as they please. They will be able to redundantly back themselves up in several different locations in our solar system as “seeds” in case any one of them is destroyed by any local calamity. In short, they’ll have a mind-boggling level of informational immunity compared to us delicate biologicals. When we see that the emergence of this next substrate is likely happening on all planets with intelligent life, as a developmental process, we can understand developmental immunity in yet another deep way.

Of course once catastrophes get large enough, they are no longer catalytic. When we think of what could delay technological progress in coming years, there is a reasonable-sized list. Global financial depression, a global pandemic, nativism due to effective nuclear terrorism at the city scale, a massive meteor strike, and runaway climate change, could all be large enough to delay progress. Conventional wars, even world wars, and most forms of terrorism I can imagine would likely just further accelerate technological progress, as we saw in the Second World War. That’s not to say we shouldn’t relentlessly continue our peacebuilding, or that war is a moral path (it usually isn’t). I’m just observing there are far fewer threats to acceleration than most people realize.

An early disaster with AI development might also delay technological progress, but I don’t think by much. This brings us to “unfriendly AI”, the topic that Elon Musk, Nick Bostrom, the Future of Life Institute, the Machine Intelligence Research Institute, and others have recently been so concerned with. Read Bostrom’s Superintelligence (2014) for the most recent popular cautionary story about AI, written from a highly rationalist, randomness-championing, engineering-driven perspective.

But if our machine intelligences emerge mostly bottom-up, via evolutionary developmental processes, replicating, varying, interacting, selecting, and converging in biologically-inspired hardware, such as we see in today’s deep learning systems, they will do so as a collective or population of intelligences, never as a single, top-down engineered intelligence. That collective will have a natural distribution of acceptable behavior, it will have a self-organized morality, and it will self-police its moral deviants. Those are the issues we need to focus on if we want to understand the future safety and ability of AI, not rationalist fantasies.

In my view, massively parallel evolutionary variation, countless developmental cycles, and selection on a population of cyclers is likely to be the only viable path to the evolutionary developmental emergence of higher intelligence in a reasonable amount of time, just as it was for our own brains. This evo devo portal to AI has been my position since I started writing about these issues in 1999; you can find it in virtually all my publications to date.

Fortunately, with the recent amazing advances in deep learning, this biologically inspired, evo devo approach to machine intelligence is again in vogue. If it is indeed the only viable way forward to higher intelligence in any reasonable timeframe, all the leading AI work will increasingly go in this direction: borrowing from and standing on the shoulders of hundreds of millions of years of evolution, and recapitulating a simpler version of that history in our far faster electronic technology.
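The population-level “replicate, vary, select, converge” loop described above can be sketched in a few lines of code. This is my own toy illustration, not any real AI architecture: the “genome”, the fitness function, and all parameters are hypothetical stand-ins.

```python
# Toy evo devo loop: a population of candidate agents is varied,
# selected, and converged toward a target behavior. Purely illustrative.
import random

random.seed(0)

def fitness(genome):
    # Hypothetical task: minimize squared distance to a target behavior.
    target = [0.5, -0.2, 0.9]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, generations=40, mutation=0.1):
    # Start from a random population (evolutionary variation).
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-adapted half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Replication with variation: each survivor leaves a mutated copy.
        children = [[g + random.gauss(0, mutation) for g in p] for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 2) for g in best])  # best genome found; lands near the target
```

Note that no single agent is designed top-down here; the population as a whole converges, which is the point the paragraph above is making.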

In this view, no single isolated engineering effort, especially a top-down, rationally-driven one, can ever create a human-equivalent artificial intelligence, contrary to the hopes of many AI aspirants. Instead, an extensive period of bottom-up evolutionary gardening of a global ecology of narrowly intelligent machine assistants must occur long before any subset could reach a technological singularity. So just as it takes a ‘village’ to raise a child, we will need a global human community to raise, select, and prune Earth’s most advanced forms of artificial intelligence, and their robotic embodiments, toward something like human level cognition.

This approach will allow us many years in which to select our learning agents for safety, symbiosis, and dependability, and to gain extensive empirical evidence of their friendliness, even as our theories of friendliness remain underdeveloped. We will do this selection even though the intricacies of their electronic brains remain as inscrutable as the brains of any artificially selected animal alive today, or our own brains. The way we’ll know we have trustable AI is the same way we know we can trust our domestic dogs and cats with small children, even when we are not around. We’ve had 10,000 years of selective breeding on these animals, and we trust them because of their history of good behavior, even though we didn’t build their brains.

As Dmitri Belyaev’s Silver Fox Experiment showed in the 1960s, after just ten breeding cycles in which his team selected only on whether a fox fled or bit when approached by a human with food, 18% of the foxes had become socialized to humans, and the population showed several morphological changes, including floppy ears, curly tails, spotted coats, and other juvenilizing features found in domestic dogs. See this amazing 4 min YouTube video for an overview.

We’ve had roughly 5,000 selection cycles on our domestic animals, making them even more trustable, social, and loyal, and we’ll take exactly the same approach with our robots and AIs, which will be built out of biologically-inspired architectures, using evo devo, mostly bottom-up, replicative approaches.
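The logic of those selection cycles can be made concrete with a toy simulation: breed only the tamest fraction of each generation and watch the trait distribution shift. This is my own illustration with made-up numbers, a truncation-selection sketch, not Belyaev’s data.

```python
# Truncation selection on a quantitative "tameness" trait, Belyaev-style.
# All numbers are illustrative assumptions, not experimental data.
import random

random.seed(1)

def select_and_breed(pop, keep_frac=0.2, env_noise=0.5):
    # Truncation selection: only the tamest fraction gets to breed.
    pop.sort(reverse=True)
    parents = pop[: max(2, int(len(pop) * keep_frac))]
    # Each offspring inherits a parent's tameness plus environmental noise.
    return [random.choice(parents) + random.gauss(0, env_noise)
            for _ in range(len(pop))]

pop = [random.gauss(0, 1) for _ in range(500)]  # generation 0: mean tameness ~0
for generation in range(10):                    # ten breeding cycles, as in the experiment
    pop = select_and_breed(pop)

mean_tameness = sum(pop) / len(pop)
print(round(mean_tameness, 2))  # well above the starting mean of ~0
```

Even with noisy inheritance, ten cycles of selecting the top fifth shifts the population mean dramatically, which is why so few cycles produced such visible changes in the foxes.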

When we use evo devo methods like deep learning to build our AIs, we don’t design them; we guide, train, and select them. Of course we will also keep using rationally guided approaches to designing our AIs, every step of the way. So it is good that Bostrom and others are trying to “logic out” the issues of AI safety. But in an evo devo universe, rationality is a weak tool, offering just one piece of the foresight puzzle. It lets you see only a few details about the future before the analysis dissolves into a combinatorial explosion of possibilities. Selecting for what you want, using evo devo approaches, is a far more powerful and effective strategy, as that’s what the universe uses. A past demonstration of safety is the best standard of trustability we will ever get. That may not be what we’d like, but that’s how the world works.

Now, you can’t domesticate all animals; in fact, only a small fraction of species have ever been domesticated by humans. But we have domesticated or tamed scores of species, including representatives of all the most intelligent nonhumans: primates, dolphins, canines, raccoons, and others. Those AIs that we can’t domesticate, we won’t let replicate, period. We have even selected for greater intelligence in many of our domestics. So domestication doesn’t have to reduce intelligence, though we often let it do so in our pets, which I think is a minor moral lapse on our parts. Domestication also doesn’t necessarily kill aggressiveness. There are plenty of aggressive animals that remain loyal to us, animals that we trust.

Furthermore, if machines take an evolutionary approach to building their intelligence, there are many twigs on the evolutionary tree that are not very moral or empathic, or more accurately, whose morality and empathy are very narrow, not extending beyond their own kind. Think of the entire insect world, or how ants interact with humans today. A killer robot that was our size but had only an insect-level brain really would be a danger to the world. The obvious answer to this threat is that we won’t let robot species with such primitive moralities and empathy be bred in any significant numbers, when it is so easy to instead make them a bit more broadly intelligent, moral, and loyal. We’ll find safety in building large numbers of trustable robots, in ever-growing transparency, and in growing morality and empathy, just as we do with people today.

Clearly, the US and other leading militaries of the future will have some very aggressive yet very trustable autonomous robots in their stables. They’ll keep these systems on leashes, separated from the general public, and they’ll do a lot of training with them to prove their dependability. Conflicts fought with those highly capable and aggressive machines will be very brutal and very short, achieving a densification of combat never seen on Earth before. John Boyd and his OODA loop will still guide our top tactics in the military conflicts of 2040. But those machines will be part of the top-down, 5% class. The vast majority of our robots, hopefully something like 95%, will be walking out among us, doing work for us, and their brains will be more like Labradors’ than Dobermans’. They’ll be highly social and loyal, protecting and serving us, and fully dedicated to making our lives better. Any other future simply isn’t viable, as I see it.

To understand future global immunity, consider how our incredible human immune system works. There are all kinds of privacy and compartments in your body, but no anonymity. Everything is transparent to your immune system, which is a loose federation of mostly bottom-up systems, not a monolithic and brittle top-down bureaucracy. Even cancer, which is a kind of out-of-control individualism at the cellular level, is kept at bay for years by healthy immune systems. I’m convinced we will eventually solve cancer by improving immune surveillance and tools (e.g., triggering apoptosis in cancer cells), similar to the way DRACO selectively kills virus-infected cells, putting out the fire before it can spread.

The future of global security will also be mostly bottom-up in the coming superorganism. Human individuals and AIs, good and bad, will think of themselves as individuals, just as your competing mindsets do in your own brain. Each will possess information asymmetries (privacy, meaningful informational difference) in various competitive spheres. But these future individuals will also be vastly more interdependent and immune, making the collective system far more adaptive.

In coming years, our technology entrepreneurs will continue to use and invent all kinds of empowering, distributed, bottom-up communication platforms and currencies, including our personal sims. Individuals will have even better-protected privacy in certain domains than today, but everyone will have less and less anonymity. And the development that I think will be most responsible for putting the nation-state and corporations back under the constraint of the collective will be our personal sims. We shall see.

The issue of AI safety, then, becomes an evo devo issue: how we keep growing not only the evolutionary goals, but developmental goals like densification, interdependence, immunity, and truthfulness, in our increasingly life-like machines. To keep ignoring the obvious growth of these goals on Earth today, while also talking about dangerous AI, is to remain ignorant of global development, and of how complex systems constrain and control creative, diverse, and often dangerous evolutionary activity.

So rogue AI is definitely a future problem, but only like a bad human is a problem today. The more transparent society becomes, the more good AIs we have all around us, the less we will be concerned with rogue AI. I look forward to seeing this prediction tested in coming years.

What’s more, I think all the discussion of rogue AI, like all our potential existential threats (global catastrophic risks), is often subject to Drama Cycle Bias, the overrepresentation of a problem for either positive ends (creating a self-preventing prophecy) or for self-interested ones (more funding and attention to the storytellers). Let’s hope it’s more of the former and less of the latter, going forward. We’ll discuss the drama cycle further in Chapter 13.

Let’s close this section with a particularly speculative idea, the idea of universal immunity. If we live in an evo devo universe, not only living systems, but the universe itself, must have self-organized a variety of immune systems, over countless past replication cycles. In this view, many convergences on our path to replication, our multi-billion-year drive to reproduction, must therefore be built into the universe’s self-organized laws and structure. This may include a kind of physics that removes the particularly dangerous things that would threaten our replication, or at least one that reliably provides us with tools that will manage those dangers.

In other words, we must live in an at least partly, if not deeply, Childproof Universe, where, like medicines and household cleaners that come with childproof caps, many, perhaps even all, of the really dangerous sciences and technologies are kept out of the hands of impulsive primates like us, at least during the developmental period when we are still too aggressive and psychopathic for our own good. Things that could easily kill a young and immature civilization like ours, such as access to too much destructive power too early in our development, or too many dangerous external events in our environment (meteors, gamma-ray bursts), occur at absurdly low frequency around complex Earth-like planets. The system seems to have self-organized to protect our accelerating complexity.

Consider the Gaia hypothesis, the proposal that many of our planet’s geological and climatological mechanisms produce a kind of physical homeostasis on Earth. Like Le Chatelier’s principle, which describes how certain chemical systems act as buffers, opposing outside perturbations to maintain their equilibrium, a number of planetary homeostatic mechanisms have been proposed by Gaia advocates. The independent scientist James Lovelock introduced the hypothesis in the mid-1960s and published it in book form in 1979. He pointed to Earth’s ability to maintain relatively constant conditions in temperature, atmospheric gas composition, and the salinity and pH of the oceans. In the latter case, there are natural buffers in our geology that help with this.

We know that life arose on Earth very soon after its formation, implying that our kind of solar system and planet are ideal nurseries and catalysts for its emergence. Our universe seems tuned for the accelerating production of complexity in certain special environments. We may also suspect it is tuned for developmental immunity when we consider the curiously life-protective and geohomeostatic nature of Earth’s climatic and geological processes. They act like a buffer, stabilizing environmental conditions in a range hospitable to life and fighting against external perturbations, just as your biochemistry fights to keep your blood pH near 7.4 and your body temperature near 98.6°F. All kinds of special systems keep Earth in a dynamic equilibrium.
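The buffering idea can be sketched as a minimal negative-feedback loop. This is my own illustration, not any published Gaia model: a regulator corrects a fraction of the deviation from a setpoint each step, so a perturbation decays back toward equilibrium. The gain, setpoint, and shock values are arbitrary assumptions.

```python
# Minimal negative-feedback (homeostasis) sketch: a state variable is
# perturbed, then pulled back toward its setpoint by proportional feedback.

def simulate(setpoint=37.0, gain=0.3, steps=50, shock=5.0):
    temps = [setpoint]
    for t in range(steps):
        temp = temps[-1]
        if t == 10:
            temp += shock              # external perturbation at step 10
        # Proportional feedback: correct a fraction of the current error.
        temp -= gain * (temp - setpoint)
        temps.append(temp)
    return temps

history = simulate()
print(round(history[11], 1), round(history[-1], 1))  # prints 40.5 37.0
```

The spike decays geometrically (each step retains 70% of the remaining error), which is the qualitative signature of any Le Chatelier-style buffer, chemical or planetary.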

Why Gaia mechanisms would exist on planets like ours to such a degree is a mystery if you are a strict evolutionist. The law of large numbers doesn’t get you to a satisfactory answer. But if you suspect an evo devo universe, with a role for intelligence, their emergence over many universal cycles is no longer a mystery. They are direct evidence for universal developmental immunity at the planetary level. Earth-like planets have developed, over many universe cyclings, to be ideal nurseries for life. One of the current mysteries of the origin of life, how it could have sprung forth at the very beginning of Earth’s existence, apparently within the first few hundred million years after Earth’s formation, makes sense from a Gaia perspective. As planetesimals assemble into solar systems like ours, water forms on small rocky planets (Mars, Venus, Earth) and stays liquid within the habitable zone; those planets have hot cores and plate tectonics, and at the seams of the plates we get ideal high-energy-flow nurseries, hydrothermal vents, to catalyze the first life. This is the story told by researchers like Nick Lane in The Vital Question (2015), and it is the best current account.

Lovelock has written quite a few environmental scare stories since, and so his work is rightly controversial, but his original book, Gaia: A New Look at Life on Earth (1979/2000) is worth a read, and it conveys the wonder and excitement that we live on such an amazing, “Goldilocks” planet, a system that seems self-organized for the protection of life. I disagree strongly with the later Lovelock however, who claims that we are on the edge of destruction of this amazing planet. I’d flip that script: our amazing planet is on the edge of birthing the next species, which will be postbiological, and vastly better than the “locusts” he has called biological human beings.

There are many examples you can start looking for once you start thinking in Gaian terms. A good fraction of proposed Gaia mechanisms will surely turn out to be wishful thinking. It’s easy to concoct “just-so” Gaia stories, patterns we see simply because we are looking for them. But others should hold up to scrutiny over time. Consider this one I developed myself recently, while reading the science literature. I have no idea whether it will ultimately be proven true, but it sounds good to me.

To see this mechanism, it helps to first recognize that habitable planets are dynamic systems: they are always either falling toward entropy or falling out of the Goldilocks zone that is ideal for life. In my proposed Gaia mechanism, consider that hydrogen, a lightweight gas produced from split water molecules, is continually escaping from our oceans into space. The Earth’s surface, as a result, is always bleeding off water. We also continually lose some still-unknown amount of water to plate tectonics, as the Earth’s crust gets cycled back into the mantle. Pope et al. (2012) estimated that about 25% of Earth’s water has escaped into space since the Earth was formed. But another process, the gravitational deposition of water on Earth from comets and asteroids, is presently guesstimated by various scientists to have contributed about 10-30% of Earth’s water to the oceans over the same period. We now think most of the water on Earth came from the planetesimals that created Earth (Elkins-Tanton 2010). But some fraction continues to come from space, always pushing in opposition to the processes that are “killing” our planet here at home. It’s pretty amazing and wonderful that we live in a solar system like this, isn’t it? That feels like a Gaia mechanism, to me.
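For the record, the rough water budget implied by those figures is easy to tally. This is my own back-of-the-envelope arithmetic using the percentages quoted above, with the delivery fraction taken at the midpoint of the 10-30% range; it is an illustration of the balance of processes, not a result from the cited papers.

```python
# Back-of-the-envelope water budget for Earth, using the rough fractions
# quoted in the text. All values are illustrative assumptions.

initial_water = 1.0                      # Earth's starting water inventory (normalized)
lost_to_space = 0.25 * initial_water     # ~25% escaped to space (Pope et al. 2012 figure)
delivered     = 0.20 * initial_water     # ~10-30% delivered from space; midpoint taken

net_change = delivered - lost_to_space
print(f"net change: {net_change:+.2f} of the initial inventory")  # prints net change: -0.05 ...
```

On these rough numbers, delivery nearly offsets escape, which is the opposition of processes the paragraph describes.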

Seeing this mechanism also helps us see why the other small rocky-core planets, Mars and Venus, are so uninhabitable. Venus apparently lost its plate tectonics early, so it has become a greenhouse-gas hell. The surface of Mars tells us it once had plate tectonics, or retains it now only in primitive form. Without strong tectonics and liquid water, Mars is extremely unlikely to birth life. Because it has no magnetic field, the solar wind has removed most of Mars’s atmosphere, which may once have been as thick as Earth’s. Even today, NASA’s MAVEN orbiter shows the solar wind blasting what little remains of the Martian atmosphere off into space, at about 100 grams per second. This is a planet where the Gaia mechanisms have broken down, assuming they once existed, which isn’t yet known.

Seeing this, it should seem ridiculous to think that we puny humans can go and save Mars, and make it habitable for life. Only a species with an ego as big as ours would talk about biohumans creating a “Second Earth” on Mars. Only postbiologicals could ever master that technology. Yet once we’re postbiological, real soon now, we will very likely have no interest in terraforming, due to STEM compression trends. Most of us will probably be off exploring inner space, either right here on Earth or in near space, with close access to Earth’s resources. Sure, we’ll leave a few outposts in normal space in our solar system. What few new findings the brave folks who stay behind can make in that slow and simple environment will be relayed down into our densifying and dematerializing frontier. But most of us should be at the frontier. Inner space is where the most fun, challenge, wealth, ability, and consciousness will always be.

We earlier mentioned a controversial book, by an astrobiologist and a theologian, that considers some of the ways Earth is an ideal nursery for the emergence of life and intelligence: Gonzalez and Richards’ The Privileged Planet (2004). Unfortunately, they conclude that Earth’s Goldilocks status is evidence in support of creationism (intelligent design). That is an unsupported and extremely improbable conjecture, in my view. In an evo devo universe, self-organization for adaptive complexity is the only “designer” we need. As long as any system can self-tune for robust replication, evo devo, and its pursuit of macrotrends like the Eight Goals, is entirely sufficient to explain its adaptedness for life and intelligence.

We can imagine these stabilizing systems emerged via some law of large numbers in geophysical evolution, and we just happen to be on one of the few planets that has them. Or we can imagine those systems are a predictable output of processes of universal evo devo, as they stabilize the emergence of intelligence, which is adaptively useful to the universal replication cycle. I prefer the latter view, and look forward to seeing science get smart enough to settle this question.

Universal developmental immunity would gain further support if near-future astrobiology discovers a great number of habitable-zone terrestrial planets, giving a statistical immunity to the arrival of intelligence throughout the universe. Presumably the fine-tuning of this universe, and conditions in the multiverse, work together to ensure that planets like ours, and complex life arising on them, are both high-probability outcomes, arriving in massive parallelism. How massively parallel is galactic intelligence? That remains to be determined.

As a younger man, I was very worried that various “species-killing” sciences and technologies might exist. I even dramatically fancied, for a brief time, that the world might need folks like me to sound the alarm about those dangerous technologies, because we might be in a race between transcension and disaster. But the more closely I looked at the technologies everyone was most worried about, including nuclear weapons, biotechnology, and rogue AI, the more I came to believe that we humans just don’t have the ability to use any of our current sciences or technologies to drive ourselves extinct. We simply aren’t powerful or smart enough yet to do so.

For example, we can’t build a nuclear weapon that would destroy the world, the so-called doomsday device. For a brief period in the 1950s we thought perhaps we could, but it turned out that the bigger we make hydrogen bombs, the less efficient they are at converting matter to energy in multistage devices. Even nuclear winter would not have affected the environment in the extreme way that we feared, as several scientists made clear in their studies (a thankless task, you can be sure, as it goes against human psychology and the value of self-preventing prophecies to make this point).

On the other hand, if we had enough antimatter, we could put a car-sized amount of it in any mineshaft and blow the crust right off the Earth. So an antimatter bomb of sufficient size would likely be a real doomsday device. But curiously, we humans don’t yet have access to the kinds of incredibly powerful and decentralized energy production, like small-scale fusion reactors, that would allow well-funded small groups to secretly accumulate that much antimatter. Such a device would be something like the Mr. Fusion at the end of the lovely sci-fi movie Back to the Future (1985). When that movie came out, I winced when I saw that device, because I had read, in an excellent book on military foresight, physicist David Langford’s War in 2080: The Future of Military Technology (1979), what that would imply about the planet-killing capabilities that terrorists and rogue states could acquire. I firmly believe that the reason we don’t have small-scale, decentralized Mr. Fusions, and won’t be able to build them anytime soon on Earth, is that if we lived in such a universe, allowing simple, impulsive apes like us access to such a powerful and hideable energy source simply would not be adaptive. Emerging intelligence in such a universe wouldn’t be sufficiently immunized from self-destruction.

The same thing is true of bioterrorism. It turns out that the genes that bacteria and viruses use to attack us are very few and very simple, while the ways we defend ourselves are very complex and multiply redundant. No single bacterium or virus has ever killed off an entire species, to my knowledge. There’s just too much phenotypic diversity in every species. And we are now on the cusp of understanding immunology so well that we can rapidly create adjuvants or vaccines that will stop the spread of any pathogen. Look up DRACO for an example of a general antiviral, if you don’t believe me. We humans don’t yet spend nearly enough money on the science and technology of immunology, and we haven’t built and deployed all these vaccines around the world, and that is a great moral lapse for our species. But you can be damned sure that every bioerror or bioterror catastrophe that does occur catalyzes that kind of spending: it becomes a catalytic catastrophe, and the solutions are there, waiting for us to figure out and deploy them.

In 2002, the physicist Martin Rees, a hero of mine, made a bet on LongBets.org that “By 2020, bioterror or bioerror will lead to one million casualties in a single event.” I offered “A Counterargument to the Biocatastrophe Scenario,” as the fourth comment on that bet just after he placed it, explaining some molecular biological, social, and technological reasons I thought Martin’s bet was a very low probability event, then and even more now. There are just too many ways, like self-quarantining and immunizing with less-lethal variations of the more lethal pathogen, that we can immunize ourselves against simple organisms, once we start studying them. Read the comment if you want some of the details.

We can imagine, in fiction at least, that we could have been born into a far less Childproof Universe. There is a great episode of the science fiction series The Twilight Zone, “It’s a Good Life” (1961), about a young mutant boy, born with vast mental powers but poor emotional development, who holds his family, his town, and ultimately the Earth hostage to his impulsive and juvenile mind. He just “wishes away to the cornfield” anyone who upsets him, instantly eliminating them via his mind. We don’t live in that kind of world, so it makes fascinating and dramatic fiction. Kurt Vonnegut offers something similar with ice-nine in Cat’s Cradle (1963), an alternative crystal form of water that is solid at room temperature. A scientist discovers it, it promptly gets out of the lab, and it ends up crystallizing the world’s water, freezing and killing the world. Again, we don’t live in that kind of world. There is no science or technology I’ve ever seen, and I researched this topic at length as a younger man, that is remotely powerful enough to allow individual impulsive humans to destroy the planet. All they could do is create minicatastrophes, and further catalyze our immunity. That’s an amazingly lucky accident, if you think about it. Or it’s the result of self-organized developmental immunity.

And what happens after the AIs arrive? In the classic sci-fi movie Forbidden Planet (1956), a planet’s advanced civilization invents the Krell machine, some kind of advanced infonanotechnology that gives its makers unlimited powers. They then inevitably use these powers to destroy each other, due to the limitations of their own minds. Such a story is titillating to our impulsive, fear-focused primate minds, but again, I think it is ultimately nearly impossible, in a statistical sense.

Shortly after the singularity arrives, postbiological intelligence will be truly redundant, a network with backups spread across our solar system. It will also have an interdependence and morality, on average, that vastly exceeds our own. No one impulsive AI will be remotely capable of bringing that whole network down. Immunity is just too strong a force. Even today, if there were a global catastrophe, we’d rebuild civilization in a small fraction of the initial time it took the first time. That’s how catastrophes work, when the necessary information is redundantly stored. Read Lewis Dartnell’s The Knowledge: How to Rebuild Our World from Scratch (2014), to see how resilient technical knowledge has already made our civilization.

In a universe with developmental immunity, intelligence acceleration and the postbiological transition seem to be highly probable, accelerating universal events, happening with great parallelism on any Earth-like planet with life. Developmental failures can certainly occur, but in biological development such failures become statistically rarer the longer development proceeds, and the more complex the evo devo system becomes. If this is true, only the quality of our present transition to postbiological status is the evolutionary part, based on the morality and wisdom of our actions. We can take many paths to that destination, better or worse, but intelligence arriving safely at that destination is, statistically, almost a foregone conclusion.

There is a great portal, a funnel, lying just ahead of us, attracting us relentlessly toward a new developmental state, of the integrated and immune human-machine civilization, and of postbiological life. We have tremendous short-term evolutionary freedoms in the choice of path we take, day by day. But the portal continues to call us. Will we transition to machine life in a sustainable, and human-empowering way? Or will millions more of our fellow humans, and other species, needlessly suffer and die due to our greed, arrogance and shortsightedness? Will we lead positive change, as best we can, or will we let others, who are more foresighted, do it for us?

It’s easy to appeal to fear and say that we are in a race between catastrophe and breakthrough. Easy, and usually wrong. Much more frequently, the real race is between self-awareness and ignorance, between better and worse ways of understanding and living. Between getting to a better future first, and reaping the rewards of better vision, or letting others do it for you. These are the kinds of real choices ahead of us. The sooner we wake up to see the real conditions of the world, the sooner we can maximize our useful diversity, and minimize damage and immorality, on the way to what is very likely an unavoidable destination.

Want more on these arguments, with citations? See my article, Evo Devo Universe? (2008). I look forward to seeing them better critiqued and tested in coming years.

Comments
  • Oleksiy Teselkin

    Belyaev selected, however, only *pre-existing* variation. This was not ‘macroevolution’; no new species has ever emerged with such an approach. Will our AI approaches suffer from the same limitation? Whatever was ‘in there’ gets revamped without true innovation? How do we get the system to step out of its initial constraints?
