Chapter 3. Evo Devo Foresight: Unpredictable and Predictable Futures

Eight Abilities and Goals of Adaptive Systems: A Model of Progress

With our brief big picture survey of emergence lists done, we can return to a very important question. What is the nature of progress? How can we better define it, and seek to advance it in our lives? This is no easy question.

If we live in an evolutionary and developmental universe, we can say that progress involves some kind of balanced advancement of both of these fundamental processes, abilities, goals, and values, in service to greater general adaptiveness for complex systems. We can also say that this advancement will tend to occur in brief bursts and longer plateaus, via punctuated equilibrium, like evolution itself. Finally, we can say that some of this advancement will be hierarchical, like development, and that the leading edge of hierarchical advancement will happen in an accelerating manner, with progressively briefer plateaus before the next punctuation.

In Chapter 2, we explored how our universe is “running up”, creating negentropy, at the same time as it is “running down”, creating ever-increasing entropy, ultimately resulting in its death and need to be recycled. Its information potential is always going up, and its energy potential is always going down. This observation isn’t yet captured in any widely-agreed-upon universal theory of information, but it seems obvious to any high schooler today.
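
As a minimal illustrative sketch of how both trends can hold at once (standard thermodynamics, not a formula from this Guide): the second law constrains only the total entropy of a system plus its surroundings, so a local pocket of order can keep growing as long as it exports at least as much entropy to its environment.

```latex
% Second law for a locally ordering ("running up") system inside a larger
% ("running down") environment:
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\ge\; 0
% A local decrease (negentropy creation), \Delta S_{\text{system}} < 0, is
% permitted so long as the surroundings gain at least as much entropy.
```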

To understand our universe’s past, present, and future, we will need to learn a bit about its most interesting activity: the accelerating creation of adaptive complexity. Scientists use over 40 definitions and measures of complexity, but physicist Seth Lloyd reduces them to three basic types: Difficulty of Description of the System, Difficulty of Creation of the System, and Degree of Organization of the System. Complex systems are hard to describe and difficult to create, and they have many types of structural and functional organization within them. All interesting complex systems also strive to adapt to their environment in order to survive and perpetuate their complexity. Thus as their complexity increases they develop increasingly intricate models, goals, purposes, and eventually minds.
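
For readers who want something more concrete, here is a hedged sample of standard formal quantities often cited under each of Lloyd’s three headings. This is my illustrative selection of textbook measures, not Lloyd’s exact list.

```latex
% Difficulty of description: e.g., Shannon entropy of a source X, or the
% algorithmic (Kolmogorov) complexity of an object x on a universal machine U.
H(X) = -\sum_i p_i \log_2 p_i
\qquad
K_U(x) = \min\{\, |p| : U(p) = x \,\}

% Difficulty of creation: e.g., Bennett's logical depth -- informally, the
% run time of the shortest program that outputs x.

% Degree of organization: e.g., the mutual information shared by two parts
% X and Y of a system.
I(X;Y) = H(X) + H(Y) - H(X,Y)
```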

There are many frames from which to view universal complexity, and the ones we use in large part determine what lessons we can learn. Unfortunately, our science and information theory are not yet advanced enough to prove or disprove concepts of universal directionality, purpose, or destiny in our complex systems. Thus what follows is educated speculation. Whether you agree with me or not, please try to understand my framework, and to form an opinion about it. Your opinion should be held tentatively, as mine is, awaiting further evidence and theory.

Yet having even tentative ideas on these issues can change your life, and the right ideas can greatly improve your effectiveness. So let’s see what we can see.

We propose that all adaptive complexity in our universe, and in life on Earth since its emergence roughly four billion years ago, has been engaged in a rather obvious set of at least Eight Universal Processes (abilities, goals, drives, purposes, trends). Everyone who has bet against these Eight Abilities growing in society, over the long term, has ended up being wrong. They are all continually increasing in the most successful complex adaptive systems.

In leading systems, the universe is continually creating, and seeking to create:

  1. More Intelligence (dematerialization, virtualization, modeling, consciousness).
  2. More Power (densification, wealth, speed, efficiency, STEM compression).
  3. More Diversity (independence, information, individuation, difference).
  4. More Morality (interdependence, love, empathy, compassion).
  5. More Creativity (innovation) individually and collectively.
  6. More Security (immunity, protection) for evolved complexity.
  7. More Freedom (indeterminacy), as long as it doesn’t threaten protection.
  8. More Truth (inertia, optimization, accuracy) in the knowledge acquired.

These Eight Abilities are thus a starter set of candidates for Universal Goals of complex adaptive systems. The broader term Universal Goals acknowledges that there are certainly more such goals than just these eight. But when thinking about the adaptive purpose of evo devo processes, these eight seem particularly easy to derive. So until we have better systems theory, we’ll use the Eight Goals phrase as a stand-in for the idea of universal goals or values.

Simplifying further, we can regroup these eight goals into two sets of four, the first being more evolutionary and the second more developmental features of adaptive systems, as follows:

Evolutionary Goals: Intelligence, Diversity, Creativity, Freedom.

Developmental Goals: Power, Morality, Security, Truth.

It is often helpful to think of the universe as an evo devo system pursuing both sets of these goals. As we’ll propose, growing the evolutionary abilities described above (Intelligence, Diversity, Creativity, Freedom) seems to be the most effective way to get more of the developmental abilities (Power, Morality, Security, Truth). Likewise, improving these developmental abilities, over successive replications of complex systems, may be the most effective way to improve the evolutionary abilities.

We can think of Dematerialization narrowly, as an increase in intelligence, or we can think of it broadly, as the growth of all four of these evolutionary goals. Likewise, we can think of Densification narrowly, as an increase in power, or broadly, as an increase in all four of these developmental goals. Speaking broadly then, we can describe the Eight Goals as simply Dematerialization and Densification, or “D&D,” twin races to virtual and physical “inner space”.

When we want to remind people of the goal or trend that is most prevalent in society, as in the 95/5 Rule and the expression “evo devo”, it makes the most sense to list the evolutionary goals first, as in the list above, or “dematerialization and densification.” Alternatively, when we want to focus on good foresight practice, we often list the developmental goals first. Understanding densification trends helps us think better about the distribution of less predictable things, such as the many diverse forms of dematerialization, the conditions where virtualization or mind can augment and substitute for physical processes. That convention gives “densification and dematerialization.” We’ll use both conventions in this Guide.

The Six I’s, listed below, will be discussed shortly. As we’ll see later, there is also a “seventh I”, Incompleteness, that is also key to the nature and future of evo devo systems. Incompleteness is not an ability or goal, but rather a persistent state of all goal-seeking systems. But it seems so important to remember that it deserves to be listed first, ahead of the Eight Abilities. So the full set of factors that may be important to understanding evo devo systems can be remembered as “Incompleteness, D&D, and the Six I’s.”

We also stated that adaptive Complexity, Computation, and Intelligence are among the top quantities evo devo systems seem driven to maximize. These three variables are occasionally abbreviated as “CCI” in this Guide. In a nutshell then, universal progress looks like the creation of accelerating islands of ever more adaptive CCI. Life and civilization are a history of building ever more local, dense, efficient, and intricate kinds of complexity, computation, and intelligence, including emotion, cognition, mind, and consciousness, in a sea of ever more disordered and used-up surroundings.

In evo devo language, the nature of progress in our universe is the evolutionary development of ever more adaptive Complexity, Computation, and Intelligence. Leading adaptive systems grow more complex, have more ways of computing, and become more intelligent and conscious as the universe evolves and develops.

CCI perspectives offer us three different views on adaptive systems, from three complementary academic perspectives: complexity science, theory of computation, and theory of mind.

These three perspectives are certainly not the only way we can view and analyze progress. We can, for example, talk about improving various social values, morality, love and compassion, growing scientific knowledge, technical abilities, resiliency, safety, and all sorts of other desirable outcomes, and the minimization of a host of undesirable outcomes. But CCI perspectives, and evolutionary development, offer us a powerful and simple way to consider social values in a universal foresight frame. So they seem to be a great place to start talking broadly about progress.

The physicist John Wheeler is among those who have proposed that our universe is fundamentally computational, that “It” emerges from “Bit”, via quantum mechanics. Dematerialization includes that perspective, and says something more: the more “bits” we have, the more we focus on them, over atoms. Densification tells us we use atoms ever more exclusively for creating more bits.

The philosopher Alfred North Whitehead, the mentor of my mentor, systems theorist James Grier Miller, was perhaps the most important 20th century advocate of the hypothesis of panpsychism, the idea that “all adaptive matter models external reality”, to the greatest extent that it can. In other words, adaptive systems have a mental dimension, and this mental dimension grows with the complexity of the physical system. Whether we call that mental dimension Intelligence, Dematerialization, Representation, Virtualization, Simulation, or something else, conserving and advancing it appears to be a basic drive of the universe.

If we choose just one of the CCI variables to represent this set, intelligence may be the most useful for managers to discuss. In perhaps the simplest terms, intelligence is what our accelerating and life-friendly universe seems to be concerned with maximizing, as Ray Kurzweil and other cosmic futurists have said. Growing adaptive intelligence, in individuals, on teams, and in society, seems like the best single statement of the goal of evo devo systems. By definition, having more adaptive intelligence allows any complex system to better survive and thrive, no matter what its environment throws at it.

Intelligence can be understood as a set of the most useful types of simulation, virtualization, or most fundamentally, informational representation, and there are many useful representation systems, like emotion, cognition, and consciousness. For a classic and excellent introduction to this perspective, see Fischler and Firschein’s Intelligence (1987), which compares the representation systems used by such different complex adaptive structures as eyes, brains, and computers.

Intelligence grows exponentially, and is now doing so particularly rapidly in machine intelligence, as measured by its performance on various benchmarks. It won’t be long before the whole world realizes that intelligent machines are the most important new arrival on Earth in our entire history. They are one of our top evo devo goals, on all Earth-like planets, whether we consciously recognize it or not.

When we see our universe and its intelligences as not only evolutionary but also developmental systems, we move beyond the “random accident”, “purposeless” views of universal change offered by many current scholars to explain reality. For an example, see Alan Lightman’s Sidney Award-winning essay, “The Accidental Universe,” Harper’s, 2011. We also move beyond our sterile “null hypothesis” perspective on the likely relation between the laws of our physical universe and its accelerating intelligences.

We then admit that a subset of future intelligence processes and destinations are likely to be statistically implicit in universal physics, waiting patiently to emerge as our local complexity scales. We suspect our universe has self-organized to protect and accelerate the emergence of an astronomically large number of multilocal intelligences, making it the most massively parallel computational system in, well … the Universe.  We understand that our civilization’s future intelligence very likely plays a nonrandom role in the survival and structure of the universe that generates it, just as it does in replicating evo-devo biology.

If adaptive complexity, computation, or intelligence (for short, “intelligence”) is the thing that replicating evo devo complex adaptive systems like humans and their machines are working to maximize, what abilities, goals, and tradeoffs will be pursued in its growth? We propose there are at least eight goals, four evolutionary and four developmental, that we can easily identify as being pursued in intelligent systems.

The House of Adaptiveness: Eight Key Abilities and Goals of Adaptive Systems – “D&D and the Six I’s”

We call these the Eight Abilities and Goals of Adaptive Systems, and represent them as goal pairs in the graphical “house” mnemonic at right. Many of these goals are in evo devo opposition to and occasional conflict with each other, but presumably they all emerged to serve general adaptiveness of the organism, universe, or other complex system. That adaptiveness we place at the “peak” of the house.

Note that these goal pairs are simply a subset of the much larger set of evo devo word pairs we gave earlier in the chapter. Yet we’ve tried to pick a particularly useful subset. The “top floor” of the house is occupied by dematerialization and densification, the two most fundamental drivers of accelerating change. The remaining six goal pairs seem particularly fundamental to adaptation, and all eight of these have strong homology with cultural values studies, as we’ll see later. They seem to me to be a good “first framework” for the minimum set of abilities, goals and values that every leader should have in mind when seeking to guide a team. Good leaders strive toward measurably better futures, along these eight dimensions at least.

We can call these Eight Abilities and Goals by the common names given above, or by their more abstract names, D&D and the Six I’s, as follows:

  1. Dematerialization (virtualization, modeling, consciousness).
  2. Densification (power, STEM compression, wealth, speed, efficiency, density).
  3. Independence (diversity, information, individuation, difference).
  4. Interdependence (morality, love, empathy, compassion).
  5. Innovation (creativity, novelty, experiment, disruption).
  6. Immunity (protection, security, risk-management, sustainability).
  7. Indeterminacy (freedom, uncertainty, multicapability, bottom-upness).
  8. Inertia (truth, optimization, accuracy, top-downness).

Whatever we call them, we hope the Eight Abilities and Goals model is a small step toward a more evo devo theory of CCI. The model is broadly applicable to all four domains of foresight, as we will see. If it turns out to be at least roughly correct, or at least moving in the right direction, it will have been well worth the effort to develop, and worth your effort to consider and critique.

We’ve already discussed the first goal pair, Densification and Dematerialization. We saw those as Physical and Virtual forms of a “Race to Inner Space” in Chapter 2. We previously argued that D&D is the simplest and most useful way to understand how and why accelerating change occurs in universes with our special physics.

Let’s now explore all Eight Abilities and Goals in a bit more detail, so we can better see, predict, and appreciate accelerating adaptive intelligence in evo devo terms. We’ll also offer a few more names and descriptions for each of the eight goals in turn, so you can better see them and guide them as they grow in yourself, your teams, institutions, and society.

As in the name evo devo, we will list the evolutionary version of each goal pair first, as the 95/5 Rule tells us the first process in each pair is responsible for the most change by far, and is always the easiest to see. But remember that great foresight usually starts by trying to see and forecast the developmental version of each of these goal pairs first, which may be harder to see but is typically much more predictable. Only after thinking hard about a developmental goal should we contemplate its evolutionary partner, which is far more diverse, experimental, scenario-laden, and possibility oriented. In other words, developmental thinking always helps to constrain the evolutionary thinking that is worth doing.

  1. Dematerialization/Intelligence/Virtualization/Simulation is perhaps the most important feature of evolutionary systems. They create informational representations or simulations of the world. As an evolutionary process, intelligence is experimental and diversity-creating, but not necessarily adaptive. Think of all the evil or dysfunctional intelligences in human history. It also isn’t necessarily more complex. Think of the intelligences of the many different species on Earth. Many have simplified (think of a parasite) to find their best local niche. But curiously, the most adaptive local simulations increasingly augment and substitute for processes in the physical world. The typical human mind is a great example of that. A smartphone is as well. Just as there are billions of human minds and smartphones, there are likely billions of very dematerialized civilizations in our universe as well. In this view, the universe isn’t exactly a simulation, as in Moravec and Bostrom’s interesting Simulation Argument, but is a massively parallel simulation system, with each local civilization (simulation) being limited and incomplete. Only the most adaptive intelligences become more complex, computational, and intelligent over time. But just like Darwin’s tree of life, the more dematerialization we get on Earth, the greater the predictable diversity of intelligences that exists. It is also predictable that we will all spend more time “in our minds” rather than “in our bodies,” doing more thinking and less doing, as time goes on.
  2. Densification/Intensification/Power/Wealth/STEM Compression (Efficiency & Density) is perhaps the most important predictable feature of leading developmental systems. They keep accelerating in their resource density and efficiency. Think of the computational densification that occurs in human synapses as a child grows from baby to adult. Or the increasing efficiency of movement and purpose in older versus younger individuals, and the increasing economic value of labor in a well-developing worker in any trade. Think also of all the social and technological densifications (STEM compression) described in Chapter 2, as leading societies have moved to increasingly fast and efficient forms of wealth, from barter to gold to currency to bits, and moved from agriculture to empires to industrial cities to corporate economies to our increasingly intelligent machines. It is easy to densify a system in a way that reduces its intelligence. Conversely, you can make a system more intelligent without greatly improving its resource efficiency or density. But the ideal multisolve is any strategy that accelerates both goals at the same time.
  3. Independence/Individuation/Information/Specialization/Diversity is perhaps the most fundamental, if not the most important, feature of evolutionary systems. Think of Darwin’s tree of life, and all the diversity of its species. Information is perhaps the most basic kind of diversity. Information has famously been called “a difference that makes a difference” (Gregory Bateson). Information generation is perhaps the easiest to measure of all of these variables, at least in digital systems, as bits (see the short sketch after this list). Information also grows exponentially, as human and machine civilization continue to advance on Earth. When we think of Big Data, we can think of digital information as the “soil” or “pattern” that needs to exist before any pattern recognition system (intelligence) can arise. Every bit of information is kind of like an individual: it is something unique and specialized. When we grow specialization and division of labor in society, we are growing not only evolutionary diversity, but information, which can lead us to greater forms of complexity, computation, and intelligence.
  4. Interdependence/Integration/Lovingkindness/Morality/Homogeneity is perhaps the most fundamental feature of developmental systems, if not the most important. Think of the way a funnel converges a system toward a set of stable relationships, allowing it to become an ever more integrated and homogeneous whole. The development of a tissue, organ, or organism from a group of independent cells, the emergence of lovingkindness, human morality, and social norms in collectives of previously independent individuals and tribes, and the advance of globalization and technological linkages between all people are all forms of growing interdependence during biological, social, and technological development. Yet interdependence also reduces information growth, by limiting the probabilities of many types of outcomes. It funnels and constrains, rather than branches and diverges, the complex system. Clearly, a balance must be struck between information generation (or if you like, individuation/independence) and interdependence/integration in all complex systems. Pursue either of these goals too exclusively and you’ll threaten the other. Think of how too much nationalism or collectivism always kills individuation and personal growth, or how too much individuation kills social interdependence and stops mass collaboration to do great things at scale. Great managers find strategies that grow information generation and interdependence at the same time.
  5. Innovation/Novelty/Experimentation/Creativity/Disruption is a third fundamental feature of evolutionary systems. Innovation is any experimentation (creative invention) that has proven at least a little bit socially useful. These are ideas, behaviors, or technologies that have spread at least a bit within a population. Risk-taking experimentation has clearly accelerated with the rise of human intelligence, and the more you empower any complex system to run experiments, and to do rationally-guided trial and error learning, the more chance it has of growing adaptive intelligence. But as an evolutionary system, lots of mistakes, and a few catastrophes, are inevitably going to occur along the way. As psychologist Alison Gopnik tells us, experiments are how children learn about the world. She notes that until we build biologically-inspired AIs and robots that are flexible enough to run constant creative experiments, both in their minds and in the world, they won’t be able to learn like us, and we have no reasonable grounds to expect the singularity. Of course, we’re finally beginning to build machines that innovate more like we do, and I’d predict that everyone in the computer science community will see this as the best way forward in coming years.
  6. Immunity/Security/Risk Management/Protection/Sustainability is a third fundamental feature of developmental systems. All complex systems have not only intelligence but also various types of immune systems, designed to protect that intelligence from disruption. In human beings, our immune systems are the second most complex systems, in terms of the number of genes involved in their creation and maintenance, after our brains. Yet we often forget this illuminating fact when we design our teams, organizations, and institutional processes. Risk management and security are usually afterthoughts. Look at all the pain the personal computer industry inflicted on the world when it released computers that had no immune systems, inviting every curious high school student to become a black hat hacker. In organizations and societies, immune systems are all those processes and features we use to ensure security, to defend ourselves, and to police rulebreakers. Yet as important as these systems are, we see how they also must be balanced against their evolutionary counterparts. Too much innovation/experimentation will threaten immunity/protection, and vice versa. Think of autoimmune diseases, or of every degenerative process in human beings that is accelerated by inflammation (a poorly intelligent, nonspecific immune response). Think also of out-of-control corporate growth, which threatens the environment, or too much nanny-state regulation or eco-extremism, which kills innovation. As every large organization knows, too much protection will keep a system from taking risks and finding valuable new things. Too little will expose it to catastrophe. Again, great managers learn how to advance both team and organizational innovation and security. We want “sustainable innovation”, a paradox that describes the competing drives of evo devo systems.
  7. Indeterminacy/Freedom/Uncertainty/Multicapability/Bottom-Upness is a fourth key feature of evolutionary systems. In dynamics, the term degrees of freedom describes the number of independent ways a system can move, and in statistics, the number of variables that remain free to vary in a complex system. We can call the freedom available to a system its indeterminacy, or multicapability. The degree to which any system is run bottom-up, with local agents determining their own courses of action, is one key measure of how much potential freedom or indeterminacy that system has. The more bottom-up it is, the freer and more creative it can be. This doesn’t mean it is always freer in its actual behaviors. It may be highly integrated and restrained by, for example, its morality, from doing various actions. But the more bottom-up a system is, the more uncertainty it has, and the greater the number of possible states, thoughts, or actions it can occupy.
  8. Inertia/Truth/Accuracy/Optimization/Top-Downness is a fourth key feature of developmental systems. They are always searching for truth, which we can describe as a kind of convergent optimization of informational possibilities around one high-probability or inevitable quantity of information. Top-downness, an approach which tries to optimize rulesets for the whole system, is one of the easiest ways to see how much developmental control is being attempted in a complex system. Getting the right top-down rules in place creates a predictable inertia in the complex system, chaining it to a developmental life cycle. As with the other goal pairs, a balance between top-down and bottom-up control needs to be found by foresighted managers. The 95/5 Rule tells us that the most effective systems run almost entirely in bottom-up mode, with the exception of critical top-down processes and constraints. When should you run your own mind, your organization, or your society in a top-down versus a bottom-up mode? Both are quite valuable, at different times and for different reasons. In conditions of threat, we predictably turn to top-down control in our societies, at least at first. Think of post-9/11 America for a recent example. In conditions of plenty and relative safety, we predictably swing back to more bottom-upness and freedom for ourselves and our societies. To accomplish things at scale, we always need some top-down hierarchy. Think of the leaders of the Occupy movement, which fizzled with their utopian nonhierarchical approach, versus organizations like Syriza in Greece, which had very similar goals but a different approach to hierarchy and strategy. Perhaps most importantly, because Greece itself is a parliamentary democracy, a more constitutionally bottom-up-oriented country, Syriza quickly became a leading political party. Again, great managers learn to balance top-downness against bottom-upness in the right measure, advancing both freedom and optimization.
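
To make the “information as bits” idea in goal 3, and the “number of possible states” idea in goal 7, a bit more concrete, here is a minimal illustrative Python sketch. The helper name shannon_entropy_bits is my own, not a term from the Guide.

```python
import math
from collections import Counter

def shannon_entropy_bits(symbols):
    """Shannon information (in bits per symbol) of an observed sequence.

    A crude, illustrative proxy for the information/diversity goal:
    more distinct, more evenly used symbols means more bits per symbol.
    """
    counts = Counter(symbols)
    total = sum(counts.values())
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A homogeneous (highly interdependent, low-diversity) sequence:
print(shannon_entropy_bits("aaaaaaaa"))  # 0.0 bits per symbol
# A maximally diverse (highly individuated) sequence of 8 distinct symbols:
print(shannon_entropy_bits("abcdefgh"))  # 3.0 bits per symbol
# Maximum possible entropy for N equally likely states, a rough stand-in
# for the indeterminacy (goal 7) of a system with N accessible states:
print(math.log2(8))                      # 3.0 bits
```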

One definition of social progress that can help us assess the value of any scientific, technological, economic, policy, or social change is to ask whether that new technology, trait, behavior, or change will make us more adaptive as a global collective, without reducing our individual and social (subgroup) adaptiveness. This isn’t a very rigorous definition, but it’s a start, as it acknowledges the tension between growing individuation and interdependence. When we add the requirement of advances in any of the Eight Abilities and Goals, without regression on any of the others, in ways that appear to grow adaptiveness, I think we have an even better starter definition for social progress.
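
One hedged way to restate that starter definition more formally (my restatement, not a formula from the Guide) is as a Pareto improvement across the eight ability dimensions:

```latex
% Let a = (a_1, \dots, a_8) score a system on the Eight Abilities before a
% change, and a' = (a'_1, \dots, a'_8) after it. Call the change progress if
a'_i \ge a_i \ \text{for all } i \in \{1,\dots,8\}, \qquad a'_j > a_j \ \text{for at least one } j
% i.e., no ability regresses and at least one advances (a Pareto improvement).
```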

It’s pretty obvious that any one of these Eight Abilities can be overvalued, at the cost of the others. Consider Immunity (security). It’s easy to overprotect a child, an adult, or an organization, or to overchallenge them in an effort to build immunity. It’s also easy to slip into extremism. Extreme top-down rules, like socially shunning anyone who tries to leave a group (Mormons), or even violently attacking the leavers (chimpanzees, the Mafia, drug cartels), will make that group much more inertial and immune, but they reduce moral interdependence with the rest of society, and lower freedom and innovation. Many other examples of imbalance can be given.

Consider also Inertia (truthtelling). Good foresight is always trying to find and tell new possible truths, and to grow the sphere of the known, even at the risk of social consequences to the teller. This Guide seeks to advance our truthful understanding of who we are, as physical and informational systems, and of the universe that we live in. That is why we’ve begun our story with Chapter 2 and this chapter, exponential and evo devo foresight, and the speculations they contain. Many will find these stories controversial, some even a bit threatening or disturbing. If we love and care for others (interdependence, morality, empathy, love), we will try to be sensitive in telling our stories, ideally mainly to those who seem ready to hear them.

But if we value truth, we’ll continue to tell the best stories we find, until they can be knocked down and replaced with better ones. As long as your stories seem to be weebles, as these have been for me personally for two decades, with thousands of clients, they deserve to be responsibly told, in a search for more truth. But we must remember that truth seeking, like every other goal, is always in balance with all the other goals. We often tell “white lies” in our own personal attempts to serve the other goals, like protection, love, freedom, or diversity of options for others.

In addition to balancing these Eight Goals over the life of any system, there are life cycle effects we need to be aware of as well. All evo devo systems move from the left to the right side of the eight goals as they age and get more developed. Eventually they become senescent, a kind of physical and mental overdetermination (loss of resiliency) and brittleness that leads to their eventual death and recycling, in order to reestablish their balance with the environment. For more on this perennial balance between senescence and renewal via replication in complex systems, see Stan Salthe’s Development and Evolution (1993). Salthe is a member of our Evo Devo Universe research community, and someone whose work has greatly inspired my own.

In other words, as they age, all evo devo systems eventually fall into an Overdevelopment Trap, shifting from the left to the right side of the house above. They reduce their dematerialization and prefer further densification, even when it reduces their intelligence. They reduce their information generation and diversity and favor interdependence and homogeneity too much for their own good. They move away from innovation and experimentation to overfavor immunity and protection. They move from indeterminacy to inertia over their life cycle.

There are two ways to avoid this trap. The first is to “square the curve” of normal aging, to slow down and push off overdevelopment and senescence until the latest possible time in the natural life cycle. In any ideal organism, organization, or society, senescence comes only at the very end of a long and healthy mature life. If we divide our lifespan into “healthspan” and “frailspan”, in the most healthy evolutionary development we want our frailspan to be compressed to the shortest possible time at the end of our life. All the wheels should come off the wagon in a very short time period, with everything working great beforehand.

The second way out of the trap, of course, is to rejuvenate, recycle, or replicate. If we can’t rejuvenate, we need to package our best learning into a new replicator (seed), which becomes a new organism with even more indeterminacy (freedom, bottom-upness) than we had at birth. When parents have children, and try to give them more freedom and options than they had, they are engaging in this perennial renewal strategy. In the future, we biological humans may not have to die entirely to rejuvenate ourselves, but with respect to our brains at least, which are postmitotic and grow more senescent every year after puberty, we will clearly have to renew and rejuvenate them so much that at least parts of them will have to periodically die and be entirely reorganized at the synaptic level. Think of a caterpillar that turns into a butterfly, a radical re-juvenilizing reorganization that biologists call metamorphosis. Maybe we could do it half a brain at a time, so that our continuity of consciousness is maintained in the process. Every complex system needs some form of renewal every so often, in order to get out of the trap of overdevelopment.

Fortunately, current research suggests most older people today don’t fall into this trap as much as we might think. Several studies argue that we get more open-minded in many ways with age, at least for a long while during our late adulthood. I don’t know if the growing open-mindedness of older adults is a culturally recent phenomenon, due to the rejuvenating effect of an ever more complex environment, as seen in the Flynn effect in Chapter 4. It may be a general feature of normal human development that people get more open-minded as they age, perhaps far into old age. In other words, mental senescence may be the exception, not the rule, in late adulthood. But regardless of the normal chain of events, we all know older people who have fallen into the trap of overdevelopment, becoming much too homogeneous, conservative, pessimistic, and rigid in their beliefs for their own good. Figuring out how to avoid this trap, and to rejuvenate, recycle, or replicate as we, our organizations, and our societies age, is a great area of future research.

None of this systems theory regarding the acceleration of adaptive intelligence and the Eight Goals is definitive or complete, by any stretch. We’ve just started down the road here. We offer it only because it is one application of evo devo thinking, and it covers a number of topics of great interest to managers of complex systems. Hopefully, it will make us more aware that there are useful and partially opposing processes, goals, and values that we can better define, see, measure, manage, balance, and improve in ourselves, our teams, and our societies, in our efforts to be more adaptive.

To recap, in the Eight Abilities and Goals of Adaptive Systems model, we are proposing that our universe is a complex system, self-organized via RISVC processes to use evolution and development to accelerate adaptive complexity, computation, and intelligence. Four evolutionary and four developmental goals of adaptive intelligence were singled out as particularly worthy of analysis. Here again is our cartoon of the model:

When we take the perspective of an evo-devo biologist, it is not hard to identify all eight of these variables in different processes of living systems, and to see how the goals in each pair compete with each other, along with other goal pairs we have left out for simplicity, to build adaptation and effect hierarchical long-range change in living organisms in their environment. We’ve given an outline of such hierarchical change in several sections of this Guide.

Now, as we have described, evolutionists (as opposed to evolutionary developmentalists), who are presently the vast majority of scientists and biologists, have a significantly harder time not only seeing examples of optimally adaptive hierarchy (developmental portals), but also imagining how developmental processes or goals might affect long-range change in living systems. That is why evo-devo biology is still a small minority view, mostly found among ecologists and theoretical biologists, and why we use the term “evolutionary biology” today, rather than evolutionary developmental biology, when we talk about long-range, “macrobiological” change.

When evolutionists think about the universe as a system, and are willing to speculate about the future of intelligence, they can naturally imagine the advancement of evolutionary goals like dematerialization (growth of minds, of all types), information generation (more difference and diversity creation), innovation (more kinds of useful novelty in the environment) and greater indeterminacy (freedom) in the more advanced complex systems.

I would say most scientific thinkers, including deductive (as opposed to inductive) rationalists and evolutionists, see the universal future from this frame. They imagine that future leading systems, whichever those may be, will be very intelligent, very diverse, highly innovative, and have great potential freedoms of thought and action. But they don’t think about what kind of developmental constraints will be necessary to get to that kind of evolutionary future. This “Green-sided view” of the future is so dominant in modern science and culture, in fact, that we won’t explore it much in this version of the Guide. We will leave detailed discussion of the far future of these four evolutionary variables for another time.

Folks like Stephen Jay Gould, Jared Diamond, Daniel Dennett, and Richard Dawkins take this general perspective. I greatly admire these authors and have learned much from them, but I also consider their worldview incomplete at a fundamental level, as it misses universal developmental processes, starting most obviously with inevitable accelerating change. Those rationalists and evolutionists who do see accelerating change, like Nick Bostrom and Robin Hanson, very often see terrible risks or social calamities ahead, again because they miss what I consider to be obvious universal developmental trends.

Our views of the future are fundamentally influenced by the assumptions in our models, and I think an evolution-centric view is only half-complete. Many of those who are commonly called neo-Darwinists see the long-range future through any or all of the above four evolutionary goals, when they are in their more optimistic frames of mind. When they are being more pessimistic, they also see either unstoppable or random risks or calamities ahead. From either lens, I think their models are incomplete.

If we live in an evo devo universe, the most intelligent systems must also become more densified, interdependent, immune, and inertial, among other important developmental trends. Let’s look at each of these four developmental trends and goals in turn now. If each of these is in fact a process that directs and constrains universal evolutionary development, we can use them to make a series of enlightening proposals on the nature and long-range future of intelligence.

What follows in the next four sections of this chapter will be significantly more speculative. It offers some of the most vivid images of the far future that have emerged for me personally while contemplating universal evo devo and thinking about the Eight Abilities and Goals in relation to the universe. Will these speculations hold up to scientific scrutiny in the decades ahead? We shall see.
