**Incompleteness: The Limits of Science and Mind**

One more fundamental variable of **CCI systems** should be mentioned now, a variable we can call **Incompleteness**. It too begins with an I, though its full name is computational or logical incompleteness.

**Incompleteness** isn’t a goal, however, but rather a condition of all computational systems in the universe, a state that always remains in spite of the **Eight Goals**. **Incompleteness** doesn’t appear to accelerate like the Two D’s and the other Six I’s. It is instead a remainder quality, something “left over.” It is all the knowledge that remains unproven, all the futures that remain undetermined, and everything our most adaptive intelligences never capture about reality, so it’s best thought of as a variable in its own class.

As the logician **Kurt Gödel** discovered in the 1930s, there are always questions that can be asked within formal logical systems, like mathematics, which cannot be proven or disproven from within the system. See **Gödel’s incompleteness theorems** for a deeper discussion of this result. Some call this theorem the most profound insight in all of science. Similarly, **Rice’s Theorem** tells us that there is no general, effective method for deciding any nontrivial property of an algorithm’s behavior. Every decision system we use is always deeply incomplete.
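This limit has a practical face any programmer can sketch. Since no algorithm can decide, in general, whether another program even halts, a finite observer can only simulate for a bounded number of steps and sometimes answer “unknown.” A minimal illustration, with programs modeled as state-transition functions (all names here are my own, not from any standard library):

```python
# Rice's theorem in miniature: no general procedure can decide a nontrivial
# property of a program's behavior, so a finite observer can only simulate
# for a bounded number of steps and sometimes answer "unknown".
HALT = object()  # sentinel meaning "the program has halted"

def bounded_halts(step, state, max_steps=1000):
    """Run a program (modeled as a state-transition function) for up to
    max_steps; return True if it halts in time, None if we cannot tell."""
    for _ in range(max_steps):
        state = step(state)
        if state is HALT:
            return True
    return None  # it may halt later, or never: undecidable in general

countdown = lambda n: HALT if n == 0 else n - 1  # halts after n steps
loop = lambda n: n + 1                           # never halts

assert bounded_halts(countdown, 5) is True
assert bounded_halts(loop, 0) is None
```

The `None` branch is the incompleteness: for a nonhalting program, no finite budget ever lets the observer distinguish “slow” from “never.”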

Even mathematics itself may not be the most privileged position or language from which to understand reality. Math is a particularly economical and formal language for symbolic manipulation, but it may not describe all relationships, only relationships that spring from things like number or set theory. We use words all the time to describe things that we have no math for, and words are often a much better descriptor, in many environments, than math. We may never have a math that can recreate what words do in human brains. Yet math and words are just two of the languages we use to describe relationships. We use spatial languages, feelings, even our consciousness.

What’s more, most mathematics yields theoretical constructs which have no known utility—it is too unconstrained in the set of possible transformations for math alone to help us discover truth. Only a small subset of math’s potential transformations, applied math, has been empirically found to be useful in describing physical systems.

The mathematician and philosopher **Michael Atiyah** has asked the question: is math created or discovered? If we live in an evo devo universe, it is both. We **discover** a small subset of math that applies to the physical world. That math describes our universe’s **developmental** dynamics, and its developmentally created evolutionary mechanisms. The **evolutionary activities** of those mechanisms, in human and postbiological brains, **create** the large majority of math. Created or theoretical math has its own beauty but little utility, except for that small subset that is later discovered to conform to physical reality. Since the **95**/**5** rule tells us most math must end up staying theoretical, no matter how hard we try to make it applied, math can be thought of as a kind of logically self-consistent, numerical “poetry from which physics springs,” to quote my late advisor, James Grier Miller.

What’s more, our maths may never fully describe any complex system. The polymath **John von Neumann** once famously said, “In mathematics, you don’t understand things. You just get used to them.” Math gives you a window into some of a system’s operations, but it never seems to describe the whole system. There are always processes and phenomena, like consciousness, which escape description. The main problem, the problem of incomplete representation, is captured in a joke common in graduate physics, the “**spherical cow**.” Physicists have to use vastly simplifying assumptions, like assuming spherical cows in a model of milk production on a farm, to get the math to work in any system. They are always plagued with questions of where to “**renormalize**” their equations, to adjust them back to some kind of equilibrium, and away from the infinities and extrema that often arise in interacting sets of equations.

Computation and simulation, a functionally restricted subset of math, may help us explain considerably more of the nature of physical complexity. This includes computations that begin from a set of simple rules, as in agent-based models and cellular automata. Such simulations can on occasion deeply mimic physical form and function in the simulation space, as with Conway’s Game of Life, Wolfram’s *A New Kind of Science* (2002), and modern successors like the NetLogo agent-based programming environment.
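As a concrete example of simple rules generating lifelike dynamics, Conway’s Game of Life fits in a dozen lines. This is a minimal sketch, not a reference implementation; the function and variable names are illustrative:

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # count the live neighbors of every cell adjacent to a live cell
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 live neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}          # a period-2 oscillator
assert life_step(life_step(blinker)) == blinker
```

Two rules, birth and survival, are enough to produce oscillators, gliders, and even universal computation in the simulation space.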

In the physical world, what causes this restriction of the utility of math? At one level, it may be the **physicality** of the universe itself. The fact that it’s made from something, rather than everything, constrains the kinds of rules that apply. The fundamental physical parameters may derive from our universe’s particular physicality, and they set the starting rules of the system. As the universe’s starting systems combine in ever more intricate forms, some of this is computable, or tractable to math, and some of it isn’t.

I call this idea the **informational-physical universe hypothesis**, the idea that there is not only some irreducible information (set of relationships) in our universe, but there may be some irreducible physicality to it, and both of these may be why we can never have a fully self-consistent or fully explanatory mathematics to describe it. Thus **Max Tegmark’s** mathematical universe hypothesis, the idea that math underlies physical reality, is very unlikely to be correct, in my view.

I believe that a restricted subset of computation that includes evo devo rules will turn out to be even more able to explain complexity emergence in the universe, and to most rapidly create useful intelligence in machines, than all the other forms of simulation we’ve tried to date. We shall see if these predictions are roughly correct.

But even simulation, while it can become a good approximation, never captures all the details of the system it is simulating. We’ll always be inventing new maths, theories, models, and simulations. As Massimo Pigliucci says in *Nonsense on Stilts: How to Tell Science from Bunk* (2010), “Every scientific theory proposed in the past has eventually been proven wrong [better: incorrect in parts] and has given way to new theories.” Such is the nature of science, computation, and intelligence growth.

**Incompleteness** tells us that no intelligence ever becomes “perfect” in any important way. Each intelligence is always a finite state computing system, with things it can’t understand, or mentally explore. This is a very valuable thing to realize when we think about the future. We aren’t “becoming Gods”, with all the hubris that idea implies. We’re just getting smarter, and will forever be faced with various risks and unknowns. There’s no heaven or utopia, only protopias ahead.

Because of incompleteness, “Superhuman” is a much better word to describe our future than “Gods” or even “Demigods,” with all the perfection that those latter words imply. Humans are beings that simply become smarter and better over time, changing themselves with our technologies, while furthering all of the **Eight Goals**. Our future is just more of those goals, on steroids perhaps. Steroids that don’t have so many side effects, of course.

Because of progress made on the **Eight Goals**, the survival risk from incompleteness to our local intelligence must dramatically decline as civilization proceeds, particularly once we are postbiological. Our **immunity** (protection from harm) and the **inertia** (resistance to change in motion or direction, due to **optimization**) of our continued acceleration are already formidable.

Thus the *magnitude* of our various incompletenesses relative to our knowledge also seems likely to go down. But at the same time, the number of known ways in which our simulations are provably incomplete seems likely to keep inevitably rising. The adages “the more we know, the more we realize how little we know” and “every scientific fact raises many new questions” both help us see that incompleteness always grows the smarter we get, but it also becomes less threatening, and may grow at a declining rate, the more developed any system becomes.

**Incompleteness** may be why the universe finds it most adaptive to use many parallel simulation systems (spatially separated civilizations), finding their own unique paths, so they can compare and contrast their imperfect findings against each other later, in some evolutionary, selective manner. Incompleteness may also be why our own brains use many different parallel mindsets, each storing their own memories and points of view, arguing with each other, to think about the world. Each system will always be incomplete, so there’s power in numbers and cognitive diversity.

When we keep both **Adaptiveness** and **Incompleteness** in mind, we can talk about the **Ten Traits** and **Nine I’s** of complex systems, a mnemonic where the **Eight Goals** are all expressed in “I” terms, below the metagoal of **Adaptiveness**, and the persistent metacondition of **Incompleteness** (picture below). The Nine I’s are just a nerdier way to remember the Eight Goals and Incompleteness. Feel free to ignore them if they are not helpful. They are actually the way that I developed this model over the last decade, my attempt at a rough **information theory** (yet another “I”) of complex systems. Out of many models for “**infodynamics**” (how information shapes portions of the universe as it accumulates), I find this one particularly helpful.

Again, incompleteness is not a goal, but a persistent limitation on adaptiveness. Along with adaptiveness, it is the “one I” that rules them all, and in the universe binds them. The world’s intelligence communities have the “Five Eyes” countries, a set of Western nations (Australia, Canada, New Zealand, the UK, and the US) committed to deeper sharing of intelligence data. We evo devo scholars have the “**Nine I’s**,” or alternatively, **Incompleteness, D&D, and the Six I’s**. Perhaps it can become as well-known among scholars of information and complexity?

To summarize, the reality of incompleteness limits the effectiveness of the **Eight Goals**, and forces the universe to create a multitude of spatially-separated civilizations to try to get a better grasp on the kind of reality we inhabit, and where we can go next. In an evo devo frame, **incompleteness** tells us something powerful about the long-term future of science and mind.

Beautiful and poetic books like *A Science Odyssey: 100 Years of Discovery* by Charles Flowers (1998), and its lovely five-part PBS series (1998), show us how uniquely powerful the accumulation of scientific knowledge is to human flourishing. Many of these books, *A Science Odyssey* included, also offer us the comforting view that the pursuit of knowledge offers an “infinite” set of questions, and that science, in this universe, will be an “endless journey.”

Such a claim sounds humble and reasonable on its surface. But in an evo devo universe, science must be both a finite journey (evo) and a predictable series of arrivals (devo). Incompleteness tells us that we can conduct only incomplete searches for truth with the finite STEM resources of our universe, and each civilization must conduct even more incomplete searches with the local resources available to it.

In theory, there may be an infinite set of scientific questions and unknowns (evo), but in practice, only a finite number of those can ever be asked by any single civilization, and by any single universe over its lifespan. Furthermore, in any universe that is seeking to improve its adaptiveness, only a very small number of those potential questions will be found worthy of actually asking.

**Incompleteness**, the finite computational power of every system, tells us why **evo devo** methods are the most adaptive computation, intelligence, and complexity construction techniques. Every **fan-out** of **evolutionary exploration** must be paired with a **fan-in** of **developmental pruning**, to prevent a combinatorial explosion of possibilities in the system’s incomplete capacity to search, and to maximize the chance that it can find something useful in that search.
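This fan-out/fan-in pairing is essentially what computer scientists call beam search: generate variants, then prune the frontier back to a small fixed width before the next round. A toy sketch under that reading, with all names hypothetical:

```python
def evo_devo_search(seed, expand, score, beam_width=3, depth=4):
    """Beam search as a fan-out/fan-in loop: evolutionary exploration
    (expand) paired with developmental pruning (keep only the best few),
    so the search never explodes combinatorially."""
    frontier = [seed]
    for _ in range(depth):
        # fan-out: every candidate generates its variants
        fanned_out = [v for s in frontier for v in expand(s)]
        # fan-in: prune back down to a fixed-width frontier
        fanned_out.sort(key=score, reverse=True)
        frontier = fanned_out[:beam_width]
    return max(frontier, key=score)

# toy problem: grow a bit-string one bit at a time, scoring by number of 1s
best = evo_devo_search(
    seed=(),
    expand=lambda s: [s + (0,), s + (1,)],
    score=sum,
    beam_width=2,
    depth=5,
)
assert best == (1, 1, 1, 1, 1)
```

Without the pruning step the frontier doubles every round; with it, the cost stays linear in depth, at the price of an incomplete search that may miss some peaks.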

Complexity pioneer and biologist Stuart Kauffman, in his classic book on complexity and self-organization, *At Home in the Universe* (1996), explores NK networks, where N refers to the number of components in a complex system, and K to the number of connections among those components. His research proposed that as K increases past a certain threshold, the fitness of a system, be it an ecosystem, a brain, a society, or an artificial neural network, always decreases. Its behavior becomes **overconstrained**, **brittle**, and too **predictable**. Likewise, without enough connections, the system’s **evolutionary** ability to “hill climb,” to **experimentally** search for and move toward higher peaks on the fitness landscape, is greatly curtailed. So N and K are always in a computational tradeoff, and there is a sweet spot, a place between too much **chaos** and too much **order**, or in our language, between too much **evolution** and too much **development**, where **adaptiveness** is maximized.
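Kauffman’s NK model is simple enough to sketch: each of N sites contributes a random fitness that depends on its own state plus the states of K neighbors, and total fitness is the average contribution. Counting local optima by exhaustive search gives one rough view of how coupling changes the landscape a hill-climber faces. This is an illustrative sketch, not Kauffman’s own code; all names are my own:

```python
import random

def make_nk_fitness(n, k, seed=0):
    """Kauffman-style NK landscape: site i's contribution depends on itself
    plus its K circular neighbors; total fitness is the mean contribution."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]        # lazily filled random lookup tables
    def fitness(genome):                   # genome: tuple of 0/1, length n
        total = 0.0
        for i in range(n):
            key = tuple(genome[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n
    return fitness

def count_local_optima(n, k, seed=0):
    """Exhaustively count genomes no single-bit flip can improve."""
    f = make_nk_fitness(n, k, seed)
    genomes = [tuple((g >> b) & 1 for b in range(n)) for g in range(2 ** n)]
    def neighbors(g):
        return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(n)]
    return sum(1 for g in genomes
               if all(f(g) >= f(h) for h in neighbors(g)))

# With K = 0 the sites are independent, so there is a single peak any
# hill-climber can reach; raising K typically multiplies the local optima.
smooth = count_local_optima(n=8, k=0)
rugged = count_local_optima(n=8, k=4)
assert smooth == 1 and rugged >= 1
```

At K = 0 the landscape is a single smooth hill; as K grows it becomes rugged, with many local peaks that trap simple search, which is one way to read Kauffman’s tradeoff between order and chaos.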

Evo devo systems are thus the way intelligent systems deal with their own computational incompleteness to maximize adaptiveness. The better we understand these systems, the better we understand the universe we live in, and the future of mind.

Consider for example the philosopher Nick Bostrom’s famous Simulation Hypothesis. His argument starts like this: “A technologically mature ‘posthuman’ civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true:

- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

Bostrom concludes that proposition three is true. We are almost certainly living in a simulation, in his view.

From an evo devo perspective, informational inertia and immunity argue that proposition one is almost certainly false. Incompleteness, and the use of evo devo methods to cope with it, tell us that proposition two is almost certainly true. Consider all the knowledge and memories that are buried in your unconscious brain that you will never, ever bring back to conscious attention. Over your lifetime you pruned your accessible memories, finding the optimal NK tradeoff, to maximize your adaptiveness. Because you are and always will be a finite entity, you will never revisit those memories, unless a series of very improbable events occur, such as your finding a childhood photograph, or a neurosurgeon opening your brain and stimulating those neurons electrically, as in Wilder Penfield’s famous brain stimulation experiments in the 1950s, in which patients recalled youthful memories they had long forgotten.

The vast majority of this unconscious information is essentially “dead” to each of us after it was formed, carried in our heads but never or very rarely accessed. In the few cases of people who have eidetic memories, with K connectivities between their memory-encoding neurons that are very high, such individuals are not adaptive. They’re continually tormented by past memories, forced to rehearse them, and their ability to pay attention to the present, and respond to their surroundings, becomes increasingly degraded with time and experience.

If we live in an incomplete and evo devo universe, all future intelligences will be faced with these very same kinds of NK, evo devo tradeoffs. They may wish that they had unlimited computational powers, but wishing does not make it so. Their practical interest in calling up old memories, or “running ancestor simulations” in Bostrom’s language, will be vastly constrained by the developmental pruning they will have to constantly do to stay adaptive. Interest in old knowledge declines, boredom with well understood situations is inevitable, and housecleaning must constantly be done. As evo devo intelligences grow, they never gain the ability to recall all their past “ancestor” computations, because they remain perennially computationally incomplete, and there is always great power in deciding which of those simulations are the most “interesting”.

These insights about the limits and evo devo nature of simulation in natural biological systems tell us why the simulation argument is not plausible, and why similarly nonbiological models, such as physicist Frank Tipler’s end-of-universe recreation, or “informational immortality,” of all the minds that once existed within it, as discussed in his interesting book *The Physics of Immortality* (1997), are so unlikely. This Tiplerian information theory must be wrong, in my view, just as Teilhard de Chardin’s assumption of a “Godlike” intelligence at the end of this universe, his Omega Point hypothesis, must also be wrong. We will reach an Omega in this universe, but it won’t be Godlike. Such intelligences will have vast superpowers compared to us, but they’ll always be computationally incomplete, with their own questions they can’t answer, and challenges they can’t overcome.

Likewise, vastly unconstrained and materially expensive physical models of universal dynamics, like the physicist Hugh Everett’s Many Worlds interpretation of quantum mechanics, which argues that new universes are created at every branch in quantum interactions, also seem extremely unlikely to be correct. They are simply not evo devo enough, assuming that universes have been self-organized to maximize the adaptiveness of the complex systems that arise based on their physics.

As soon as intelligence or adaptiveness is a feature of computation, quantum or otherwise, evolutionary exploration must collapse to developmental convergence after a very finite amount of time. This may be why quantum physics “collapses” to classical physics at larger scales of space and time. Human beings can today create 1,000-qubit quantum computers, with entangled qubit states. That rapid new computation capacity might allow us to do vastly better molecular simulations, and evolutionary searches, as we’ve discussed.

But even with their power, such systems will always be perennially incomplete, finite in size, with finite ability to compute, and so will be reliant on evo devo methods to optimize their performance. To make sense in our macroscopic world, quantum computations may also always have to “collapse” back to classical computing at various scales of computational complexity.

So science in our kind of universe, and thus mind itself, is not actually unlimited, but is a future-limited body of knowledge, in this universe at least. There are a predictable set of destinies which our universe is heading toward, waiting to be uncovered (devo). The sooner we understand those, the sooner we can focus our energies on those things the universe is guiding us to do, and stop wasting our energy on less important futures, and fighting against developments we can’t ultimately stop, but can only delay or accelerate, as our morality guides us.

As Stan Salthe reminds us, if our universe is developmental, its journey is a life cycle, and it must invariably **senesce** (age and fall apart) over time. The older it gets, the more constrained our universal environment becomes, and it must eventually die and renew itself. We have only a finite amount of energy and time allotted to us, and there are only so many destinations we can visit in our own personal journey. So we would do well to choose wisely. So too with the universe, and the finite and incomplete science that our civilization builds within it.

Again, the evo devo model proposes that the intelligence in our universe is chained to a life cycle. It replicates itself on a regular basis, like all living systems. This implies that the older and more senescent universal intelligence gets, the more it also becomes like a seed, packaging itself in a manner that will protect what it has learned, and seeking out precisely those conditions that will allow it to renew and flower again.

If we ignore our finite time in this universe, and our increasing limits, constraints, and developmental destinies the more complex and old our civilization becomes, if we see only half of the future ahead of us, we will inevitably make poorer, less responsible choices in our journey. The better we see how the accumulation of knowledge (information) itself has its own **inertia**, and limits our future course at the same time that it empowers us, the better we get at choosing which knowledge to accumulate, and which problems to solve.

We intelligences have a finite lifetime in this universe, and every choice we make takes us further out on our evolutionary developmental life cycle. We can use a growing understanding of that life cycle to study and build things that matter most. Let’s see both the inevitable destinations ahead, and the evitable paths we take toward them, every day. We can see and choose better, every day.

As any historian of science knows, science itself is neither good nor bad. It is the nature of the scientific questions that sentient minds choose to ask, and the applications of science to technology, that are either good or bad. Science must get much better at understanding evolution and development in this universe, and what these fundamental processes tell us about the nature of our future. Much more courageous scholarship must be undertaken. The going will sometimes be difficult, as these concepts will be resisted and criticized by many in the existing scientific community. But if, like all living systems, our universe does replicate itself, and uses evolutionary and developmental processes to do so, these ideas will ultimately triumph.