Incompleteness: The Limits of Science and Mind
One more fundamental quality of complex systems should be mentioned now, one we can call Incompleteness. Its full name might be computational or logical incompleteness. Incompleteness isn’t a goal, but rather a condition of all finite physical systems in the universe. Incompleteness doesn’t accelerate, the way we often see in the Ten Values. It is instead something always “left over,” a remainder. It is all the knowledge that remains unproven, all the futures that remain undetermined, and everything our most adaptive intelligences will never capture about reality.
As the logician Kurt Gödel discovered in the 1930s, there are always questions that can be asked within formal logical systems, such as mathematics, that cannot be proven or disproven from within the system. See Gödel’s incompleteness theorems for a deeper discussion of this result. Some call this theorem the most profound insight in all of science. Similarly, Rice’s Theorem tells us that there is no general, effective method for deciding any nontrivial property of an algorithm’s behavior. Every decision system we use is always deeply incomplete.
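Gödel’s and Rice’s limits echo Turing’s halting problem, and the diagonal construction behind all three can be sketched in a few lines of code. The sketch below is illustrative, not from the text: given any claimed halting decider, we can build a program that the decider must misjudge.

```python
def make_diagonal(halts):
    """Given any claimed halting decider `halts(program)`, construct a
    program that the decider is guaranteed to be wrong about."""
    def diagonal():
        if halts(diagonal):
            while True:   # the decider said we halt, so loop forever
                pass
        return            # the decider said we loop, so halt at once
    return diagonal

# Refuting one concrete (and naive) decider that claims everything halts:
claims_all_halt = lambda program: True
d = make_diagonal(claims_all_halt)
# claims_all_halt(d) returns True, yet calling d() would loop forever.
```

The same trick refutes any other candidate decider: a decider that claims nothing halts is wrong about its own diagonal program, which returns immediately.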
Even mathematics itself may not be the most privileged position or language from which to understand reality. Math is a particularly economical and formal language for symbolic manipulation, but it may not describe all relationships, only relationships that are isomorphic to number or set theory. Most mathematics yields theoretical constructs which have no known utility—it is too unconstrained in the set of possible transformations for math alone to help us discover truth. Only a small subset of math’s potential transformations, applied math, has been found to be useful in describing physical systems.
We use words all the time to describe things that we have no math for, and they are a much better descriptor, in many environments, than math. And we may never have a math that can recreate what words do, in human brains. Yet math and words are just two of the languages we use to describe relationships. Spatial perception, feelings, even our consciousness are also kinds of languages we use to navigate and model reality.
The mathematician Michael Atiyah asked the question: is math created or discovered? If we live in an evo devo universe, it is both. We discover a small subset of math that applies to the physical world. That math describes our universe’s developmental dynamics, and its developmentally created evolutionary mechanisms. The evolutionary activities of those mechanisms, in human and postbiological brains, create the large majority of math. Created or theoretical math has its own beauty but little utility, except for that small subset that is later discovered to conform to physical reality. Since the 95/5 rule tells us most math must end up staying theoretical, no matter how hard we try to make it applied, math can be thought of as a kind of logically self-consistent, numerical “poetry from which physics springs,” to quote my late advisor, James Grier Miller.
What’s more, our maths may never fully describe any complex system. The polymath John von Neumann famously said, “In mathematics, you don’t understand things. You just get used to them.” Math gives you a window into some of a system’s operations, but it never seems to describe the whole system. There are always processes and phenomena, like consciousness, which escape description. The main problem, incomplete representation, is captured by a joke common in graduate physics: the “spherical cow.” Physicists must use vastly simplifying assumptions, like assuming spherical cows in a model of milk production on a farm, to get the math to work in any system. They are always plagued with questions of where to “renormalize” their equations, adjusting them back toward some kind of equilibrium and away from the infinities and extrema that often arise in interacting sets of equations.
Computation and simulation, a functionally restricted subset of math, may help us explain considerably more of the nature of physical complexity. This includes computations that begin from a set of simple rules, as in agent-based models and cellular automata. Such simulations can on occasion deeply mimic physical form and function in the simulation space, as with Conway’s Game of Life, Wolfram’s A New Kind of Science (2002), and modern successors like the NetLogo agent-based programming environment.
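Conway’s Game of Life is small enough to sketch directly. This minimal implementation (a common set-based formulation, offered here as illustration) shows how a few local rules generate the automaton’s behavior:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation. `live_cells` is a set of (x, y) tuples."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has 3 neighbors, or 2 and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

# The "blinker": a vertical bar of three cells that oscillates with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Even this tiny rule set produces gliders, oscillators, and self-propagating structures, the kind of lifelike form and function the paragraph above describes.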
In the physical world, what causes this restriction of the utility of math? At one level, it may be the physicality of the universe itself. The fact that it’s made from something, rather than everything, constrains the kinds of rules that apply. The fundamental physical parameters may derive from our universe’s particular physicality, and they set the starting rules of the system. As the universe’s starting systems combine in ever more intricate forms, some of this is computable, or tractable to math, and some of it isn’t.
I call this idea the informational-physical universe hypothesis, the idea that there is not only some irreducible information (set of relationships) in our universe, but there may be some irreducible physicality to it, and both of these may be why we can never have a fully self-consistent or fully explanatory mathematics to describe it. Thus Max Tegmark’s mathematical universe hypothesis, the idea that math underlies physical reality, is very unlikely to be correct, in my view.
I believe that a restricted subset of computation that includes evo devo rules will turn out to be even more able to explain complexity emergence in the universe, and to most rapidly create useful intelligence in machines, than all the other forms of simulation we’ve tried to date. We shall see if these predictions are roughly correct.
But even simulation, while it can become a good approximation, never captures all the details of the system it is simulating. We’ll always be inventing new maths, theories, models, and simulations. As Massimo Pigliucci says in Nonsense on Stilts: How to Tell Science from Bunk (2010), “Every scientific theory proposed in the past has eventually been proven wrong [better: incorrect in parts] and has given way to new theories.” Such is the nature of science, computation, and intelligence growth.
Incompleteness tells us that no intelligence ever becomes “perfect” in any important way. Each intelligence is always a finite-state computing system, with things it can’t understand or mentally explore. This is a very valuable thing to realize when we think about the future. We aren’t “becoming Gods,” with all the hubris that idea implies. We’re just getting smarter, and will forever be faced with various risks and unknowns. There’s no heaven or utopia, only protopias ahead.
Because of incompleteness, “Superhuman” is a much better word to describe our future than “Gods” or even “Demigods,” with all the perfection that those latter words imply. Humans are beings that simply become smarter and better over time, changing themselves with our technologies, while furthering all of the Ten Values. Our future is just more of those values and goals.
But because of progress made on our adaptive values, the survival risk from the incompleteness of our intelligence within our universe must dramatically decline as civilization proceeds, particularly once we are postbiological. Our immunity (protection from harm) and the inertia (resistance to change in motion or direction, due to optimization) of our continued acceleration are already formidable.
Thus the magnitude of our various incompletenesses relative to our knowledge seems likely to go down as well. But at the same time, the number of known ways in which our simulations are provably incomplete also seems likely to keep rising. The adages “the more we know, the more we realize how little we know” and “every scientific fact raises many new questions” both help us see that incompleteness always grows the smarter we get. What is less often noted is that incompleteness becomes less threatening, the more our intelligence grows to encompass the universe that created us.
Incompleteness may be why the universe finds it most adaptive to use many parallel simulation systems (spatially separated civilizations), each finding its own unique path, so they can later compare and contrast their imperfect findings against each other in some evolutionary, selective manner. Incompleteness may also be why our own brains use many different parallel mindsets, each storing their own memories and points of view, arguing with each other, to think about the world. Each system will always be incomplete, so there’s power in numbers and cognitive diversity.
If we want to keep Adaptiveness, Incompleteness, and the Ten Values all in mind, we can make a useful mental model we might call the Twelve Traits of Complex Systems, as follows. As before, Incompleteness sits at the top of the “house”, always present. Adaptiveness grows via progress in the Ten Values. Sometimes it is more helpful for us to think of adaptiveness first, and ask how the right progress in certain values might improve it, and at other times, it can be more helpful to think of the values first, and how they might grow adaptiveness, and what they leave incomplete. Thus all twelve of these traits of complex systems are very helpful to consider. Out of many possible ways to think about “infodynamics” (how information shapes the physical universe as it accumulates), I find this model to be particularly helpful.
Again, incompleteness is not a goal, but a persistent limitation on adaptiveness. The reality of incompleteness limits the effectiveness of the Ten Values. Incompleteness may be why our brains have multiple competing mindsets, and it may have caused the universe to create a multitude of spatially-separated civilizations to try to get a better grasp on the kind of reality we inhabit, and where we can go next.
Beautiful and poetic books like A Science Odyssey: 100 Years of Discovery, by Charles Flowers (1998), and its lovely five-part PBS series (1998), show us how uniquely powerful the accumulation of scientific knowledge is for human flourishing. Most of these books offer us the comforting view that the pursuit of knowledge offers an “infinite” set of questions, and that science, in this universe, will be an “endless journey.”
Such a claim sounds humble and reasonable on its surface. But in an evo devo universe, science must be both a finite journey (evo) and a predictable series of arrivals (devo). Incompleteness tells us that we can conduct only incomplete searches for truth with the finite resources of our universe, and each civilization must conduct even more incomplete searches with the local resources available to it.
In theory, there may be an infinite set of scientific questions and unknowns (evo), but in practice, only a finite number of those can ever be asked by any single civilization, and by any single universe over its lifespan. Furthermore, in any universe that is seeking to improve its adaptiveness, only a very small number of those potential questions will be found worthy of actually asking.
Incompleteness, the finite computational power of every system, tells us why evo devo methods are the most adaptive computation, intelligence, and complexity construction techniques. Every fan-out of evolutionary exploration must be paired with a fan-in of developmental pruning, to prevent a combinatorial explosion of possibilities in the system’s incomplete capacity to search, and to maximize the chance that it can find something useful in that search.
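In computer science, this fan-out/fan-in pairing appears as beam search: expand many candidate states (exploration), then prune to the best few (selection), so the search never explodes combinatorially. A minimal sketch, with an illustrative toy problem of my own choosing:

```python
def beam_search(start, expand, score, beam_width, depth):
    """Fan each candidate out via expand(), fan in by keeping the top few."""
    beam = [start]
    for _ in range(depth):
        # Fan-out: every candidate spawns its variants.
        candidates = [child for state in beam for child in expand(state)]
        # Fan-in: pruning bounds the search frontier.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)

# Toy problem: grow a bit string one bit at a time, rewarding 1s.
expand = lambda s: [s + "0", s + "1"]
score = lambda s: s.count("1")
best = beam_search("", expand, score, beam_width=4, depth=8)
```

Without the pruning line, the candidate pool would double every step (2^depth states); with it, the cost stays linear in beam_width times depth, a concrete picture of bounded, incomplete search made useful.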
Complexity pioneer and biologist Stuart Kauffman, in his classic book on complexity and self-organization, At Home in the Universe (1996), explores NK networks, where N refers to the number of components in a complex system and K to the number of connections among those components. His research proposed that as K increases past a certain threshold, the fitness of a system, be it an ecosystem, a brain, a society, or an artificial neural network, decreases. Its behavior becomes overconstrained by conflicting interactions, and brittle. Likewise, without enough connections, the system’s evolutionary ability to “hill climb,” to experimentally search for and move toward higher peaks on the fitness landscape, is greatly curtailed. So N and K are always in a computational tradeoff, and there is a sweet spot, a place between too much chaos and too much order, or in our language, between too much evolution and too much development, where adaptiveness is maximized.
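Kauffman’s NK landscape is easy to sketch. The neighborhood choice below (each locus depends on itself and the next K loci, wrapping around) and the parameters are one common, illustrative construction, not Kauffman’s only formulation:

```python
import random
from itertools import product

def make_nk_landscape(n, k, seed=0):
    """One random contribution table per locus, indexed by the states of
    that locus and its K neighbors."""
    rng = random.Random(seed)
    return [{bits: rng.random() for bits in product((0, 1), repeat=k + 1)}
            for _ in range(n)]

def fitness(genome, k, tables):
    """Mean fitness contribution over all N loci."""
    n = len(genome)
    return sum(tables[i][tuple(genome[(i + j) % n] for j in range(k + 1))]
               for i in range(n)) / n

def hill_climb(n, k, tables, seed=0, steps=500):
    """One-mutant hill climbing: accept a random flip only if it improves."""
    rng = random.Random(seed)
    genome = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        trial = genome[:]
        trial[rng.randrange(n)] ^= 1      # flip one random locus
        if fitness(trial, k, tables) > fitness(genome, k, tables):
            genome = trial
    return genome, fitness(genome, k, tables)
```

Raising K makes each locus’s contribution depend on more neighbors, so a single flip perturbs more terms at once and the landscape grows more rugged; sweeping K while holding N fixed exhibits the order/chaos tradeoff described above.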
Evo devo systems are thus the way intelligent systems deal with their own computational incompleteness to maximize adaptiveness. The better we understand these systems, and the changing nature of their incompleteness, the better we can understand the universe we live in, and the future of mind.
Consider the philosopher Nick Bostrom’s famous Simulation Hypothesis. His argument starts like this: “A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true:
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”
Bostrom concludes that proposition three is true. We are almost certainly living in a simulation, in his view.
From an evo devo perspective, informational inertia and immunity argue that proposition one is almost certainly false. Incompleteness, and the use of evo devo methods to cope with it, tell us that proposition two is almost certainly true. Consider all the knowledge and memories that are buried in your unconscious brain that you will never, ever bring back to conscious attention. Over your lifetime you pruned your accessible memories, finding the optimal NK tradeoff, to maximize your adaptiveness. Because you are and always will be a finite entity, you will never revisit those memories, unless a series of very improbable events occurs, such as your finding a childhood photograph, or a neurosurgeon opening your brain and stimulating those neurons electrically, as in Wilder Penfield’s famous brain stimulation experiments in the 1950s, in which patients recalled youthful memories they had long forgotten.
The vast majority of this unconscious information is essentially “dead” to each of us after it is formed, carried in our heads but never or very rarely accessed. The few people who have eidetic memories, with very high K connectivity between their memory-encoding neurons, are not adaptive. They’re continually tormented by past memories, forced to rehearse them, and their ability to pay attention to the present, and respond to their surroundings, becomes increasingly degraded with time and experience.
If we live in an incomplete and evo devo universe, all future intelligences will be faced with these very same kinds of NK, evo devo tradeoffs. They may wish that they had unlimited computational powers, but wishing does not make it so. Their practical interest in calling up old memories, or “running ancestor simulations” in Bostrom’s language, will be vastly constrained by the developmental pruning they will have to constantly do to stay adaptive.
Interest in redundant knowledge declines, boredom with well understood situations is inevitable, and housecleaning must constantly be done. As evo devo intelligences grow, they never gain the ability to recall all their past “ancestor” computations, because they remain perennially computationally incomplete, and there is always great power in deciding which of those simulations are the most “interesting”.
These insights about the limits and evo devo nature of simulation in natural biological systems tell us why the simulation argument is not plausible, and why similarly nonbiological models, such as physicist Frank Tipler’s end-of-universe recreation, or “informational immortality,” of all the minds that once existed within the universe, discussed in his ambitious work The Physics of Immortality (1997), are so unlikely. This Tiplerian information theory must be wrong, in my view, just as Teilhard de Chardin’s assumption of a “Godlike” intelligence at the end of this universe, his Omega Point hypothesis, must also be wrong. We will reach a kind of Omega in this universe, but it won’t be Godlike. Such intelligences will have vast superpowers compared to us, but they’ll always be computationally incomplete, with their own questions they can’t answer, and challenges they can’t overcome. The best they’ll be able to do is to try to help continue the evo devo process that created them, in their own highly incomplete and limited ways, by nonrandomly influencing the next generation.
So science in our kind of universe, and thus mind itself, is not actually unlimited, but is a future-limited body of knowledge, in this universe at least. There is a predictable set of destinies toward which our universe is heading, waiting to be uncovered (devo). The sooner we understand those, the sooner we can focus our energies on those things the universe is guiding us to do, and stop wasting our energy on less important futures, and on fighting against developments we can’t ultimately stop, but can only delay or accelerate, as our morality guides us.
As Stan Salthe reminds us, if our universe is developmental, its journey is a life cycle, and it must invariably senesce (age and fall apart) over time. The older it gets, the more constrained our universal environment becomes, and it must eventually die and renew itself. We have only a finite amount of energy and time allotted to us, and there are only so many destinations we can visit in our own personal journey. So we would do well to choose wisely. So too with the universe, and the finite and incomplete science that our civilization builds within it.
Again, the evo devo model proposes that the intelligence in our universe is chained to a life cycle. It replicates itself on a regular basis, like all living systems. This implies that the older and more senescent universal intelligence gets, the more it also becomes like a seed, packaging itself in a manner that will protect what it has learned, and seeking out conditions that will allow it to renew and flower again.
We intelligences have a finite lifetime in this universe, and every choice we make takes us further out on our evolutionary developmental life cycle. We can use a growing understanding of that life cycle to study and build things that matter most. Let’s see both the inevitable destinations ahead, and the evitable paths we take toward them, every day. We can see and choose better, every day.
If we ignore our finite time in this universe, and our increasing limits, constraints, and developmental destinies the more complex and old our civilization becomes, if we see only half of the future ahead of us, we will inevitably make poorer, less responsible choices in our journey. The better we see how the accumulation of knowledge (information) itself has its own inertia, and limits our future course at the same time that it empowers us, the better we get at choosing which knowledge to accumulate, and which problems to solve.
As any historian of science knows, science itself is neither good nor bad. It is the nature of the scientific questions that sentient minds choose to ask, and the applications of science to technology, that are either good or bad. Science must get much better at understanding evolution and development in this universe, and what these fundamental processes tell us about the nature of our future. Much more courageous scholarship must be undertaken. The going will sometimes be difficult, as these concepts will be resisted and criticized by many in the existing scientific community. But if, like all living systems, our universe does replicate itself, and uses evolutionary and developmental processes to do so, these ideas will ultimately triumph.