Developmental Interdependence, Morality, and Global Superorganisms
The second universal developmental trend worth carefully understanding is the trend of increasing interdependence, integration, morality, love, caring, or in plainspeak, “heart.” Many systems scholars, from Teilhard de Chardin to Ken Wilber, have made the claim that, ultimately, “love makes the universe go round.” I’m not fully in agreement with that sentiment, which oversimplifies their excellent work, but I do think love is one of the great developing and constraining forces in the universe. It is a key way that developmental systems manage evolutionary complexity as it accelerates. It’s also a great source of joy and fun.
Recall our discussion of evil under the Eight Goals, as a kind of antiprogress. When we speak informally about evil in society, it is often defined even more narrowly, as the breaking of moral codes. At root, we can argue that immoral behavior reduces our interdependence, as morality is a core kind of social interdependence.
Consider how mind and heart, cognition and emotion, System 2 and System 1 in neuroscience, individuation and interdependence in “I” speak, are two fundamental and sometimes opposing ways of making decisions. We can propose that love-caring, the lovingkindness championed in Buddhism and other religions, what the Jesuit Pierre Teilhard de Chardin called cosmic love in The Phenomenon of Man (1955), and what Charles Darwin called moral sentiments, is really just as fundamental a force of nature as intelligence. In the same way, Freedom and Security are fundamental and opposing drivers in evo devo systems. The universe needs all of these opposing processes to manage the acceleration of adaptive complexity. Here’s a cartoon of the idea, with a reduced set of the Eight Goals (picture right).
Interdependence, largely mediated by love-caring and the morality that emerges from it, seems a great label for the informational-physical process that holds complex systems together and keeps them secure as their complexity scales. Disagree? What other candidates for security would you propose? It sure as heck isn’t freedom, or rationality. Both of those simply increase our evolutionary options, offering us many dangerous new tools and strategies for destruction. No, it’s love, friends. Love and intelligence together make the world go round. But love and caring, significantly more than intelligence, are developmental. They grow as complexity scales, just as Teilhard suspected. I suppose all that remains is to work out the physics. Any takers? :)
What evidence do we have of the developmental nature of love-caring? How do we know humans are growing a larger heart? Certainly the media won’t report it much, because, by and large, this is a story people are evolutionarily biased against hearing, as we discussed in Chapter 1. Our mild evolutionary pessimism bias, our aging pessimism bias, media bias, and other factors make us tune these stories out. They don’t have enough drama in them, enough risk of destruction.
Nevertheless, as a species, we’ve become vastly more interdependent, integrated, socially cohesive, and morally homogeneous with time. Growing up, the best book I knew on this topic was Norbert Elias’s The Civilizing Process (1939/1969). Covering European history from 800 to 1900 CE, Elias showed how vastly less violent and more socially interdependent human beings had become. On some measures, Medieval Europe was roughly 50X more violent than Modern Europe. Yet Elias’s message, until recently, remained almost entirely unknown to the general public.
Complexity scientist Herbert Gintis is one of several scholars trying to work out the evolutionary mechanisms of social friendliness and morality. Here is Gintis on these topics:
“Because culture is influenced by human genetic propensities, it follows that human cognitive, affective, and moral capacities are the products of a unique dynamic known as gene-culture coevolution, in which genes adapt to a fitness landscape of which cultural forms are a critical element, and the resulting genetic changes lay the basis for further cultural evolution. This coevolutionary process has endowed us with preferences that go beyond the self-regarding concerns emphasized in traditional economic and biological theories, and embrace such other-regarding values as a taste for cooperation, fairness, and retribution; the capacity to empathize; and the ability to value such constitutive behaviors as honesty, hard work, toleration of diversity, and loyalty to one’s reference group.”
For more, see his edited volume on the economics of morality, Moral Sentiments and Material Interests (2006), where he co-wrote three papers, and his excellent book with Sam Bowles, A Cooperative Species (2011). Books like this attempt to explain, from a game theory and intelligence perspective, why the vast majority of us behave so surprisingly nicely, on average, given our increasingly advanced capacities to do the opposite, any time we choose. See also David Sloan Wilson’s Darwin’s Cathedral (2002) and Does Altruism Exist? (2015).
Science is beginning to provide us with a moral code, but it is a very weak one at present. Carl Sagan’s lovely The Demon-Haunted World (1995) makes this case well. But to it we must add some type of universal ethics, something science isn’t yet smart enough to discover. I believe we will need something like evo devo thinking to get us there, to turn science from a descriptive into a prescriptive enterprise of the human mind and heart.
Religion, our first philosophy of universal progress, brought us many extremisms, imperfect ideas, and evils in its history, but the leading religions have all periodically reformed their beliefs as science has advanced, and religious believers continue to do great good in the world, in their works, faith, hope, and moral education.
Secular rationalism has taken us further, but it has also brought us many extremisms, which is something the freethinkers need to remember. When they drift into utopian ideology (too much evidence-poor reasoning and belief), radical rationalists have been as damaging as radical religion in history. Independent scholar Philip Benjamin makes the point that 120M deaths can be attributed to utopian secularists (Hitler, Lenin, Stalin, Mao, Pol Pot, etc.) in the 20th century. Utopia (2003) is a hard-to-find documentary that makes this point very well, emphasizing the obsessive desire of these secular utopianists to cleanse their societies of competing beliefs, including religion, and the extreme violence they engaged in once they had shed their traditional moral compass.
Our moral codes, such as they are, are always incomplete. No mental code, no combination of belief or rationality, can save us from all of our own extremism. But the minimization of evidence-free belief is a major move in the right direction. So also is the growth of adaptive intelligence, whether human or machine-based.
Mark Waser of the Digital Wisdom Institute speculates that social values like cooperation (interdependence) and diversity (information-growth, individuation) have so much instrumental value in communities that we can expect that any sufficiently intelligent machine, when it wakes up, will recognize their (developmental) optimality. Such machines would strive to treat humans in the manner they would like to be treated themselves, a kind of Golden Rule. There are also many Meta-Golden Rules, or moral algorithms, that all adaptive cultures follow: ways to incentivize positive-sum social interactions, and adaptive ways to deal with free riders and negative-sum actions and actors.
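One classic way game theorists formalize these “moral algorithms” is the iterated prisoner’s dilemma, in which simple reciprocal strategies reward positive-sum partners and withdraw cooperation from free riders. Here is a minimal sketch of that idea; the payoff values and strategy names are standard textbook illustrations, not anything from Waser’s or Gintis’s work:

```python
# Minimal iterated prisoner's dilemma: reciprocity vs. free riding.
# Payoffs use the standard illustrative ordering (temptation > reward > punishment > sucker).
PAYOFF = {("C", "C"): (3, 3),   # mutual cooperation (positive sum)
          ("C", "D"): (0, 5),   # sucker's payoff vs. free rider
          ("D", "C"): (5, 0),
          ("D", "D"): (1, 1)}   # mutual defection (negative sum)

def tit_for_tat(history):
    """Golden-Rule-like reciprocity: cooperate first, then mirror the partner's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """A pure free rider."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []          # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Reciprocators prosper together; a free rider gains once, then both stagnate.
print(play(tit_for_tat, tit_for_tat))      # (30, 30)
print(play(tit_for_tat, always_defect))    # (9, 14)
```

The point of the toy model is Waser’s: in repeated interaction, cooperation-with-enforcement is instrumentally optimal, which is why we might expect sufficiently intelligent agents to rediscover it.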
Recently, Steven Pinker’s epic, 800-page work, The Better Angels of Our Nature: Why Violence Has Declined (2011), has become the vanguard work for understanding the growth of social interdependence. Pinker is a worthy successor to Elias. Better Angels is one of the bravest and most important books of the last decade, as it definitively shows that our interdependence has vastly grown, and our willingness to commit atrocities continues to shrink. Please read or listen to it, if you get a chance. It will make you proud to be a human, and very glad to live in the present versus the past.
Various folks, mostly from the evolutionist camp, piled on to critique Pinker’s thesis, but none have altered it significantly, in my reading of their responses. Here’s a summary of the criticism on Wikipedia. Better Angels is a great place to get a long range perspective on a variety of integrating trends in human social collectives, and what they may mean for the future if they continue, as we argue they surely must.
The rapidly declining trends in violence as a percentage of human behavior that we’ve seen over both the last millennium and the last century are thus good initial evidence for the idea that our social collectives become more integrated and self-policing and moral, via developmental interdependence, as a function of their complexity. Of course, others draw entirely different conclusions from the same history. It may be too early to prove this claim, but it is not too early to make hypotheses, and look for this developmental trend.
As interdependence grows, the level of integration, the level of analysis for adaptiveness, also changes. Consider how the US and the USSR first produced nuclear bombs in large numbers, in a climate of fear and poor understanding of the “other”, seeking to maximize each society’s perceived fitness. We were more on the individuation side of the evo devo pair. Then as the world became increasingly more interconnected, we raised our interdependence, increasingly realizing, after the Cuban Missile Crisis, that such weapons were no longer adaptive. We then changed our level of analytical integration, analyzing our future more as one species, as a global collective, than as individual nations. Nuclear bans, disarmament, and nonproliferation policies then increasingly became the new, more adaptive reality on Earth.
So the dominant trend for our planet’s next phase of development seems likely to be a massive new integration (a devo over evo phase), the emergence of a “global superorganism” or “collective mind” if you will, with all the new diversity coming within the sub-minds, the way we each have mindsets that argue inside our own head but which remain part of one highly integrated organism.
These future minds may have massive new freedoms in what they can construct and experiment with in inner space (very small physical scales, and virtual space), but the price of that new freedom in inner space seems very likely to be the loss of many freedoms in outer and physical space, at the scale of the collective and beyond. Once a set of moral codes emerges, a whole bunch of freedoms of thought and action go away. We see them as less adaptive and try to minimize or eliminate them.
In a highly developmentally interdependent future, our whole planet should look, to outside observers, like one integrated, interdependent organism at first, and later, some kind of computronium black-hole-like entity. Lots of new imaginative freedoms must happen inside that highly integrated system, but many old freedoms disappear. Why is that necessary? If they weren’t deeply and rigorously integrated into their society, any one of these future minds could, with future exponentially powerful tools, threaten the existence of civilization.
In humans, consider how the emergence of moral behavior works in such a way that extreme maladaptive behavior, like psychopathy, always occurs in less than 1% of us in any random sample. Populations always find value in having a few low-empathy rulebreakers in their midst, but in general, psychopathy isn’t good.
Our oldest surviving written code, the Sumerian Code of Ur-Nammu, is roughly 4,000 years old. Once it was written, it became a filter for removing, and reducing the replication of, maladaptive behavior. All of the laws and institutions that have emerged since have been additional filters. Our emerging global transparency (bottom-up and top-down surveillance systems) will be another such major filter, and the ethical AIs that arrive after it will be yet another. In each case, the integrating system may filter out 99% of the maladaptives and let 1% of the rulebreakers through, while always increasing the level of monitoring of that 1%. Before long, you’ve got a fully integrated system. Dysfunction is always occurring, but immunity, statistically, always wins. We’ll discuss immunity in our next section.
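The compounding power of stacked filters is worth a moment of arithmetic. The 1% pass rate is the chapter’s illustrative figure; treating the filters as independent is my own simplifying assumption:

```python
# Successive social filters (laws, institutions, transparency, ethical AI),
# each letting ~1% of maladaptive actors slip through. If the filters act
# independently, the surviving fraction shrinks geometrically.
def surviving_fraction(pass_rate, num_filters):
    """Fraction of maladaptive actors that evade every filter in the stack."""
    return pass_rate ** num_filters

for k in range(1, 5):
    f = surviving_fraction(0.01, k)
    print(f"{k} filter(s): {f:.0e} of maladaptives remain unfiltered")
# With four stacked filters, only about one actor in 100 million slips past
# all of them. Dysfunction keeps occurring, but immunity statistically wins.
```

Even with generous correlations between filters, the qualitative lesson holds: each new integrating layer multiplies, rather than merely adds to, the system’s self-policing capacity.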
As interdependence grows, global foresight also gets ever more integrative. As James Breaux, adjunct professor in the U. Houston Foresight MS program, says, we were talking about saving rivers and ecosystems in the 1970s and 1980s, saving species and the environment in the 1990s and 2000s, and now we’re talking about social justice and saving the planet. Does our scope of concern naturally expand beyond our planet, to the universe, in coming decades? I think it must, as Gerald Hawkins, in Mindsteps to the Cosmos (1983), and many others have argued.
If the transcension hypothesis is correct, with planetary thinking our map has reached the edge of what matters, and we’re just filling in more of the details of that map, discovering what global foresight really means in universal terms. It doesn’t mean sending many more humans into space, or more than a few thousand more, at any rate. Densification argues that this way of thinking is adolescent, and ignorant of global trends. It also argues that interdependence and sustainability thinking will keep accelerating.
Humanity has actually had a Save the Planet (from Humanity) perspective since Earthrise, 1968. This is called the Overview Effect. It’s just been percolating further and further into our consciousness, and now even grade school children think regularly about it. The philosopher and paleontologist Pierre Teilhard de Chardin was one of the pioneers of this perspective, with his peace- and interdependence-building concept of planetization, which we today call the Global Superorganism, or Global Brain. In publications beginning in 1923, while teaching geology at the Catholic Institute in Paris, Teilhard saw both accelerating socialization (human consciousness, emotional, intellectual, and ethical development) and accelerating planetization (human-to-human and human-to-machine interconnectedness and interdependence) as irreversible, irresistible processes of macrobiological development, culminating in the emergence of the noosphere, the global mind.
This systems approach to human beings and their technology offers the fascinating yet hard-to-prove thesis that interdependence, global responsibility, morality, and love must all grow strongly as socialization and planetization proceed. See Teilhard’s The Future of Man (1947/1964) for his best writing on the inevitable physical and psychological developments of socialization and planetization.
Other leading thinkers on planetization since Teilhard have been Peter Russell in The Global Brain, 1982, Gregory Stock in Metaman: The Merging of Humans and Machines into a Global Superorganism, 1993, Joel de Rosnay in The Symbiotic Man, 1999, Howard Bloom in Global Brain, 2000, and Peter Corning in Holistic Darwinism, 2005. For an excellent view on planetization that understands how closely developmental immunity and developmental morality are co-evolving, see Peter Turchin’s Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth, 2015.
Of all of these pioneering books, Gregory Stock’s 20th century work still offers perhaps the best basic overview of the long-term future of global interdependence. That is because Stock puts the focus where it belongs: on the accelerating technology all around us, and how it is increasingly tightly binding us together, making our world transparent, and becoming part of us. It will soon wake up and be our “better selves,” whether we want this outcome or not. The path we take to this postbiological “metahumanity” is in our hands, but the destination, as far as I can tell, is certainly not. So let’s stop arguing about it and start discussing its implications, shall we?
That isn’t a particularly popular, or politically correct, forecast about our future. Thus it’s no wonder that most authors skirt this perspective on the most fundamental driver of ever-improving morality. Elias cites something like it in The Civilizing Process. Pinker does as well in Better Angels of Our Nature, but he lumps this explanation in with twelve other models, and refuses to see any one as more fundamental than any other. Yet Stock’s perspective is the most accurate, in my view.
Stock does an elegant job exploring the idea that accelerating social and technological interdependence is leading us increasingly toward the state of a global superorganism. I would argue, however, that an organismic level of integration for humanity into society is still many decades away, if not centuries, and will surely require the emergence of postbiological life to do the final integrating.
Teilhard used the metaphor of the accelerating technological connectivity between people (the telephone was one of his examples) combined with the “finite sphericity” of the planet, to argue that a coming phase transition was inevitable. Applying this to today’s digital technologies, it is easy to argue that accelerating linkage density and bandwidth, and accelerating virtuality, on a finite-sized planet, must inevitably go from “more” to “different” as connectivity and virtuality grow. A phase transition is one of the most useful physical models of a singularity, so we could say that Teilhard was one of the first singularity theorists, arguing that an irreversibly interdependent and integrated new environment is now emerging.
The early global futurist and peace philosopher Donald F. Keys, who ran a small nonprofit, Planetary Citizens (now defunct), was inspired by Teilhard’s insights and wrote a modest book on the planetization topic, Earth at Omega: Passage to Planetization (1985). Unfortunately, he framed the growth of planetization in the classic “crisis” rhetoric of so many futurist works, posing a false and messenger-privileging choice between planetization and civilizational destruction.
If developmental immunity exists on Earth-like planets in our universe, as we will argue in the next section, our crisis rhetoric will be systematically and often greatly overstated. Individual organizations or civilizations are often under threat and in crisis, through poor choices, but the system as a whole gets more immune as complexification scales. Keys also held utopian expectations for how rapidly Teilhard’s more loving and moral human would be likely to emerge. Many exponentials move slower than we expect them to at first, and faster later.
Scholars including Alvin Toffler, James Grier Miller, and Francis Heylighen, who runs the research-oriented Global Brain Institute in Brussels, have noted that the acceleration of information production can make many problems worse, due to information overload and unmanaged new complexity. To manage that new complexity, Heylighen argues, we must improve the intelligence and interdependence of our emerging computational and communications systems, primarily via distributed, self-organizing approaches. Heylighen’s institute is doing some of the leading thinking about the nature of our emerging superorganism.
While growing digital connectivity and openness are necessary preconditions to growing interdependence, we must recognize that these trends will often make things worse, rather than better, in the first generation of deployment, in a classic Kuznets curve. Ian Bremmer documents this for societies that are opening up digitally in The J Curve (2007). Societies that are connecting and opening will often get less stable and more individuated before they get more stable and interdependent. Good development and security leaders know this, and do everything they can to accelerate the transition through the J. Read Bremmer for more.
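Bremmer’s J curve is qualitative, but its characteristic shape is easy to sketch: stability dips as a closed society first opens up, then climbs past its starting point as interdependence matures. The quadratic form and coefficients below are purely illustrative assumptions of mine, not Bremmer’s model:

```python
# Toy J-curve: a unitless stability score as a function of a society's
# openness (0 = fully closed, 1 = fully open). The quadratic and its
# coefficients are illustrative only; the real J curve is qualitative.
def stability(openness):
    """Returns a toy stability score; dips near openness ~0.4, then recovers."""
    return 0.5 - 1.6 * openness + 2.0 * openness ** 2

closed, trough, fully_open = stability(0.0), stability(0.4), stability(1.0)
assert trough < closed < fully_open   # the characteristic J shape
```

The first-generation dip is the practical warning in the paragraph above: opening societies pass through the trough before the gains of interdependence arrive, which is why good leaders try to accelerate the transition through the J rather than avoid it.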
The coming global superorganism is essentially an argument that the only way out of the problems created by our new technologies is to use them to create the next natural complex system in an integrative hierarchy. First we get species, who invent high technology, then we get a global superorganism. In this view, the global brain or superorganism is a developmental attractor, necessary to manage all the chaotic new freedoms and powers that individuals and organizations will gain in years ahead. The superorganism concept is a level of integration that goes beyond morality, and includes a new way of thinking, what Teilhard would call complexity-consciousness, as well as a new way of loving and caring.
Historically we can observe regular swings between evolutionary information-generation (new individuation, specialization, and diversity) and developmental integration (the rise of multicellularity, moral codes, city-states, laws, etc.). These two are easily seen as an evo devo pair. On average, both grow. To understand how that happens, think of the most complex object we know of to date, the human brain, in which a variety of mindsets argue with each other all the time over your best course of action.
Your brain is an evolutionary, diverse “society of mindsets,” as Minsky argued in The Society of Mind (1988). Yet it is also one integrated organism. Each mindset is far less likely to act against the whole than to cooperate with it, though defection does occasionally occur, as we see in multiple personality disorder and suicide. On average, as any psychologist knows, the higher the integration, the stronger the interdependence, and the rarer the breaking of moral codes.
Nevertheless, it remains hard to accept the idea that future global intelligence will inevitably become far more self-policing, ethical, and integrated (organism-like, rather than an independent network) as it becomes postbiological, in all human civilizations. There is an instinct in the West, particularly strong in the US, that “the future always gives us more freedom.” But both developmental densification and interdependence argue that this instinct is wrong. Yes, we get more freedom, but only of a certain type, as densification and interdependence also always grow, at the leading edge of complexification, and in the environment which that leading edge controls.
Returning to the transcension hypothesis, if it turns out to be true, it might be only an ethical failure that would lead to interstellar expansion or messaging after postbiological life arrives on Earth, and such failures should be extremely rare statistically, once we’ve got AI ethics anywhere on the planet.