Opinion, Berkeley Blogs

Coevolution of human and artificial intelligences

By Edward Lee

Vladimir Putin, president of Russia, in an open lesson to more than a million schoolchildren on Sept. 1, said that "Whoever becomes the leader in [artificial intelligence] will become the ruler of the world." Elon Musk, CEO of Tesla, has said that AI represents an existential threat to humanity and has urged government regulation before it is too late. Clearly, AI technology has triggered both hype and fear. Is this justified?

First we must ask, will artificial intelligence exceed human intelligence?

One difficulty with this question is that we have to define "intelligence" before we can determine whether a computer-based system possesses it. Let us begin with an easier question: "Will machines exceed humans?" The answer is clearly "yes"; they already have. This is obvious for tasks involving strength and precision, but it is equally true for cognitive functions such as remembering, organizing, and retrieving information. My smartphone can remember far more than I can and can find for me anything humans have ever published. What is driving the hype and the fear is that the number and variety of cognitive functions for which machines exceed humans is growing.

Marshall McLuhan, in his groundbreaking 1964 book Understanding Media, describes technologies as extensions of mankind. Are we facing extensions that will supplant rather than extend? Kevin Kelly, in his 2017 book, The Inevitable, argues that what is emerging is not mechanized human intelligence, but rather distinctly nonhuman intelligences. The use of the plural here is deliberate. An anthropocentric view understands intelligence only in terms of its human manifestation, but we are in the process of learning that intelligence has many facets and that machines can vastly exceed humans in at least some of those facets. But to date, we have scant reason to believe that they will exceed humans in all of the facets.

Machines' disadvantages

Machines have two distinct disadvantages compared to humans. First, all of the machines we are using today to build functions that might be called "intelligent" are digital and computational. In my 2017 book, Plato and the Nerd (P&N), I show that natural processes such as those in the human brain have no such constraint, and therefore are very likely capable of functions that no computer, as conceived today, can replicate. Second, human cognition has evolved over millions of years. Richard Dawkins, in his 1987 book, The Blind Watchmaker, argues that evolution is capable of far more sophisticated design than anything an "intelligence" could ever come up with in a top-down fashion. To the extent that software is top-down intelligent design, it will remain unlikely to match the sophistication of biologically evolved systems.

The first disadvantage is fundamental, and whether it really is a disadvantage hinges on whether human cognition actually takes advantage of the possibility of non-digital, non-computational processes. I have argued in P&N that this latter question cannot be answered definitively, so I will put this question aside for now.

The second disadvantage, however, is real but probably temporary. I have argued before that software is less the result of top-down intelligent design than we commonly assume. Software is instead coevolving with human culture. And human culture itself evolves in a Darwinian way, as Dawkins argued in The Selfish Gene (1976) and as Daniel Dennett has more recently affirmed in From Bacteria to Bach and Back (2017).

George Dyson, in his 1997 book Darwin Among the Machines, traces a long history of the idea that technology coevolves with humans. He sees software as a new kind of replicator (to use Dawkins' term), analogous to genes. Humans and computers serve the software by hosting, mutating and propagating it, in much the same way that animals, including humans, are subservient to genes, as argued by Dawkins. Dyson observes,

Computers may turn out to be less important as an end product of technological evolution and more important as catalysts facilitating evolutionary processes through the incubation and propagation of self-replicating filaments of code.

Technospecies, the precursors of new planetary life

A computer program is a technospecies, and today's examples will likely eventually be viewed as the very primitive, short-lived precursors of a new kind of life on this planet. It is even easier to talk of the evolution of these technospecies than of the evolution of memes, Dawkins' replicators for cultural evolution, because a computer program running in the cloud is more like a biological living thing than a meme is.

Wikipedia, for example, has quite a few features of a living thing: it reacts to stimuli from its environment (electrical signals coming in over the network); it operates autonomously (for a while, at least, though it depends on us for long-term survival); it requires nourishment (electricity from the power grid); it self-repairs (vandalism detection; see chapter 1 of P&N); and it even dreams (background indexing to facilitate search; see chapter 5 of P&N). Memes have few if any of these features, so talking about Darwinian evolution of memes is more of an analogy than a direct application of Darwin's idea.

As I point out in P&N, a technospecies individual such as Wikipedia facilitates the evolution of memes. Wikipedia makes me smarter. Its existence rewards us humans, who in turn nurture and develop it, making it "smarter." Moreover, we have become extremely dependent on technospecies, just as they are dependent on us; what would happen to humanity if our computerized banking systems suddenly failed? It's a classic symbiosis.

For complex digital and computational behaviors, like those in Wikipedia, a banking system, or a smartphone, it is hard to identify any cognitive being that performed anything resembling top-down intelligent design. These systems evolved through the combination of many components, themselves similarly evolved, and through decades of iterative design revisions with many failures along the way. It is classic survival of the fittest, where in Dennett's words, "fitness means procreative prowess." The propagation of technospecies is facilitated by the very concrete benefits they afford to the humans that use them, for example by providing those humans with income.

Dennett notices coevolution in simpler technologies than software. If you will forgive my three levels of indirection, Dennett quotes Rogers and Ehrlich (2008) quoting the French philosopher Alain ([1908] 1956) writing about fishing boats in Brittany:

Every boat is copied from another boat. … Let's reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. … One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others.

Engineers as facilitators, not inventors

So who fashions software? One could say, with complete rigor, that human culture fashions software, not software engineers. Engineers are facilitators, perhaps more than inventors. Dennett notes about culture:

[S]ome of the marvels of culture can be attributed to the genius of their inventors, but much less than is commonly imagined …

The same is true of technology.

Although we tend to overstate the amount of top-down intelligent design in technospecies, human cognitive decision making certainly influences their evolution. At the hand of a human with a keyboard, software emerges that defines how a new technospecies reacts to the stimuli around it, and if those reactions are not beneficial to humans, the species very likely dies out. But this design is constructed in a context that has evolved. It uses a human-designed programming language that has survived a Darwinian evolution and encodes a way of thinking. It puts together pieces of software created and modified over years by others and codified in libraries of software components. The human is partly doing design and partly doing husbandry, "facilitating sex between software beings by recombining and mutating programs into new ones" (P&N, chapter 9).

So it seems that what we have is a facilitated evolution, facilitated by elements of top-down intelligent design and conscious deliberate husbandry.

Is facilitated evolution still evolution? Approximately 540 million years ago, a relatively rapid burst of evolution called the Cambrian explosion produced a very large number of metazoan species over a relatively short period of about 20 million years. Andrew Parker, in his 2003 book In the Blink of an Eye, proposed the Light Switch Theory, which holds that the evolution of eyes initiated the arms race that led to the explosion. Eyes made possible a facilitated evolution because they enabled predation. A predator facilitates the evolution of other species by killing many of them off, just as the sea kills boats. So facilitated evolution is still evolution. Now, in the Anthropocene era, humans have facilitated the emergence of many species through husbandry, including wheat, corn, chickens, cows, dogs, cats, and, perhaps, artificial intelligences.

Software needs humans to survive, propagate

Humans designing software are facilitators in the current Googleian explosion of technospecies. This is proactive evolution: not just passive random mutation and death due to lack of fitness, but husbandry and predation. Predation plays a critical role in the evolution of technospecies: the success of Silicon Valley depends on the failure of startup companies as much as on their success. Software competes for a limited resource, the attention and nurturing of humans that it needs to survive and propagate.

How far can this coevolution go? Dennett observes that the human brain is limited, but its coevolution with culture has hugely magnified its capabilities:

[H]uman brains have become equipped with add-ons, thinking tools by the thousands, that multiply our brains' cognitive powers by many orders of magnitude.

Language is such a tool. But Wikipedia and Google are also spectacular multipliers, greatly amplifying the effectiveness of language. Google and Wikipedia are not themselves top-down intelligent designs. Although their evolution has certainly been facilitated by many small acts of top-down intelligent design, as affordances they far exceed anything that any human I know could possibly have designed. They have coevolved with their human symbionts.

Technology facilitates thinking

Dennett observes that collaboration between humans vastly exceeds the capabilities of any individual human. I argue that collaboration between humans and technology further multiplies this effect. Technology itself now occupies a niche in our (cultural) evolutionary ecosystem. It is still very primitive, much like the bacteria in our gut, which facilitate digestion. Technology facilitates thinking.

Dennett takes on AI and most particularly deep learning systems, calling them parasitic.

[D]eep learning (so far) discriminates but doesn't notice. That is, the flood of data that a system takes in does not have relevance for the system except as more "food" to "digest."

This limitation evaporates when these systems are viewed as symbiotic rather than parasitic. In Dennett's own words, "deep-learning machines are dependent on human understanding."

Dennett reflects today's handwringing and angst about AI with the question:

How concerned should we be that we are dumbing ourselves down by our growing reliance on intelligent machines?

Are we dumbing ourselves down? It doesn't look that way to me. In fact, Dennett notices a similar partnership between memes and the neurons in the brain:

There is not just coevolution between memes and genes; there is codependence between our minds' top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains.

From this perspective, AI should perhaps be viewed instead as IA, Intelligence Augmentation. For the neurons in our brain, the flood of data they experience also has no "relevance for the system except as more 'food' to 'digest.'" An AI that requires a human to give semantics to its outputs (see P&N, chapter 9) is performing a function much like the neurons in our brain, which also, by themselves, have nothing like comprehension. It is an IA, not an AI.

This does not mean we are out of danger. Far from it. Again, from Dennett:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will overestimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.

Even more worrisome, IA in the hands of nefarious humans is a scary prospect indeed, as Putin observed.

We can nudge, not control, AI's evolution

Dennett argues that top-down intelligent design is less effective at producing complex behaviors than evolution. He uses this observation to criticize "good old-fashioned artificial intelligence" (GOFAI), where a program designed top-down by a presumably intelligent programmer explicitly encodes knowledge about the world and uses rules and pattern matching to leverage that knowledge to react to stimuli. The ELIZA program (P&N, chapter 11) is an example of a GOFAI program.
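To make the GOFAI style concrete, here is a minimal ELIZA-like sketch in Python. The rules below are illustrative inventions, not Weizenbaum's original script; the point is that every bit of "knowledge" is a pattern and response template written down, top-down, by the programmer.

```python
import re

# A toy ELIZA-style rule set: each rule pairs a regular-expression
# pattern with a response template. All "knowledge" is encoded
# top-down by the programmer -- the hallmark of GOFAI.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the response of the first matching rule, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I need a vacation"))    # -> Why do you need a vacation?
print(respond("The weather is nice"))  # -> Please go on.
```

The program discriminates among inputs but, in Dennett's sense, notices nothing: an utterance that matches no pattern simply falls through to a stock reply.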

In contrast, machine learning techniques, particularly deep learning, have a more explicit mix of evolution and top-down intelligent design. The structure of a deep learning program is designed, but its behavior evolves. There have even been moderately successful experiments at Google where programs learn to write programs. These developments have fueled the panic about AI, where doomsayers predict that new technospecies will shed their symbiotic dependence on humans, making us superfluous. Dennett's final words are more optimistic:
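The division of labor described above can be sketched in a few lines of Python. This is a toy example, not a real deep-learning system: the designer fixes the structure (a single linear unit, y = w*x + b), while the behavior (the values of w and b) emerges from iterated adjustment against data, a process closer to selection than to specification.

```python
import random

random.seed(0)
# Structure: fixed by the designer. Behavior: the parameters w and b,
# which start out arbitrary and are shaped by the data.
w, b = random.random(), random.random()
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # hidden target: y = 2x + 1

for _ in range(2000):          # repeated exposure to the data
    for x, y in data:
        error = (w * x + b) - y
        w -= 0.01 * error * x  # nudge parameters to reduce error
        b -= 0.01 * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

No line of this program states that the target is y = 2x + 1; that behavior was not designed in but grew out of the training loop, which is why such systems feel evolved rather than engineered.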

[I]f our future follows the trajectory of our past – something that is partly in our control – our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.

I share this cautious optimism, but also recognize that rapid coevolution, which is most certainly happening, is extremely dangerous to individuals. Rapid evolution requires a great deal of death. We can expect most software and many cultural artifacts, including entire careers, to die. And just as software engineers nudge rather than control the trajectory of software development, we as a culture can nudge but not control the evolution of AI. The results of any rapid evolution are unpredictable, but if we focus on nurturing codependence, we are more likely to end up with a symbiosis than an annihilation.