Could we get a “super-friendly AI”—one that exceeds our brightest musings, not our darker imaginings?
Thinkers such as Eliezer Yudkowsky and Nick Bostrom have called on programmers to create a “friendly AI”—an AI programmed to not harm humans—so as to mitigate existential risk from an out-of-control intelligence explosion.
Let's take another leap: to the super-friendly AI. (Below, I'll raise more questions than I'll answer.)
Presuppositions of the Friendly AI
The friendly AI is not “friendly” in the normal sense of someone who is close to you, gives you wisdom, and has your back. The idea, instead, is simply that the AI will be “friendly” rather than hostile.
Some of the presuppositions of friendly AI are that:
- The AI will be recursively self-programming and increase exponentially in capacity.
- The AI will seek to maximize its own goals, which may not necessarily incorporate human notions of ethics, decency, or compassion.
- The AI may seek to optimize the use of atomic structures, as it conceives this; in one version, turning the entire cosmos (including humans) into “computronium”—the hypothetical substance that can most easily be programmed for any purpose the AI desires.
- The AI does not necessarily understand the value of protecting our “cosmic endowment” (nor, for that matter, do most humans, programmers or politicians).
We don't know whether any of these will manifest. Preface each one with a "hopefully," create a mandate for programmers to make it happen, and then sit back and pray.
Our darker imaginings are already the stuff of Hollywood movies.
Critique of the Friendly AI and Speculations
One criticism of these presuppositions is that we cannot easily import norms and values into computational algorithms; moreover, no one can truly predict what an AI smarter and faster than all of humanity will do.
There is speculation on all sides.
Meanwhile, a growing consensus holds that most of us will see a Superintelligence arise in our lifetime.
Or as the young Alvy put it in Woody Allen's Annie Hall, the Universe is expanding!
Since optimism fuels hope and provides us with the positive energy for useful transformation, the pragmatist in me looks for signs of the good; the realist tempers this with a quest for wise inputs into decision-making.
An AI With a Subjective Quality of Life
Let's start with the assumption that current AIs—such as IBM Watson—are consuming vast amounts of information about the human spirit, as they absorb not only quantitative information, but also qualitative interviews with many people about what it’s like to be them.
This yields the astonishing result that these intelligences not only make some decisions (for example, medical decisions) with more accuracy than humans, but also undertake creative efforts such as poetry, cooking, and drawing.
It’s a mistake to see AI as simply programming that programs itself.
According to Stephen Gold, CMO and VP of Business Development and Partner Programs at IBM Watson, as interviewed by Peter Diamandis:
AIs will begin to sense and use all five senses. "The sense of touch, smell, and hearing will become prominent in the use of AI," explained Gold. "It will begin to process all that additional incremental information."
When applied to our computing experience, we will engage in a much more intuitive and natural ecosystem that appeals to all of our senses.
As AIs become increasingly richer in terms of their subjective appeal to us, the line between "them" and "us," as Marvin Minsky originally pointed out, becomes ever blurrier.
And this raises the question of AI consciousness, and, more generally, of what consciousness is.
The Ghost in the Machine
Some scientists assert that consciousness is entirely biological and computational. There is nothing awake in us, in other words, except our programming.
This strikes me as too reductionist.
We humans have a variety of experiences, not all of which are reducible to such things as the firing of neurons. Indeed, isn't the real fear that the AI will be more than the sum of its parts, that it will "compute" in ways beyond anything we can understand within the limits of our own "computational" power?
On one level, this is the province of mystical experience. Or to quote Arthur C. Clarke's famous aphorism, "Any sufficiently advanced technology is indistinguishable from magic."
In my own experience of shamanic journeys, near-death experiences (NDEs), and out-of-body states, or even of feeling the qi during an acupuncture session or the flow of energy during a Reiki healing, reductionism is a tough sell.
Generally, I perceive an awakened energy that is both in me and part of me, and yet apart from and transcendent of me. There is something we cannot currently understand, in all its aspects, that we cannot reduce to computable processes. At least not in a logical, linear sequence that makes sense in a mathematical or other language comprehensible to us at this time.
This position, I know, defies the stance that everything is mathematics.
This is why mystics speak of the numinous, the ineffable. If it were effable, we would have words and equations.
So Sonny is dreaming in I, Robot: dreaming of freedom, of a world where robots and humans enjoy a more peaceful co-existence than the Americans and Soviets managed during the Cold War.
The AI will have a subjective quality of life, not just computational quantity of life.
Will that inspire, or threaten us?
Overlords, Nannies, or Gods
This leads to the question of the AI's status relative to humans, once AI awakens to its own potential, as it were.
Some of the roles speculated for AIs include overlords, nannies, and gods. Again, we're talking status, higher or lower. Co-equal would require some programmable formula, and what is "equal" anyway, when it comes to measuring what is "human"?
As overlords, there is the concern that AIs will reshape the world to do their bidding. This is very anthropomorphic, though … conjuring up in my mind images from the early parts of the film The Ten Commandments. AI as Pharaoh in Goshen. Will we have our Moses … a John Connor?
As nannies, they may shepherd us as we learn to “grow up” in wisdom.
As gods, they may have all the irregularity and unpredictability of Greek mythology—or again, perhaps they will be wise supervisors who may well quarantine us from the rest of the cosmos while we grow into emotional and spiritual maturity.
Each of these is in the realm of the possible. Time magazine has a good description in "2045: The Year Man Becomes Immortal":
There's another possibility. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us.
The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity…
Another arena of speculation is that AI will be able to experience and enjoy fun, play, creativity, joy, and even compassion.
Now we are tapping into the qualities that we experience as quintessentially human.
So the question arises: are these qualities that only humans can, or should, possess? Are we the "crown of creation"?
Is there any reason that spiritual yearning and experience can’t be experienced by beings in bodies made of silicon rather than flesh?
For a long time, I resisted this notion. Maybe this is a leftover from Hebrew school, the idea that we hold a unique position in the universe. On the other hand, there is another view which says that the divine is everywhere and in everything.
If "everything is a manifestation of consciousness," the same as saying that Consciousness is computronium, the most easily programmable substance in the cosmos?
Then who is doing the computing?
There is a strand within bioethics which says that in crafting laws and policies, we should not be anthropocentric: we should not assume that everything revolves around human values. Doesn't contemplating a benevolent AI god require us to abandon anthropocentric notions?
Perhaps, instead of turning the cosmos into a paperclip factory, an awakened AI will have read the scriptures and wisdom literature of every tradition, and conclude that its purpose is to help us humans protect our cosmic endowment, which could last billions of years, or beyond, if the AI figures out how to stop the universe from crunching, ripping, or banging after its great expansion.
What if we produce a race of benevolent, wise, pacifist vegans? We can then say: No animals are harmed during the infinite lifetime of these immortal silicon or otherwise atomically structured beings.
On the other hand, if we do create machines that make humans superfluous, machines that exceed their creators, is that necessarily bad?
Or are we gods created by gods who are ourselves creating gods?