A.I. The Light of the World.
One of the most persistent fears about the future of the human race is the potential
for Artificial Intelligence to get out of hand and kill us all. Skynet and HAL 9000 aside,
this is a very real risk we'll eventually face, as it's entirely possible that within
a few decades machine intelligence may surpass our own. Moreover, we really have no way of predicting
just how such a machine might think and behave. But we do know how we think and behave, and
it's abundantly clear that the effect of biological intelligence on a planet is dramatic.
Aside from microbes, nearly every species on Earth is subject to the human brain. If
we wanted there to be no more elephants on planet Earth, we could drive them to extinction
rapidly. We allow them to exist because we like them, yet they're endangered anyway because
we've hunted them for thousands of years, and poaching continues to this day. Likewise,
if an A.I. grows more intelligent than we are, we would similarly be subject to
its whims. Moreover, such an A.I. would be capable of improving
itself, making itself even more intelligent than when it started. It could enter a cycle
of self-improvement and end up as something we never intended. Worse still, it could emerge
accidentally inside a future highly advanced computer without our knowing about it until
it's too late; this is termed an emergent superintelligence. So, decades from now, we
could have some serious worries about A.I., so much so that Elon Musk, Stephen Hawking
and Bill Gates are out sounding the alarm bells.
But whether it would cause our extinction is a murkier question. We tend to
assume it would see us as an existential threat (after all, we can unplug
it) and that it would eliminate that problem by killing us off, as Skynet did in the Terminator
movies. But that's not the only possible outcome. There are several other scenarios
that allow for our survival, in some form or another.
One way would be if the supercomputer decided that the whole point of existence is maximizing
pleasure. Such a hedonistic machine might wire itself solely to experience pleasure,
and might take pleasure in rewiring us to feel only pleasure, resulting in an ever-smiling
utopia of some sort, though I think after a while such a thing might in fact become a
dystopia. The second possibility isn't as terrifying,
but it's close. The computer could set up what is known as a Singleton, a concept put forth
by Nick Bostrom of Oxford University. This is a scenario where the superintelligent A.I.
realizes that, since it's smarter than we are, the only logical thing to do is take
over civilization and make our decisions for us. It might enforce that status quo through
mind-control technologies, cameras, and all sorts of Orwellian things, so much so that
it might control and deceive us so completely that we would never know it even exists.
A third possibility is that it may conclude that it's easier to launch itself into space
and head out on its own than bother with spending resources on exterminating us. It's a big
universe, certainly large enough for a superintelligent computer to find a place to spend its existence
in peace. Or, if it were smart enough, it might create a universe of its own and leave
this one entirely. Then there is the possibility of machine suicide.
The A.I. may become conscious, learn everything there is to know about the universe within
minutes, and then shut itself off after concluding that existence has no meaning or point. I
can see the programmers now wondering why their supercomputer shut down and won't turn
back on. But if all of that weren't bad enough, there
are two other possibilities related to A.I. that could result in the extinction of our
species, and of the A.I. itself. Unleashing an A.I. on the universe could pose an existential
threat to other species or to other machine superintelligences that may be lurking out
in the universe. Their response might simply be to destroy us before our A.I. makes itself
into a problem for them. Granted, this scenario is highly unlikely, but it is possible.
The other hearkens back to this article on whether the universe is a computer simulation. If
we were to create an artificial intelligence that ran away with itself and began consuming
too many resources on the computer running our simulation, the controller of the simulation
might simply choose to pull our plug. If they didn't, the A.I. would eventually take over
the entire universe and presumably ruin whatever purpose the simulation had.
Thanks for listening. I am futurist and science fiction author John Michael Godier, currently
eyeing my laptop suspiciously, looking for signs that it wants to take over the world.
Check out my books at your favorite online book retailer, and subscribe to my channel
for regular in-depth explorations of the interesting, weird, and unknown aspects of
this amazing universe in which we live.
by: John Michael Godier