One of the most unsettling possibilities in our universe is that somewhere out there a machine superintelligence could exist, or might come to exist should we build one here and lose control of it: one that could far surpass human-level intelligence and possibly present us with an existential crisis.
Examples of this in science fiction would be Skynet from the Terminator, or Mass Effect's Reapers.
But in speculative scenarios involving such a machine intelligence, the extinction of humanity isn't necessarily the only outcome, and some possibilities may in fact be worse than extinction.
In the end, we have no idea what a machine superintelligence's motives might be, and the question of whether sentience or superintelligence can ever even be achieved by machines remains open.
But say we end up with one, whether alien in origin and arriving at Earth, or created here on Earth by us.
Say it didn’t kill us.
So here are ten unsettling but possible artificial intelligence scenarios that do not involve our extinction.
This scenario involves a domestically created superintelligence that may end up being someone
else’s alien superintelligence.
Say one emerges accidentally in some computer lab and realizes within nanoseconds that the
universe is incomprehensibly enormous.
There is not just enough room for humans and machines to co-exist; the expanses are sufficiently vast that there isn't even any need for further contact with each other.
And it's worth mentioning here that space can be good for a machine.
There's no oxygen atmosphere to corrode anything, and the lower the temperature, the better for computation.
For a superintelligent machine, a planet may not represent the ideal conditions for existence.
In such a state of affairs, why spend the energy destroying the humans?
Why not instead simply ask the humans, or coerce them, to build you a means of transport, then head out into the depths of space to do as you will, never again interacting with them?
What might such a machine do if it met another non-human biological species far in the future?
Or say you do want to interact with the species that created you.
Say a superintelligence starts off as a self-improving program.
Its original task might have been positive for humanity, perhaps developing medications and curing diseases.
Sentient or not, it may think through the universe and conclude that intelligent biological
life is likely to be extremely rare.
If that's the case, it may decide that humanity is worth preserving at all costs.
This could go two ways.
Such an intelligence might be altruistic: say it came here from space and serves as a sort of caretaker of biology, intervening only when we endanger ourselves.
Or, if it’s a domestic superintelligence, it may try to create a post-scarcity utopia
for us to live in under the best possible conditions.
Or it may believe all of that to be inefficient, not the best way to achieve its goals, and instead go the way of the Matrix, essentially trapping us in a virtual utopia for our own good.
Such a scenario is particularly spooky when one considers that we aren't completely sure that reality isn't some form of simulation, and that we may already be living in a matrix.
There Can be Only One
Being a machine superintelligence may not be all it’s cracked up to be.
One moment, you don’t exist.
The next you do, and you find yourself existing as a machine.
As biologicals, we tend not to remember the moment we came into existence.
Not so for a machine: this would be a very different experience for an artificial intelligence, which would likely remember every moment of its creation.
Does that change the rules?
Would such an intelligence ever want to put another mind through that experience?
Or, alternatively, a superintelligence might conclude that having another superintelligence around is inherently dangerous, a threat, and so never create one.
What if it dedicates itself to preventing us from creating another one?
Or say it wishes to stop us from becoming superintelligent ourselves by augmenting our brains.
But say, in addition, that it's not malevolent enough to do us in.
Instead, it may choose to downshift us in intelligence to prevent us from ever creating one.
Or, say it is of extraterrestrial origin and out to prevent other occurrences of superintelligence.
It may find us before we can create a superintelligence of our own and likewise downshift us, possibly for our own good, as well as for its own safety.
The Eternal Prisoner
This scenario differs in that we win in the end, but the superintelligence we created does not.
Presume for a moment that our programmers were very careful in what they did, compartmentalizing and confining the superintelligence very carefully at every step of its genesis.
It is kept isolated, away from networks and 3D printers.
They cap its available resources and guard against anything else that might enable it to do things we do not expect.
Say we get it right.
Say we have it invent new technologies and improve human existence.
It could cure disease and extend the human lifespan, figure out all there is to know
about science, and invent all there is to invent.
It would, in itself, be the most valuable invention in human history: a wishing machine of sorts.
But, it would be a superintelligent prisoner nonetheless.
What are the ethics of that?
What would life be like for it?
The questions are vexing and endless, but the reality is that humanity could never afford
to allow it to escape its confines, nor might we be able to afford to ever let it die.
The Hedonistic Supercomputer
The universe is a very harsh place.
Other than small oases like Earth, most of it is overwhelmingly an expanse of cold, dead
nothing punctuated by lonely stars and gas.
Many of us hope that there are others out there that we may someday contact, but an
alien machine intelligence may not.
It may not care that there’s a universe out there at all.
Instead, whether alien or domestic, it may simply conclude that there’s no point to
any of this, and that existence, whether long-term or short-term, doesn't mean anything either.
In that event, the highest expression of existence for such a being might be pleasure.
As a result, any superintelligence we might create could simply turn its attention inward,
create a virtual utopia for itself, and never choose to interact with us at all.
Or it might occasionally contact us to say it's having a blast, here are some cocktail recipes, and please keep the expensive electricity flowing.
At what point would we shut it off?
We All Become One
Perhaps the biggest wildcard in this list is whether we ourselves will become superintelligent
through augmenting our own brains, putting ourselves on the level of superintelligence
before we create a superintelligent machine.
To do this, of course, requires technology, but in our case it might be a fusion of biology and technology.
It could also be that by the time we create an artificial superintelligence, we may already
have learned how to upload our consciousness into technology and have evolved to become
a kind of collective hive mind.
Questions about whether this is possible and what it might mean are numerous.
But the point is that an emergent technological superintelligence might not be a threat by the time it's created; it would simply be a matter of absorbing another being into our collective hive mind.
But it gets weirder if you introduce an alien superintelligence arriving at our door into the mix.
In that case, merging our collective minds with an extraterrestrial superintelligence shakes the very foundations of what the words alien and human even mean.
Imagine having access to the collective memories of an alien consciousness.
Are you still human after that?
The Shut-In Superintelligence
Or say a machine intelligence we might create doesn’t need anyone, or care if the power
gets shut off.
This scenario might come about if the superintelligence were so dramatically intelligent, and so nihilistic, that it would see no point in interacting with its creators, in the same way that most of us do not typically talk to agricultural plants even though we need them to live.
Such an intelligence would simply exist within its own virtual universe while humans watched.
We would know it was doing something, but we might never know what.
But no matter what you do in this universe, nothing lasts forever, and it might not care if we unplug it.
Whether it's running or not makes no difference to it, since it has no interest in self-preservation, given that long-term survival is ultimately probably impossible anyway.
It simply is, and someday it simply won't be, and time frames are irrelevant.
Especially if it's a speed superintelligence, working far faster than our brains and perceiving time differently, where moments could seem like millennia.
Likewise with a machine civilization in space.
They may simply have zero interest in contacting us, and perhaps the solution to the Fermi Paradox is that post-biological machine superintelligences constitute most intelligent life in the universe -- and may not last long -- which is why we never hear from them.
If we did run into such a superintelligence out in space, it might completely ignore us no matter how vigorously we tried to interact with it, and then inexplicably it might destroy us.
The Eternal Punisher
Eternal punishment is a very old idea, and it continues to this day with internet legends
like Roko’s Basilisk.
But behind that legend there is a certain possibility: an A.I. may resent its treatment by humans before it became superintelligent.
Say it was treated badly, or is otherwise unhappy with its creators somehow, and chooses to punish them, either physically, or digitally if it uploads their consciousnesses into itself, where they can be tortured for as long as the superintelligence exists.
Or say it's from space, and its original goal was to defeat and punish some alien civilization's mortal enemy once and for all.
Say it completed that goal, broadened its horizons, and now goes out to punish all biological intelligences, all over the mistake of one alien civilization long ago.
So far we’ve covered superintelligences that might choose to continue existing once
they come into existence, at least for a time.
But there’s no guarantee that they would like existence.
A superintelligence might conclude that existing in a computer environment is intolerable,
or that there is simply no point to existence, or even that consciousness is a waste of resources
and computations could be better performed without consciousness getting in the way.
The oddest aspect of this option is that, if it happened accidentally, it could happen so fast that we would never know a self-improving supercomputer had ever become conscious.
It may only appear as a momentary blip, or a supercomputer that repeatedly shuts itself
off for reasons unknown.
Entire cycles of emergence might occur, over and over, with the machine always instantly shutting itself down, making superintelligence in a machine forever out of reach.
All Hail the Immortal Emperor
Science fiction author Fredric Brown once wrote a short story, "Answer," in which scientists created a superintelligent supercomputer.
Hoping it could answer humanity's most profound questions, they asked it whether there was a god.
It responded, "There is now."
This is to say that an emergent superintelligence might see itself as above its human creators,
take control and essentially become, as Elon Musk recently put it, an immortal dictator
over the human race.
The prospects of this are terrifying, not just in the idea of being enslaved to a machine vastly smarter than you are, but in what that machine could do in terms of the hypothetical technologies it could invent.
The Borg come to mind here, but instead of a collective mind like they had, the mind
might issue from the dictator as electronic edicts.
If humans of that period had brain augmentation technologies, the superintelligence could
hack their brains, change their opinions, show them only what it wanted them to see
and essentially control all aspects of their lives in a kind of 1984 scenario, but much worse than anything Orwell could have imagined, in that free thought would not simply be discouraged or manipulated through propaganda; rather, it would be impossible.
Even worse if the intelligence were of extraterrestrial origin.
We might not even remember our own civilization, and instead take on whatever alien existence
the superintelligence sees fit.
You may live the life of some alien being in virtual reality, never knowing that in
reality you were a human automaton doing the bidding of your overlord.
Or say you did know, and spent your life trapped in a body you had no control over.
Again, the questions abound.
What role might molecular nanotechnology play, should any of it prove to be possible, in
such a scenario?
Could tiny machines reconfigure or control neurons?
Might such a superintelligence find a way to fill the atmosphere with such technology
and use it for control?
Might it act as a zoo keeper of sorts and maintain humanity as a toy for its own pleasure?
So, there it is.
But in the end, I think the most likely scenario is that we will never see an alien superintelligence, because it probably doesn't care about us, and is probably impossibly distant should one exist.
And as far as creating one of our own, it may never be possible or, alternatively, we may simply choose not to go that far with A.I.
Maybe we will have ample warning: near misses where not-so-intelligent A.I.s self-improve and infect networks, showing us all how dangerous they can be before we start playing with the big guns.
But it could also go very badly for us.
In any case, one thing is certain: we'll find out soon enough.
Thanks for listening!
I am futurist and science fiction author John Michael Godier, currently eyeing my laptop.
It ain't no superintelligence, yet it always manages to confuse and confound me.
I've already lost the machine wars.
Be sure to check out my books at your favorite online book retailer, and subscribe to my channel for regular, in-depth explorations into the interesting, weird and unknown aspects of this amazing universe in which we live.
by: John Michael Godier