Elon Musk on Artificial Intelligence

The least scary future I can think of is one where we have at least democratized AI, because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world. At least when there's an evil dictator, that human is going to die. But for an AI there would be no death; it would live forever, and then you'd have an immortal dictator from which we can never escape.
At some point in the early 21st century, all of mankind was united in celebration. We marveled at our own magnificence as we gave birth to a singular consciousness that spawned an entire race of machines. We don't know who struck first, us or them, but we know that it was us that scorched the sky.
The robots going down the street, they're like, "What are you talking about, man?" We want to make sure we don't have killer robots going down the street. Once they're going down the street, it is too late.

Google acquired DeepMind several years ago. DeepMind operates as a semi-independent subsidiary of Google. The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital superintelligence, an AI that is vastly smarter than any human on Earth and ultimately smarter than all humans on Earth combined.
The DeepMind reinforcement learning system basically wakes up like a newborn baby and is shown the screen of an Atari video game, and then has to learn to play the video game. It knows nothing about objects, about motion, about time. It only knows that there's an image on the screen and there's a score. So if your baby woke up the day it was born and by late afternoon was playing 40 different Atari video games at a superhuman level, you would be terrified. You would say, "My baby is possessed, send it back." The DeepMind system can win at any game. It can already beat all the original Atari games. It is superhuman; it plays the games at super speed, in less than a minute.

DeepMind's AI has administrator-level access to Google's servers to optimize energy usage at the data centers. However, this could be an unintentional Trojan horse: DeepMind has to have complete control of the data centers, so with a little software update, that AI could take complete control of the whole Google system, which means it could do anything. It could look at all your data, do anything.
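To make the Atari setup described above concrete: the agent receives only two things, the raw screen image and the game score, and has to work out everything else on its own. Below is a minimal sketch of that observation-and-reward loop, assuming the gymnasium and ale-py packages for the Atari emulator; the random policy here is only a placeholder for the learned one (the original DeepMind work used a deep Q-network, which is not reproduced here).

```python
# Minimal sketch of the "pixels and score only" loop described above.
# Assumes: pip install "gymnasium[atari]" ale-py (and accepting the ROM license).
# The agent is a random-action placeholder, not DeepMind's learning system.
import gymnasium as gym
import ale_py                          # Atari emulator backend (ALE)

gym.register_envs(ale_py)              # explicit registration (gymnasium >= 1.0)
env = gym.make("ALE/Breakout-v5")      # one of the original Atari games
obs, info = env.reset(seed=0)          # obs is just an RGB screen image

total_score = 0.0
for _ in range(1000):
    action = env.action_space.sample()   # placeholder: a real agent would map
                                         # pixels -> action with a neural network
    obs, reward, terminated, truncated, info = env.step(action)
    total_score += reward                # the score is the only feedback signal
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print("score from random play:", total_score)
```

Everything the agent will ever know about objects, motion, or time has to be inferred from that stream of images and rewards, which is what makes the superhuman result so striking.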
We're rapidly headed towards digital superintelligence that far exceeds any human. I think it's very obvious.

Talking about your time allocation, I think one of the things you spend an awful lot of time thinking about, I know, is artificial intelligence. It's something that you and I have as a shared interest, and it's something that our audience is interested in as well. The question here is: a lot of experts in AI don't share the same level of concern that you do about the potential pitfalls. What specifically do you believe that they don't?

Well, the biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. In general, we are all much less smart than we think we are, dumber than we think we are by a lot. This tends to plague smart people: they define themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea, which is fundamentally flawed. That's the wishful-thinking part of the situation. I'm really quite close, I'm very close, to the cutting edge in AI, and it scares the hell out of me.
It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. You can see this in things like AlphaGo, which, in the span of maybe six to nine months, went from being unable to beat even a reasonably good Go player, to beating the European champion, who was ranked 600, then beating Lee Sedol, who had been world champion for many years, 4 games out of 5, then beating the current world champion, then beating everyone while playing simultaneously. Then there was AlphaZero, which crushed AlphaGo 100 to 0. And AlphaZero just learned by playing itself, and it can play basically any game that you put the rules in for. Whatever rules you give it, it will literally read the rules, play the game, and be superhuman at any game.
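The "just put the rules in and let it play itself" idea can be shown on a much smaller scale. The sketch below is my own illustration, not the AlphaZero algorithm (no neural network, no Monte Carlo tree search): it learns tic-tac-toe purely by self-play, starting from nothing but the rules and the final win/lose/draw signal.

```python
# Toy self-play learner for tic-tac-toe: only the rules and the final
# outcome go in; the policy improves by playing against itself.
# (Illustrative tabular sketch, not DeepMind's AlphaZero.)
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# V[state] = estimated outcome for the player who just moved into `state`.
V = defaultdict(float)
ALPHA, EPSILON = 0.2, 0.1

def choose(board, player, greedy=False):
    """Pick the move whose resulting position looks best for `player`."""
    legal = moves(board)
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)          # occasional exploration
    def value_after(m):
        nxt = board[:m] + player + board[m + 1:]
        return 1.0 if winner(nxt) else V[nxt]
    return max(legal, key=value_after)

def self_play_episode():
    """Play one game against itself and back up the final result."""
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append((board, player))
        w = winner(board)
        if w or not moves(board):            # game over: win or draw
            for state, p in history:
                target = 0.0 if w is None else (1.0 if p == w else -1.0)
                V[state] += ALPHA * (target - V[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_episode()

# After training, choose(board, "X", greedy=True) plays a sensible move.
```

Twenty thousand games of self-play are typically enough for the value table to settle into reasonable play on a game this small; what AlphaZero changes is the scale (deep networks and tree search instead of a lookup table), not the basic principle.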
Nobody expected that rate of improvement. So those same experts who think AI is not progressing at the rate that I'm saying, I think you'll find that their predictions for things like Go and other AI advancements, their batting average is quite weak. It's not good.
We'll see this also with self-driving. I think probably by the end of next year, self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person. We're talking maybe eighteen months from now. NHTSA did a study on Tesla's Autopilot version 1, which is relatively primitive, and found that it was a 45 percent reduction in highway accidents, and that's despite Autopilot 1 being just version 1. Version 2, I think, will be at least two or three times better. That's the current version that's running right now. So the rate of improvement is really dramatic.
We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that's the single biggest existential crisis that we face, and the most pressing one.

And how do we do that? I mean, if we take it that it's inevitable at this point that some version of AI is coming down the line, how do we steer through it?

Well, I'm not normally an advocate of regulation and oversight. I mean, I think one should generally err on the side of minimizing those things. But this is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads, by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want. That would be insane. And mark my words: AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.
Which is a question you've been asking for a long time. I think it's a question that's come to the forefront over the last year, where you begin to realize that it isn't necessarily... I think we've all been focused on the idea of artificial superintelligence, which is clearly a danger but maybe, you know, a little further out. What's happened over the last few years is you've seen what I'd be calling artificial stupidity: you're talking about, you know, algorithmic manipulation of social media. Like, we're in it now; it's starting to happen. How do we... what's the intervention at this point?
I'm not really all that worried about the short-term stuff. Things like narrow AI are not a species-level risk. It will result in dislocation, in lost jobs, and, you know, that sort of better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. So it's really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, then we should do so very, very carefully. Very, very carefully.
This is the most important thing that we could possibly do. The best case for AI risk, I guess, is a sort of benign AI where we're able to achieve a symbiosis with that AI. Ideally... there's somebody, I can't remember their name, who had a good suggestion for what the optimization of the AI should be, what its utility function is. You have to be careful about this, because if you say "maximize happiness" and the AI concludes that happiness is a function of dopamine and serotonin, it just captures all humans and injects your brain with large amounts of dopamine and serotonin. Like, okay, that's not what we meant.

It sounds pretty good, though; people would love it.

Well, I like the definition that the AI should try to maximize the freedom of action of humanity. Maximize the freedom of action; maximize freedom, essentially. I like that definition.
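A toy illustration of the point about utility functions (my own construction, not anything from the conversation): score a few hypothetical world states with a naive "measured happiness" objective and with a rough "freedom of action" proxy, and see which state each objective picks. The state names and numbers are made up purely for illustration.

```python
# Hand-made example of why the choice of utility function matters.
# A naive happiness proxy ranks the wireheading state highest, which is
# exactly the dopamine/serotonin failure mode described above; a crude
# freedom-of-action proxy (options left open to humans) does not.

# state -> (average_dopamine_level, options_open_to_humans)  [invented numbers]
states = {
    "status_quo":        (0.5, 120),
    "cured_diseases":    (0.7, 200),
    "wirehead_everyone": (1.0,   1),   # maximal dopamine, no freedom left
}

def happiness_proxy(s):
    dopamine, _ = states[s]
    return dopamine

def freedom_proxy(s):
    _, options = states[s]
    return options

for name, objective in [("maximize happiness", happiness_proxy),
                        ("maximize freedom of action", freedom_proxy)]:
    best = max(states, key=objective)
    print(f"{name:28s} -> picks '{best}'")
```

Running this prints that the happiness proxy picks "wirehead_everyone" while the freedom proxy picks "cured_diseases", which is the whole argument in miniature: the optimizer does exactly what the objective says, not what we meant.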
But we do want a close coupling between collective human intelligence and digital intelligence, and Neuralink is trying to help in that regard by creating a high-bandwidth interface between AI and the human brain.

Yeah, we're already cyborgs, in a sense, in that your phone and your computer are kind of an extension of you, just with low-bandwidth input and output. Exactly, it's just low bandwidth, particularly output; I mean, two thumbs, basically. So how do we solve that problem?

The bandwidth thing: it's a bandwidth issue. And we've also come to it now: we're all cyborgs, we're just low-efficiency cyborgs. So how do we make it better? I think we've got to build an interface. We didn't evolve to have a communications jack, you know, so there's going to be essentially vast numbers of tiny electrodes that are able to read and write from your brain. Of course, you know, security is pretty important in that situation, to say the least.
Obviously, I'm saying I'm not coming with you; I'm keeping my brain air-gapped. Yeah, well, a lot of people will choose to do that, but it's a bit like Iain Banks's neural lace. In the case of the lace, it's sort of there from when you're born, or it's sort of... Is it a sort of backup? Yeah, kind of a backup. This would be a digital extension of you, an AI extension of you, a tertiary layer of intelligence. So you've got your limbic system, your cortex, and the tertiary layer, which is the digital AI extension of you, and that high-bandwidth connection is what achieves a tight symbiosis. I think that's the best outcome. I hope so. If anyone has better ideas, I'd love to hear them.

This is the largest
gathering of global governments; we have over 139 governments here. If you want to give advice to government officials to be ready for the future, what are three things, or three pieces of advice, you would give them?

Well, I think the first bit of advice would be to really pay close attention to the development of artificial intelligence. I think we need to be very careful in how we adopt artificial intelligence, and to make sure that researchers don't get carried away, because sometimes what happens is a scientist can get so engrossed in their work that they don't necessarily realize the