Demis Hassabis explains LLMs, public safety, and what comes next in AI | Matt Wolfe
Video Transcript
People are worried about things like privacy and losing their jobs to AI. How does a company like DeepMind build the trust of the general public?
What I want us to get to is a place where the assistant feels like it's working for you. It's your AI.
AI is scary. It's moving insanely fast. And from an outsider's perspective, it seems like there aren't nearly enough guardrails. And some of these concerns are actually legit. The ones that always stand out to me are those surrounding access: rich people with access to the right tools. Or privacy: how do we trust these big companies with all the personal data? What does the world look like when everyone is recording everything? And AI taking people's jobs?
One expert says a bloodbath: half of entry-level white-collar jobs disappearing, and 10 to 20% unemployment within 1 to 5 years. Some of these concerns are less extreme than others. Take the classic Skynet example. It's a bit extreme, but they're all reasonable because of some of the things you've heard me talk about before. First, the AI race: these huge companies with vast resources are trying to be the biggest and best in the world of AI. Are these companies prioritizing short-term profits over long-term safety? Second, even some of the scientists working on it don't totally understand what's going on under the hood. For example, some of these models exhibit what are called emergent behaviors: they produce outputs that even the engineers who built them didn't know they were capable of. In this video, I want to
look into whether or not AI is all doom
and gloom. Because if you listen to some
of the analysts, news outlets,
influencers, myself included from time
to time, it's the beginning of the end.
And it's impossible to put that genie
back in the bottle. But is it? Helping
me answer these questions is Demis Hassabis, a Nobel laureate, a knight, the
CEO of Google DeepMind, and one of the
most influential figures in AI.
Companies like DeepMind are the parents
of these AI children. And we're still in
the phase where the parent is
responsible when their kids mess up. So,
what steps are these companies taking to
ensure they raise responsible, well-behaved young algorithms? It all starts
with trying to understand what's going
on inside the tech.
Can you sort of describe what's
happening under the hood with an LLM?
Like demystify it for people a little
bit. Sure, I can try. At the basic level, what these LLM systems are trying to do is very simple in a way.
They're just trying to predict the next
word. And they do that by looking at a
vast training set of language. The trick
is not just to regurgitate what it's
already seen, but actually generalize to
something novel that you are now asking
it. LLMs predict the next word. For
example, if you go to a standard large
language model and give it the
statement, the quick brown fox, it will
likely complete the rest of that
sentence with the quick brown fox jumps
over the lazy dog. But the modern chat
bots that we use today are more like
question and response machines
fine-tuned to be more like assistants.
It's still doing the same thing, but instead of trying to finish the sentence, it's trying to answer the question you put into the chat. But
the trick here is that they don't want
that chatbot to just find a paragraph
from the original source material and
parrot it back to you. They want it to
come up with new information based on
all of the information it already knows
from within its training data. And if it
doesn't already know something, it will
either search the internet to try to
find it for you or in the case where it
doesn't have internet access, it'll just
make things up. And that is what we call
hallucinations. At I/O, you announced the
new Deep Think, right, which is so much more powerful, and it's topping all of the benchmarks for things like coding
and math and all that. What happened
under the hood that caused that new
leap? New techniques have been brought into the foundation model space. There's what's called pre-training, where you train the initial base model on all the training corpus. Then you fine-tune it with a bit of reinforcement learning feedback. And now there's this third part of the training, which we sometimes call inference-time training, or thinking, where you've got the model and you give it many cycles to go over its answer before it outputs the answer to the user. What Deep Think is about is actually taking that to the maximum: giving it loads more time to think, and even doing parallel thoughts and then choosing the best one. We pioneered that kind of work nearly a decade ago now with AlphaGo and our game-playing programs, because in order to be
good at games, you need to do that kind
of planning and thinking. And now we're
trying to do it in a more general way
here. What's really cool here is how
Demis is highlighting how much of an
effort engineers and scientists are
putting into making AI more and more accurate and reducing the chance of
hallucinations. AI started with next-word prediction, like the quick brown fox example we gave earlier. Then it evolved to test-time compute, where the
AI model would actually spend the time
thinking through its responses and you
were actually able to see this happen in
real time. And now the latest evolution
is what Demis just talked about which is
parallel thoughts. Now the LLMs are
thinking through a ton of different
potential responses all at once instead
of focusing on just one at a time. It
will then pick from all of those
responses or even combine responses in
order to give you the best possible
output. The ultimate goal here is to put
the most accurate and helpful responses
in front of you. You've mentioned that
the long-term goal is to sort of let
these AIs have like a world model. Can
you sort of explain what you mean by a
world model and what does that open up
to us? I think for a model, what we mean
by a world model is a model that can
understand not just language but also
audio, images, video, all sorts of input, any input, and then potentially also output. The reason that's important
is if you want a system to be a good
assistant, uh it needs to understand the
physical context around you or if you
want robotics to work in the real world,
uh the robot needs to understand the
physical environment. What sort of new
things do you think that'll open up to
people once they have that ability? I think robotics is one of the major
areas. I think that's what's holding
back robotics today. It's not so much
the hardware, it's actually the software
intelligence. You know, the robots need
to understand the physical environment.
I think that's also what it will take for today's nascent assistant technology, and things like what you saw with Project Astra and Gemini Live, to work really robustly.
You want as accurate a world model as you can. So that's our glimpse under the
hood. LLMs are imperfect models that are
constantly being refined to become more
and more accurate with the eventual goal
of becoming complete world models that
help AI understand what's going on
around it in the real physical world.
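The next-word prediction objective described earlier can be sketched with a toy n-gram model. This is purely an illustrative sketch, not how Gemini or any production LLM actually works: real models are neural networks trained on vast corpora, but the basic objective of predicting a likely continuation is the same.

```python
# Toy sketch of next-word prediction: a word-level trigram model trained on a
# tiny corpus. Given the last two words, it predicts the most likely next word.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog".split()

# Count which word follows each pair of words in the training text.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def complete(prompt: str, max_words: int = 10) -> str:
    """Greedily append the most likely next word until no continuation exists."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(tuple(words[-2:]))
        if not candidates:
            break  # the model has never seen this context; stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the quick brown fox"))
# -> the quick brown fox jumps over the lazy dog
```

Modern chatbots layer instruction fine-tuning and test-time reasoning on top of this objective, as the interview describes, but the underlying prediction loop is the same shape.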
But what's still unclear is how this
will translate into practical
applications that will significantly
improve society without a lot of the
downsides everyone is fearful of. So
you've mentioned things like AI will be
able to most likely in the future solve
things like room temperature
superconductors and more energy
efficiency and curing diseases. Out of
the the sort of things that are out
there that it could potentially solve,
what do you think the sort of closest on
the horizon is? Well, as you say, we're very interested, and we actually work on many of those topics, right? Whether it's mathematics or material science like superconductors; we work on fusion, renewable energy, climate modeling. But I think the closest, and probably most near-term, is building on our AlphaFold work. We spun out a company called Isomorphic Labs to do drug discovery, to rethink the whole drug discovery process from first principles with AI. Normally, the rule of thumb is it takes around a decade for a drug to go from identifying why a disease is being caused, to actually coming up with a cure for it, and then finally being available to patients. It's a very laborious, very hard, painstaking and expensive process. I would love to be able to speed that up to a matter of months, maybe even weeks one day, and cure hundreds of diseases like that. I think that's potentially in reach. It sounds maybe a bit science-fiction-like today, but that's what protein structure prediction was like five or six years ago, before we came up with AlphaFold. It used to take years to painstakingly find the structure of one protein with experimental techniques, and now we can do it in a matter of seconds with these computational methods. So I think that sort of potential is there, and it's really exciting to try and make that happen.
Ten years to a matter of weeks is a pretty wide gap. But to truly
understand this disparity, we need to
look at why it currently takes up to 10
years to bring a drug to market. It all
starts with the research phase. They
first have to identify a target such as
a protein or gene, which when altered
can treat specific conditions. The early
goal is to develop a compound that makes
that alteration. Once promising
compounds are found, we go through up to
7 years of testing in the lab and on
animals. And most compounds actually fail at this stage for a variety of reasons, things like lack of efficacy or toxicity. If the results are
promising, then the companies need to
get regulatory approval to get clinical
trials started on humans, a process that has three phases of its own, each of which can take several years. And
again, most drugs fail during this
phase. In fact, 90% never get past the
human trial phase. Once a drug does pass
all these phases, it then has to go
through another round of regulatory
approvals before finally being allowed
to go to the public. But here's where AI
comes in. That first seven-year long
discovery phase, it's going to be
crushed because AI can identify the
targets and compounds at an accelerated
rate. It can also detect toxicity and
side effects earlier, which helps to
weed out poor candidates before they go
to trials. The studies themselves,
they're also quicker because the rate at
which AI gathers and analyzes data is so
much quicker. The bottom line is we'll
get better drugs and treatments way
faster. But here's where it gets really
wild. In the beginning, AI was being
used to complete human tasks faster.
Now, we're starting to see AI training
AI, which when you boil it down is in a
way AI completing AI tasks faster. This
is where things really pick up. You guys
just announced AlphaEvolve recently, which looks amazing, right? It's an AI that essentially can help you come up with new algorithms, right? How close are we to AIs that are sort of designing new AIs to improve the AIs? And then we
start entering this cycle. Yes, I think it's really cool, a really cool breakthrough piece of work, where we're combining, in this case, evolutionary methods with LLMs to try and get them to invent something new. And I think there's going to be a lot of promising work combining different methods in computer science together with these foundation models like Gemini that we have today. So I think it's a very promising path to explore. Just to reassure everyone, it still has humans in the loop, scientists in the loop; it's not directly improving Gemini. It's using these techniques to improve the AI ecosystem around it: slightly better algorithms, better chips that the system's trained on, versus the algorithm that it's using itself. This
is really important because it seems
like Demis is hinting at humans
eventually being removed from the
equation. AI gets better at training AI
and no longer needs humans to be
involved in its development. So where do
we fit in? The answer to that lies in
the end goal of all of these personal
assistants and agents. AI agents have been a big topic in the AI community recently. How far off do you think we are from being able to give an agent a week's worth of work and have it go and execute that for us? Yeah, I think that's the dream: to offload some of our mundane admin work and also to make things much more enjoyable for us. You know, maybe you have a trip to Europe or Italy and you want the most amazing itinerary built up for you and then booked. I'd love our assistants to be able to do that. I hope we're maybe a year away or something from that. I think we still need a bit more reliability in the tool use, and again the planning and the reasoning of these systems, but they're rapidly improving. So, as you saw with the latest Project Mariner, what do you think the biggest bottleneck is right now to getting that long-term agent? I think it's just the reliability of the reasoning processes and the tool use. Each step has a slight chance of an error, and even a 1% error rate doesn't sound like very much, but it can compound to something pretty significant over 50 or 100 steps: at 1% per step, the chance of 100 steps all succeeding is 0.99^100, only about 37%. And a lot of the really interesting tasks you might want these systems to help you with will probably need multi-step planning and action. Removing the
mundane from our day-to-day sounds
wonderful but it also comes with the
inevitable questions about jobs being
replaced by AI. This is part of a
broader series of public concerns
surrounding things like privacy, data
security, and job loss that all big tech
companies are facing today. DeepMind's association with Google comes with some of that baggage. So that raises the question: how does a company like DeepMind build the trust of the general public, so that you can trust them with this kind of technology? Well, look, I think we've tried to be, and I think we are, responsible role models with these frontier technologies. Partly that's showing what AI can be used for good, like medicine and biology. I mean, what better use could there be for AI than to cure terrible diseases? That's always been my number one thought there. But there are other things, you know, where it can help with climate, energy, and so on, as we've discussed. But I think it's incumbent on companies to behave thoughtfully and responsibly with this powerful technology. We take privacy extremely seriously at Google, always have done. And most of the things we've been discussing with the assistants, they'll make the universal assistant much more useful for you, but you would be intentionally opting into that very clearly, with all the transparency around that. What I want us to get to is a place where the assistant feels like it's working for you. It's your AI, right? Your personal AI. And it's working on your behalf. I think that's the mode, that's at least the vision that we have and that we want to deliver, and that we think users and consumers will want.
One of the things that you guys also demoed at I/O that I got a chance to test out a little bit earlier was the Android XR glasses, and those were absolutely mind-blowing when I
tried them the first time. So I guess the flip side of the privacy thing is: if everybody's walking around wearing glasses that have microphones and cameras on them, how do we ensure that the privacy of the other people around us is secure? I think that's a great question. The first thing is to make it very obvious when it's on or off, and these types of things, in terms of the user interfaces and the form factors. I think that's number one. But I also think this is the sort of thing where we'll need societal agreement and norms: if we have these devices and they're popular and useful, what are the guardrails around that? And I think that's partly why we're only in trusted tester at the moment; the technology is still developing, but we also need to think about societal impacts like that ahead of time. So basically,
they don't know yet, which is
interesting and fair all at the same
time because ultimately when Demis
mentions the societal agreements, he's
talking about government regulations and
legislation. AI is moving so fast and
we're all busy figuring out all the
other stuff going on in the world. We
haven't as a society stopped and really
thought about these implications. And we
need to because given the speed, we're
kind of running out of time. But it
makes sense that it's moving so fast. AI
is exciting. It's cool and the benefits
that it promises will change everyone's
life for the better. Just listen to Demis talk about what he's excited for in the
near future. And remember, this is the
man who is on the absolute forefront of
this technology. So, I've got one last
question here. It's kind of a a
two-parter question. What excites you
most about what you can do with AI
today? And what excites you most about
what we'll be able to do in the very
near future? Well, today, I think it's the AI-for-science work; it's always been my passion, and I'm really proud of what AlphaFold and things like that have empowered. They've become a standard tool now in biology and medical research; over 2 million researchers around the world use it in their incredible work. In the future, I'd love a system to basically enrich your life and work for you, on your behalf, to protect your mind space, your own thinking space, from all of the digital world that's bombarding you the whole time. And I think one of the answers to what we're all feeling in the modern world, with social media and all these things, is maybe a digital assistant working on your behalf that surfaces information only at the times that you want, rather than interrupting you at all times of the day.
The thing about this technology is that
it's supposed to be the technology that
gets us away from the bombardment of
technology. We're sitting at our
computers and on our phones getting
flooded by negativity and toxicity on
social media every minute of the day.
It's refreshing to hear someone like Demis, who's in one of the best positions on Earth to build this future, talk about how critical AI could be for our mental and physical well-being. That we should be able to
cut out the mundane, remove the
toxicity, and focus on the things we
really want to do. Travel the world,
play guitar, pick up that hobby that we
never found time for, or most
importantly, spend time with friends and
family. In the end, after speaking to
Demis, I really felt like it wasn't all
doom and gloom, that super intelligent
and talented people are actually behind
the wheel and that they have a firmer
grasp than most people think they do. I want to thank my guest Demis Hassabis and the whole team at Google DeepMind for the incredible conversation. As always, don't forget to like and subscribe.