The rapid advancement and widespread adoption of AI tools like ChatGPT in higher education present a profound challenge to traditional learning and thinking processes, forcing a re-evaluation of pedagogical methods and the very definition of education.
I'm Taylor Owen from the Globe and Mail. I've spent my entire adult life on university campuses. And until recently, I thought I had a pretty good sense of what they were about.
Universities are places we go to develop our minds, to learn how to think. But AI seems to be changing that. Just a few months after the first version of ChatGPT was released, a survey found that 90% of college students were already using it. And honestly, I'd be shocked if that number isn't closer to 100 by now.
Students aren't just using it to write their essays. They're using it to generate ideas, to conduct research, and to do their readings. In other words, they're using it to think for them. But when this comes up in faculty meetings, I get a sense of paralysis.
Some worry that if we ban tools like ChatGPT, we might leave students unprepared for a world where everyone is already using them. But others think that if we go all-in on AI, we might end up with a generation that can produce work, but not necessarily original thought.
I'm honestly unsure which camp I fall into. So, I wanted to talk to two people with really different perspectives.
Conor Grennan is the chief AI architect at NYU's Stern School of Business. He's helping students and educators embrace AI, and has ideas for how it can actually enhance education.
Niall Ferguson is a historian. He's a senior fellow at Stanford and Harvard, and he's the co-founder of the University of Austin. Lately, he's been making the opposite argument: that if universities are to survive, they have to return to their origins, cloistered spaces where students have to learn without the aid of technology at all.
Whichever path we take, the consequences will be profound. Because this isn't just about how we teach and how we learn. It's about the future of how we think. [Music]
>> Niall, Conor, welcome to the show.
>> Yeah, thank you. Good to be with you.
>> Niall, I want to start with you. You wrote a provocative essay in the London Times recently in which you expressed a great deal of, I would say, alarm about the emergence of AI in higher education. First off, what are you seeing that makes you so worried? Why this clarion call, and why this moment of alarm, from your perspective?
>> What I think I am seeing is what many people are seeing, and so I based that article on what I had heard and read from multiple institutions. To put it very simply: within a very short time of the release of ChatGPT, roughly 90% of American undergraduates were using it, and the way they were using it was essentially to cut corners. And so I began to ask around and form the impression, which was then corroborated in a rather good article in New York magazine, that an immense number of undergraduate assignments in universities all over North America are being completed by large language models rather than by students. And I think it's fairly clear that that's bad, because if you're delegating reading, thinking, and writing to ChatGPT, you're not learning to do those things. And then of course there was a nice paper, somewhat controversial, that came out from MIT, "Your Brain on ChatGPT," which certainly was interpreted by some in the media as showing that this kind of behavior is really bad for young brains. So that was the starting point for the essay, and I'll cut a long story short. My argument is not that we should burn the machines, but that we have to create a period of time in the student day, and I would say it should be about six or seven hours long, during which they don't have access to AI. I'm going to assume that they do AI all the rest of the time, but for six or seven hours they shouldn't have access to it. They'll have to read and think and write for themselves. Last point: you'll have to abandon the now decades-long practice of allowing students to do assignments in their own time, on their laptops, far from the supervision of professors. We've got to abandon all that and go back to written and oral exams under invigilation, all of which probably sounds terribly reactionary to Conor, but I actually think we need to do something along those lines to avoid a generation doing even more harm to their brains than the previous generation did with smartphones and social media.
>> Look, I want to get into every element of what you just outlined there, and we will. But Conor, first: what are you seeing from your perspective, also at another large American university, and are you as worried as this?
>> Well, I mean, yeah, I'd love to turn this into a flaming talk-show debate kind of thing where we're at each other's throats, but I don't really disagree with anything Niall just said, to be totally honest. That MIT study was a little bit maddening, but also right. In a way it's the most obvious study in the world: if people are using ChatGPT instead, it's sort of like saying, you know, people can have tutors, and if their tutors are writing their papers instead of them, they're not going to learn. It's extremely obvious. I like that it started the conversation, but I got mad at that article online because I thought it was a little sensational. But I'm sensational too, so I'm very forgiving of them. You know, the truth is I really agree with Niall. So let me come at it from the other standpoint. And I think that he and I are probably going to end up in the same general area, though hopefully we'll find some areas of hard disagreement where we can disagree with each other.
>> But here's the thing, right? I'm on the MBA side, the graduate student side. So on one hand, if you're paying this much for a business school degree, and we all sort of know how much these things cost, and you're using ChatGPT to get over on it, you're out of your mind. So there's that. However, I also have teenagers, right? My son is 16, my daughter's 14. And, you know, I do a lot of this with my son. Finn and I went out to Nepal and taught AI to schools and everything else, because I think there's a fundamental aspect of this which can really help people learn in a way that they were never able to learn before. It gives access to people who never had tutors or bespoke learning opportunities. I mean, the reason why teachers have such a hard time is not because of the tool. It's because of how brains work, right? You can't be a teacher in front of 25 students and get into the heads of all 25 and know their exact learning abilities, when they can all learn in radically different ways. Of course the teacher can't do that. The teacher has to teach in the same way the teacher's always taught, which is, you know, using their frameworks and the lowest common denominator and what they've found works best for the broadest range. So, from that standpoint, I think it would be foolish to throw the baby out with the bathwater, so to speak. I think that AI is incredible with this, which is why, sorry to keep bringing up Finn, but I think the voice that's missing here is, in this case, the high school student's. We could also argue the college student, but I would say even more the high school student and below, because they are really incentivized to do exactly what Niall's saying, which is cheat. We've kind of given them a structure in which the only thing that matters is grades. And when people say students are so much better at this than adults, I'm like, yeah, because students are incentivized, as Niall sort of pointed to, in a way that senior, tenured people in organizations are not. If ChatGPT writes a paper for them very quickly and it's very, very good, that helps their future prospects in life. So they're extremely incentivized. But let me leave off with kind of an olive branch over to Niall on this too. I was doing a thing for Google where they were asking, well, in the MBA program, again a graduate business program, how should we now teach marketing in an age of AI? I'm like, I think we should teach marketing the same way we've always taught marketing, because if people are using AI instead of critically thinking, then how are they going to determine what quality looks like when they get out into the workforce? It doesn't work that way. They have to build the muscle first. And also, sorry, as Niall said, I would find it horrifying, people writing in class six or seven hours a day by hand. I do find it horrifying, but only because I see the pain on my kids' faces. But I don't have a solution. I don't know what else you can do, because...
>> But you have some solutions in practice, right? Like, you want people using this within the classroom and within a pedagogical context, right? So can you just lay out a few of those best-case scenarios of use here?
>> Yeah, absolutely. I think that, you know, the framework we were building out was this idea of a sort of lockdown: there are certain skills that people absolutely need, and for those you have to get rid of AI. But otherwise, I just want to say that this gives us the potential of using the best learning tool that has ever been created in history to really advance and augment critical thinking in the moment. That's going to require a very serious rethinking of how we teach and a very serious rethinking of the proxies for grading. But I really do think that this can take young people so far beyond where they are. And I'm talking in terms of skipping entire grades, almost, with the ability, if used properly, to go home, work with AI, and then have the teacher say, "Okay, our expectations for you are much, much higher."
>> Could I just... you both used the word cheating. Niall, what is cheating with AI? And is the way we've been thinking about cheating in universities, and even the term plagiarism... do we have the right framework for thinking about this?
>> I don't think that's a difficult question, because obviously if you were to submit an essay that had been written not by you but by a tutor or a parent, you'd be cheating. It's no different if you claim that an essay written by Gemini is your work. I think this is straightforward, because the act of writing involves some really quite important cognitive muscle flexing. I don't think one has really thought a problem through, certainly in my experience, until one has had to write down what your solution to the problem is, or at least what your analysis of it is. So that's the easy bit.
>> Can I just push you on that one little thing? Is it the act of writing, the end-state act of writing, or the entire process of creating that essay? So what if a student uses AI to develop their outline, or to brainstorm their structure, or something like that? Is that cheating as well?
>> Well, if they do all of that... now the problem is...
>> I think they're using it for all stages, right?
>> But that's very hard for those of us adjudicating it to determine: at what stage have they used it? This is why Conor said something very important in what he said before (we're going to agree a lot, actually, Conor) when you said that if you use it in the right way, it's potentially the greatest teaching and learning tool ever. The wrong way to use it is the way it's currently mostly being used, which is to cut corners so that you don't have to read, think, or write. And these stages, you know, reading, absorbing information, then thinking, and then writing, are tremendously important brain muscle actions. And if you don't learn how to do those things, then you really aren't educated. And that's a problem. The right way to use the large language models, just to focus on those for a bit, is the way that they're using them at a remarkable school, the Alpha School, in Austin, Texas, just down the road from our new university. And there MacKenzie Price, with the support of Joe Liemandt, is doing, I think, what Conor has in mind: that is to say, using the LLMs rather in the way that Neal Stephenson describes in his wonderful book The Diamond Age, where the student has the ability to develop, in a kind of customized way, a question-and-answer relationship with the LLM. And this can greatly accelerate learning, because of course it tailors the process to the individual in a way that the traditional classroom just can't. Let me be clear: the traditional classroom, the way we had been doing things pre-ChatGPT, was already broken in a whole range of ways. I've been saying for years I cannot understand why professors give lectures. Lectures are an incredibly bad way of teaching. The Socratic method has been around since, yep, Socrates. It's better, but it's hardly used. And I could go on and on. So, things sucked already. And that's partly why students cheat: because you're confronted with a system that sucks, where you go to a lecture, some guy like me drones on for half an hour or an hour, then you're given this list of things to read, all of which are kind of slightly dated, and then you're expected to turn in an assignment which regurgitates them in some way.
>> Right? I mean, of course people are going to take the line of least resistance once it appears, because none of this was particularly satisfactory before. So I think there's an opportunity here. And I'm beginning to see that it can be done radically differently. At Alpha School (we put our seven-year-old son into that school for a week) it was clear that it had a tremendous benefit for him. So I think we need to be as innovative as they're being there. We kind of have to reinvent education in order to make this work. If we leave the old system in place and just patch on large language models like ChatGPT, we're going to end up with the worst possible combination. A little bit like, you remember, the Hungarian economy under socialism: they pretend to pay us and we pretend to work.
>> We pretend to work.
>> That'll be it. That'll be universal. We will pay the tuition, and then everybody's going to pretend to work.
>> Our AIs will mark their AIs.
>> Yeah. I mean, that's already happening. That is already happening.
>> So look, just to push on one thing: I think people who have read your essay will be a little surprised that you're putting your child in a school that uses AI. Can you say what Alpha School is doing differently than what you see happening on campuses, and what maybe higher education could learn from that?
>> So Joe Liemandt and MacKenzie Price have a common view that school, as they initially experienced it, was dreadfully boring. They've created a system which is very much using AI to allow students to learn at their own pace, and they've created incentives along the lines of: if you get this all done in two hours and really smash it, that's it, you're done for the day, you can go play outside. So they're really changing the way we think of education. Instead of everybody having to sit there for six, seven, or eight hours, they can just say: get through the work, and if you do it really well, you're done. And so that was something that our son Campbell found exciting and disconcerting, but liberating. I think what's important here is the idea that you don't say, "Here's an assignment," and then they go off and get ChatGPT to do it. You actually say: here are a set of things that we'd like you to master. Maybe it's a set of mathematical concepts. And you're going to play with the problems. You're going to do a whole bunch of problems, and an AI is going to see how quickly you learn. It's going to see how you get along, and it's going to respond to the way you do on the first run of problem sets, and that will generate the next set accordingly. And so instead of the student using the large language model to shortcut around an assignment, the student actually ends up working a lot more intensively to achieve certain goals, interacting with a kind of living encyclopedia.
>> It's also probably more fun, right, Niall? I mean, it's a more enjoyable way of learning as well.
>> The Alpha School is all about making going to school fun. And I do think they're really on to something here, to the point that I think we at the University of Austin have to learn from what they're doing. I am certainly struck by the fact that artificial intelligence requires us to reinvent education fundamentally to make use of these tools. If we don't do that, then I think the tools are going to in fact be misused, and the net educational impact will be very negative. But I can see from what's happening at Alpha School that this can work extraordinarily well, particularly for smart kids, but also for kids who struggle, because it's the fact that it can be customized for the individual student that seems to me so potent. Anybody who doesn't quite know what I'm talking about, and who can't make a trip to Austin, should just read Stephenson's book, The Diamond Age, because it tells the story of a little girl from a totally deprived background who happens to stumble on what we would now call an AI. In fact, Stephenson was kind of ahead of his time. He was writing in the 1990s, but this is essentially a kind of living, talking book that evolves with her. There's a relationship between the little girl and the book, and whatever her question is, whatever she's interested in, it helps her learn about the world. I remember loving that book. I think it's Stephenson's most brilliant book. But that was of course inconceivable in the 1990s. It was a sort of vision of a science-fiction future. The extraordinary thing is that future is now here, and little girls all over the world, including, as Conor said, in places where educational provision is barely existent, can now access, without too much trouble, a world of adaptive knowledge: knowledge that comes to you in the right way, at the right time, in the right volume. I find that hugely exciting. Last thing I'd like to say: the original title I gave that essay was "The Cloister and the Starship." And the idea I wanted to convey was that we need to spend time in the cloister with just our brains, to learn certain foundational skills of cognition and communication. But when we come out of the cloister now, unlike the monks of the Middle Ages, we can get into the starship. And that's an amazingly exciting combination. So I don't want to give anybody listening the impression that I'm a Luddite. I think Conor and I basically agree, and the key question, which we haven't really addressed, is why the established institutions are so slow. The other day I asked somebody quite senior at Stanford University, where I spend part of my time at the Hoover Institution: hey, what's the university policy on the use of AI? There isn't one. Right?
>> Part of the challenge here is that you two broadly agree, I think, on most of this, but there's a lot of devil in the details of how this gets rolled out. And as you say, a lot of it's just being pushed down to faculty, because I do agree institutions are in a mode of avoidance here. But we're being asked to rebuild a 2,000-year pedagogical model on the fly, using a technology that is evolving by the week. So, Conor, one of the things I think we're touching on here is this moment of real cognitive development colliding with a technology that allows for cognitive offloading, and those are really in tension with one another in a university. So, Conor, how do you think through that? Like, how do we use these tools without the risk of cognitive offloading that can be so damaging to exactly what we're trying to do at a university?
>> Yeah, I mean, the short answer is I don't know, but the hopefully more robust answer here is: yeah, it's easy to say that we should reinvent education. I totally agree with Niall on this, but how? I think the relevant question here is who's incented. When I see people who are incented out in the working world, it's very limited. It tends to be small startups. People are like, "Oh my gosh, I need every tool I can get, because I have too much work to do, and here's one." But that is not the huge majority of people out in the workforce. And so when we think about education, the idea is that this requires a tremendous amount of political will at the education level, whatever that is, educational will. And that's just not how the education system works, certainly in our country, the United States, for example, or in Canada, or I would even say Western Europe, right? We're like, "Nope, got this." And why is that? Because faculty have spent many, many years doing something the exact same way, and they've been voted faculty of the year and all that kind of stuff, and they know how to do it. We saw a little bit of the same thing during COVID, when everybody went online. They just tried to move everything online as it was, and then the really innovative people asked, "Well, what's a better way to learn now that we have these new systems?" So I think the first thing we have to remember, and this is the way that I teach (I have a company called AI Mindset; we do generative AI, we do AI adoption, but we don't do it through teaching tools, we do it completely through understanding how the brain works and why the brain struggles with this), is that it has everything to do with the brain. So even as we're saying the tools are developing: when I go out and talk to companies, and I've talked to some of the biggest companies in the world on this, my presentation hasn't changed in two years, because it has nothing to do with technology. It has everything to do with how our brain operates. And I think that's really critical in education, because there are a lot of people invested in how education works. And I love our teachers. I come from teachers, I work with teachers, but I don't see a ton of teachers saying, "All right, can't wait to change the way I've done everything, the way I've gotten my PhD, the way I've done this my entire life." I just don't see that. And so it has to be incentivized. I think incentives are everything. So that's number one.
And then I want to pivot into something that, and I hope this doesn't get clipped as a sound bite, because I can imagine the headline, but: what are the skills we actually really need? By the way, I'm a writer. I've written books. I'm a published author, all that kind of stuff. So I care very, very deeply about writing. But I have to look in the collective global mirror here and say: do we still need to know how to write? Giant question mark. To be clear, this is not Conor saying we don't still need to. What I mean is this: the calculator is kind of an easy little example, but all the calculator did was democratize math, and it's not like kids don't have to learn math. And so I think that's probably going to be what we need to do. And gosh, my kids are going to kill me for saying this, but I think they need to write by hand, or on an air-gapped computer, so they learn how to write. Not because writing is intrinsically important, in the same way learning long division is not intrinsically important, but it's not like having a calculator means you can work at NASA, or be a quant at a hedge fund. That requires skills beyond just the democratization of math through a calculator. The important part, I think, is: do we still need writing in the same way we still need math? What I mean by that is, I was just having this conversation yesterday with somebody I really respect, the CEO of a company, and she was saying, "You know, I don't know that I still know how to write. I'm using Claude and ChatGPT and things like that, and I'm feeling awful about it." I'm like, "Yes, but you did learn how to write." And so she's recognizing good quality. And I don't want to put words in Niall's mouth, but where I come from on this is that, at the very root, kids need to learn what good writing looks like. Otherwise, I think we are going to come to a point where everything is just AI slop.
>> Can I just pull on that writing thread a little bit here? Because...
>> Yeah.
>> I feel like there's writing as an output, something we consume, but there's also writing, as you expressed at the very beginning, Niall, as a form of thinking. So what happens when we detach writing from learning? Can we do that at all? Is writing core to how we, particularly in that phase of our brain development, learn to think?
>> Well, let me put it like this. When conveyor belts were invented, we could have eliminated walking. We could actually have made it possible to go everywhere on conveyor belts, and we could probably have made them quite fast. I see quite fast ones at some airports.
>> Or hoverboards, you know.
>> Maybe either one. But the point is that actually (I bet all three of us go to the gym quite frequently) we embark on physically difficult activities that are pointless except for the fact that they keep us fit. And most students that I see at Stanford look to be in pretty good shape physically. But for some reason we don't apply the same rules to our brains. Now, the point about writing is not that everybody should write a novel. In fact, I wish I could stop people writing novels. Far too many novels get written, and I wish I could also cut down the number of op-eds that are written.
>> If we could do one thing for the world, it would be to decrease the number of op-eds. I mean...
>> Please, people, write less. Most of you really don't write anything that interesting. But the point is that in learning to think on the basis of what we have read, and then to write, we're getting our brains fit. I'll give an example Conor will probably recognize, and you too, Taylor. I used to find that until I had taught something, stood up in front of a class and taught it, I wasn't quite ready to write the book. And this is all about getting your brain fit. Because if our brains are obese, we kind of watch some TV, and we kind of get involved in conversations, and we have this rough idea about, let's say, AI, and we can have a conversation about it: "Oh yeah, I heard about AI. Yeah, it sounds really scary, but also kind of good as well." That's the obese brain. It hasn't really absorbed anything about AI. It hasn't thought about it. It can have a conversation about it, but the conversation's entirely vacuous. So what we really want to do in education is to produce very, very fit brains. Brains that can very quickly absorb lots and lots of complex data, not necessarily in the form of words (it might be in the form of data, or lumps of pottery), but they can absorb data in large quantities. Then they can think analytically: what does this signify? What's the pattern here? And then they can communicate to other human beings, by writing or by speaking, what they think they've inferred from all of this. These are the things that make our brains fit. And there is no doubt in my mind that in a world of very powerful computers, which can not only be large language models but can also do scientific research, our brains need to be super fit if we're to have purpose, if we're not simply to become Yuval Noah Harari's cow-like creatures, milked for our data by AI. So I think: just get into the mental gym, people. I say to the students at the University of Austin: one day I'm going to come in here and tell you you've got two days to read War and Peace, and you're going to be just shut in the library with the book, and then you're going to come out, and I'm going to ask you: what's the meaning of this book? That's the kind of thing that a smart person can do.
>> Yeah. But Niall, let me ask you this, because this is what I wrestle with all the time, right? And I love your examples of exercise. The example I sort of give is getting on the treadmill. The reason we may get off the treadmill very quickly is that our limbic system prioritizes, you know, quick rewards and conserving energy; this is why behavior change is so hard. So what I would posit here is that I see young people being so fit these days too, high school, college. It's part of the culture, I think, but also there is a huge incentive for them to get fit, right? They will look attractive to other people. It's almost what drives us as a species, etc. And the challenge that I'm trying to figure out is: how do you incentivize students? Because students aren't incentivized by critical thinking, and they're not incentivized by learning. They're incentivized by whether they'll get the grade. What you hear all the time is, "I'll just get into the great college and then I'll figure it out," or "I'll just get into the great law school and then I'll figure out all that kind of stuff." But we have set up a system, as you were referring to earlier, in which grades are the holy grail, and everything else I'll figure out later. It's the incentive structure. So I'm wondering, when you think about that: saying, "Hey guys, you have to do this," makes sense, but I feel like the internal incentive structure is broken. No?
>> I think the employers have incentives too. The elite employers know that, for example, the Harvard degree can't really be worth what it used to be worth when the only grade that's given at Harvard is an A. So the perception that grade inflation has caused a very serious decline in standards at the established institutions incentivizes the best employers to find other ways of assessing ability. You don't get hired by the big tech companies or the big Wall Street companies just on the basis of your GPA anymore, because they've figured out over the last 10 years that that's not a good signal at all. So I think one of the interesting things that's happening is that recruitment is becoming more and more creative. I think of some of the quant hedge funds and how they recruit: it's actually by setting a whole bunch of examination-type challenges to the would-be entry-level people. So I think the system's changing, because there's an incentive, if you're an employer, to find the really smart people, as opposed to the people who graduated summa having taken all the soft courses. I think what we're talking about here is partly how we make young people care as much about their brains as they care about their bodies. Now, my perception of academic life, of university life, is that while of course the athletes may attract a certain number of members of the opposite sex, there is still something sexy about being smart.
>> I mean, I think, Niall, maybe I'm just dreaming here, but I always felt it was my witty repartee. I always thought it was the jokes, anyway.
>> But I think that's part of it, that we're not just interested in people's bodies. Somebody can look like a supermodel or an Olympic athlete, but if what they say is just unbelievably dumb, it's not going anywhere. The other question is how you persuade people to take the same attitude towards academic success as currently exists in the military towards the elite combat formations. I use the phrase "the Navy SEALs of the mind" to describe the graduates that we want to produce at the University of Austin. I want to convey a sense that there is an elite quality in the realm of intellectual life that we've not been valuing for the last 10 or 20 years. But now it's time to change and say: no, no, what we care about is brilliance, real intellectual brilliance, in the same way that the Navy SEALs care about people who are physically extraordinary and courageous. So I think changing those norms is happening, and I think we can push in that direction, so that young people are interested not in the perfect GPA, which you achieve by cynically taking the easy courses and telling the professors what they want to hear. Now we need to say: no, no, no, that's so 20 years ago. Now what we want are people who are just drop-dead brilliant, who can play a game of chess while at the same time coding, while at the same time doing math problem sets, while at the same time writing sonnets in ancient Greek. I mean, those people do exist, but they aren't valued as much, certainly not in the established institutions, as they should be.
>> I mean, that's the key point, right? Not in the established higher-education universities. And to say something a little bit provocative here: I think that norm is changing outside of universities. The emergence of long-form podcasts, and the way YouTube is allowing people to go deep on topics they couldn't before, is incentivizing a kind of intellectual curiosity that I think is not dissimilar to the fitness craze happening in those same worlds, right? I think there is something going on there, that people crave more; they might just not be getting it from universities. So, let's just touch on that for a moment here, Conor. Universities are large bureaucratic institutions that are incredibly hard to move and to evolve. How do we convince these institutions that rethinking what they do in light of this new technology is existential for them?
>> I mean, I think they know that intellectually. So, first of all, I think we have to determine what problem we are trying to solve here, and I think that has to be done pretty slowly and pretty carefully, because, Taylor, what you're hitting on is exactly right, and I think you phrased it exactly right: we are talking about giant bureaucratic institutions. These are not driven by, "Well, will I earn more money if I can produce students who are real critical thinkers?" And by the way, again: family of teachers, I work in a university, I'm very passionate about education and faculty, I'm a faculty member myself at times. But we have to understand that most people have this very deep commitment to how they have learned and how they've always taught in the past, and by God, AI is not going to change that. So if that means students have to come into the classroom and just write it out by hand, it's a colossal, colossal missed opportunity. If you extrapolate from the school Niall's son attended, which is really thinking about this at a young age and making education more fun, maybe it turns from fun into more incentivization. I think we have to be realistic about incentives. But the question is how you actually incentivize faculty members in an institution that doesn't work like that. This is why you see companies like Meta spending billions and billions of dollars: because that's an existential threat to their business. It's P&L. It's money. The market drives this. That's not the case in institutions. So, first of all, I agree: when Stanford doesn't have a policy on AI, when NYU, my school, is sort of struggling to find its policy on AI, there has to be a new way of thinking about it. So instead, I would focus much less on the tech and much more on how we get people excited about using this, because once people start using this, and start using it as a learning tool... And I don't know where Niall falls on this, I'd like to ask him: I think we have to put the onus on the guardrails. It has to be, "Look, you cannot learn this way," because I just think there are too many teachers who will say, "Well, I've always taught this way, and it's great," and too many students who are like, "Yep, I get it, but I'll learn when I'm out of law school and it doesn't matter anymore; right now the incentive is getting this grade to get into a better law school." I think that's the only solution, and by the way, this is a bit of a Luddite, very limited viewpoint, which I hate to have, but I don't know another solution. I think the guardrails have to be in place so firmly, which is where Niall started this conversation. What if it was (he said six or seven, I would say more like three or four) hours a day where you have no access to AI? You have to learn. So that's the problem I'm wrestling with.
>> Niall, you are maybe in the unique position of having spent decades inside the oldest university institutions and now being part of creating a new one. I can only imagine how the older institutions would respond to your proposal, and I know it wouldn't happen quickly, or maybe even at all. But how has the response been within a brand new one? Is this happening this year at the University of Austin, your cloister-and-starship model?
>> I need to answer that question after I pay my next visit, which is in a couple of weeks. Then I'll be able to say if it's working. The challenge, even in a new university, is to get the professors to change the way they do things. As Conor rightly says...
>> At the heart of all universities are tenured faculty.
>> With academic freedom, including on how they teach, right?
>> Not only can they not be fired, but they get considerable autonomy about how they go about things. And the reason that a university doesn't have an AI policy is that the default setting is to say: we leave it to the professors. And that's what you'll almost certainly hear at most institutions. But what does that mean? It means that men and women in their 40s, 50s, and 60s are essentially allowing the students to misuse AI, because they themselves don't really understand what's going on outside the classroom, and they have grown accustomed to doing things in ways that are very easy to game. So we have to change that. It's just easier to do at a small institution with fewer than 200 students than it would be at Harvard or at Stanford. I only became involved in creating a new university because I thought the established institutions couldn't change themselves, that the internal incentives are just all pointing in the wrong direction. We have to reinvent higher education. That is very clear. Even before ChatGPT it was clear, and that's what we're trying to do in Austin.
I hadn't fully realized until I looked at Alpha School that the reinvention could be even more radical than I thought. And I'm beginning to see how the new education, the educational institutions of the future, are going to work. And I still like my cloister-and-starship analogy, because I like the idea that my kids are going to spend some of the time learning the core skills, including how to do calculus and read Tolstoy. But then they'll be unleashed and let into the starship, to use large language models and all the other things that AI provides, equipped with the mental discipline you need. Let me see if you agree with this, Conor. Let me put it like this: at the heart of using a large language model well is the way that you write the prompts. My view is that somebody who has not learned to think properly is not going to write good prompts, is not going to really be able to use the tool at all, or will use it badly. Do you agree with that, Conor? Because it seems to me that part of what we're trying to do when we get people mentally fit is to equip them with the kind of cognitive skills that will enable them to use AI optimally.
>> Yeah, I actually may take a different tack on that. I kind of compare it much more to a managerial expertise, knowing how to get the best out of somebody, rather than to writing. So I think of it probably less in terms of the prompt itself and more in terms of how you would instruct a new colleague or a new employee. How would you get the best out of them? There are good managers and bad managers, and you know it when you see it, a little bit. So maybe I'd flip it on its head and say the critical thinking is to take the output, rather than the input, and ask: is this good? And also, and I want to see if you agree with this: the output can be very, very good, but it doesn't have to be right. I just did a thing for MasterClass, if you know the brand, and people are always talking about, you know, hallucinations. I was trying to talk about how to get over the problem of hallucinations, which is when it lies very convincingly. I'm like, I'm not even sure that's the biggest problem. Hallucinations you can spot much more easily than you can spot the sycophancy: "Oh, that's a great idea," when in fact it's not a good idea. The second part is: what if it's giving you outdated information? Do you know how to draw in the right information? Anybody can look at an output and go, "That's awesome." But the problem is: do you have the critical thinking to ask the right questions? Where is it getting this information? Is it just telling me I'm right because blah blah blah? And then the third thing, I think, is the hallucination. So I'm with you that the critical thinking is critical. I would probably put it on the output, judging the output rather than the input.
>> Yeah, I agree with that. My impression thus far, and I continue to run these experiments, is that there's a real problem in the fact that the models have not really been trained on the full corpus of high-quality knowledge, because Google Books lost their case, and a lot of extraordinarily important literature is not accessible online. And so what I notice when I ask, say, Gemini or Deep Research to answer a question is that it comes back rather thin, without the kind of depth of scholarship and knowledge that you would have if you had access to all the books in the Bodleian Library or Widener. And I think that's because of that very important case that Google lost. Google wanted to put every book ever written, including all of mine, on Google Books. Now, they kind of won with YouTube, because every single piece of content I ever did on television is freely available on YouTube now, but they lost the books case. And that means it's hard to train a large language model on the real quality literature that's been published in our time. So, to be honest, I'm still underwhelmed by the outputs. I read them and I think: eh, B+ maybe, at best. And it's never original. It can never, ever come up with anything original, except when it's making it up, which is obviously not what we want.
>> I think we might have actually found our point of disagreement here, and I'd love to keep going down this path. It took us an hour, but we got to this point. I suspect a little bit of disagreement in what we just said there, Niall, but I want to bring this to a wrap, and maybe just get you both to reflect a bit on the stakes here, because I do think there's a lot at stake in how we learn to think, how we train future generations to think and learn. And that's really tied to who we are as humans, right? I mean, this has been true for 2,000 years anyway, and I don't expect it to change in its importance. But Niall, in your essay you said something quite striking: that strict prohibitions on devices will have to be insisted upon "if the rapid advance of pseudo-intelligence is not to plunge humanity into a new dark age." Why so stark there, and what's at stake?
>> Well, "pseudo-intelligence" is Neal Stephenson's joking name for artificial intelligence in the book The Diamond Age. It's one of the little jokes he slips in. So we call it PI, not AI. Credit where it's due. Jonathan Haidt has written very compellingly about the damage we've already done to young people's brains by allowing them to have smartphones and social media, or what used to be called social media and is now actually AI media rather than social media. And I think the next-level damage is what we're currently doing, because we're essentially cutting off a generation from learning the key skills of absorbing data, thinking about it analytically, and then producing convincing communications about it. Henry Kissinger, whose biography I'm in the midst of writing without the help of AI, wrote a brilliant essay for The Atlantic after he had large language models explained to him, saying this has the potential to take us back to before the Enlightenment and the scientific revolution, because things will start to happen around people that they can't explain. I mean, you don't really understand how the AI arrived at its answer. And that was, I thought, a profound insight for a man in his late 90s. He saw early what it implied. And I think he's going to be vindicated if we carry on down this road. I think we're going to have a generation that is even more cognitively harmed than the generation that was just hit by smartphones and social media.
>> Conor, how do you frame the stakes? Do you agree?
>> God, it's such a great question. I think the Jonathan Haidt analogy is actually sort of apt. He wrote The Anxious Generation, and he's a colleague of mine over at Stern, so we get to do some stuff together over there. But it's funny, because my kids just started back at school, and the school just instituted that policy Jonathan has been pushing, which is that you can't have your phones during the school day, which we're all in favor of as parents. But interestingly, as Jonathan's research shows, the kids want that too, right? The kids are happier when they don't have their phones. So what does that mean for this? This is the question, right? Because even the kids themselves, if we said, "Hey, you said you're happier not having your phone." "Yeah, absolutely." "Okay, then don't have your phone." It's like... but that's kind of what addiction is, right? And maybe to take a slightly different tack on this: are we talking about addiction? Now, I'm not sure AI is a bad addiction. From an adult standpoint, I don't think my addiction, quote unquote, to AI is a bad thing. I think it helps me produce much better things, but I'm also learning, and I have a pretty good incentive structure in my life. Kids don't. And I guess what I would say on this is: even when they know that they need to critically think... So maybe I'll say this. The way that Finn and I teach this, my son Finn and I teach this together, is we talk about a mountain, right? From the bottom of the mountain, the kids just think, "Okay, the top of this mountain, I just have to get up to the top." And one way up is a trail and the other is a gondola. The winner gets to the top. And then they get to the top and they realize it's not a mountain, it's a plateau, and now it's the beginning, it's the starting point, it's not the end. And now they have to go into this workforce, and the people in the workforce are going to ask: well, who has the muscles to handle this? Right? And so that's the problem. So even if we tell them, and even if they know intellectually that the incentive structure is all screwed up, I think we probably have to force kids, even though they understand, even though they want to critically think, we have to force them to critically think. Because otherwise the incentives win, in the same way that if you just give them Snapchat and Instagram and everything else, they'll use it even though they don't want to. I think we have to have guardrails in place, and I don't know how else we do this. Maybe that's short-term, I don't know, but I agree with Niall. I think critical thinking is important enough that it deserves our attention, and putting the guardrails in place for their own protection.
>> Look, I think that's a pretty great way to end this conversation. Thank you both, for talking about this and for the thinking you're putting into this moment of transformation. I started by saying I think this is transforming the university, and I think it fundamentally is, and it's our responsibility to figure out how to keep that model alive in some capacity. So thank you both for the work you're doing on that and for talking about it.
>> Machines Like Us is produced by Paradigms in collaboration with the Globe and Mail. The show is produced by Mitchell Stewart. Our theme song is by Chris Kelly. Host direction from Athena Carcanis. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at the Globe and Mail. If you liked the interview you just heard, please subscribe and leave a rating or a comment. It really does help us get the show to as many people as possible.
Machines Like Us is supported by the Max Bell School of Public Policy at McGill University. The Max Bell School offers public lectures and seminars across Canada and online, in addition to its 11-month accelerated Master of Public Policy program, which is accepting applications now. Learn more at mcgill.ca/maxbellschool.
Machines Like Us is also supported by CIFAR, a global research organization proudly based in Canada. From leading Canada's national AI research and talent strategy to advancing research in AI safety, CIFAR is the catalyst for the exceptional thinkers reframing our future.