0:05 I'm Taylor Owen from the Globe and Mail.
0:24 I've spent my entire adult life on
0:27 university campuses. And until recently,
0:28 I thought I had a pretty good sense of
0:31 what they were about.
0:33 Universities are places we go to develop
0:38 our minds, to learn how to think. But AI
0:41 seems to be changing that. Just a few
0:42 months after the first version of ChatGPT
0:45 was released, a survey found that
0:47 90% of college students were already
0:50 using it. And honestly, I'd be shocked
0:52 if that number isn't closer to 100 by now.
0:56 Students aren't just using it to write
0:58 their essays. They're using it to
1:00 generate ideas, to conduct research, and
1:02 to do their readings.
1:04 In other words, they're using it to
1:07 think for them. But when this comes up
1:10 in faculty meetings, I get a sense of paralysis.
1:13 Some worry that if we ban tools like
1:15 ChatGPT, we might leave students
1:18 unprepared for a world where everyone is
1:20 already using them. But others think
1:23 that if we go all in on AI, we might end
1:25 up with a generation that can produce
1:27 work, but not necessarily original thought.
1:31 I'm honestly unsure which camp I fall
1:34 into. So, I wanted to talk to two people
1:36 with really different perspectives.
1:38 Conor Grennan is the chief AI architect
1:41 at NYU's Stern School of Business. He's
1:44 helping students and educators embrace
1:46 AI and has ideas for how it can actually
1:49 enhance education.
1:51 Niall Ferguson is a historian. He's a
1:53 senior fellow at Stanford and Harvard
1:55 and he's the co-founder of the
1:57 University of Austin. Lately, he's been
2:00 making the opposite argument that if
2:02 universities are to survive, they have
2:05 to return to their origins: cloistered
2:07 spaces where students have to learn
2:10 without the aid of technology at all.
2:13 Whichever path we take, the consequences
2:15 will be profound.
2:17 Because this isn't just about how we
2:20 teach and how we learn. It's about the
2:22 future of how we think. [Music]
2:24 [Music] [Laughter]
2:32 >> Niall, Conor, welcome to the show.
2:34 >> Yeah, thank you. Good to be with you.
2:35 Niall, I want to start with you.
2:38 You wrote a provocative essay in the
2:39 London Times recently where you
2:41 expressed a great deal of, I would say,
2:44 alarm about the emergence of AI in
2:46 higher education. First off, what are
2:48 you seeing that makes you so worried?
2:51 Why this clarion call and this moment of
2:53 alarm, from your perspective?
2:56 >> I think I am seeing what many people are
3:00 seeing, and so I based that article on
3:02 what I had heard and read from
3:04 multiple institutions.
3:07 To put it very simply: within a very
3:11 short time of the release of ChatGPT,
3:14 roughly 90% of American undergraduates
3:16 were using it, and the way they were
3:19 using it was essentially to cut
3:24 corners. And so I began to ask
3:27 around and form the impression, which was
3:29 then corroborated in a rather good
3:31 article in New York magazine, that an
3:34 immense number of undergraduate
3:37 assignments in universities all over North
3:40 America are being completed by large
3:44 language models rather than by students.
3:47 And I think it's fairly clear that
3:51 that's bad, because if you're delegating
3:54 reading, thinking, and writing to
3:57 ChatGPT, you're not learning to do
4:00 those things. And then of course there
4:02 was a nice paper, somewhat
4:05 controversial, that came out from MIT,
4:08 "Your Brain on ChatGPT," which certainly
4:12 was interpreted by some in the media as
4:15 meaning or showing that this kind
4:17 of behavior is really bad for young
4:19 brains. So that was the kind of starting
4:22 point for the essay. And to cut a
4:25 long story short: my argument is not
4:29 that we should burn the machines,
4:31 but that we have to create a period of
4:35 time in the student day, and I would say
4:37 it should be about six or seven hours
4:40 long, during which they don't have access
4:44 to AI. I'm going to assume that they use
4:47 AI all the rest of the time, but for six
4:49 or seven hours they shouldn't have
4:51 access to it. They'll have to read and
4:53 think and write for themselves. Last
4:57 point: you'll have to abandon now
4:59 decades-long practices of allowing
5:01 students to do assignments in their own
5:06 time with their laptops, far from the
5:08 supervision of professors. We've got to
5:11 abandon all that and go back to written
5:15 and oral exams under invigilation, all of
5:17 which probably sounds
5:21 to Conor terribly reactionary, but I
5:23 actually think we need to do something
5:26 along those lines to avoid a generation
5:28 doing even more harm to their brains
5:30 than the previous generation did with
5:33 smartphones and social media. >> Look, I
5:35 want to get into every element of what
5:36 you just outlined there and we will, but
5:38 Conor, first, I mean, what are you
5:39 seeing from your perspective, also at
5:42 another large American university, and
5:44 are you as worried as this?
5:47 >> Well, I mean, yeah, I'd love to
5:48 sort of, you know, turn this into
5:51 a flaming talk-show debate kind of
5:53 thing where we're at each other's
5:55 throats. I don't really disagree with
5:57 anything Niall just said, to be totally
6:00 honest. That MIT study was a little bit
6:02 maddening, but also right. I mean, it's
6:04 in a way the most obvious study in the
6:05 world, which is: if people are using ChatGPT
6:07 instead, it's sort of like saying, you
6:08 know, people can have tutors, and if their
6:09 tutors are writing their papers instead
6:11 of them, they're not going to learn. And
6:13 it's extremely obvious. I like that it
6:15 started the conversation, but I got
6:17 mad at that article online,
6:18 because I thought it was a little
6:21 sensational. But I'm sensational too, so
6:24 I'm very forgiving of them. But, you
6:26 know, the truth is, I really agree with
6:28 Niall. So let me come at it from the
6:30 other standpoint. And I think that he
6:31 and I are probably going to end up
6:33 in the same general area,
6:35 though hopefully we'll find some
6:36 areas of hard disagreement where we can
6:38 disagree with each other.
6:40 >> But here's the thing, right? I mean,
6:42 I'm on the MBA
6:44 side, the graduate student side.
6:47 So on one hand, if you're paying
6:50 this much for a business school degree,
6:52 and we all sort of know how much these
6:54 things cost, and you're using ChatGPT to
6:56 get over on it, you're out of your mind.
6:58 So there's that. However, I also have
7:01 teenagers, right? So my son is 16, my
7:04 daughter's 14. And, you know, I do a lot
7:06 of this with my son. Finn and I
7:08 went out to Nepal and taught AI to
7:09 schools and everything else, because I
7:11 think there's a fundamental aspect of
7:13 this which can really help people learn
7:15 in a way that they were never able to
7:17 learn before. It gives access to people who never
7:21 had tutors or bespoke learning
7:22 opportunities. I mean, the reason why
7:24 teachers have such a hard time is not
7:26 because of the tool, but because of how
7:28 brains work, right? You can't be a
7:31 teacher in front of 25 students and get
7:33 into the heads of all 25, know their
7:35 exact learning ability, when they can all
7:36 learn in radically different ways. Of
7:37 course the teacher can't do that. The
7:39 teacher has to teach in the same way the
7:40 teacher's always taught, which is, you
7:42 know, using their frameworks and the
7:43 lowest common denominator and what they've
7:45 found works the best for the broadest
7:47 range. So, from that standpoint, I think
7:49 it would be foolish to throw the
7:50 baby out with the bathwater, so to
7:51 speak. I think that AI is incredible
7:54 with this, which is why, sorry to keep
7:56 bringing up Finn, but I think that
7:58 the voice that's missing from this is, in
7:59 this case, the high school student. We
8:00 could also argue the college student,
8:02 but I would say even the high school
8:05 student and below even more, because they
8:07 are really incentivized to do exactly
8:08 what Niall's saying, which is cheat,
8:10 which is to say, you know, we've
8:12 kind of given them this structure where the
8:14 only thing that matters is grades. And
8:16 when people say students are so much
8:17 better at this than adults, I'm like,
8:19 yeah, because students are incentivized,
8:20 as Niall sort of pointed to, in a way
8:22 that senior tenured people in
8:24 organizations are not. If they have
8:26 ChatGPT write a paper for them very
8:28 quickly and it's very, very good, that
8:30 helps their future prospects in life. So
8:33 they're extremely incentivized. But
8:36 let me leave off with kind of an
8:38 olive branch over to Niall
8:39 on this too, which is, you know, I was
8:41 doing a thing for Google where they were
8:42 saying, well, in the MBA program, again a
8:44 graduate business program, how
8:46 should we now teach marketing in an age
8:47 of AI? I'm like, I think we should teach
8:48 marketing the same way we've always
8:51 taught marketing, because if people are
8:55 using AI instead of critically thinking,
8:56 then how are they going to determine
8:57 what quality looks like when they get
8:59 out into the workforce? It doesn't
9:01 work that way. They have to build the
9:05 muscle first. And also, sorry, as
9:07 Niall said, I would find it
9:09 horrifying for people to write in class
9:12 six or seven hours a day by hand. I do
9:13 find it horrifying, but only because I
9:16 see the pain on my kids' faces. But I
9:18 don't have a solution. I don't know what
9:19 else you can do, because
9:21 >> But you have some solutions in practice,
9:24 right? Like, you want people using this
9:26 within the classroom and within a
9:27 pedagogical context, right? So can you
9:30 just lay out a few of those best-case
9:33 scenarios of use here?
9:36 >> Yeah, absolutely. I think that, you
9:37 know, the framework that we were
9:39 building out was this idea that you
9:41 have sort of a lockdown, which is:
9:42 there are certain skills that people
9:43 absolutely need, and there you have to get rid
9:46 of AI. But otherwise, I just want to say
9:48 that this gives us the potential of
9:50 using the best potential learning tool
9:53 that has ever been created in history to
9:56 really advance and augment critical
9:58 thinking in the moment. That's going to
10:01 require a very serious rethinking of how
10:03 we teach and a very serious rethinking
10:06 of the proxies for grading. But I really
10:09 do think that this can take young people
10:12 so far beyond where they are. And
10:14 I'm talking in terms of skipping entire
10:17 grades, almost, with the ability, if
10:20 used properly, to go home, work with AI,
10:21 and then have the teacher say, "Okay,
10:24 our expectations for you are much, much
10:27 higher." >> Could I just... You both used
10:28 the word cheating.
10:31 Niall, what is cheating with AI? And is
10:33 the way we've been thinking about
10:35 cheating in universities and even the
10:37 term plagiarism:
10:39 do we have the right framework for
10:41 thinking about this? >> I don't think
10:44 that's a difficult question, because
10:48 obviously if you were to submit an
10:52 essay that had been written not by you
10:55 but by a tutor or a parent, you'd be
10:58 cheating. It's no different if
11:00 you claim that an essay written by
11:04 Gemini is your work. I think this is
11:07 straightforward, because the act of
11:10 writing involves really some quite
11:14 important cognitive muscle flexing. I
11:16 don't think one has really thought a
11:18 problem through, certainly in my
11:20 experience, until one has had to write
11:23 down what your solution to the
11:25 problem is, or at least your analysis of
11:28 it. So that's the easy bit.
11:30 >> Can I just push you on one
11:32 little thing? Is it the act of writing,
11:34 the end-state act of writing, or the
11:38 entire process of creating that essay?
11:41 So what if a student uses AI to
11:44 develop their outline or to
11:47 brainstorm their structure or something
11:48 like that? Is that
11:49 cheating as well?
11:53 >> Well, if they do all of that, now the problem is
11:54 >> I think they're using it for all stages,
11:57 right? But that's very hard for
11:59 those of us adjudicating that to determine
12:01 at what stage they've used it. This is why Conor
12:04 said something very important in what
12:06 he said before. We're going to agree a
12:08 lot actually, Conor. When you said, if
12:11 you use it in the right way, it's
12:14 potentially the greatest teaching and
12:18 learning tool ever. The wrong way to use
12:20 it is the way it's currently mostly
12:23 being used, which is to cut corners so
12:26 that you don't have to read, think, or
12:29 write. And these stages, you know,
12:32 reading, absorbing information, then
12:33 thinking, and then writing, are
12:38 tremendously important brain muscle
12:41 actions. And if you don't learn how to
12:44 do those things, then you really aren't
12:47 educated. And that's a problem. The
12:50 right way to use the large language
12:52 models, just to focus on those for a
12:56 bit, is the way that they're using them
12:59 at a remarkable school, the Alpha
13:01 School, in Austin, Texas, just down
13:04 the road from our new university. And
13:07 there MacKenzie Price, with the
13:10 support of Joe Liemandt, is doing I think
13:13 what Conor has in mind, that is to say,
13:16 using the LLMs rather in the way that
13:18 Neal Stephenson describes in his
13:21 wonderful book The Diamond Age, where
13:25 the student has the ability to develop,
13:29 in a kind of customized way, a
13:33 question-and-answer relationship with
13:36 the LLM, and this can greatly accelerate
13:38 learning, because of course it does
13:41 tailor the process to the individual
13:43 in a way that the traditional classroom
13:46 just can't. Let me be clear: the
13:48 traditional classroom, the way we have
13:52 been doing things pre-ChatGPT, was already
13:54 broken in a whole range of ways.
13:56 I've been saying for years I cannot
13:59 understand why professors give lectures.
14:02 Lectures are an incredibly bad way of
14:04 teaching. The Socratic method has
14:07 been around since, yep, Socrates; it's
14:10 better, but it's hardly used. And I
14:12 could go on and on. So, things sucked
14:16 already. And that's partly why students
14:18 cheat: because you're confronted with
14:21 a system that sucks, where you go to a
14:24 lecture, some guy like me drones on for
14:27 half an hour or an hour, and then you're
14:29 given this list of things to read, all of
14:31 which are kind of slightly dated, and
14:33 then you're expected to turn in an
14:36 assignment which regurgitates them in
14:36 some way.
14:38 >> Right? I mean, of course, people are
14:40 going to take the line of least
14:42 resistance once it appears because none
14:43 of this was particularly satisfactory
14:45 before. So, I think there's an
14:47 opportunity here. And I'm
14:49 beginning to see that it can be done
14:52 radically differently at Alpha School.
14:55 We put our seven-year-old son into that
14:57 school for a week, and it was clear
14:59 that it had a tremendous benefit for
15:02 him. So, I think we need to be as
15:04 innovative as they're being there. We
15:07 kind of have to reinvent education in
15:09 order to make this work. If we leave the
15:12 old system and just patch on large
15:14 language models like ChatGPT, we're
15:16 going to end up with the worst possible
15:18 combination, a little bit like, you
15:20 remember, the Hungarian economy under
15:22 socialism: "They pretend to pay us and we
15:23 pretend to work."
15:24 >> Pretend to work.
15:26 >> That'll be it. That'll be universal. We
15:28 will pay the tuition and then
15:31 everybody's going to pretend to work.
15:33 >> Our AIs will mark their AIs.
15:35 >> Yeah. I mean, that's already
15:36 happening. That is already happening.
15:38 >> So look, just to push on one
15:39 thing: I think people who have
15:40 read your essay will be a little
15:43 surprised that you're putting your child
15:45 in a school that uses AI. Can you say
15:47 what Alpha School is doing differently
15:48 than what you see happening on campuses
15:50 and what maybe higher education could
15:52 learn from that?
15:55 >> So Joe Liemandt and MacKenzie Price have a
15:58 common view that school as they
16:01 initially experienced it was dreadfully
16:03 boring. They've created a system which
16:07 very much uses AI to allow students
16:08 to learn at their own pace, and they've
16:11 created incentives along the lines of: if
16:13 you get this all done in two hours and
16:15 really smash it, that's it, you're done
16:17 for the day, you can go play outside. So
16:19 they're really changing the way
16:21 that we think of education. Instead of
16:23 everybody has to sit there for six, seven,
16:27 or eight hours, they can just say: get
16:28 through the work, and if you do it really
16:31 well, you're done. And so that was
16:33 something that our son Campbell found
16:36 exciting and disconcerting,
16:39 but liberating. I think what's
16:42 important here is the idea that you
16:45 don't say, here's an assignment, and then
16:48 they go off and get ChatGPT to do
16:51 it. You actually say: here
16:52 are a set of things that we'd like you
16:55 to master. Maybe it's a set of
16:59 mathematical concepts. And you're
17:01 going to play with the problems.
17:02 You're going to do a whole bunch of
17:04 problems, and an AI is going to see how
17:06 quickly you learn. It's going to see how
17:08 you get along, and it's going to
17:11 respond to the way you do in the first
17:14 run of problem sets, and that will
17:17 generate the next set accordingly. And
17:21 so instead of the student using the
17:24 large language model to shortcut around
17:27 an assignment, actually the student ends
17:30 up working a lot more intensively to
17:33 achieve certain goals, interacting with
17:35 a kind of living encyclopedia.
17:37 >> It's also probably more fun, right,
17:39 Niall? I mean, it's a more
17:40 enjoyable way of learning as well.
17:42 >> The Alpha School is all about making
17:45 going to school fun. And I do think
17:47 they're really on to something here, to
17:50 the point that I think we at the University
17:52 of Austin have to learn from what
17:55 they're doing. I am certainly struck
17:58 by the fact that artificial intelligence
18:01 requires us to reinvent education
18:04 fundamentally to make use of these
18:06 tools. If we don't do that then I think
18:08 the tools are going to in fact be
18:11 misused and the net educational impact
18:13 will be very negative. But I can see
18:15 from what's happening at Alpha School
18:18 that this can work extraordinarily well,
18:20 particularly for smart kids, but also
18:22 for kids who struggle because it's the
18:24 fact that it can be customized for the
18:26 individual student that seems to me so
18:27 potent. Anybody who doesn't quite know
18:29 what I'm talking about and who can't
18:31 make a trip to Austin should just read
18:33 Stephenson's book, The Diamond Age,
18:35 because it tells the story of a little
18:37 girl from a totally deprived background
18:41 who happens to stumble on what we would
18:44 now call an AI. In fact,
18:46 Stephenson's kind of ahead of his time.
18:48 He's writing in the 1990s, but this is
18:50 essentially a kind of living, talking
18:55 book that evolves with her. There's a
18:56 relationship between the little girl and
19:00 the book, and whatever her question is,
19:02 whatever she's interested in, it
19:05 helps her learn about the world. I
19:06 remember loving that book. I think it's
19:09 Stephenson's most brilliant book. But
19:11 that was of course inconceivable in the
19:14 1990s. It was a sort of vision of a
19:15 science fiction future. The
19:17 extraordinary thing is that future is
19:19 now here, and little girls all over the
19:22 world, including, as Conor said, in places
19:25 where educational provision is barely
19:28 existent, can now access,
19:32 without too much trouble, a world of
19:35 adaptive knowledge: knowledge that comes
19:38 to you in the right way, at the right
19:41 time, in the right volume. I find that
19:43 hugely exciting. Last thing I'd like to
19:45 say: the original title I gave that
19:48 essay was "The Cloister and the
19:50 Starship." And the idea I wanted to
19:53 convey was that we need to spend time in
19:56 the cloister with just our brains to
20:00 learn certain foundational skills of
20:03 cognition and communication. But when
20:05 we come out of the cloister now, unlike
20:07 the monks of the Middle Ages, we can
20:09 get into the starship. And that's
20:11 an amazingly exciting combination. So I
20:13 don't want to give anybody listening the
20:14 impression that I'm a Luddite. I think
20:17 Conor and I basically agree, and the key
20:19 question, which we haven't really
20:21 addressed, is: why are the established
20:25 institutions so slow? I asked somebody
20:27 quite senior at Stanford
20:29 University the other day, where I spend part of my
20:30 time at the Hoover Institution: hey,
20:32 what's the university policy on the use
20:34 of AI? There isn't one. Right?
20:53 Part of the challenge here is that, I
20:56 mean, you guys broadly agree, I think, on
20:57 most of this, but there's a lot of devil
21:00 in the details of how this is rolled out, and
21:02 essentially, as you say, a lot
21:03 of it's just being pushed down to
21:06 faculty, because I do agree institutions
21:09 are in a mode of avoidance here,
21:11 but we're being asked to sort of rebuild
21:14 a 2,000-year pedagogical model on the
21:16 fly
21:18 using a technology that is evolving by
21:21 the week. So, Conor, one of the things I
21:23 think we're touching on here is this
21:27 moment of real cognitive development
21:29 using a technology that allows for
21:31 cognitive offloading, and those are
21:32 really in tension with one another in a
21:34 university. So, Conor, how do you think
21:37 through that? Like, how do we use these
21:40 tools without that risk of cognitive
21:42 offloading that can be so damaging to
21:43 exactly what we're trying to do at a
21:44 university?
21:46 >> Yeah, I mean, the short answer is I don't
21:48 know, but the hopefully more
21:51 robust answer here is: yeah, I
21:52 mean, it's easy to say that we
21:54 should reinvent education. I totally
21:55 agree with Niall on this,
21:57 but, you know, how? I think
21:59 that the relevant question here is who's
22:01 incented. When I see people
22:02 incented in the working world, it's very
22:04 limited. It tends to be
22:05 small startups. People are like, "Oh my
22:07 gosh, I need every tool I can
22:09 get, because I have to do
22:11 too much work, and here's one." But that
22:13 is not the huge majority of people out
22:14 in the workforce. And so when we think
22:18 about education, the idea is that
22:21 this requires a tremendous amount of
22:22 political will on the
22:24 education level, whatever that is,
22:26 educational will. And that's not how the
22:28 education system works, certainly
22:30 in our country, in the United States, for
22:32 example, or in Canada, or I would
22:33 even say Western Europe, right? It's
22:35 just not how it works. We're like,
22:36 "Nope, got this." And why is that?
22:39 Because faculty have spent many, many
22:40 years doing something the exact same way,
22:42 and they've been voted faculty of the
22:43 year and all that kind of stuff, and they
22:44 know how to do it and everything. We
22:45 sort of saw the same thing a little bit
22:47 during COVID, when everybody went online.
22:48 They just tried to move everything
22:50 online, and then the real innovative
22:51 people were like, "Well, what's a
22:53 better way to learn now that we have
22:54 these new systems or something like
22:56 that?" So I think the first
22:58 thing we have to remember is, and
22:59 this is the way that I teach: I have
23:01 a company called AI Mindset. We do
23:03 generative AI, we do AI adoption, but
23:04 we don't do it through teaching tools; we
23:06 do it completely through understanding
23:08 how the brain works and why the brain
23:10 struggles with this. It has
23:11 everything to do with the brain. So even
23:14 as we're saying the tools are developing,
23:15 when I go out and talk to companies,
23:16 and I've talked to some of the biggest
23:18 companies in the world on this, my
23:19 presentation hasn't changed in two years,
23:20 because it has nothing to do with
23:21 technology; it has everything to do with
23:23 how our brain operates. And I think
23:25 that's really critical in education,
23:27 because there are a lot of people invested
23:29 in how education works. And I love our
23:31 teachers, I
23:33 come from teachers, I work with
23:35 teachers, but I don't see a ton of
23:37 teachers being like, all right, can't wait
23:38 to change the way I've done everything,
23:40 the way I've gotten my PhD,
23:41 the way I've done this my entire
23:43 life. I just don't see that. And so that
23:45 has to be incentivized. I think
23:47 incentives are everything. So that's
23:48 number one. And then I
23:51 want to pivot into something that,
23:53 and I hope this doesn't get clipped as a
23:54 sound bite, because I can imagine the
23:56 headline, but: what are the skills we
23:57 actually really need, right? I mean,
23:59 and by the way, I'm a writer. I've
24:01 written books. I'm a published author,
24:03 all that kind of stuff. So I care very,
24:05 very deeply about writing. But I have to
24:07 look in the collective global mirror
24:10 here and say, "Do we still need to know
24:12 how to write?" Giant question mark. By
24:13 the way, this is not Conor saying we
24:15 don't still need to. But what I mean by
24:17 that is, obviously the calculator is a
24:18 kind of easy example, but if we
24:19 think about the calculator, the
24:21 calculator, all it did was democratize
24:22 math, but it's not like kids don't have
24:24 to learn math. And so I think that's
24:26 probably going to be what we need to do.
24:27 And gosh, my kids are going to kill me
24:29 for saying this, but I think they need
24:31 to write by hand, or on an air-gapped
24:33 computer, so they learn how to write. Not
24:36 because writing is intrinsically
24:38 important, in the same way learning long
24:39 division is not intrinsically important.
24:40 But you don't see people working at
24:42 NASA... it's not like, because you have a
24:44 calculator, you can work at NASA, or you
24:46 can be a quant at a hedge fund. It
24:47 requires skills beyond just the
24:48 democratization of math through a
24:51 calculator. But the important part, I
24:54 think, is: do we still need writing in the
24:55 same way we still need math? So what I
24:57 mean by that is, I was just having this
24:59 conversation yesterday with somebody
25:01 I really respect, a CEO of a company, and
25:03 she was saying, you know, I don't
25:05 know that I still know how to write. I'm
25:06 using Claude and ChatGPT and things like
25:08 that, and I'm feeling awful about it. I'm
25:10 like, "Yes, but you did learn how to
25:12 write." And so you're recognizing good
25:14 quality. And, I don't want
25:15 to put words in Niall's mouth, but
25:18 where I come from on this is: at the very
25:20 root, kids need to learn what good
25:22 writing looks like. Otherwise, I think
25:23 that we are going to come to a point
25:26 where everything is just AI slop. Can I
25:27 just pull on that writing thread a
25:28 little bit here? Because
25:30 >> Yeah. I mean,
25:32 I feel like there's writing as an output,
25:34 something we consume, but there's
25:36 also writing, as, Niall, you expressed at
25:39 the very beginning, as a form of
25:42 thinking. And so, what happens when we
25:44 detach writing from learning? Can we
25:48 do that at all? Is writing core to how
25:50 we, particularly in that phase of our brain
25:53 development, learn to think?
25:56 >> Well, let me put it like this. When
26:00 conveyor belts were invented, we
26:03 could have eliminated walking.
26:05 We could actually have made it
26:08 possible to go everywhere on conveyor
26:11 belts, and we could probably have
26:13 made them quite fast. I see quite fast
26:14 ones at some airports.
26:16 >> Or hoverboards, you know.
26:19 >> Maybe either one. But the point is that
26:21 actually, I bet all three of
26:25 us go to the gym quite frequently, and
26:28 we actually embark on physically
26:31 difficult activities that are pointless
26:34 except for the fact that they keep us
26:37 fit. And most students that I see at
26:40 Stanford look to be in pretty good shape
26:42 physically, but for some reason we
26:45 don't apply the same rules to our
26:48 brains. Now, the point about
26:49 writing is not that everybody should
26:51 write a novel. In fact, I wish I could
26:54 stop people writing novels. Far too many
26:57 novels get written, and I wish I could
26:59 also cut down the number of op-eds that
27:00 are written.
27:02 >> If we could do one thing for the world,
27:03 it would be to decrease the number of
27:04 op-eds. I mean,
27:07 >> Please, people, write less. Most of you
27:08 really don't write anything that
27:11 interesting. But the point is that in
27:15 learning to think on the basis of what
27:17 we have read, and then to write,
27:21 we're getting our brains fit. I'll
27:22 give an example Conor will probably
27:25 recognize, and you too, Taylor. I used to
27:27 find that until I had taught something,
27:30 stood up in front of a class and
27:32 taught something, I wasn't quite ready
27:34 to write the book. And this is all
27:37 about getting your brain fit. Because
27:40 if our brains are obese, we kind
27:42 of watch some TV, and we kind of get
27:44 involved in conversations. We have this
27:47 rough idea about, let's say, AI, and we can
27:48 have a conversation about: oh yeah, I
27:51 heard about AI. Yeah, it sounds really
27:53 scary, but it also kind of sounds kind of
27:56 good as well. You know,
27:58 that's the obese brain. It hasn't really
28:01 absorbed anything about AI. It hasn't
28:02 thought about it. It can have a
28:04 conversation about it, but the
28:06 conversation's entirely vacuous. So what
28:09 we really want to do in education is to
28:11 have very very fit brains. Brains that
28:14 can very quickly absorb lots and lots of
28:16 complex data, not necessarily in the
28:18 form of words. It might be just in the form
28:20 of data, or lumps of pottery, but
28:22 they can absorb data in large
28:23 quantities. Then they can think
28:25 analytically. What does this signify?
28:27 What's the pattern here? And then they
28:29 can communicate to other human beings by
28:32 writing or by speaking what they think
28:34 they've inferred from all of this. These
28:37 are the things that make our brains fit.
28:40 And there is no doubt in my mind that in
28:42 a world of very powerful computers, which
28:45 can not only be large language
28:46 models but can also do scientific
28:49 research, our brains need to be super
28:53 fit if we're to have purpose, if we're
28:56 not simply to become Yuval Noah
29:00 Harari's cow-like creatures, milked for
29:03 our data by AI. So I think: just get into
29:05 the mental gym, people. I say to the
29:06 students at the University of Austin,
29:08 one day I'm going to come in here. I'm
29:09 going to tell you, you've got two days
29:11 to read War and Peace and you're going
29:14 to be just shut in the library with the
29:16 book and then you're going to come out
29:18 and I'm going to ask you what's the
29:20 meaning of this book. That's the kind of
29:23 thing that a smart person can do.
29:25 >> Yeah. But Neil, let me ask you this, because
29:26 this is what I wrestle with all the
29:28 time, right? And I love your
29:29 two examples of exercise. The example
29:31 I sort of give is getting on
29:33 the treadmill, and the reason that we may
29:34 get off the treadmill very quickly is
29:37 because our limbic system prioritizes, you
29:38 know, quick rewards and conserving energy,
29:40 right? This is why
29:41 behavior change is so hard. So what I
29:44 would posit here is that I see young
29:46 people, by the way, so fit these days too,
29:48 high school, college, and it's part
29:50 of the culture, I think. But also there is
29:52 a huge incentive for them to get fit,
29:54 right? They will look attractive to other
29:56 people. It's almost like what drives us
29:59 as a species, etc. And the challenge that
30:00 I find, that I'm trying to figure out,
30:02 is how do you incentivize students,
30:04 because students aren't incentivized by
30:05 critical thinking, and they're not
30:06 incentivized by learning. They're
30:08 incentivized by: will they get the grade?
30:10 I mean, what you hear all the time is,
30:11 like, I'll just get into the great
30:12 college and then I'll figure it out, or
30:14 I'll just get into the great law school
30:15 and then I'll figure out all that kind
30:16 of stuff. But we have set up a system,
30:17 that you were referring to earlier, in which
30:20 grades are the holy grail, and
30:22 I'll figure out
30:23 everything else later. It's the
30:24 incentive structure. So I'm wondering,
30:26 when you think about that, like, hey,
30:28 guys, you have to do this, it makes
30:32 sense, but I feel like the
30:34 internal incentive structure is broken.
30:36 >> No, I think the employers have incentives
30:39 too. The elite employers know that,
30:41 for example, the Harvard degree can't
30:44 really be worth what it used to be worth
30:46 when the only grade that's given at
30:49 Harvard is an A. So the perception
30:52 that grade inflation has caused a very
30:54 serious decline in standards at the
30:56 established institutions incentivizes the
30:59 best employers to find other ways of
31:01 assessing ability. So you don't get
31:04 hired by the big tech companies or the
31:06 big Wall Street companies just on the
31:09 basis of your GPA anymore because they
31:11 they figured out over the last 10 years
31:13 that that's not a good signal at all. So
31:15 I think one of the interesting things
31:16 that's happening is that recruitment is
31:19 becoming more and more creative. I mean
31:20 I think of some of the quant hedge funds
31:23 and how they recruit. It's actually
31:26 by setting a whole bunch of
31:29 examination-type challenges to the would-be
31:32 entry-level people. So I think the
31:34 system's changing, because there's an
31:36 incentive, if you're an employer, to find
31:38 the really smart people as opposed
31:42 to the people who graduated sooner and
31:45 took all the soft courses. I think what
31:48 we're talking about here is partly how
31:51 do we make young people care as much
31:53 about their brains as they care about
31:58 their bodies. Now, my perception of
32:02 academic life, of university life, is that
32:04 while of course the athletes may attract
32:06 a certain number of members of the
32:09 opposite sex, there is still
32:12 something sexy about being smart. I
32:18 mean, maybe I'm just
32:21 dreaming here, but I always felt it was
32:25 my witty repartee. I used to
32:30 think it was the jokes, anyway. But I
32:33 think that's part of it,
32:34 that we're not just interested in
32:36 people's bodies. I mean, somebody can
32:39 look like a supermodel or an
32:41 Olympic athlete, but if what they say is
32:43 just unbelievably dumb, it's not going
32:47 anywhere. The other question is how
32:49 do you persuade people to take the same
32:51 attitude towards academic success as
32:54 currently exists in the military towards
32:57 the elite combat formations? So I
32:59 use the phrase the Navy SEALs of the
33:01 mind to describe the graduates that we
33:03 want to produce at the University of
33:05 Austin. I want to convey a sense that
33:09 there is an elite quality in the realm
33:12 of of intellectual life that we've not
33:14 been valuing for the last 10 or 20
33:16 years. But now it's time to change and
33:18 say no no what we care about is
33:21 brilliance is real intellectual
33:23 brilliance in the same way that the Navy
33:25 Seals care about people who are
33:27 physically extraordinary courageous. So
33:30 I think changing those norms I think
33:31 it's happening. I think being able to
33:33 push in that direction so that young
33:35 people are interested not in the perfect
33:38 GPA which you achieve by cynically
33:40 taking the easy courses and telling the
33:41 professors what they want to hear. Now
33:44 we need to say, no, no, that's so
33:46 20 years ago. Now what we want are
33:48 people who are just drop-dead
33:51 brilliant, who can play a game of
33:54 chess while at the same time coding,
33:55 while at the same time doing math
33:57 problem sets, while at the same time
34:00 writing sonnets in ancient Greek. I
34:02 mean those people do exist but they
34:04 aren't valued as much certainly not in
34:06 the established institutions as they
34:08 should be.
34:10 >> I mean, that's the key point, right? Not in
34:12 the established higher
34:13 education universities. And to say
34:16 something a little bit provocative here,
34:18 I think that norm is changing
34:22 outside of universities. The emergence of
34:24 long-form podcasts, the way YouTube is
34:26 allowing people to go deep on topics
34:28 that they couldn't before, is
34:31 incentivizing a kind of intellectual
34:32 curiosity that is not dissimilar
34:34 to the fitness craze that's
34:34 happening in those same worlds, right? I
34:36 think there is something going on there,
34:39 that people crave more;
34:41 they might just not be getting it from
34:43 universities. So let's just touch on
34:46 that for a moment here, Connor.
34:48 Universities are large bureaucratic
34:50 institutions that are incredibly hard to
34:55 move and to evolve. How do we convince
34:57 these institutions
35:00 that rethinking what they do in light of
35:03 this new technology is existential for them?
35:06 >> I mean, I think they know
35:08 that intellectually. So, first of all, I
35:09 think we have to determine what problem
35:11 we are trying to solve here, you know,
35:13 and I think that it has to be done
35:15 pretty slowly and pretty carefully,
35:17 because, you know, Taylor, what you're
35:18 hitting on is exactly right. I think you
35:20 phrased it exactly right: we are talking about
35:23 giant bureaucratic institutions. These
35:25 are not driven by, well, you know, will
35:27 I earn more money if I can produce
35:29 students who are real critical thinkers?
35:30 And by the way, again, I come from a family of
35:32 teachers. I work in a university, I'm
35:34 very passionate about education and faculty.
35:37 I'm a faculty member myself at times. But
35:39 we have to understand that most people
35:42 have this very deep commitment to how
35:43 they have learned and how they've always
35:45 taught in the past, and, by god, AI is
35:47 not going to change that, people. So if
35:48 that means that you have to come into
35:50 the classroom and just write it out by
35:53 hand, it's just a colossal
35:55 missed opportunity. If you can sort of
35:56 extrapolate from Neil's son's school, which
35:58 is really kind of
35:59 thinking about this at a young age and
36:01 making education more fun, maybe
36:03 it turns from fun into more
36:04 incentivization. I think we have to be
36:06 realistic about incentives. But the question
36:09 is how do you actually incentivize
36:12 faculty members in a state
36:14 and in an institution that doesn't work
36:15 like that? I mean, an
36:17 existential threat is why you
36:19 see companies like Meta spending
36:20 billions and billions of dollars, because
36:21 that's an existential threat to their
36:23 business. It's P&L. It's money.
36:25 The market drives this. That's
36:27 not the case in institutions. So, first
36:30 of all, I agree: when
36:31 Stanford doesn't have a policy on
36:34 AI, when NYU, my school, is sort of
36:36 struggling to find their policy on AI,
36:38 there has to be a new way of thinking
36:40 about it. So instead I would focus much
36:43 less on the tech and much more on how do
36:44 we get people excited about using this.
36:46 Because once people start using this,
36:47 start using it as a learning tool,
36:49 and I don't know where Neil falls
36:51 on this, I'd like to ask him, I think
36:53 that we have to put the onus on the
36:56 guardrails. I think that it has to be,
36:57 look, you cannot learn this way. Because I
37:00 just think there's too many teachers who
37:01 will say, well, I've always taught this
37:02 way, it's great, and too many students are
37:03 like, yep, I get it, but I'll learn when
37:05 I'm out of law school and it doesn't
37:06 matter anymore; right now the
37:07 incentive is getting this grade to get
37:09 into a better law school. I think that
37:11 the only solution, and by the way this is a
37:14 very limited viewpoint,
37:15 which I hate to have, but I don't know
37:17 another solution, is that the guardrails have
37:19 to be in place so firmly, which is where
37:21 Neil started this conversation: what
37:22 if, he said six or seven, I
37:24 would say more like, you know, three or four
37:26 hours a day, where you have no access to
37:28 AI. You have to learn. So that's the
37:29 problem I'm wrestling with.
37:31 >> Neil, you are maybe in the
37:34 unique position of having spent decades
37:35 inside the oldest university
37:37 institutions and now being a part of
37:39 creating a new one. I can only
37:41 imagine how the older institutions would
37:45 respond to your proposal, and I know
37:47 it wouldn't happen quickly or maybe even
37:49 at all, but how has the response been
37:51 within a brand new one? Is this
37:53 happening this year at the University of
37:54 Austin? Your cloister and starship model.
37:59 >> I need to answer that question after I
38:02 pay my next visit, which is in a
38:04 couple of weeks. Then I'll be able to
38:07 say if it's working. The challenge
38:09 even in a new university is to get the
38:12 professors to change the way they do
38:14 things. As Connor rightly says,
38:18 >> At the heart of all universities are
38:20 tenured faculty
38:22 >> with academic freedom, including on how
38:23 they teach, right?
38:25 >> Not only can they not be fired, but they
38:28 get considerable autonomy about how they
38:30 go about things. And the reason that a
38:33 university doesn't have an AI policy is
38:34 that the default setting is to say we
38:36 leave it to the professors. And that's
38:38 what you'll almost certainly hear at
38:40 most institutions. But what does that
38:45 mean? That means that men and women in
38:50 their 40s, 50s, and 60s are essentially
38:54 allowing the students to misuse AI
38:56 because they themselves don't really
38:58 understand what's going on outside the
39:01 classroom, and they have grown accustomed
39:03 to doing things in ways that are very easy
39:05 to game. So, we have to change that.
39:07 It's just easier to do at a small
39:10 institution with fewer than 200
39:12 students than it would be at Harvard or
39:15 or at Stanford. I only became
39:16 involved in creating a new university
39:18 because I just thought the established
39:20 institutions couldn't change themselves,
39:23 that the internal incentives
39:25 are just all pointing in the wrong
39:27 direction. We have to reinvent higher
39:29 education. That is very clear. Even
39:32 before ChatGPT, it was clear, and that's
39:34 what we're trying to do in Austin.
39:38 I hadn't fully realized until I looked
39:41 at Alpha School that the reinvention
39:43 could be even more radical than I
39:46 thought. And I'm beginning to see how
39:48 the new education, the educational
39:49 institutions of the future are going to
39:52 work. And I still like my cloister and
39:54 starship analogy, because I like the idea
39:56 that my kids are going to spend some of
40:01 the time learning the core skills,
40:03 including how to do calculus and read
40:06 Tolstoy. But then they'll be unleashed
40:09 and let into the starship to use large
40:11 language models and all the other things
40:13 that AI provides
40:15 equipped with the mental discipline you
40:17 need. See if you agree
40:20 with this, Connor. Let me put it like this.
40:21 At the heart of using a large language
40:23 model well is the way that you write
40:26 the prompts. My view is that somebody
40:30 who has not learned to think properly is
40:32 not going to write good prompts, is not
40:33 going to really be able to use the tool
40:36 at all, or will use it badly. Do you
40:37 agree with that, Connor? Because it
40:39 seems to me that part of what we're
40:41 trying to do when we get people mentally
40:44 fit is to equip them with the kind of
40:46 cognitive skills that will enable them
40:49 to use AI optimally.
40:50 >> Yeah, I actually may take a different tack
40:53 on that. I kind of compare it much
40:54 more to a sort of managerial
40:56 expertise, like knowing how to get
40:58 the best out of somebody rather than
41:01 writing. So I think of it more
41:03 probably less about the prompt itself
41:06 and more about how would you instruct a
41:08 new colleague or a new employee. How
41:09 would you get the best out of them? There are good
41:10 managers and bad managers, and you
41:12 know it when you see it a little
41:14 bit. So maybe I might flip it on its
41:16 head and say I think the critical
41:17 thinking is to take the output rather
41:21 than the input and see is this good? And
41:23 also, and I want to see if you agree with
41:25 this, the output can be
41:27 very, very good, but it doesn't have to be
41:29 right. So I just did a thing for MasterClass,
41:30 if you know the brand MasterClass,
41:31 and people are always talking about, you
41:32 know, hallucinations. I was trying to
41:33 talk about how to get over the problem
41:35 of hallucinations, which is when it lies
41:36 very convincingly. I'm not
41:37 even sure that's the biggest
41:39 problem. Hallucinations you can
41:42 spot much more easily than you can spot the
41:43 sycophancy, like, oh, that's a great idea,
41:45 when in fact it's not a good idea. And
41:47 the second part of that is what if it's
41:49 giving you outdated information. Do you
41:50 know how to draw in the right part of
41:51 information? So anybody can look at an
41:54 output and be like that's awesome. But
41:55 the problem is do you have the critical
41:57 thinking to ask the right questions like
41:59 where is this getting this information?
42:00 Is it just telling me I'm right because
42:01 blah blah blah? And then the third thing,
42:03 I think, is the hallucination. So I'm
42:04 with you that the critical thinking is
42:06 critical. I would probably put it on
42:07 judging the output rather than
42:08 the input.
42:10 >> Yeah, I agree with that. My impression
42:13 thus far, and I, you know, continue to run
42:16 these experiments, is that there's a real
42:20 problem in the fact that the models
42:23 have not really been trained on the full
42:26 corpus of high-quality knowledge, because
42:30 Google Books lost their case and a lot
42:33 of extraordinarily important literature
42:36 is not accessible online. And so
42:39 what I notice when I ask, say, Gemini or
42:42 Deep Research to answer a question is
42:45 that it comes back rather thin, without
42:48 the kind of depth of scholarship and
42:51 knowledge that you would have if you had
42:53 access to all the books in the Bodleian
42:55 Library or Widener. And I think that's
42:58 because of that very important case that
43:00 Google lost. Google wanted to put every
43:02 book ever written, including all of mine,
43:04 on Google Books.
43:06 Now, they kind of won with YouTube,
43:08 because every single piece of content I
43:09 ever did on television is freely available
43:11 on YouTube now, but they lost the books
43:14 case. And that means that it's hard to
43:16 train a large language model on the real
43:18 quality literature that's been published
43:23 in our time. So, to be
43:25 honest, I'm still underwhelmed by the outputs.
43:28 I read them and I think, eh,
43:32 B+ maybe at best. And it's never
43:34 original. It can never ever come up with
43:36 anything original except when it's
43:37 making it up, which is obviously not
43:38 what we want.
43:40 >> I think we might have actually found our
43:42 point of disagreement here, and
43:43 I'd love to keep going down this
43:45 path; it took us an hour, but we got
43:47 to this point. I suspect a little bit
43:48 of disagreement in what we just said
43:50 there, Neil, but I want to bring this to
43:52 a wrap. And maybe just to get
43:54 you both to reflect a bit on the stakes
43:56 here, because I do think there's a lot
43:59 at stake in how we learn to think, how
44:02 we train future generations to think and
44:04 learn. And that's really tied to who we
44:06 are as humans, right? I mean, it has
44:09 been for 2,000 years anyway, and I don't
44:12 expect it to change in its
44:15 importance. But Neil, in your essay,
44:16 you said something quite striking: that
44:19 strict prohibitions on devices will have
44:22 to be insisted upon if the rapid advance
44:24 of pseudo-intelligence is not to plunge
44:28 humanity into a new dark age. Why
44:30 so stark there, and what's at stake?
44:33 >> Well, pseudo-intelligence is Neal
44:36 Stephenson's joking name for artificial
44:37 intelligence in the book The Diamond
44:39 Age. It's one of the little jokes he
44:44 slips in. So we call it PI, not AI.
44:47 So credit where it's due. Jonathan
44:50 Haidt has written very compellingly
44:52 the damage we've already done to young
44:54 people's brains by allowing them to have
44:57 smartphones and social media or what
44:58 used to be called social media and is
45:02 now actually AI media rather than social
45:05 media. And I think the next level damage
45:08 is what we're currently doing because
45:10 we're essentially cutting off a
45:13 generation from learning the key skills
45:15 of absorbing data, thinking about it
45:17 analytically, and then producing
45:19 convincing communications about it.
45:22 Henry Kissinger, whose biography I'm in
45:24 the midst of writing without the help of
45:28 AI, after he had large language
45:31 models explained to him,
45:32 wrote a brilliant essay for The Atlantic
45:33 saying this has the potential to take us
45:35 back to before the Enlightenment and the
45:37 Scientific Revolution, because things
45:39 will start to happen around people that
45:42 they can't explain. I mean, you don't
45:45 really understand how the AI arrived at
45:48 its answer. And that was a very, I
45:50 thought, profound insight for a man in
45:52 his late 90s. He saw early what it
45:55 implied. And I I think he's going to be
45:57 vindicated if we carry on down this
45:58 road. I think we're going to have a
46:01 generation that is even more cognitively
46:04 harmed than the generation that just was
46:07 hit by smartphones and social media.
46:09 >> Connor, how do you frame the stakes? Do
46:11 you agree?
46:13 >> God, it's such a great question. I think
46:16 the Jonathan Haidt analogy
46:17 is sort of apt. He wrote The Anxious
46:19 Generation. He's a colleague
46:20 of mine over at Stern. So we get to do
46:22 some stuff together over there. But
46:23 it's funny, because my kids just started
46:26 back at school, and they just
46:27 instituted that policy that John has
46:28 been pushing, which is that you can't
46:30 have your phones
46:32 during the school day, which
46:34 we're all in favor of as
46:36 parents. But interestingly, as
46:38 John's research shows, the kids
46:40 want that too, right? The kids are
46:41 happier when they don't have their
46:44 phones. So what does that mean for this?
46:46 This is the question, right? Because
46:48 even the kids
46:50 themselves, if we said, "Hey, you said
46:51 you're happier not having your phone,"
46:53 yeah, absolutely. Okay, then don't have
46:55 your phone. But that's kind
46:57 of what addiction is, right? And maybe
46:58 to take a slightly
47:00 different tack on this, are we talking
47:01 about addiction? Now, I'm not sure AI is
47:04 a bad addiction. I mean, from an
47:05 adult standpoint, I don't think my
47:07 addiction, quote unquote, to AI is a bad
47:08 thing. I think it helps me produce much
47:10 better things, but I'm also learning.
47:11 I have a pretty
47:13 good incentive structure in my life. But
47:15 kids don't. And I guess what I would say
47:19 on this is even when they know that they
47:21 need to critically think. So maybe I'll
47:22 I'll say this. The way that Finn and I
47:25 teach this, my son Finn and I teach this
47:27 together is we talk about like a
47:28 mountain, right? And from the bottom of
47:29 the mountain, the kids just think,
47:31 "Okay, the top of this mountain, I just
47:32 got to get up to the top." And one is a
47:34 trail and the other's a gondola. the
47:35 winner gets to the top and then they get
47:36 to the top and they realize it's not a
47:38 mountain, it's a plateau and they have
47:40 to now it's the beginning, it's the
47:41 starting point, it's not the end and now
47:42 they have to go into this workforce and
47:43 the people in the workforce are gonna be
47:45 like, well, who has the muscles to
47:47 handle this? Right? And so that's the
47:49 the problem. So even if we tell them and
47:50 even if they know intellectually the
47:53 incentive structure is all screwed up, I
47:55 think that we probably have to force
47:58 kids even though they they understand
47:58 even though they want to critically
48:00 think, we have to force them to
48:02 critically think because otherwise the
48:03 incentives in the same way if you just
48:05 give them Snapchat and and Instagram and
48:06 everything else, they'll use it even
48:08 though they don't want to. I think we
48:10 have to have guardrails in place and I
48:11 don't know how else we do this. Maybe
48:13 that's shortterm, I don't know, but I
48:15 agree with Neil. think that critical
48:16 thinking is that important that it
48:18 deserves our attention and putting the
48:20 guardrails in place for their own protection.
48:21 protection.
48:23 >> Look, I think that's a pretty great way
48:25 to end this conversation, and thank
48:27 you both for talking about this,
48:29 but also for the thinking you're putting
48:31 into this moment of transformation.
48:34 I started by saying I think this
48:36 is transforming the university, and I
48:39 think it fundamentally is, and it's
48:41 our responsibility to figure out how to
48:43 keep that model alive in some capacity.
48:44 And so thank you both for the
48:46 work you're doing on that and for
48:46 talking about it.
48:56 >> Machines Like Us is produced by
48:58 Paradigms in collaboration with the
49:00 Globe and Mail. The show is produced by
49:02 Mitchell Stewart. Our theme song is by
49:05 Chris Kelly. Host direction from Athena
49:07 Carcanis. Our executive producer is
49:10 James Milward. Special thanks to Angela
49:11 Pacienza and the team at the Globe and
49:14 Mail. If you like the interview you just
49:15 heard, please subscribe and leave a
49:17 rating or a comment. It really does help
49:19 us get the show to as many people as possible.
49:27 Machines Like Us is supported by the
49:29 Max Bell School of Public Policy at McGill
49:32 University. The Max Bell School offers
49:34 public lectures and seminars across
49:36 Canada and online, in addition to their
49:38 11-month accelerated Master of Public
49:40 Policy program, which is accepting
49:42 applications now. Learn more at
49:44 mcgill.ca/maxbellschool.
49:46 Machines Like Us is also
49:48 supported by CIFAR, a global research
49:51 organization proudly based in Canada.
49:53 From leading Canada's national AI
49:55 research and talent strategy to
49:57 advancing research in AI safety, CIFAR
49:59 is the catalyst for the exceptional
50:01 thinkers reframing our future. Explore