0:01 I need some more energy good
0:04 morning it's a pleasure to be here with
0:07 all of you uh today to talk about GenAI and
0:11 education um for those who don't know
0:15 what GenAI is imagine a
0:20 person who's often wrong but never in
0:22 doubt now be honest with me how many of
0:24 you thought about your
0:28 spouse I did not okay but that's
0:31 GenAI um and what I want to talk about
0:34 is um what happens when we have large
0:37 language models like ChatGPT and generative
0:39 AI intersect with institutions like
0:41 Harvard where I sit and I've been there
0:43 for the last 27 years currently
0:46 overseeing teaching and learning for the
0:49 University let me just um ask you a
0:51 question how many of you think in the
0:54 next 5 to 10 years generative AI will
0:57 have a very large impact on education
0:58 just raise your
1:01 hands how many would say a moderate
1:04 impact so we have a few how many would
1:07 say little to no
1:11 impact pretty much none okay uh let me
1:14 come back to this here's a chart showing
1:17 the rise of technologies and the time it
1:19 took for different technologies to reach
1:23 50% penetration in the US economy so if
1:26 you look at computers it actually took
1:31 20 years to reach about 30% penetration
1:33 radio it took about 20 years to reach
1:35 half the
1:38 population uh TV about 12 years
1:41 smartphones about 7 years smart speakers
1:43 about uh 4
1:46 years and chatbots about 2 and a half
1:48 years this is part of the reason we're
1:50 talking about this
1:53 today here's what we know so far about
1:55 GenAI and
1:58 education first the transformative
2:01 potential stems from its intelligence
2:06 that's the I in AI okay
2:10 secondly as prudent Educators we should
2:12 wait until the
2:15 output is smart enough and gets better
2:17 and it's less prone to hallucinations or wrong
2:22 answers third given the state of where
2:25 bot tutors are it's unlikely I think
2:27 many believe that it's going to be
2:30 ultimately as good as the best active
2:31 learning teachers who have refined their
2:34 craft over many many years and
2:37 decades fourth and Sal Khan talks about
2:39 this this is likely to ultimately level
2:40 the playing field in
2:43 education and finally the best thing we
2:46 can do is to make sure that we secure
2:48 access to everyone and let them
2:50 experiment before you take a screenshot
2:51 of this
2:55 don't because I'm going to argue all of
2:58 this is
3:00 wrong now that I hopefully have
3:01 your attention I'm going to spend the
3:03 next 10 minutes arguing
3:05 why uh let's actually start with the
3:07 first one which is the transformative
3:09 potential stems from how intelligent
3:10 the output
3:13 is I would argue and in fact we just
3:14 heard this from the previous speaker
3:16 we've been actually
3:19 experiencing AI for 70 years machine
3:21 learning for upwards of 50 years deep
3:23 learning for 30 years Transformers for 7
3:26 to 8 years this has been an improvement
3:28 gradually over time there were some
3:30 discrete changes recently but the
3:32 fundamental reason why this has taken off
3:35 I would argue has less to do with the
3:36 discrete improvements in intelligence 2
3:39 years ago as opposed to the Improvement
3:42 in Access or the interface that we have
3:44 with the intelligence what do I mean by
3:45 that I'm going to give you the one
3:47 minute history of human
3:49 communication so we started out sitting
3:52 around campfires talking to each other
3:55 from there we started writing pictures
3:58 on the walls that was Graphics from
4:00 there we started writing scrolls and
4:03 books that was formal text and finally
4:04 the pinnacle of human-to-human
4:07 communication which was ones and zeros
4:08 and that's
4:10 mathematics that's the evolution of
4:12 human to human communication the
4:13 evolution of human to computer
4:15 communication has gone exactly in the
4:18 opposite direction which is 60 70 years
4:20 ago starting with Punch Cards ones and
4:22 zeros for those of you old enough might
4:24 remember that then we move to things
4:27 like dos prompts commands that we had to
4:29 input by the way and this is the
4:31 fundamental thing the big difference
4:34 between Windows 1.0 and Windows
4:36 3.0 functionally they were almost
4:38 identical the big difference was the
4:40 interface meaning we moved to a
4:42 graphical user interface and suddenly
4:45 7-year-old kids could be using computers
4:46 that I think is more similar to the
4:49 revolution we're seeing now which is AI
4:52 for a long time was the province of
4:54 computer programmers software Engineers
4:57 tech experts with ChatGPT it basically
4:59 became available to every one of us on
5:01 the planet through a simple search bar
5:02 that's basically the reason for the
5:05 revolution where is this going probably
5:08 towards just audio and I don't know if
5:09 anyone can guess what's the next
5:16 communication neural reading
5:19 emotions you might argue basically us
5:22 grunting and shaking our arms formally
5:24 that would be called the Apple Vision
5:28 Pro uh you could argue we are regressing
5:30 as a species on the other hand you could
5:33 argue that in fact what's happening is
5:35 that the distance between humans and
5:36 computers is fundamentally
5:39 shrinking so that's the first thing I
5:40 just want to say which is fundamentally
5:44 this is about access what does this mean
5:50 this is
5:52 Photoshop there's a lot of people who
5:54 spend one year 2 years four years trying
5:57 to master this Graphics
6:00 design arguably we don't need this kind
6:02 of expertise anymore we can simply get
6:04 it by communicating directly in natural
6:07 language with computers now this for
6:09 those of you who don't know is Epic it's
6:11 a medical records software my wife who's
6:14 a cardiologist does not like this she
6:16 spends 2 hours every single day filling
6:19 in notes on these software records You
6:21 could argue sometime in the near future
6:23 that communication will become much
6:25 simpler by the way one of the things to
6:28 keep in mind is for every one of you
6:30 sitting in organizations and by
6:33 the way this is a happy organization to
6:35 think about what this is likely to do to
6:36 the org
6:38 structure if you think about the bottom
6:40 of this organization there's people who
6:42 have expertise in different kinds of
6:45 software okay some expertise in
6:47 Photoshop some in
6:50 Concur uh some in different kinds of
6:52 software You could argue there's going
6:53 to be consolidation within those
6:56 functions the middle managers who used
6:58 to oversee all these software experts
7:00 it's likely we're going to see shrinkage
7:02 there in fact you could argue all the
7:05 way that the person at the top could in
7:07 fact do sales Graphics design design
7:09 marketing everything by just interacting
7:12 directly with the computer it's not a
7:13 stretch to say and some people predict
7:16 this that the first one-person billion
7:18 dollar company is likely to
7:20 be born pretty soon okay and people are
7:22 already working on this I would urge you
7:24 to think about this question which is
7:26 what does this mean for your expertise
7:28 and organizations or the organizations
7:30 you run
7:31 because that's going to have big
7:33 implications for how you run these
7:35 organizations all right so that's the
7:36 first point which is fundamentally this
7:39 is not about intelligence but about how it's
7:41 accessed the implication of this is more
7:43 people will be able to use more
7:45 computers for specialized purposes but
7:47 it doesn't necessarily mean it's likely
7:49 to be the same people okay that's the
7:50 first thing
7:54 second I think we all look at these
7:57 hallucinations and we say let's wait
8:00 let's wait till it gets better
8:01 by the way that assumes hallucinations will go away but
8:03 hallucinations are a fundamental
8:05 intrinsic property of generative AI
8:07 because they are probabilistic models
8:10 but I would go further and say even when
8:13 AI capabilities fall far short and
8:15 impair the human value proposition
8:18 there's still a reason to adopt it why
8:21 do I say that I'm a strategist as
8:23 strategist we think of two sides of the
8:26 equation one is the benefit side what
8:28 are customers willing to pay the other
8:30 is the cost or the time
8:33 side even if there's no improvement in
8:36 intelligence simply because of cost and
8:38 Time Savings there might be massive
8:41 benefits to trying to adopt this so the
8:42 metaphor I want you to think about is this
8:50 company has anyone flown
8:54 Ryanair uh what is the experience like
8:57 it's basic efficient basic efficient by
8:59 the way when I ask my students this they
9:01 often say I hate it every single time I
9:04 fly it and of course it begs the
9:06 question why are you repeatedly flying
9:09 it this is an airline like most low-cost
9:12 airlines it doesn't offer any food on
9:14 board no seat selection you've got to
9:16 walk to the tarmac you got to pay extra
9:19 for bags no frequent flyers no lounges
9:20 and this is the most profitable airline
9:22 in Europe for the last 30 years
9:26 running why it's not providing a better
9:30 product it's saving cost
9:31 that's the metaphor I would love for you
9:33 to keep in mind when you think about
9:35 generative AI and its potential so let
9:37 me just walk through this and sorry as a
9:39 strategist I have to put up a 2x2
9:42 matrix at some point there's two
9:43 dimensions here I'd love for you to
9:45 think about the first is what is the
9:47 data that we're inputting into these
9:50 large language models and the data could
9:52 be explicit in the form of files like
9:56 text files numbers Etc that's explicit
9:59 data or it could be tacit knowledge
10:04 meaning creative judgment etc etc okay but the
10:06 second dimension is as important which
10:10 is what's the cost of making an error
10:13 from the output not the prediction error
10:16 what's the cost of something going wrong
10:18 in some cases it could be low in some
10:20 cases it could be high so let's actually
10:21 talk through some
10:25 examples first is explicit data low cost
10:27 of Errors that's high volume customer
10:29 support for the last 30 years this thing
10:31 has been automated by the way that
10:34 trajectory is likely to continue why do
10:36 I say that it is virtually impossible
10:39 for any company to have people manning
10:42 the phones to talk to 100,000 customers
10:44 this is the direction where it's going
10:46 even if we have two or three or 4%
10:49 errors it's okay it's simply much more
10:51 efficient to respond to customers in
10:53 this way okay so that's one
10:56 dimension second dimension is drafting
10:59 legal agreements for all the lawyers in
11:01 the room just watch out it's going to be
11:03 much much easier it already is to draft
11:07 legal agreements but we can't rely on
11:09 generative AI to Simply give us this
11:11 thing without checking it some of you
11:13 may have heard of that lawyer who did
11:16 that a couple years ago basically didn't
11:17 review the agreements there were some
11:20 errors he got fired so we might have
11:22 human in the loop you don't want to
11:25 basically take the output at face value
11:26 okay because the cost of making an error
11:27 is simply too
11:32 high third on the top left is creative
11:36 skills design marketing copywriting
11:37 these are things where it's hard to
11:41 evaluate what's truly better or worse
11:43 and so in some sense the design outputs
11:46 we get the social media content we get
11:48 as suggestions from generative AI pretty
11:50 good the cost of making an error there
11:53 not that high and finally we get to the
11:56 top right where we want to be very very
11:59 careful because this is like large
12:01 enterprise software integration you
12:03 don't want to go there pretty soon okay
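A rough way to operationalize this 2x2 is to map a task's data type and cost of error to an adoption stance. This is a minimal illustrative sketch of the framework as described in the talk; the function name and the quadrant recommendations are my paraphrase of the examples, not an actual tool:

```python
# Illustrative sketch of the 2x2 framework from the talk:
# data type (explicit vs tacit) crossed with cost of an output error
# (low vs high). Quadrant recommendations paraphrase the talk's examples.

def genai_recommendation(data_type: str, error_cost: str) -> str:
    """Map a task's position in the 2x2 matrix to a rough adoption stance.

    data_type:  "explicit" (files, text, numbers) or "tacit" (creative judgment)
    error_cost: "low" or "high" (cost of acting on a wrong output)
    """
    matrix = {
        ("explicit", "low"):  "automate now (e.g. high-volume customer support)",
        ("explicit", "high"): "adopt with a human in the loop (e.g. legal drafts)",
        ("tacit", "low"):     "use freely as a draft generator (e.g. marketing copy)",
        ("tacit", "high"):    "stay away for now (e.g. enterprise software integration)",
    }
    return matrix[(data_type, error_cost)]

print(genai_recommendation("explicit", "high"))
```

The point of the sketch is that the adoption decision turns on two cheap-to-answer questions, not on how intelligent the model is.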
12:06 or designing an aircraft now what does
12:08 it mean for Education let's actually
12:10 play this out I'm going to use our
12:13 example as an illustration if I'm
12:15 sitting at
12:17 Harvard basically we get when we open up the
12:21 website about 10,000 applications in the
12:24 first couple months for admission maybe
12:26 30,000 people who look at the website by
12:28 the way they have questions it's impossible
12:31 to speak personally and individually to
12:34 everyone who has a question this is
12:37 beautiful for chatbots to be able
12:40 to Simply respond again if there's an
12:43 error in the response it's okay I mean
12:45 these are people who are simply thinking
12:47 about applying and they might find
12:50 information in other ways secondly legal
12:52 contracts with food contractors we want
12:54 to be careful about human in the loop
12:56 thirdly designing social media content
12:58 when we go to the top left this is
13:00 something we can do far more efficiently
13:03 today with generative AI and finally I
13:04 can assure you we're not going to be
13:06 using this anytime soon for hiring
13:09 faculty or disciplinary actions against
13:11 students by the way think about this not
13:13 just for your organization think about
13:15 it for you individually so if I was to
13:19 do that responding to emails I get a lot
13:21 of emails every
13:24 day most of these emails are things that
13:26 are very standard Professor when are
13:28 your office hours where's the syllabus posted
13:31 by the way even in other cases where
13:34 students ask questions like Professor I
13:36 have two offers one from McKinsey one
13:38 from BCG Boston Consulting
13:41 Group the cost of an error is not that
13:44 high in my response you'll be okay or
13:46 I'm trying to decide whether to go to
13:48 Microsoft or Amazon you'll be okay okay
13:50 I'm just kidding by the way I can assure
13:51 you I respond to all those emails
13:54 individually but you get the
13:57 point writing a case study it takes us 9
13:59 months to write these famous Harvard
14:01 Business School case studies the head of
14:03 the MBA program last year said I want to
14:06 teach a case on Silicon Valley Bank
14:09 tomorrow what he did was go to ChatGPT
14:11 said write a case like Harvard Business
14:14 School with these five sections
14:15 financial information competitor
14:17 information regulatory information it
14:21 spits it out he then said please tweak
14:23 the information give me this data on the
14:25 financials talk about these competitors
14:27 he iterated it kept spitting out
14:29 information from beginning to end he
14:33 had a case study complete in 71
14:36 minutes um if you're not scared by the
14:39 way we are about what the potential here
14:42 is brainstorming a slide for teaching
14:44 there's a couple slides in this talk
14:45 where I took some pictures and I started
14:48 trying to resize it PowerPoint designer
14:50 simply threw up some suggestions saying
14:52 here's how you might want to do it in 1
14:55 second it didn't take me 10 15 minutes
14:57 to try and redesign these slides a
14:59 beautiful application for using this and
15:01 finally thinking about exactly how I
15:03 teach in the classroom or my research
15:05 direction I'm not going there anytime
15:08 soon I'd love you to think about a
15:11 couple things from this simple framework
15:14 number one we are obsessed with talking
15:16 about prediction errors from large
15:18 language models I think the more
15:20 relevant question is the cost of making
15:24 these errors meaning in some cases the
15:27 prediction error might be 30% but if the
15:29 cost of error is zero it's okay to adopt
15:33 it in other cases prediction errors
15:36 might be only 1% but the cost of failure
15:39 is very high you want to stay away so
15:41 stop thinking about prediction errors
15:42 let's start thinking about the cost of
15:43 Errors for
15:45 organizations secondly if you notice
15:47 what I've done I've broken down the
15:50 analysis from thinking about Industries
15:52 what's the impact of AI on banking or
15:56 education or retail into jobs and in
15:58 fact gone a step further and broken it
16:01 down into into tasks so don't ask the
16:04 question of what is AI going to do to me
16:06 ask the question which are the tasks
16:08 that I can actually automate and which
16:10 are the tasks I don't want to touch and
16:12 the third is I don't know about you in
16:15 my LinkedIn feed every single day I get
16:17 new information about the latest AI
16:20 models and where the intelligence
16:22 trajectory is going getting better and
16:24 better that's basically about the top right
16:27 cell I would say that's a red herring
16:29 for most organizations
16:31 because basically there's three other
16:33 cells where you can adopt it right now
16:36 and today with human in the loop okay so
16:38 that's just something I'd love you to
16:40 think about by the way we did this with Harvard
16:45 faculty where we interviewed 35 Harvard
16:47 faculty who were using GenAI deeply in their
16:50 classrooms those videos are up on the
16:52 web if you just type in Google
16:54 generative AI faculty voices Harvard you
16:56 see all these videos here are some
16:59 examples of what they were doing a
17:01 faculty co-pilot chatbot it's almost
17:04 like a teaching assistant that simulates
17:06 the faculty that answers simple
17:09 questions and is available to you
17:13 24/7 secondly one of the things that we
17:15 as faculty spend a lot of time thinking
17:17 about is designing the tests and the
17:20 quizzes and the assessments every year
17:22 and we've got to make it fresh because
17:24 we know our students probably have
17:26 access to last year's
17:28 quizzes large language models are
17:30 basically spitting this out in a couple
17:32 minutes and of course as individuals we
17:34 would refine it we're not going to just
17:36 take it at face value we refine it we
17:39 look at it but it's saving a lot of time
17:41 third when we're giving
17:44 lectures students often have questions
17:46 which they're too scared to ask live in
17:49 front of 300 students oh it's beautiful
17:51 if they can simply type in the questions
17:53 have GenAI summarize the questions and
17:55 put it up on a board The Faculty know
17:57 exactly what the sentiment is in the
17:59 classroom and where students are getting
18:01 confused by the way notice one thing
18:03 about all these
18:06 examples every single one of them is
18:09 about automating the mundane it's not
18:11 about saying let's rely on the
18:13 intelligence that's getting better and
18:15 better it's the left column of that
18:18 framework I was talking about so these
18:20 are ways that it's being used nowadays
18:21 in our
18:25 classrooms the third thing this premise
18:27 that bot tutors are unlikely to be as
18:29 good as the best instructors
18:32 we had a few colleagues at Harvard who
18:34 tested this for a course called Physical
18:36 Sciences 2 this is one of the most
18:38 popular courses and by the way the
18:40 instructors are very good in that course
18:41 they've been refining Active Learning
18:44 teaching methods for many years what
18:46 they did as an experiment was say for
18:49 half the students every week we'll give
18:51 them access to the human tutors for the
18:54 other half give them access to an AI bot
18:56 and by the way the nice thing about the
18:57 experiment is they flip that every
19:00 single week so some people always had
19:01 access to the humans some people had
19:03 access to the AI for that week but then
19:06 they'd flip the next week every
19:09 single week they tested your Mastery of
19:10 the content during that
19:13 week and what was interesting
19:17 was the scores of the students using the
19:20 AI Bots were higher than with the human
19:22 tutors and these are tutors who've been
19:25 refining their craft year in and year
19:27 out what was even more surprising is
19:29 engagement was higher
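The weekly flip described above is a classic AB/BA crossover design, where every student serves as their own control. A minimal sketch of how such an assignment schedule could be generated (hypothetical code; this is not the actual study's procedure):

```python
import random

def weekly_crossover(students, weeks, seed=0):
    """Assign each student to 'human' or 'ai' tutoring per week.

    Students are split into two halves, and the two halves swap arms every
    week (an AB/BA crossover, as described in the talk), so each weekly
    mastery test compares both arms within the same cohort.
    """
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)                      # random split into two halves
    half = len(shuffled) // 2
    group_a, group_b = shuffled[:half], shuffled[half:]
    schedule = {}
    for week in range(weeks):
        # even weeks: A -> human, B -> ai; odd weeks: flipped
        a_arm, b_arm = ("human", "ai") if week % 2 == 0 else ("ai", "human")
        for s in group_a:
            schedule.setdefault(s, []).append(a_arm)
        for s in group_b:
            schedule.setdefault(s, []).append(b_arm)
    return schedule

sched = weekly_crossover(["s1", "s2", "s3", "s4"], weeks=4)
```

Because each student alternates arms, differences in weekly mastery scores can be attributed to the tutoring condition rather than to which students happened to be in which group.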
19:32 by the way this is a first experiment
19:35 the only point is we better take this
19:38 seriously uh next will it level the
19:40 playing field in education part of the
19:44 premises because everyone has access any
19:47 individual in a village a low-income
19:49 area is basically going to have access
19:51 to the same technology as those who are
19:53 in Elite universities and this is going
19:55 to level
19:58 everything there's a possibility it
20:01 might go exactly the other way which is
20:02 the benefits might accrue
20:04 disproportionately to those who already
20:08 have domain expertise why do I say this
20:10 think about a simple example when you
20:12 have knowledge of a subject and you
20:16 start using generative AI or ChatGPT the
20:18 way you interact with it asking it
20:21 prompts follow on prompts you're
20:23 basically using your judgment to filter
20:25 out what's useful and what's not useful
20:27 if I didn't know anything about the
20:29 subject I basically don't know what I
20:31 don't know so in some sense the prompts
20:34 are garbage in garbage out by the way
20:36 this is being shown in different studies
20:38 there was a meta-analysis summarized by
20:41 The Economist a couple of weeks ago
20:42 where they basically talk about
20:43 different kinds of studies that are
20:46 showing for certain domains and
20:50 expertise the gap between high
20:51 performance high-knowledge workers and
20:53 no-knowledge workers is actually
20:56 increasing we better take this seriously
20:57 why and this is not the first time this
20:59 has happened
21:01 12 years ago there was a big revolution
21:04 in online education Harvard and MIT got
21:07 together created a platform called edX
21:10 where we offered free online courses to
21:12 anyone in the world by the way they
21:14 still exist if you want to take a course
21:15 from Harvard for
21:19 free pay $100 for a certificate you can
21:21 get it on virtually every subject what
21:24 happened as a result edX reached 35
21:26 million learners as did Coursera and
21:29 Udacity and other platforms
21:32 what was beautiful is roughly 3,000 free
21:36 courses the challenge was completion
21:39 rates less than 5% Why by the way if
21:40 you're used to a boring lecture in the
21:43 classroom the boring lecture online is
21:45 10 times worse so there's virtually no
21:47 engagement people take a long time to
21:49 complete or may not complete but here's what's
21:53 interesting the vast majority 75% of
21:55 those who actually completed these
21:59 courses already had college degrees
22:01 meaning the educated rich were getting
22:04 richer now think about that that's very
22:07 sobering why is that because those are
22:10 people used to curiosity intrinsic
22:11 motivation by the way they're used to
22:13 boring lectures they've gone to college
22:15 but this has big implications for how we
22:17 think about the digital divide so I just
22:19 want to keep that in your mind and the
22:22 last thing I just want to say is rather
22:23 than going out and trying to create
22:25 tutor Bots for as many courses as
22:27 possible I think what we really need to
22:29 do is have a strategic conversation
22:32 about what's the role and purpose of
22:35 teachers given the way the technology is
22:38 proceeding the one thing I will say here
22:40 is that when we think about what we
22:43 learned in school okay think back think
22:44 back many many
22:47 years we learned many
22:51 things tell me honestly how many of you
22:53 have used geometry proofs since you
22:56 graduated from high
22:59 school three people
23:01 why did we learn state capitals and
23:05 world capitals of every single
23:09 country okay uh foreign languages and by
23:11 the way this is Italian devi is not a
23:15 goddess devi in Italian means you must
23:17 okay they have
23:19 similarities um why did we learn foreign
23:21 languages when we think about business
23:23 Concepts in our curriculum I often get
23:25 my students who come back 10 years later
23:27 and say those two years were the most
23:29 transformative years of my life I
23:31 often asked them what were the three
23:33 most important Concepts you learned they
23:35 said we have no idea I'm like no no okay
23:38 give me one no no we have no idea I'm
23:39 like so why do you say this was
23:41 transformative the point simply being
23:44 they're saying this was transformative
23:46 not because of the particular content
23:48 but because of the way we were learning
23:49 we were forced to make decisions in real
23:52 time we were listening to others we were
23:54 communicating what are they saying
23:56 they're saying that the real purpose of
23:58 case method was listening and
24:01 communication the real purpose of proofs
24:04 was understanding Logic the real purpose
24:07 of memorizing state capitals was
24:08 refining your memory by the way that
24:10 example there is the poem If by Rudyard
24:12 Kipling some of you might remember this
24:15 from school it goes something like this
24:16 if you can keep your head when all about
24:17 you are losing theirs and blaming it on
24:20 you uh I have PTSD because my nephew
24:21 when he was reciting this to me
24:23 preparing for his 10th grade exams I was
24:25 like what the heck are you doing but it
24:28 was basically refining memory skills and
24:29 for languages it was just learning
24:32 cultures and syntax when we go deep down
24:35 and think about what we were actually
24:37 teaching I think that probably gives us
24:40 a little more hope because it means it
24:41 doesn't matter if some of these things
24:43 are probably accessible through
24:46 GenAI when calculators came along we
24:47 thought it's going to destroy math
24:49 skills we're still teaching math
24:51 thankfully 50 years later and it's
24:53 pretty good so this is something that I
24:54 think is going to be an important
24:55 strategic
24:57 conversation this is the slide I'd love
24:59 for you to keep in mind which is
25:01 basically everything I've just said if
25:02 you want to take a screenshot this is
25:04 the slide to take a screenshot thank you
25:09 all so much um and I hope to be in
25:12 touch and keep this
25:15 here at HBS I took Professor Anand's
25:16 class on Economics for Managers
25:18 listening to him feels like being back
25:19 in class fortunately he didn't cold-call
25:21 anyone which is terrific so thank you
25:24 for that now I have a few questions
25:27 we've got young children and you've got
25:30 so much of knowledge available now on
25:33 chat prompts what's your advice to
25:34 everyone who's got young children and
25:36 are wondering about what should they be
25:38 teaching their children so that when
25:40 they grow up and when we don't know what
25:41 the actual capabilities of these
25:43 machines are that what they've learned
25:44 is still
25:47 useful how old are your kids Rahul so my
25:49 son is nine and my daughter is five what
25:51 are you telling them right now now I
25:53 want to learn from you and I know we're
25:54 telling them a lot of stuff the good
25:56 bad ugly I don't I'm trying to refine
25:57 that and give them a framework of what
25:59 we should be telling them so there's two
26:01 things so I think first of all this is
26:02 probably one of the most common
26:05 questions I get uh by the way it's
26:08 really interesting that the tech experts
26:09 and there was an article in the Wall
26:12 Street Journal about this 10 days ago
26:13 are basically telling their kids don't do computer
26:18 science that skill at least basic
26:20 computer programming is
26:23 gone advanced computer science advanced
26:25 data analysis if you want to do that
26:27 that's going to be fine what are they
26:28 telling their kids to learn they're
26:30 telling their kids to learn how to teach
26:32 dance they're telling their kids to
26:35 learn how to do plumbing they're telling
26:37 their kids to learn about the
26:40 humanities why are they saying that
26:42 implicitly they're saying what are those
26:46 skill sets that are robust to machine
26:49 intelligence now I will say it is
26:50 virtually impossible to predict that
26:52 given the pace at which this improvement
26:53 is
26:55 occurring I probably have a slightly
26:57 different kind of answer by the way my
26:59 daughter's majoring in Psychology
27:01 without me telling her anything so the
27:02 kids I think know basically where this
27:04 is going but the one thing I'll say
27:06 Rahul is I don't know when you started
27:08 out college what were you majoring in
27:10 journalism journalism you started out
27:13 with journalism okay that's enlightened
27:15 I started out doing
27:17 chemistry and then the reason I switched
27:20 to economics was probably like many of
27:23 you there was one teacher who inspired
27:25 me and that's what made me
27:29 switch and I would say to kids follow
27:32 the teachers who inspire you and the
27:34 reason is if you can get inspired and
27:36 passionate about a subject that's going
27:38 to build something that's going to be a
27:40 skill that would last all your life
27:41 which is
27:44 curiosity which is intrinsic motivation
27:45 we talked about in the last session this
27:49 is no longer about learning episodically
27:51 it's about learning lifelong and that's
27:52 I think going to be the most important
27:54 skill in the way that Indian families
27:56 operate and as do so many Asian families
27:58 too parents want to equip their children
28:00 with the skills that are likely to be
28:03 most useful when they grow up so it used
28:06 to be say engineering and doctors back
28:10 in the day then uh IT a few years ago so
28:12 if you were looking ahead what do you
28:14 think the children should be learning so
28:16 they acquire skills which are useful in
28:18 the job market years down yeah I think
28:20 that's honestly being too
28:22 instrumental as I said 10 years ago a
28:24 lot of my students were talking to me
28:25 and saying what should I major in I
28:27 never told them computer science if I
28:28 told them that that I would have
28:31 regretted it but I genuinely mean this
28:33 that's looking at it things too narrowly
28:35 what I would say is think about things
28:38 like creativity judgment human emotion
28:41 empathy psychology those are things that
28:43 are going to be fundamentally important
28:45 regardless of where computers are going
28:47 by the way you can get those skills
28:49 through various subjects it doesn't
28:51 matter it's not a one-to-one mapping
28:53 between those skills and a particular
28:55 topic or disciplinary area this is partly
28:57 what I'm saying really think about where
28:58 their passion is how do we teach our
29:00 children how to think because
29:02 everything's available on Google
29:05 Copilot ChatGPT you can just ChatGPT
29:08 it so joining the dots giving them a
29:10 framework to be able to interpret
29:12 analyze and think how do you tell them
29:16 that when the easiest thing is go yeah
29:18 so uh it's a good question just two
29:22 things on that the first is there was an
29:23 interesting study done by colleagues at
29:26 MIT recently where they had groups of
29:29 students and they were asked to
29:31 undertake a particular task or learn
29:34 about a topic some students were given
29:36 AI chatbots some students were only
29:39 given Google search with no
29:42 AI what they found is the students with
29:44 access to AI
29:47 intelligence learned the material much
29:50 faster but when it came time to apply it
29:52 on a separate test which was different
29:55 from the first one they found it much
29:57 harder the students who learned the
29:59 material through Google search with no
30:01 other access took
30:04 longer but they did much better on those
30:07 tests why is that part of the issue is
30:11 learning is not simple it takes effort
30:13 okay and so part of the issue is you
30:15 can't compress that
30:19 effort um the harder it is to learn
30:21 something the more likely you'll
30:25 remember it for longer periods of time
30:26 and so I think for me the big
30:28 implication is when I tell my students
30:30 look all these technologies are
30:34 available it depends on how you use them
30:38 my basic approach to them is just saying
30:40 study because if you get domain
30:42 expertise you will be able to use these
30:45 tools in a much more powerful way later
30:48 on uh so in some sense this goes back to
30:51 the notion of agency it's like we can be
30:53 lazy with tools and Technologies or we
30:57 can be smart it's all entirely up to you
30:59 but this is my advice you know some of
31:02 my friends in Silicon Valley have the
31:03 toughest controls on their children when
31:06 it comes to devices you know we look at
31:07 how much time our children can spend on
31:10 their iPads or TV we're far more lenient
31:11 and they're the guys who are actually in
31:13 the middle of the devices and they're
31:15 developing them and they know the
31:17 dangerous side effects now those devices
31:18 are also the repository of knowledge
31:20 which is where you can learn so much
33:22 from so every parent has his
33:24 own take on how much time children can
31:26 spend but as an educator how do you look
31:28 at this device addiction just spending
31:30 far more time picking up some knowledge
31:31 but also wasting a lot of time yeah I
31:34 think I mean there's a Nuance here which
31:36 is basically what they're doing is not
31:39 saying don't use devices they're saying
31:41 don't use social media and this goes
31:43 back again to one of the things we were
31:44 talking about
31:48 earlier uh we have gone through a decade
31:50 where for things like misinformation
31:52 disinformation and so on there is no
31:54 good solution as far as we know
31:56 today there's also various other kinds
31:57 of habits and so on that are not going
31:59 to improve that's partly what they're
32:00 saying stay away from they're not saying
32:02 stay away from computers we can't do
32:04 that and in fact you don't want to do
32:05 that but there's a Nuance in terms of
32:07 how we interact with tools and
32:09 computers that we just want to keep in
32:11 mind when we think about guardrails
32:13 right are you seeing your students
32:14 getting more and more obsessed with
32:16 their devices and how does that impact them
32:19 what are you trying to do to get them to
32:21 socialize more you know to spend more
32:23 time with each other and not be stuck on
32:24 their phones or that yeah it's a very
32:26 interesting question so in some sense
32:28 last year we had a conference at Harvard
32:30 we had 400 people from our community
32:33 attend the conference and some of our
32:35 colleagues were saying we should have a
32:37 policy of laptops down no laptops in
32:39 class put away
32:41 devices I was coming in for a session
32:44 right afterwards but part of the reason
32:45 I wanted them to take out their mobile
32:48 phones was I had two or three polls
32:50 during my lecture where I wanted them to
32:52 give me their input so I said mobile
32:55 phones out okay and this was sort of
32:57 crazy but the story illustrates
32:59 something interesting which is these
33:01 devices for certain things can be really
33:03 powerful it can turn a passive learning
33:05 modality into an active learning
33:07 modality where every single person is
33:09 participating we don't want to take that
33:11 away what we want to try and deal with
33:14 is people playing games while you're
33:16 lecturing now by the way me personally I
33:17 just put it on
33:19 myself if I'm not exciting enough or
33:21 energizing enough for my students to be
33:23 engaged use your mobile phones that's on
33:28 me okay but that's partly the challenge
33:30 show of hands how many felt engaged during
33:36 this session many okay so that is why
33:40 agentic AI and chatbots can
33:42 never do what professors can right so uh
33:44 we'll take some questions KH has a
33:45 question KH go
33:48 ahead hi professor uh you mentioned that
33:50 one of the things that we should work on
33:54 to teach our children is empathy how do
33:56 you actually teach empathy in a
33:59 formal education system or does this
34:03 just go back to parents and
34:06 family it's a hard
34:09 question um in fact this is by the way
34:11 one of the most important issues we're
34:15 facing today on campuses it's related in
34:17 part even in higher education not just
34:19 younger kids when we talk about
34:21 difficult conversations on
34:24 campus part of the reason we're facing
34:28 those issues is because people are
34:30 intransigent it's like I don't care what
34:33 you say I'm not going to change my mind
34:35 one of the things we introduced a couple
34:37 years ago on the Harvard application for
34:40 undergraduates is a question that says
34:42 have you ever changed your mind when
34:44 discussing something with anyone else
34:46 okay or something to that effect but
34:48 that's basically saying how open-minded
34:50 are we that's one version of empathy
34:51 there's many other
34:53 dimensions I think part of the challenge
34:58 is that we don't teach that in schools
34:59 right we don't teach that formally in
35:01 schools which is partly why there's this
35:04 whole wave now of schools not just in
35:06 other countries but also in India which has
35:08 started to talk about how do we teach the
35:10 second curriculum the hidden curriculum
35:11 how do we teach those social and
35:14 emotional skills the book of life so to
35:18 speak and I think I mean it's not rocket
35:20 science to say this it starts at home
35:22 right like that's basically what we do
35:25 with our kids every single day um but
35:26 that's something that's I think going to
35:28 become fundamentally more important
35:30 partly because of the reasons
35:32 I talked about Dr Sanjiv has a hand
35:40 up yes Doctor wonderful wonderful listening
35:43 to you um just with regards to AI and
35:45 technology I've always said that uh AI
35:48 and digital technology is not an
35:51 expenditure It's actually an investment
35:53 so very quickly if you allow me just 60
35:55 seconds in healthcare it gives you
35:57 better clinical outcomes it has
36:00 decreased hospital-acquired infections
36:03 once the number one cause of death
36:05 in many hospital chains to practically less than
36:08 1% so it gives you a safer outcome it
36:10 gives you a better patient experience
36:12 the turnaround of beds is a
36:14 lot quicker and more importantly it
36:17 gives you better operational excellence
36:19 so all the hospitals and medical
36:20 facilities that have not
36:23 embraced it as yet will find it
36:24 difficult to operate in the present
36:27 environment what AI and digital
36:30 technology has made us learn as doctors
36:32 is that data is the new gold if you
36:34 don't analyze data if you don't see what
36:36 your results are if you don't see where
36:37 your clinical outcomes are then you
36:40 can't go forward so AI is what is in the
36:42 future for us all of us thank you that's
36:44 more in the form of an observation let me just
36:47 elaborate on that in two ways one is I
36:50 think I would just go back it's useful to
36:53 contextualize AI right like right now we
36:55 often get obsessed by the latest
36:56 technology when we think about
36:59 upskilling and reskilling in education
37:01 there's a revolution that started a
37:04 decade ago as I alluded to there's
37:06 basically 3,000 courses available to all
37:09 of you today on any subject so the
37:11 notion of let's wait for AI no no no
37:13 it's already there my father-in-law
37:15 who's 92 years old during covid he said
37:17 Bharat what should I do I said we have all
37:19 these courses from Harvard
37:21 available in the last two years or three
37:26 years he's completed 35 courses wow okay
37:29 at the age of 92 wow wow by the way he's
37:31 paid $0 for that because he said I don't
37:33 need a certificate and so I told him
37:34 you're the reason we have a business
37:37 model problem okay but that's one
37:40 aspect the second aspect is sort of
37:42 thinking about where you're going I
37:44 think you're exactly right Sanjiv which is
37:46 every organization is going to have
37:48 low-hanging fruit the one thing I just
37:50 caution is there's going to be a paradox
37:54 of access meaning if every organization
37:56 every one of your peers has access to
37:58 the same technology as you
37:59 it's going to be harder for you to
38:02 maintain competitive advantage that's a
38:04 fundamental question okay this is just a
38:07 basic observation so I just want to sort
38:09 of mention that but you're absolutely
38:10 right about the low-hanging fruit in
38:12 medicine and Healthcare okay Toby Walsh
38:14 has a question or an observation and
38:16 then there are lots of hands up okay I
38:18 don't frankly know what to do because
38:19 we're also out of time so let this just
38:21 be where we conclude one of the greater
38:22 challenges especially in higher
38:24 education is the cost has gone through
38:27 the roof are you optimistic that AI is
38:31 going to be able to turn that around so
38:34 again I'll just go back uh to what's
38:37 happened in the last decade as I said
38:39 you can now get access to credentials
38:42 and certificates at a minimal cost
38:44 compared to the cost of getting a degree
38:46 okay just to put it in perspective we
38:49 have 177,000 degree students every year
38:51 who come to Harvard they are paying a
38:53 lot of money those who need financial
38:55 aid get financial aid by the way can
38:57 anyone guess how many students we have
39:00 touched over the last
39:03 decade 10 times 100 times that it's
39:04 about 15
39:07 million that is not a story we publicize
39:09 but that's a story about the number of
39:11 students who've actually taken a Harvard
39:13 course or enrolled in a Harvard course
39:14 so in some sense I think where we are
39:17 today is the marginal cost of providing
39:20 education is very very low what we need
39:23 for that is not incremental Improvement
39:25 on the existing model we need to
39:28 basically break it apart and say how do
39:30 we put it back together again in a way
39:33 that makes sense for everyone um there's
39:35 an organization that we just started at
39:37 Harvard called Axim jointly with MIT
39:38 with the endowment from the sale of the
39:41 edX platform whose only function is to
39:43 increase access and equity in education
39:45 and by the way their focus is on 40
39:47 million people in America who start
39:49 college but never completed it not just
39:51 because of cost for many other reasons
39:54 right in sum the potential to reduce
39:57 the cost is massive but it's going to
39:59 require leadership and strategy this
40:01 gentleman here has a question can
40:12 please uh so uh earlier it was okay use
40:14 AI and it will summarize and help you in
40:16 productivity but with the latest OpenAI
40:19 models like o3-mini and all that they
40:21 are doing reasoning which is much better
40:24 than humans so the people who are not
40:28 using it are at a disadvantage
40:30 so isn't it right that the students use
40:35 Ai and uh be familiar with it and uh be
40:37 be up to speed with that rather than not
40:39 using it and be at a disadvantage to
40:41 other students yeah absolutely there's
40:44 no question about that by the way I sit
40:46 at Harvard overseeing the generative AI
40:48 task force for teaching and learning and
40:50 we have 17 faculty the most interesting
40:53 conversations I've had about adoption
40:54 are with our
40:57 students now when we understand
40:59 that behavior it just throws up things
41:01 that we wouldn't even have thought about
41:03 I'll ask you one question we had a
41:04 sandbox that we created for the entire
41:06 Harvard community which was a safe and
41:08 secure sandbox giving them access to
41:10 large language models as opposed to
41:12 using public OpenAI the adoption rate
41:15 amongst our faculty was about 30 to 35% in
41:17 the first year what do you think it was for the
41:23 students it was about
41:26 5% so we were surprised when we went to
41:28 them we said what's going on are you
41:30 familiar with the sandbox they said yeah
41:32 we are we said are you using it they
41:34 said no we said are you using AI in any
41:36 way yeah yeah we have access to ChatGPT
41:38 we have our own private accounts there
41:40 so we're like wait wait why are you
41:42 not using the secure Harvard
41:49 sandbox they said why would we use something
41:51 where you can see what we're
41:54 inputting now by the way as faculty
41:56 members if the number one question we
41:58 talk about with generative AI is oh
41:59 we're worried about cheating and
42:01 assessments our students are listening
42:03 to us they're like oh if that's what
42:04 you're worried about we're not coming
42:06 anywhere close to you okay so part of
42:08 the point is the students are far ahead
42:10 of us in terms of using this they're
42:11 using it to save time they're using it
42:14 for engaging in deep learning we better
42:15 understand that ourselves to figure out
42:17 what we can do
42:19 jointly brilliant presentation just wanted
42:21 to understand one side of the spectrum
42:24 you have all the you know the positives
42:26 what's on the other side what risk do
42:28 you think is there on the other side it
42:30 starts coding on its own gets out of
42:32 hand is that a possibility what's the
42:33 possibility so so the risks are the
42:35 things I talked about towards the end
42:38 okay which is number one we put our heads
42:41 in the sand as institutions and we don't
42:43 take this seriously that's the first
42:46 risk the second risk is lazy learning
42:48 as I would call it now again that's
42:51 agency it partly depends on you as a
42:53 student do I want to be lazy do I not
42:56 want to be lazy the third risk is
42:57 everything we were talking about in the
42:58 previous session with respect to
42:59 misinformation and
43:02 disinformation the fourth big risk is
43:04 asking the fundamental question what's
43:06 our role as teachers and I'll just share
43:08 one anecdote in closing there's a
43:09 colleague at another school who called
43:12 me and said my students have stopped
43:15 reading the cases they're basically
43:16 inputting the assignment questions into
43:18 generative AI and by the way they're so
43:20 smart they're saying give me a quirky
43:23 answer I can use in class okay the
43:25 assessments are compromised and get this
43:27 the faculty have stopped reading cases
43:29 they're inputting the cases and
43:32 basically saying give me the teaching
43:34 plan that's the
43:37 downside you know we met on a flight
43:39 from Delhi to Mumbai and we had a long
43:40 conversation about the future of
43:42 Education we've been able to in the past
43:44 45 minutes recreate the magic of that
43:46 conversation here on stage can we have a
43:47 very warm round of applause for the
43:49 professor for making the effort of
43:52 coming here and for joining us and for
43:54 delivering this master class thank you
43:56 absolute pleasure thank you so much