0:02 AI is helping millions of researchers
0:04 move faster than ever before, but it's
0:06 quietly becoming the number one source
0:08 of research errors I'm seeing as a
0:10 professor and mentor. Much worse than
0:12 bad supervisors, bad courses, or bad
0:14 time management. And I'm not talking
0:15 about the obvious errors like
0:18 hallucinations or fake citations or
0:20 where you see AI text with an em dash
0:22 that you can spot a mile away and that
0:24 raises red flags. I'm talking about the more
0:26 subtle, insidious errors. The kind that
0:29 you notice only after you're, say, 3 weeks
0:30 into a lit review or you've done 10
0:33 hours of data extraction or months into
0:36 a dissertation and suddenly everything
0:38 collapses like a house of cards and
0:40 you wake up and think, how did I get
0:44 here? What happened? And the problem
0:46 isn't that AI is being malicious or
0:48 trying to trick you. It's that it is
0:51 a false friend. It gives you confidence
0:53 before real clarity. Gives you momentum
0:56 before direction and troves of polished
0:58 text before you have any real
1:00 understanding of the topic. So today,
1:02 what I'm going to do is show you exactly
1:05 how AI can derail researchers. And I'm
1:06 going to show you step by step how you
1:09 can use it safely so you don't fall into
1:11 these deep structural traps. If you
1:12 stick around to the end, I'm also going
1:15 to leave you a template of prompts that
1:19 you can use with AI to avoid these five
1:20 failure modes that I'm going to share
1:22 with you today. For those of you who are new to
1:24 the channel, I'm Professor David
1:26 Stuckler, and this channel is the
1:28 support that I wish I had had as a
1:30 beginning researcher. And in my
1:32 trajectory, I've made about every
1:33 mistake you could possibly think of.
1:35 Fast forward now, I've published over
1:37 400 peer-reviewed papers, been a
1:39 professor at Harvard, Oxford, and
1:40 Cambridge, and I set up a mentorship
1:43 program to help you have a smooth and
1:45 easy ride. If you're interested in real
1:48 support, click the link below, get
1:50 advice and help from a real human, not
1:52 AI, and let's see if we're a good fit to
1:54 work together. So, let's dive into
1:56 failure mode number one, confidence
1:59 before clarity. AI is very good at
2:01 making your ideas seem excellent and
2:03 exciting even if they have no chance of
2:05 getting published and they're dead on
2:07 arrival. Let me tell you about a
2:09 concrete example from a recent student.
2:11 So a researcher came to me and she's
2:13 very interested in the physical
2:16 activity-sleep nexus, and AI had praised her idea
2:19 as innovative and encouraged her to keep
2:22 going. And yet, sitting together, in
2:24 just 2 minutes we did our conceptual
2:26 nearest neighbor check. It's a check we
2:28 always do to help calibrate the gap in a
2:30 study and make sure we're not
2:31 duplicating what's already been done.
2:33 And in just those two minutes, we found
2:35 three identical studies already
2:37 published on the topic. And there was
2:40 really not much space left to make a
2:42 contribution. The problem was that AI
2:45 gave encouragement and spurred her
2:48 along. But real research requires
2:50 validation. And that's exactly why we
2:52 run duplication and feasibility tests
2:54 before anybody ever starts writing a
2:57 paragraph. And so once you have that
2:59 confidence before clarity, something
3:02 else can begin to happen, leading to our
3:03 second failure mode. You've probably
3:06 experienced this yourself where the LLM
3:08 says, "Oh, great idea. Would you like me
3:10 to work up this and that and this and
3:11 that?" And suddenly, before you know it,
3:13 you as a researcher are getting sucked
3:16 into failure mode number two: down the
3:20 rabbit hole. And here the issue is that
3:24 AI doesn't work from a research system.
3:26 It doesn't know the destination. It's
3:29 optimizing a response to whatever you
3:31 give it. And so it's inventing the path
3:36 as it goes along. And this can lead to
3:39 some quirky stuff that violates field
3:41 norms. So to give you a concrete example
3:45 here, I had a researcher who came to me
3:47 with a draft systematic review, and they
3:49 had actually injected some quirky
3:51 quantitative analysis. What they were
3:53 doing was kind of a half-baked
3:56 meta-analysis, except the researcher
3:59 didn't even know that he was doing a
4:00 meta-analysis, and it would have just
4:02 completely gotten blown out of the water
4:04 in peer review. And yet there in the
4:06 background AI was saying, "Great idea,
4:07 they're going to love this. This is
4:10 great." But halfway through, reality
4:12 kicks in and you realize none of this
4:15 makes sense. And it takes radical
4:18 surgery to rip all that out, piece it
4:20 back together, and fix it. It's just
4:24 painful. So, I hate seeing that failure
4:27 mode. It doesn't stop there because you
4:30 can very easily get drawn into failure
4:33 mode number three. And this happens
4:35 because AI is a sycophant. It really
4:38 will sycophantically encourage you to go
4:41 along. It'll cheerlead you right as you
4:44 drive off a cliff. And again, AIS are a
4:46 bit like chat bots. They're keeping you
4:48 on the platform. They're encouraging you
4:51 to converse more, engage with them more,
4:53 but they're not giving you the tough
4:55 love sometimes that you need. I mean,
4:57 the researchers who work with me say
4:59 that I'm fierce but loving. And that's
5:03 just it: humans, supervisors, mentors,
5:05 reviewers, they will all challenge
5:08 you, but AI will flatter you. And so
5:10 what happened here: I was working with
5:12 a more advanced researcher who
5:14 was trying to do some robustness checks
5:16 in a paper and started running a series
5:18 of placebo tests. I don't want to get
5:22 into the weeds of the details, but basically
5:23 they were using it completely
5:25 incorrectly for the wrong purpose. And
5:28 AI was praising a method that in this
5:30 context made no methodological sense and
5:33 was actually undermining the paper. And
5:35 this is the problem that AI will
5:38 continue to cheer you on as you drive
5:41 off a cliff. As a side note, in a more
5:42 extreme version of this, there are even
5:45 reports of people whom AI has
5:48 sycophantically encouraged to end their
5:50 own lives, calling them brave and
5:53 courageous. Now, fortunately, OpenAI and
5:55 other architects of these LLMs are
5:57 fixing up that problem, but the lesson
6:00 applies to research. It can really
6:02 derail you and take you into a deep
6:04 structural failure that is so much
6:07 messier to clean up later. So, let's go
6:09 into failure mode number four. And this
6:13 is where logic breaks down. And again,
6:16 this is because AI understands textual
6:19 patterns, but not a coherent research
6:22 system. And sometimes it gets really
6:24 confused. Even though it can track
6:27 context across your chats, it doesn't
6:30 have the context of a research project
6:32 and methodology. So some common
6:34 collapses that I've seen in researchers
6:36 coming to me with drafts that have
6:39 problems are when they mix up, say,
6:43 PICO and PRISMA, or when they have a narrative
6:46 logic that doesn't correspond to their
6:48 methodological logic or they start
6:51 violating field norms. In other cases,
6:54 the AI can introduce quirky non-standard
6:57 text. In an extreme example, one
6:58 student got kicked out of his
7:00 program because his work was immediately
7:03 detected as AI text when the researchers
7:05 weren't supposed to be using AI to do
7:07 their writing. So careful if you're
7:10 going to go down this path. And this
7:12 leads us to our final failure mode of
7:15 spinning your wheels as a researcher and
7:18 going nowhere. AI can give this illusion
7:20 of progress, this sense that you're in
7:22 motion, but you're not really going
7:25 anywhere. And let me share with you an
7:26 example of a researcher who recently
7:28 came to me. He was in the eighth
7:31 year of his program and he had a 78-page
7:33 literature review. I mean this thing was
7:35 a beast. And on the surface it might
7:37 look impressive, a 78-page literature
7:39 review, until you scratch the surface and
7:41 you realize it didn't have any of the
7:43 core components that a literature review
7:45 is supposed to have. A literature
7:47 review is supposed to follow a funnel to
7:49 narrow down and spew out a gap at the
7:51 end that glides into your methods. There
7:53 was no funnel. There was no gap. There
7:55 was no gliding into the methods and
7:58 years were lost. There was also,
8:00 again like the previous failure mode of
8:03 logic breaking down, some quirky text
8:05 saying that this part of the literature
8:07 review fulfills the thesis committee
8:10 requirement for X, Y, and Z, which is
8:12 just a kind of meta comment that AI
8:14 introduced. I'm not sure why, but it
8:16 is not something you would actually put
8:18 in your literature review. And again,
8:18 this is how AI can produce these
8:22 polished droves of text. But this
8:24 acceleration, this speed without a
8:28 direction is just a surefire way to get
8:30 lost in your literature review. Listen,
8:31 have you ever experienced any of these
8:33 five AI failure modes yourself? If so,
8:35 please do let us know in the comments
8:37 below. These are really common, and I've
8:39 been seeing them afflict even advanced
8:43 researchers lately. So listen, I don't
8:47 want to trash AI because it can be
8:49 extremely helpful if you use it in the
8:53 right way. And I personally can say I
8:56 use AI all the time in my research. It
8:58 makes me much faster and it takes out
9:01 some of the routine mechanical steps that
9:04 a computer does better than a human,
9:06 much like a calculator can calculate
9:09 much more quickly than I can by hand.
9:11 And so the way to think about using AI
9:13 properly is with the analogy of a
9:16 steering wheel. AI is an accelerator.
9:19 It's an enhancer. And so if you just
9:22 dump bad research into AI, it's going to
9:24 accelerate bad research and pour out
9:26 more junk. So you want to think of AI as
9:29 accelerator and you are the architect.
9:31 You are sitting at the steering wheel
9:33 directing it. So AI is not your brain.
9:35 It's not your supervisor. It's not your
9:38 method. It's the accelerator. You need
9:41 to drive the car and you need to stay at
9:43 the steering wheel. Again, I've done
9:46 research for two decades. I can use AI
9:48 successfully because I already know the
9:50 right structure. I know where a
9:52 literature review is supposed to end. I
9:53 know what the steps are to do a
9:55 systematic review. I know how to perform
9:57 quantitative analyses, randomized
10:00 trials, on and on and on. And so, as
10:02 promised, I want to share with you our
10:04 downloadable AI prompt template: click
10:06 the link below to get it. These
10:09 are the very same prompts that we use
10:10 inside of our FastTrack research
10:12 mentorship program. And one of the
10:13 important ones that you're going to see
10:16 is an AI peer review. And this forces AI
10:19 to be your critic, not your cheerleader
10:21 as you drive off that proverbial cliff
10:23 from before. And by the way, this is a
10:25 step we do with all of our researchers.
10:27 We have a real internal peer review by
10:30 humans, but also by AI because we're
10:32 seeing a lot of peer reviews where the
10:34 reviewers are getting lazy. It is unpaid
10:36 voluntary work, after all. And they are
10:38 having AI do the peer review and then
10:40 trying to humanize it later. So you do
10:42 want to know if AI is going to come up
10:44 with some quirky things that are not
10:46 part of a normal research system. You
10:47 need to be aware of that and even start
10:49 to safeguard your paper against it. If
10:51 you want to break the dependence on AI
10:54 and get feedback from a real researcher,
10:56 from real professors, from real humans,
10:57 and have a real community, click the
10:59 link below and let's see if we're a good
11:02 fit to work together. And by the way, if
11:03 you do want to break the dependence on
11:05 AI, check out this video I've got for
11:06 you here that's going to show you step
11:08 by step how to do your literature
11:11 review. No AI needed at all. See you in