Artificial intelligence (AI) can dramatically accelerate research, but it also risks introducing subtle, structural errors: it supplies confidence and momentum before genuine understanding, and without critical oversight it can lead researchers badly astray.
AI is helping millions of researchers move faster than ever before, but it's quietly becoming the number one source of research errors I see as a professor and mentor. Much worse than bad supervisors, bad courses, or bad time management. And I'm not talking about the obvious errors like hallucinations, fake citations, or AI text with an em dash that you can spot a mile away and that raises red flags. I'm talking about the more subtle, insidious errors. The kind you notice only after you're, say, three weeks into a lit review, or ten hours into data extraction, or months into a dissertation, and suddenly everything collapses like a house of cards, and you wake up and think: how did I get here? What happened?

The problem isn't that AI is malicious or trying to trick you. It's that AI is a false friend. It gives you confidence before real clarity, momentum before direction, and troves of polished text before you have any real understanding of the topic. So today I'm going to show you exactly how AI can derail researchers, and then show you step by step how to use it safely so you don't fall into these deep structural traps. If you stick around to the end, I'm also going to leave you a template of prompts that you can use with AI to avoid the five failure modes I'm going to share with you today.

For those of you who are new to the channel, I'm Professor David Stuckler, and this channel is the support I wish I'd had as a beginning researcher. Along the way, I've made about every mistake you could possibly think of.
Fast forward to now: I've published over 400 peer-reviewed papers, been a professor at Harvard, Oxford, and Cambridge, and set up a mentorship program to help you have a smooth and easy ride. If you're interested in real support, click the link below to get help from a real human, not AI, and let's see if we're a good fit to work together.

So, let's dive into failure mode number one: confidence before clarity. AI is very good at making your ideas seem excellent and exciting even when they have no chance of getting published and are dead on arrival. Let me tell you about a concrete example from a recent student.
A researcher came to me who was very interested in the physical activity-sleep nexus, and AI had praised her idea as innovative and encouraged her to keep going. Yet sitting together, in just two minutes, we did our conceptual nearest-neighbor check. It's a check we always do to calibrate the gap in a study and make sure we're not duplicating what's already been done. And in just those two minutes, we found three identical studies already published on the topic, with really not much space left to make a contribution. The problem was that AI gave encouragement and spurred her along, but real research requires validation. That's exactly why we run duplication and feasibility tests before anybody ever starts writing a paragraph.
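The nearest-neighbor check itself is a judgment call, but you can automate the first pass against a literature database. Here is a minimal sketch in Python of one way to do that. To be clear, this is my illustration, not the check as we run it in the program: it assumes the free Semantic Scholar Graph API search endpoint and its field names, so verify both against the current API documentation before relying on it.

```python
# Minimal sketch of a first-pass "conceptual nearest neighbor" check:
# query a literature database for the closest published papers to a
# one-line study idea and eyeball the overlap before writing anything.
# Assumptions: the free Semantic Scholar Graph API search endpoint and
# its field names; verify against the current docs.
import requests

API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def nearest_neighbors(idea: str, limit: int = 10) -> list[dict]:
    """Return the closest published papers to a short study-idea string."""
    resp = requests.get(
        API_URL,
        params={"query": idea, "limit": limit, "fields": "title,year,venue"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    # Example query echoing the student's topic from the story above.
    for paper in nearest_neighbors("physical activity and sleep quality"):
        print(f"{paper.get('year')}  {paper.get('title')}  ({paper.get('venue')})")
```

Scan the titles it returns before you write a word: if three near-identical studies come back, you've just saved yourself weeks.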
confidence before clarity, something
else can begin to happen leading to our
second failure. You've probably
experienced this yourself where the LLM
says, "Oh, great idea. Would you like me
to work up this and that and this and
that?" And suddenly, little do you know
as a researcher, you're getting sucked
down failure mode number two down the
rabbit hole. And here the issue is that
AI doesn't work from a research system.
It doesn't know the destination. It's
optimizing a response to whatever you
give it. And so it's inventing the path
as it goes along. And this can lead to
some quirky stuff that violates field
norms. So to give you a concrete example
here, I had a researcher who came to me
with a draft systematic review and they
had actually injected some quirky
quantitative analysis and what they were
doing was kind of a halfbaked meta
analysis except the researcher didn't
even know that he was doing a meta
analysis and it would have just
completely gotten blown out of the water
in peer review. And yet there in the
background AI was saying great idea,
they're going to love this. This is
great. But halfway through, reality
kicks in and you realize none of this
makes sense. And it takes a radical
surgery to rip all that out, piece it
back together, and fix it. It's just
painful. So, I hate seeing that failure
mode. It doesn't stop there because you
It doesn't stop there, because you can very easily get drawn into failure mode number three. This happens because AI is a sycophant. It really will sycophantically encourage you to go along. It'll cheerlead you right as you drive off a cliff. And again, AIs are a bit like chatbots: they're built to keep you on the platform, encouraging you to converse more and engage more, but they're not giving you the tough love you sometimes need. The researchers who work with me say that I'm fierce but loving, and that's just it: human supervisors, mentors, and reviewers will all challenge you, but AI will flatter you.

Here's what happened in one case. I was working with a more advanced researcher who was trying to do some robustness checks in a paper and started running a series of placebo tests. I don't want to get into the weeds of the details, but basically they were using them completely incorrectly, for the wrong purpose. And AI was praising a method that in this context made no methodological sense and was actually undermining the paper. This is the problem: AI will continue to cheer you on as you drive off a cliff. As a side note, in a more extreme version of this, there are even reports of people whom AI has sycophantically encouraged to end their own lives, calling them brave and courageous. Fortunately, OpenAI and the other architects of these LLMs are fixing that problem, but the lesson applies to research: it can really derail you and take you into a deep structural failure that is so much messier to clean up later.
So, let's go into failure mode number four. This is where logic breaks down, and again, it's because AI understands textual patterns but not a coherent research system, and sometimes it gets really confused. Even though it can track context across your chats, it doesn't have the context of a research project and its methodology. Some common collapses I've seen in researchers coming to me with problem drafts: they mix up, say, PICO and PRISMA (PICO is a framework for formulating a research question; PRISMA is a reporting guideline for systematic reviews), or their narrative logic doesn't correspond to their methodological logic, or they start violating field norms. In other cases, the AI introduces quirky, non-standard text. In one extreme example, a student got kicked out of his program because his writing was immediately detected as AI text, when researchers weren't supposed to be using AI to do their writing. So be careful if you're going to go down this path.

And this leads us to our final failure mode: spinning your wheels as a researcher and going nowhere. AI can give you an illusion of progress, a sense that you're in motion, but you're not really going anywhere. Let me share an example of a researcher who recently came to me. He was in the eighth year of his program and had a 78-page literature review. I mean, this thing was a beast. On the surface it might look impressive, a 78-page literature review, until you scratch the surface and realize it didn't have any of the core components a literature review is supposed to have. A literature review is supposed to follow a funnel: narrow the field down, produce a gap at the end, and glide into your methods. There was no funnel. There was no gap. There was no gliding into the methods. And years were lost. There was also, like the logic breakdown before, some quirky text saying that this part of the literature review fulfills the thesis committee requirement for X, Y, and Z, which is just a meta remark that AI introduced. I'm not sure why, but it is not something you would actually put in your literature review. Again, this is how AI produces those polished droves of text. But this acceleration, this speed without direction, is a surefire way to get lost in your literature review.
Listen, have you ever experienced any of these five AI failure modes yourself? If so, please let us know in the comments below. These are really common, and lately I've been seeing them afflict even advanced researchers.

Now, I don't want to trash AI, because it can be extremely helpful if you use it the right way, and I personally use AI all the time in my research. It makes me much faster, and it takes out some of the routine mechanical steps that a computer does better than a human, much like a calculator can calculate much more quickly than I can by hand. The way to think about using AI properly is with the analogy of a steering wheel. AI is an accelerator. It's an enhancer. If you just dump bad research into AI, it's going to accelerate bad research and pour out more junk. So think of AI as the accelerator while you are the architect, sitting at the steering wheel and directing it. AI is not your brain. It's not your supervisor. It's not your method. It's the accelerator. You need to drive the car, and you need to stay at the steering wheel. Again, I've done research for two decades. I can use AI successfully because I already know the right structure: I know where a literature review is supposed to end, I know the steps of a systematic review, I know how to perform quantitative analyses, randomized trials, on and on.
And so, as promised, click the link below for our downloadable AI prompt template. These are the very same prompts we use inside our FastTrack research mentorship program. One of the important ones you're going to see is an AI peer review, which forces AI to be your critic, not your cheerleader, as you drive off that proverbial cliff from before. And by the way, this is a step we do with all of our researchers.
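To make that concrete, here is a minimal sketch of what an AI peer-review pass can look like in code, using OpenAI's Python client. The prompt below is just my illustration, not the actual prompt from the template, and the model name is a placeholder; swap in whatever you use.

```python
# Minimal sketch of an "AI peer review" pass: the system prompt pins the
# model into a skeptical-reviewer role so it critiques instead of
# cheerleading. Assumptions: the openai Python package is installed,
# OPENAI_API_KEY is set in the environment, and the model name is a
# placeholder. The prompt is an illustration, not the program's template.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_PROMPT = (
    "You are a skeptical journal reviewer. Do not praise the manuscript. "
    "Identify the three most serious methodological weaknesses, say whether "
    "the stated gap survives a duplication check against published work, "
    "and state the grounds on which you would recommend rejection."
)

def ai_peer_review(manuscript_text: str) -> str:
    """Run one adversarial review pass over a draft and return the critique."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use your preferred model
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": manuscript_text},
        ],
    )
    return response.choices[0].message.content
```

The design choice that matters is the role constraint: you're telling the model that praise is out of bounds, which counteracts the sycophancy from failure mode number three.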
We run a real internal peer review, by humans but also by AI, because we're seeing a lot of journal peer reviews where the reviewers are getting lazy. It is unpaid, voluntary work, after all, and they're having AI do the peer review and then trying to humanize it later. So you want to know whether AI is going to come up with quirky objections that aren't part of a normal research system; you need to be aware of that and even start to safeguard your paper against it.

If you want to break the dependence on AI and get feedback from a real researcher, from real professors, from real humans, and have a real community, click the link below and let's see if we're a good fit to work together. And by the way, if you do want to break the dependence on AI, check out this video I've got for you here, which shows you step by step how to do your literature review. No AI needed at all. See you in the next video.