This content explains advanced AI prompting techniques by focusing on the underlying mental models and principles, rather than just providing specific prompts. It aims to empower users to achieve more sophisticated and reliable AI outputs through structured interaction.
Teaching AI is really hard. Teaching
advanced prompting is even harder. This
video will make it easier. My goal is to
equip you with an understanding of the
mental models, the principles that
advanced prompters use. We're going to
go beyond the level of a prompt. I don't
have a specific magical prompt to give
you here. I will have lots of examples
in the writeup, but my goal with this
video is to actually lay out what are
the principles that advanced prompters
use that are not widely known, the ones you might not be aware of but that underlie a lot of these advanced prompting techniques. Let's dive into
the first one. Advanced prompters build
self-correction systems. And so the
first category here that we're talking
about is about how you force models to
attack their own outputs and to get past
the fundamental limitation of single
pass generation. which is a fancy way of
saying you want to push the model to get
past the initial generation step into
thinking about what it's done. One way
to do this is called chain of
verification where you in the prompt
require a verification loop inside the
same conversational turn. So this might look like: "Analyze this acquisition agreement. List your three most important findings." That alone is not special. Now, in the same prompt: "Identify three ways your analysis might be incomplete. For each, cite the specific language that confirms or refutes the concern. Then revise your findings based on that verification." That's a very simple example of chain of verification. It can
obviously get more complex. The key
thing to realize is you're not asking
the model to be more careful. That's too
vague. You're structuring the generation
process to include self-critique as a
mandatory step. And that activates verification patterns that the model was trained on but that you probably wouldn't have triggered by default.
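To make this concrete, here's a rough sketch of a reusable chain-of-verification template. The function name and placeholder task are illustrative, not from the video; you'd paste the resulting prompt into whatever chat interface you use.

```python
# A minimal sketch of a chain-of-verification prompt builder. The task text
# and finding count are placeholders; nothing here is tied to a specific
# model or API.

def chain_of_verification_prompt(task: str, n_findings: int = 3) -> str:
    """Build a single-turn prompt that makes self-critique a mandatory step."""
    return (
        f"{task}\n"
        f"List your {n_findings} most important findings.\n\n"
        f"Then identify {n_findings} ways your analysis might be incomplete. "
        "For each, cite the specific language that confirms or refutes the "
        "concern.\n\n"
        "Finally, revise your findings based on that verification."
    )

print(chain_of_verification_prompt("Analyze this acquisition agreement."))
```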
Another technique in this same bucket is
adversarial prompting. And so if chain
of verification asks models to verify
their work, adversarial prompting is a
lot more aggressive. It demands that the model find problems even if it needs to stretch. And so use this when you
really really need to be sure. A good
example of this would be you need to be
sure that your security architecture
review is as complete as possible. So
you might say something like: "Attack your previous design. Identify five specific ways it could be compromised. For each vulnerability, assess likelihood, assess impact, and so on." These approaches are designed to be tools that
you use in specific situations. And
that's why I'm giving you these short examples: I want to give you a sense
of not just the mental model, not just
the principle, but where advanced
prompters tend to use these techniques.
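Here's a rough sketch of that kind of adversarial ask as a template. The artifact name and attack count are placeholders you'd tune to your situation.

```python
# A minimal sketch of an adversarial prompt, assuming a prior turn already
# produced the output under review. Defaults are illustrative placeholders.

def adversarial_prompt(artifact: str = "design", n_attacks: int = 5) -> str:
    """Demand that the model attack its own earlier output."""
    return (
        f"Attack your previous {artifact}. Identify {n_attacks} specific ways "
        "it could be compromised, even if you have to stretch. For each "
        "vulnerability, assess likelihood and impact, and propose a mitigation."
    )

print(adversarial_prompt("security architecture review"))
```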
Let's go to the next one. Strategic
edge-case learning. Let's say you're
having trouble distinguishing edge cases
or boundary conditions around a
particular problem set and you're trying
to describe it in words and it's not
working well. One of the ways to handle
this is called few-shot examples, or few-shot prompting. And what you're trying to do
is you're trying to include examples of
common failure modes and boundary cases
so that you can teach the model how to
distinguish those gray spaces, those
edges in the situation. One example of how to do this: let's say you're trying to prevent a SQL injection attack. The first example might be a very obvious injection with raw string concatenation. That's the baseline; the model should pick it up. The second example might be a parameterized query that looks safe but hides a second-order injection, where the dangerous value is stored somewhere first (similar in spirit to stored XSS). The
failure mode would fool a naive
analysis, but you're trying to teach the
model through the example that this is
the kind of thing to look for in this
edge case. And so you include examples of subtle failure modes. It doesn't have
to be SQL, right? I picked SQL because
that's a really interesting one. But you
can really use subtle failure modes for
much less technical subjects as well.
The model learns to distinguish what
looks secure from what is secure or what
looks correct from what is correct. And that lets you significantly reduce your false negatives when you're using the model for this kind of categorization work.
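As a rough sketch, here's how you might assemble those few-shot edge cases for the SQL example. The two worked examples and their labels are illustrative, not a complete security taxonomy; swap in failure modes from your own domain.

```python
# A sketch of few-shot edge-case prompting for the SQL-injection example.
# BASELINE is the obvious failure; SUBTLE is the looks-safe-but-isn't case
# that teaches the model where the boundary actually sits.

BASELINE = '''query = "SELECT * FROM users WHERE name = '" + user_input + "'"'''
SUBTLE = '''cursor.execute("INSERT INTO notes (body) VALUES (%s)", (user_input,))
# ...later, the stored value re-enters a query unparameterized:
cursor.execute("SELECT * FROM notes WHERE body = '" + row_body + "'")'''

EXAMPLES = [
    (BASELINE, "VULNERABLE: raw string concatenation, classic injection."),
    (SUBTLE, "VULNERABLE: looks safe at first write, but the stored value "
             "is concatenated later (second-order injection)."),
]

def few_shot_prompt(code_under_review: str) -> str:
    """Teach the boundary between looks-safe and is-safe via worked examples."""
    shots = "\n\n".join(f"Code:\n{c}\nAssessment: {a}" for c, a in EXAMPLES)
    return (
        "Classify the following code for SQL-injection risk, using the worked "
        "examples as your bar for subtlety.\n\n"
        f"{shots}\n\nCode:\n{code_under_review}\nAssessment:"
    )
```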
So those principles, edge-case learning, adversarial prompting, and chain of verification, are all about how we build self-correction systems.
That's the larger bucket these fall
into. Advanced prompters also do meta
prompting. And I've talked about this
before. It's worth talking about again.
People do not realize how powerful meta
prompting is until they try it. And I
want to give you a couple of specific
techniques that advanced prompters use
that you can try as well. The first one
is reverse prompting. So this technique
exploits the model's meta knowledge
about what makes prompts effective. The
model's been trained on a lot of prompt-engineering conversation, and you can ask it to design an optimal prompt. The funny thing is, people don't realize this: you can ask it to design the prompt to solve a particular defined task, and it will just write its own prompt and execute on it. And so one of
the ways that you can do this is to say: "You're an expert prompt designer. Please design the single most effective prompt to analyze quarterly earnings reports for early warning signs of financial distress. Consider what details matter, what output format is most actionable, and what reasoning steps are essential. Then execute that prompt on this particular report." You see how we've included a
request for specific outputs along with
a request for the prompt. You can
totally do that. You can ask for a
prompt with certain characteristics and
the model can use best practices, keep in mind the output you're looking for, formulate the prompt, and then run it. People don't realize you
can do this, but it sure opens up a lot
of power in the model.
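A rough sketch of a reverse-prompting template, using the earnings-report example. The task wording and the input placeholder are illustrative.

```python
# A minimal sketch of reverse prompting: ask the model to design the optimal
# prompt for a task, then execute it on supplied input, all in one turn.

def reverse_prompt(task: str, input_text: str) -> str:
    """Ask the model to design its own optimal prompt, then execute it."""
    return (
        "You are an expert prompt designer. Design the single most effective "
        f"prompt to {task}. Consider what details matter, what output format "
        "is most actionable, and what reasoning steps are essential. "
        "Then execute that prompt on the following input:\n\n"
        f"{input_text}"
    )

print(reverse_prompt(
    "analyze quarterly earnings reports for early warning signs of "
    "financial distress",
    "<paste the report here>",
))
```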
Another technique is recursive prompt optimization. So, this
is a situation where you can say,
"You're a recursive prompt optimizer. My
current prompt is here. Your goal is
this. I need you to go through multiple
iterations with me, right? For version
one, just add the missing constraints.
For version two, please resolve
ambiguities. And for version three,
enhance reasoning depth. You can pick
what the versions do. But the point is
you are starting to define aspects of
the prompt you care about. You're not
saying what the new prompt will be.
That's up to the model. And you are
giving the model multiple iterations in
one pass. And so it's going back over
the prompt again and again and again.
And this can force a kind of structure
and constraint that improves the quality
of the prompt on the specific axes that
you care about.
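Here's a rough sketch of that recursive optimization request as a template. The three version axes are the ones from the example; you'd swap in the axes you care about.

```python
# A sketch of a recursive-prompt-optimization request: several in-pass
# revisions of your prompt, each targeting one named axis of quality.

def recursive_optimizer_prompt(current_prompt: str, goal: str) -> str:
    """Ask for successive prompt revisions, each targeting a named axis."""
    return (
        "You are a recursive prompt optimizer.\n"
        f"My current prompt: {current_prompt}\n"
        f"My goal: {goal}\n\n"
        "Produce three successive versions of the prompt:\n"
        "v1: add the missing constraints.\n"
        "v2: resolve ambiguities.\n"
        "v3: enhance reasoning depth.\n"
        "Show each version and briefly note what changed and why."
    )
```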
So those are a couple of techniques for meta prompting. Another
advanced technique that I want to call out, or really an advanced principle, is reasoning
scaffolds. How do you structure
prompting? How do you structure
interaction with the AI for deeper and
more comprehensive analysis? So self-cut
correction can catch mistakes. Meta
prompting can improve prompt design.
Reasoning is really controlling how the
model thinks and changing it by
providing a structure that forces
thorough analysis. One technique that is
not often practiced is deliberate over-instruction. Basic prompts and a lot of the model training around token optimization compress outputs. There's a lot of "be concise," "summarize briefly," etc. And when models are trained that
way, they may prematurely collapse their
reasoning chains. You may not want that.
And so one way to fight it is to append a really clear over-instruction at the end of your ask: "Do not summarize. Expand every single point with implementation details, with edge cases, with failure modes, with historical context." You just go on and on, right? And then say something like, "I really need exhaustive depth here. I don't need an executive summary. I don't need conciseness. Please prioritize completeness." The reason you do this is
because you want to expose the model's reasoning so you can examine it. I want to
emphasize that this is about thinking
with the model. You're not doing this to
take what the model writes and just copy-paste it. This is one of those tools
that you use when you really want to
understand the problem space and the
model's thinking and respond back
effectively. So deliberate over-instruction is a big one.
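As a rough sketch, here's an over-instruction block you can append to any ask. The exact wording is illustrative; the point is an explicit, appended counterweight to trained-in brevity.

```python
# A sketch of deliberate over-instruction: a reusable suffix that pushes
# back against the model's trained tendency to compress its reasoning.

OVER_INSTRUCTION = (
    "Do not summarize. Expand every single point with implementation "
    "details, edge cases, failure modes, and historical context. I need "
    "exhaustive depth, not an executive summary. Prioritize completeness "
    "over conciseness."
)

def over_instructed(ask: str) -> str:
    """Append the over-instruction to any ask whose reasoning you want exposed."""
    return f"{ask}\n\n{OVER_INSTRUCTION}"

print(over_instructed("Review this migration plan for risks."))
```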
Another one is called zero-shot chain-of-thought structuring. This technique
exploits how LLMs are trained to
continue patterns. And so what you might
do if you're thinking about pushing the
model's thinking in a certain direction
is provide a template with blank steps. That triggers a chain of thought automatically, because the model's objective immediately becomes filling the structure you've laid out, and doing that requires decomposing the problem.
So let's say you're root-causing a
technical issue and you have a series of
questions you know the model needs to
think through if it's going to do this
correctly. You can literally list the questions, each with a blank, in the order you think they should be worked through, and the model will structure its thinking around that scaffolding. So this is really effective for quantitative and technical problems, because it creates a
natural progression from the breakdown
of the problem to the solution. And that
makes it much easier to understand what
the model is thinking and to find out if
you're actually in the right territory
from a structure of thought perspective.
Is the model on the right track here or
not?
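Here's a rough sketch of a blank-step scaffold for the root-causing case. The issue and the questions are hypothetical; you'd list your own, in the order you believe the problem should be decomposed.

```python
# A sketch of a blank-step reasoning scaffold. The model's objective becomes
# filling the structure, which forces it to decompose the problem in order.

def scaffolded_prompt(issue: str, steps: list[str]) -> str:
    """Lay out numbered questions with blanks for the model to fill."""
    blanks = "\n".join(
        f"{i}. {q}\n   Answer: ____" for i, q in enumerate(steps, 1)
    )
    return (
        "Root-cause the following issue by completing each step, in order.\n\n"
        f"Issue: {issue}\n\n{blanks}"
    )

print(scaffolded_prompt(
    "API latency doubled after last night's deploy.",
    [
        "What changed in the deploy?",
        "Which endpoints regressed, and by how much?",
        "Does the regression correlate with a dependency or query change?",
        "Given the above, what is the most likely root cause?",
    ],
))
```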
Reference class priming is another advanced technique. Reference class
priming provides examples of reasoning
quality and asks the model to match that
explicit reasoning bar. So let's say
you're using the model's own best output
as a quality benchmark rather than
relying on human-provided examples. LLMs
are trained to continue patterns, and when you show a model an example of high-quality reasoning, whether provided by a human or another model, and ask it to produce analysis that matches that standard, you are priming the model's distribution toward that level of depth. This is really different
from traditional few-shot prompting. You're not showing input-output pairs here to teach the model
what to do. But instead, you're
providing examples of quality reasoning
and asking the model to meet that bar of
quality. Without priming, outputs can vary wildly in quality and format, and
it can be difficult to control for that
directly with a prompt. And so,
sometimes having an example can push the
model toward producing much more
consistent quality across an overall
document set.
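A rough sketch of reference-class priming as a template. Both arguments are placeholders; the exemplar would be a passage of reasoning you consider high quality, possibly the model's own best prior output.

```python
# A sketch of reference-class priming: show a quality benchmark for
# reasoning, not input-output pairs, and ask the model to match it.

def primed_prompt(exemplar: str, task: str) -> str:
    """Show a quality benchmark, then ask the model to match it."""
    return (
        "Here is an example of the reasoning quality I expect:\n\n"
        f"{exemplar}\n\n"
        "Now analyze the following, matching that standard of depth and "
        f"rigor:\n\n{task}"
    )
```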
Another technique that I want to call out is perspective engineering. We just talked about reasoning, and I gave you a few techniques there. Perspective engineering is really
cool. Single-perspective analysis will
have blind spots determined by the
model's default reasoning mode. So
advanced prompters will push the model to generate competing viewpoints with different priorities, and that leads to higher
quality thinking as well. One example is
a multi-persona debate. So let's say you
want to simulate the perspective of
three different experts. You actually
can instantiate three experts. You can
just say three experts with conflicting
priorities need to debate. These are the personas: persona 1 has priority X, persona 2 has priority Y, persona 3 has priority Z. They must argue for their preferences and critique the others' positions. After the debate, you must
synthesize a recommendation that
addresses all of their concerns. You can
do this when you have something where
you need a vigorous debate, but you
don't know how to push the LLM to get
there. A good example is cost-benefit analysis for vendors. If you're trying to figure out whether you buy from a vendor or take a different approach, you can simulate that whole conversation. It's not the same as having the C-suite talk about it, but it is great preparation, and it helps you expose perspectives and thinking you might not be aware of otherwise. This is an example of taking a very human conversational technique like debate and deliberately putting it into the chat, deliberately getting the chat to work like humans do and argue back and forth in ways that help us learn and make better decisions. Now, critically,
personas do need specific, potentially
conflicting priorities. You can't just
vanilla instantiate them with no
conflicts and expect good results here.
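Here's a rough sketch of the multi-persona debate applied to the vendor example. The personas and their priorities are hypothetical; the conflicts between them are what make the debate useful.

```python
# A sketch of a multi-persona debate prompt: instantiate experts with
# deliberately conflicting priorities, force a debate, then a synthesis.

def debate_prompt(question: str, personas: dict[str, str]) -> str:
    """Instantiate experts with conflicting priorities and force a debate."""
    roster = "\n".join(
        f"- {name}: top priority is {priority}"
        for name, priority in personas.items()
    )
    return (
        f"Three experts with conflicting priorities must debate: {question}\n\n"
        f"{roster}\n\n"
        "Each must argue for their preference and critique the others' "
        "positions. After the debate, synthesize a recommendation that "
        "addresses all of their concerns."
    )

print(debate_prompt(
    "Should we buy the vendor solution or build in-house?",
    {
        "CFO": "minimizing three-year total cost",
        "CTO": "long-term maintainability and control",
        "CISO": "security and compliance risk",
    },
))
```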
Another technique that I think is really
important to call out is temperature
simulation. Now, temperature is the idea that the model is more deterministic, more focused, when the temperature is low (cold), and more creative when it's high (hot). That's
traditionally controllable via the API,
but you can actually do that in the chat
indirectly. One way to do it is to have
the model roleplay at different
temperatures. So you could say, I want a
junior analyst who is uncertain and who
overexplains to look at this problem
first. I want a confident expert who is
concise and direct. That would be a
cooler temperature. Uh and then I want
you to synthesize both perspectives and
highlight where uncertainty is warranted
and where confidence is justified. We're
basically giving the model a low
temperature pass, a high temperature
pass, and asking it to synthesize. I
think that what's interesting here is
that we can take a lot of the techniques
that we see in the API and simulate them
effectively as advanced prompters in the
chat to get where we want to go.
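As a rough sketch, here's the temperature-simulation pattern as a single templated prompt. The problem text is a placeholder, and the pass labels just mirror the hot/cold framing.

```python
# A sketch of temperature simulation in chat: two role-played passes that
# stand in for high and low sampling temperature, then a synthesis pass.

def temperature_simulation_prompt(problem: str) -> str:
    """Simulate hot and cold passes via personas, then merge them."""
    return (
        f"Problem: {problem}\n\n"
        "Pass 1 (hot): answer as a junior analyst who is uncertain and "
        "over-explains, exploring many possibilities.\n"
        "Pass 2 (cold): answer as a confident expert who is concise and "
        "direct.\n"
        "Pass 3: synthesize both passes, highlighting where uncertainty is "
        "warranted and where confidence is justified."
    )
```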
I hope this has been helpful for you. There are
probably other advanced prompting
techniques I could get into. I don't
want to make this overwhelming, but
these are the kinds of mental models
that advanced prompters find difficult
to articulate, but they inform a lot of their productivity. They're very highly leveraged. If you're interested in diving in further, the writeup has a lot more, including examples of the prompts. Enjoy, and good luck prompting. It's so powerful.