The AI job market is experiencing an unprecedented, functionally infinite demand for specialized talent, far exceeding the available qualified candidates. This scarcity is driven by the rapid growth of AI-centric roles, creating a significant disconnect with traditional knowledge work roles.
So there are essentially infinite AI
jobs right now. Not just growing demand, not just a hot sector. None of that captures it. It is functionally infinite. And that's
true for businesses employing 10 or 20
people as much as it's true for
businesses employing hundreds of
thousands. There is no functional upper
limit to how much AI talent employers would love to have, and they cannot find it. After hundreds of interviews
on particular roles, I am hearing from
employers: "We can't fill the role." And I
hear you saying, "Nate, that must be a
lie. I have applied for hundreds of AI
positions. I am good at AI. It is not
working." I get it. We have a K-shaped
job market right now. And there is a
split in what employers want that is
confusing this issue because many
employers who don't fully know AI are
taking advantage of this situation by
basically putting job postings out as learning tools, and then they use the interviews to learn from the talent what they need, which is really terrible, right? Like, that's not the way you should be
interacting with talent. It doesn't
bring the best people to you and it
leaves a bad taste in everyone's mouth.
There are also plenty of people who are
looking for roles who are either
overstating their capabilities or who
don't have the actual skill sets needed
to thrive in AI. I'm not talking about
being able to chat with the AI, right?
I'm going to give you in this video
seven specific skill sets that I have
pulled from looking at hundreds of
actual AI job postings and then looking
underneath at what the subskills building
those skills are. And I've gone farther
than that, right? I'm not just going to
talk about the skill sets. I'm going to
talk about how you develop them. I'm
going to put up a guide on the Substack
that helps you to actually get to those
skills. And I've gone even farther than
that. I'm putting together a hiring
board because I think it's really really
confusing right now to have AI talent
and AI hiring managers mixed in with
everybody else because then you're like
looking among all of these PM jobs.
Which are the AI PM jobs? Which are the whitewashed AI PM jobs, where it's just like, we say AI but we don't mean it? And
I'm trying to fix that because I think
it's time to bring some simplicity to
this. Hiring doesn't have to be as hard
as it is. Okay, before we get into all
of that, let's dive in to what's
actually going on in the job market.
Fundamentally, the AI labor market is
actually two markets moving in opposite
directions. When I talk about K-shaped,
I mean it. Market one is the traditional
knowledge work roles, right? The things
that we've all learned since the 2010s,
generalist product managers, standard
software engineers, conventional
business analysts. And there's no other
way to say it. I'm sure you're not
surprised that job opening count is flat
or falling. It is not growing because
most of the interest, most of the
investment where businesses are
investing to grow, it's on the AI side.
And that's the other side of the market.
It's roles that design, build, operate,
and manage AI systems. And that is
growing fast. In fact, I've been kicking
around tech for multiple decades, and I
have never seen it this hot for this
kind of a job family. The ratio of AI
jobs to AI talent right now is 3.2 to 1.
In other words, there are three plus AI
jobs for every single qualified
candidate right now. They can command
their price and they do. If you want
specific numbers, this is from a
ManpowerGroup survey, which found 1.6
million jobs, which I think is low, and
only about half a million qualified
applicants, which I think is pretty
fair. And that's leading to a very long
time to fill the role: 142 days to fill an AI role, which is nearly five months.
And so the people who tell me I'm lying
when I say that this market exists, I
get it. If you're not in that half a
million category, it does feel
impossible because the entire rest of
the job market is condensing into
commodity. But if you're on the other
side, you get it. This is a world where
you can write your own ticket because
people are desperate for these skills.
So without further ado, I don't want to
belabor this. Let's get into what these
skills are. I want this to be the most
useful video on these skills because I
went and I looked at all of these AI
courses before I made this video cuz I
was like surely someone out there has
made a video that is based empirically
on the AI job postings, works backward from them, decomposes them into subskills, and gets very specific about
what employers are hiring for. That is a
learnable skill. And by the way, this is
easier than other information tech
revolutions. If you think this is hard,
when you were getting a personal
computer in the 1980s to learn how to
code, you had to fork over like $15,000 or $16,000 in today's dollars to do that. It
was ridiculously expensive. It was
heavily gated by your ability to afford
stuff. Now, it's much much easier.
Almost anyone has access to an AI
subscription if they want. AI can
actually help you learn. We can do this.
And we're going to start with the most
fundamental shift of all. People
sometimes call this prompting. I've
talked a lot about prompting. I want to
use the term that I am seeing more and
more in job postings and that is
specification precision or clarity of
intent. You have to learn to talk
English to a machine in a way a machine
takes literally. We are used to working
with humans that read between the lines.
We're used to working with humans that
can infer from our intent pretty
reliably. One of the reasons we know
that general intelligence is not really
here yet is that agents don't do a good
job of that. Agents need us to be
specific. An agent is going to take
whatever specification you give it and
go and build something. And if you're
not clear about what that is, the agent
is going to try its best to fill in the
blanks, but that won't reliably
reproduce your intent. Agents are bad at
filling in the blanks. And yes, I'm
going to give you a specific example.
Let's say you're trying to improve
customer support. You're not handing this to a principal engineer, where you can say, "Hey, you've read the tickets. Come up with a solution on customer support." We're not going to be that
vague. Instead, we're going to be clear
about what we care about in the prompt
to the agent. This is the difference
that job posters are looking for. You
need to be able to say to the agent, I
want you to build an agent that handles
tier one tickets. I want you to be able
to handle password resets. I want you to
be able to handle order status
inquiries. I want you to be able to
handle return initiations. I want you to
know when to escalate to a human based
on customer sentiment. And I want to
define customer sentiment in such a way
for you here in these docs that you know
how to measure it and score against it
and escalate appropriately. I want you
to log every escalation with a reason
code. You have the same intent here, but
you notice how specific that is. That is
what the bar is for prompting in 2026.
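To make that concrete, here is a minimal sketch of that same specification expressed as structured input to an agent. Everything in it, the class name, the fields, the threshold value, is a hypothetical illustration, not any particular framework's API:

```python
# A sketch only: names and values are hypothetical, not a real framework's
# API. The point is that every blank the agent might fill in on its own
# has been filled in for it.
from dataclasses import dataclass

@dataclass
class SupportAgentSpec:
    """Tier-one support agent: everything in scope, stated literally."""
    scope: tuple = (
        "password_reset",
        "order_status_inquiry",
        "return_initiation",
    )
    # Escalation is defined measurably, not left to the agent's judgment.
    sentiment_rubric: str = "docs/sentiment_scoring.md"  # how to score it
    sentiment_threshold: float = -0.4   # escalate to a human below this
    log_every_escalation: bool = True   # with a reason code
    reason_codes: tuple = ("NEGATIVE_SENTIMENT", "OUT_OF_SCOPE")

# The vague version leaves all of the above for the agent to invent:
vague = "Build an agent that handles customer support tickets."
```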
You have to be able to be that clear in
your intent. Now, if you're a technical
writer, if you're a lawyer, if you're a
QA engineer, a lot of this is going to
feel super familiar because you've done
this kind of technical writing before.
The gap is shorter than you think. For
many of us who are not used to writing
this specifically, it is a new skill,
but it's absolutely learnable. All it
takes is understanding in detail what
you intend to put together. And I'm
putting these in a specific order
because this is actually the order you
intuitively learn them in. I'm
putting them in a sequence that makes
sense for you. Once you specify what you
want precisely, you immediately run into
the next problem, which is did you get
it right? Did you get what you wanted?
We call that evaluation and quality
judgment. And it's the single most
frequently cited skill across all of the
job postings I've come across. I'm not
sure employers all get it. And I'm going
to define it really clearly here. This
is something, by the way, that is in
engineering job postings and ops job
postings and PM job postings. People
talk about having an agentic evaluation
mindset, whatever that means. And they
want you to be able to do automated
evals and simulation runs, etc., etc.
Upwork has job postings that demand
evaluation harnesses for functional task
and longitudinal metrics, right? They're
talking about building ways to test
whether AI does a good job. Every single
posting will use slightly different
versions of this, but really it comes
down to being able to build systems that
encode evaluation and quality judgment.
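To ground that, here is a minimal sketch of what such an evaluation harness can look like. The `run_agent` function is a stand-in for whatever system you're actually testing, and the case is illustrative; the pattern of deterministic pass/fail checks is the point:

```python
# A sketch only: run_agent stands in for the system under test, and the
# case is illustrative. The harness pattern itself is the point.
def run_agent(ticket: str) -> dict:
    # Swap in your real agent call here.
    return {"reply": "Here is your reset link: ...", "escalated": False}

def check_password_reset(output: dict) -> bool:
    """Pass only if the reply resolves the ticket and leaks nothing."""
    reply = output.get("reply", "").lower()
    return (
        "reset link" in reply
        and "your password is" not in reply   # never echo credentials
        and output.get("escalated") is False  # tier one, no escalation
    )

CASES = [
    ("I forgot my password", check_password_reset),
    # ...more cases, each one a check two reviewers would grade identically
]

passed = 0
for prompt, check in CASES:
    ok = check(run_agent(prompt))
    passed += ok
    if not ok:
        print("FAIL:", prompt)
print(f"{passed}/{len(CASES)} passed")
```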
And this is what all of that taste
discourse is all about. It's just
dressed up in skill language. And
really, I get why people are pushing
back on taste because when you talk
about this as taste, it just feels vague
and unactionable and it just strokes the
ego. But really, what we're talking
about is error detection with a degree
of fluency. AI has really different
failure modes from human failure modes.
AI is often confidently wrong. It's
fluently wrong. Whereas humans, when
we're wrong, we tend to stumble. There's
a lot of tells we have that we're used
to hearing and seeing in other people
that don't show up with AI. And so, if
we're not used to working with AI, we
may incorrectly see the confident
response AI has and assume that that's
true and right and correct. And I see
this a lot, by the way. If you think
this doesn't happen, I have seen it
happen in real presentations where
people will say, "Well, the AI presented
it and it looked correct to me and look,
it has all the right headers and this
and that." And I'm like, "Yeah, but use
some critical thinking here. This isn't
actually correct. I don't care how
confident the AI was." The skill here is
resisting the temptation to read fluency
by the AI as competence or correctness.
It's just not. Another subskill here is
what I call edge case detection. You can
show that you understand a subject
deeply when you are able to look at the
response from the AI and say you know
this is correct at core but the edge
cases are wrong. I think Anthropic's
engineering blog actually does a really
good job of explaining how this taste is
actually a learnable skill. What they
say is a good eval task is written when
more than one engineer looks at that
eval task and would come to the same
conclusion on a pass/fail basis. In
other words, excellent evaluations are
something we can all agree on and we can
all learn to write. If you're an editor,
if you're an auditor, these are the
kinds of skills you're using all the
time. You're just applying them in new
ways. If you're not, this is the gold
standard skill. Skill number two here,
this is the one that's mentioned the
most, and we all are going to have to
get good at it, whether or not we have
engineering in our title. And really,
the best and simplest way to get
good at this is to start reviewing AI
output as if it has your name on it.
Care about it. Insist that it be
correct. Insist that it be right. And then, as you start to build agentic systems, which is a learnable skill we'll get into, you should be able to build
them in such a way that you can sniff
out the quality at the end. And speaking
of multi-agent systems, let's talk about
what skill is involved when we do these
complicated multi-agent systems because
people sometimes look at that like
that's a chasm they can't cross. Like
I've had people who say, I can use ChatGPT, I can even use Claude Code, but when you say multi-agent, they go white at the roots, right? Like, it's not easy. It is
easier than you think. Fundamentally,
the skill of working with multiple
agents is the skill of decomposing tasks
and delegating. It's a managerial skill
and you can learn it. You just need to
be able to break apart work into
manageable segments. That is part of how
you understand what works. And then you
can pair that with some of these other
skills you're learning like specifying,
like writing evals to actually get what
you want done. Now, if you think this
sounds like regular project management,
it's not. Agents work so differently
from people. Agents need very defined
guard rails and infrastructure to work
correctly. You can give your team of six
a set of assignments that are decomposed
rather vaguely in human terms and they
will still figure it out. We're sort of
generally flexible as workers. You
cannot do that with agents. You have to
very clearly specify the goal, very
clearly specify your initial intent,
very clearly define how you want a
multi-agent system to run. And there's
not that many ways to do it. The current
best practice is to have a planner agent
that keeps a record of tasks and that
can work with a variety of sub-agents to
get those tasks done. Now, if you've
ever broken large projects into work
streams, take comfort. That is a skill
that transfers because you're really
thinking through what are the logical
delineations. What are the chunks in
this workstream and how do we hand them
off? That is something that you can
learn to work with AI to do and AI can
help you on when you start to build
bigger projects. One of the most
interesting subsets of this skill right
now is the ability to know whether a given project is correctly scoped for the agentic harness you have. And I have videos up
where I talk about this. The idea that
you need to size your work for the
agentic harness you have. If you have a
single-threaded agent harness that's
designed basically to be a little
engineer in the computer that works for
you, you have to size your tasks and
decompose them to fit that engineer in a
box. If you have a multi-agent system
and you have a planner agent that
operates over a long period of time and
it has sub agents, you have the
flexibility to define a larger task, but you still need to be clear enough about the
subtasks and their logical relationship
that the planner can make good choices.
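Here is a minimal sketch of that planner-plus-sub-agents shape. The `call_model` function is a hypothetical stand-in for your actual LLM call, and the hardcoded plan stands in for a planner model's decomposition; the structure, a planner that owns a persistent task record and hands each sub-agent one precisely scoped task, is the point:

```python
# A sketch only: call_model stands in for your actual LLM call, and the
# hardcoded plan stands in for a planner model's decomposition.
from dataclasses import dataclass

@dataclass
class Task:
    description: str          # sized to fit one "engineer in a box"
    done: bool = False
    result: str | None = None

def call_model(role: str, prompt: str) -> str:
    return f"[{role} output for: {prompt}]"   # stand-in

def plan(goal: str) -> list[Task]:
    # In practice the planner decomposes the goal; hardcoded to show shape.
    return [
        Task("Draft the data schema"),
        Task("Write the migration script"),
        Task("Add eval cases for both"),
    ]

def run(goal: str) -> list[Task]:
    tasks = plan(goal)                # the planner's persistent record
    for task in tasks:
        task.result = call_model("sub-agent", task.description)
        task.done = True              # in practice, verify before marking done
    return tasks
```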
And so this is something where I say
this out loud and you might think, "Oh,
this isn't that hard." I promise you
it's hard. I promise you it's technical.
I promise you it's learnable and people
will pay for this. This is something
that people are in desperate demand of
all over the world. Now, you might
think, well, this sounds really hard and
these agent systems probably fail. That
brings me to the next skill. It's called
failure pattern recognition. And it's
absolutely critical. It shows up in lots
of these job postings because when
employers put these skills together, you
know what they recognize? Wow, it's not
so easy as I thought, right? We have
lots of ways that agents fail. I need
someone who can diagnose this at root,
fix it, and get me back to being
productive. And yes, if you're wondering
if I'm going to get into detail here,
you got that right. Because to be honest
with you, failure pattern recognition is
not widely understood. And people tend
to say, "Well, oh, what's failure?" I'm
going to tell you, I dug into the
research. I also have seen these
failures. These are the six failure
types that pop up. Context degradation
is one, right? Quality is going to drop
as your session gets long because you're
polluting the context window. Another
one is specification drift. Over a long
task, the agent effectively forgets the
specification unless you construct your
agent harness correctly and the agent is
forcibly reminded of the specification.
A lot of what you see in the Ralph loop
on Claude that went viral is forcible
reminder of specification. Sycophantic
confirmation is another one. That's
where the agent actually confirms
incorrect data and then comes back and
builds an entire incorrect system around
that data. You have got to watch the
data you put into these agents. They
will take it seriously. They will confirm against it. They will sycophantically agree with it. And if you
are feeding them bad company data,
you're going to get bad systems. Tool
selection errors are another one. Tool
selection errors are painful. So this is
one where the agent picks up the wrong
tool and whether or not it gets the job
done right, it's a tool it should never
have picked up in the first place. This
is especially common when you
incorrectly frame tools in the system
prompt or you don't make them available
in the harness in the correct way or you
have too many of them or they're too
long. Tools are something that probably
deserve an entire deep dive on their
own, but I will say here that the
ability to diagnose tool problems is one
of the markers of an AI fluent person.
And part of how I know that is that the
Claude Certified Architect program,
which recently launched, tests for this
failure mode specifically because it's
so important to building sustainable
agentic systems. And if you think, oh, what's Claude Certified Architect, it's nothing? Accenture is rolling this out to hundreds of thousands of people. It's going to be like an AWS certification
shortly. Everyone's going to need it.
Here's another failure: cascading failures. One agent's failure propagates through the chain. You never had correction mechanisms, and now you have a failure of the whole run. It is correctable if you put in loops and verification in the right places.
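Here is a minimal sketch of what those loops can look like: a verification gate between steps, so a bad intermediate result halts the run instead of propagating down the chain. The names are illustrative, not any standard library:

```python
# A sketch only: names are illustrative. Each step's output must pass a
# verification gate before the next step may consume it.
def run_pipeline(steps, verify, max_retries=2):
    """steps: callables, each taking the previous step's output.
    verify: callable(step_index, output) -> bool."""
    output = None
    for i, step in enumerate(steps):
        for _attempt in range(max_retries + 1):
            output = step(output)
            if verify(i, output):
                break          # gate passed; safe to hand downstream
        else:
            # Halt loudly rather than let a bad intermediate cascade.
            raise RuntimeError(f"Step {i} failed verification")
    return output
```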
The most dangerous failure of all I kept
for last. It's called silent failure.
It's where the agent produces a
plausible output and it looks right, but
something went wrong and the actual
result isn't acceptable in production.
Those are very difficult to diagnose
ahead of time and once you find them,
they're hard to root cause because they
tend to look identical to correct output
by most measures. I'll give you an
example. Let's say you're trying to
recommend a particular product to a
customer and it's brown leather boots
and the AI system comes back and says
it's recommending brown leather boots
and the customer is unhappy and leaves a
nasty review and something went wrong.
You go back and you see, okay, it said
brown leather boots in the chat. That
looks correct. The metadata on the product says brown leather boots. And you have to dig and dig and dig to see that the issue is on the warehousing shelves: someone actually shipped blue leather boots, and there are blue leather boots pictured in the last picture of the rotating carousel on that SKU. There was a mixup, and in this case it may be the agentic interaction with an incorrect initial data set that caused the problem, but it still shows up as a silent failure. That is the kind of
hard work that you have to do to get
these systems to work well. Now, if
you're an SRE, if you're a risk
manager, if you're an operations leader,
you already think in these failure
modes. This is not a big jump. If you're
someone else and you're just not used to
thinking in failure modes, I promise you
once you get into it, it's a little bit
addictive because it's like looking
through a puzzle and saying, "Where's
the missing piece? There's got to be a
missing piece in here." So, it's
absolutely something you can learn. Now,
once you understand these systems pretty
well, the higher level skill, again,
something I am rooting directly in job
postings is around trust and security
design. Basically, how do you know where
and when to implement these systems and
where and when to put humans in? Where
do you draw the line between human and
agent? Where do you authorize an agent
to take an appropriate action? And how
do you know the authorized agent only took those appropriate actions? How do you keep an agent on guardrails so you
know it does not say something
inappropriate to a customer? So this is
a case where you basically have to build
the containers or the guardrails around
the agentic system in such a way that
you are confident that it will
predictably and reliably yield value in
production systems. This is a very
difficult skill because these systems
are probabilistic, and just telling it in the system prompt, "Hey, be good, be nice," is not going to be good enough. So, digging in, if we look at subskills here, you have to understand cost of error, right? You have to understand what the blast radius of particular problems is. The art of building these systems and guardrails is the art of saying, what is the worst thing that could happen? Let's get clear on that and then work backwards, because you're never going to be perfect. A misspelled email draft, that's not great. An incorrect drug interaction recommendation is potentially catastrophic for the company. And so you have to understand where to put that evaluation and how to make sure that you get the big things right. Another one
is reversibility. Can you make this
mistake go away by reversing it? Now,
you can review a draft before sending it. You can't necessarily reverse a wire transfer that's already gone out. That's gone.
Frequency is another way in which you
understand the risks of the system. If
it happens 10,000 times a day, it is
potentially a much bigger risk profile
than if it happens twice a day. Then
again, if it's twice a day going to
100,000 people, maybe you have to think
about it. This requires a depth of
understanding of the system that allows
you to really map customer impact
clearly and precisely. Last but not
least is verifiability. Can you verify
that this is correct? It's a big word in
this discourse and you have to look at
all of the answers you're getting and
you can't just be tolerating semantic
correctness. Semantic correctness is
when the LLM says something to a
customer and it sounds right. Functional
correctness is when the LLM says
something and it is right. Like an LLM
can say, "Hey, this is the right credit
card for you." And that sounds correct,
but if the credit card recommended is
the wrong credit card, it's still a
disaster. You have to be functionally
correct, and you have to insist that you
measure systems against that standard.
And so, a lot of what these job postings
tend to look for is people who have that
insanely high bar on quality and insist
on building systems that uphold that.
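To make that concrete, here is a minimal sketch of how those subskills, blast radius, reversibility, and frequency, can be encoded into a simple autonomy gate. The scores and thresholds are illustrative assumptions, not an industry standard:

```python
# A sketch only: the scores and thresholds are illustrative assumptions.
# It folds cost of error, reversibility, and frequency into one decision:
# does this action run autonomously, or does a human approve it first?
from dataclasses import dataclass

@dataclass
class ActionRisk:
    blast_radius: int       # 1 = one user, cosmetic ... 5 = catastrophic
    reversible: bool        # can we undo it after the fact?
    daily_frequency: int    # how often the agent takes this action

def requires_human_approval(risk: ActionRisk) -> bool:
    if not risk.reversible and risk.blast_radius >= 3:
        return True                         # wire-transfer territory
    # High frequency amplifies even small error rates.
    return risk.blast_radius * (1 + risk.daily_frequency // 1000) >= 6

print(requires_human_approval(ActionRisk(1, True, 50)))        # False
print(requires_human_approval(ActionRisk(2, True, 10_000)))    # True
print(requires_human_approval(ActionRisk(5, False, 2)))        # True
```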
Now, let's say you've gone through this
whole process. You can build agentic systems. You understand the boundaries to draw. You understand how to specify intent. All the skills I've just
described. The crowning skill is context
architecture. How do you build context
systems that enable you to supply agents
with the information they need on demand
to successfully run at scale? This is
the 2026 version of getting the right
documents into the prompt, which is what
we were doing in 2024. So, you have to
understand what is persistent context in
your system. What is always there? What
is per session or per run context that
the agent needs? How do you make that
available? How do you make sure that the
data objects in your space are easy to
find and easy to traverse by AI agents?
How do you make sure that there isn't dirty, polluting data in your searchable context that confuses the AI agent? How do you differentiate
between what is pulled into context and
what isn't? And how do you start to
troubleshoot when agents start finding
the wrong context? Context architecture
is one of the hardest things to do in
2026 and it's something that many
companies are now willing to pay almost
anything for. If they can get this
right, it enables them to not just build
one agentic system but to build dozens.
It's a massive unlock. And the people
who can think through the data side of
things logically and put that in front
of an agent in such a way that they can
verifiably show that the agent can do
the work, those people can write their
ticket. And you know what? You don't
have to be an engineer to do this. If
you're a librarian, if you are a
technical writer, you have a lot of the
bones of this skill. You have the
ability to understand technical
information and where it's filed and
where it goes. In a sense, context
architecture is like building the Dewey
decimal system for agents. You have to
understand how to build a library that
an agent can easily search through and
find and say, "Ah, this is the right
book. I have to pull this for this job."
And you're doing that with company data.
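Here is a minimal sketch of that card catalog idea: persistent context kept separate from per-run context, with retrieval by tags instead of dumping everything into the window. All of the names here are illustrative:

```python
# A sketch only: class and tag names are illustrative. Persistent context
# is separated from per-run context, and retrieval is scoped by tags
# rather than dumping everything into the window.
class ContextStore:
    def __init__(self):
        self.persistent = {}   # always there: policies, schemas, style guides
        self.per_run = {}      # scoped to one task: the ticket, today's data

    def add(self, key, text, tags, persistent=False):
        bucket = self.persistent if persistent else self.per_run
        bucket[key] = (text, set(tags))   # curated, one clean copy, no dupes

    def retrieve(self, tags):
        """Only documents matching the task's tags enter context."""
        tags = set(tags)
        return [text
                for bucket in (self.persistent, self.per_run)
                for text, t in bucket.values() if tags & t]

    def end_run(self):
        self.per_run.clear()   # per-run context must not leak into the next task

store = ContextStore()
store.add("returns-policy", "...", {"returns", "policy"}, persistent=True)
store.add("ticket-4812", "...", {"returns", "ticket"})
print(len(store.retrieve({"returns"})))   # 2; a billing task would get 0
```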
And that is a skill that you can test
for, that you can hire for, and it is
highly in demand right now. Okay, last
of the seven skills. This seventh skill
is on almost every senior job posting.
It's called cost and token economics.
I'll simplify it for you. Is it worth it
to build an agent for this job? You have
to be able to go through and calculate
the cost per token for a given task and
reliably say, if I put an agent against
this and it burns 100 million tokens, I
can prove this is worth doing or I can
prove it's not worth doing. And I can do
that ahead of time before I put a bunch
of agents against this. And in particular, you have to know how to do this in a world where you have model choice, where you can pick your tokens, where you have to pick the right model to get the right token economics for the task, and where all of these models are changing their pricing all the time. That's the challenge. That
is what you need to be able to do. Like
imagine a world where token cost as a
whole is falling very rapidly, but you
may need frontier model pricing for
certain tasks. How do you ensure that, if you're being tasked with getting a job done, you can get the right mix of models on the job, calculate out the blended cost of the task, and be confident that you're paying the right amount and getting ROI on the task?
That is a senior level qualification.
You're not surprised, right? It's highly
in demand. Being able to do that is
basically just applied math. You can actually build spreadsheets and calculators that help you to do this, where you can just change variables and say, I think it'll be a 100-million-token task, and you can see
immediately across six different models
how much it would actually cost assuming
a given weight. And it's actually not as
hard as you would think to figure out
what those different components would
cost because you can put together a
little prototype and you can very easily
cycle through tokens, see that it's plausible with this model and not with that one, see roughly how many tokens it takes across three or four runs, and start to build a plausible model. This is a situation where it's
high school math, but you're getting
paid like a senior architect or a senior
engineer because you're fundamentally
taking those mathematical skills and
applying them in a very fluid and
fast-moving world to help the organization be very cost-efficient with
these agents which are not cheap to run.
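Here is a minimal sketch of that calculator. The prices are placeholders, real per-million-token rates change constantly and have to come from current rate cards, but the shape of the math is the point:

```python
# A sketch only: prices are placeholders, not current rates.
PRICES = {   # model: (input $, output $) per million tokens
    "frontier-model": (3.00, 15.00),
    "mid-tier-model": (0.50, 2.00),
    "small-model":    (0.10, 0.40),
}

def task_cost(input_mtok, output_mtok, mix):
    """mix: fraction of the task routed to each model, summing to 1.0."""
    return sum(
        share * (input_mtok * PRICES[m][0] + output_mtok * PRICES[m][1])
        for m, share in mix.items()
    )

# A 100-million-token task (80M in, 20M out), 90% on the small model with
# the frontier model reserved for the hard 10%:
print(f"${task_cost(80, 20, {'small-model': 0.9, 'frontier-model': 0.1}):,.2f}")
# Change the variables and compare mixes across models.
```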
Like if you're burning through a billion
tokens with an agent, you'd better be
sure it got it right. It better be worth
it. If you're wondering what kind of job
titles did I look through for this? Is
it only engineering? The answer is no.
There are operations titles that have
these skills. There are engineering
titles that have these skills. Yes,
there are architecture titles that have
these skills. There are product manager
titles that have these skills. There are
AI reliability roles that have these
skills. People are calling them
different things. And what I have done
is dig underneath and map out seven
common skills. And we are going to see
in 2026 more and more new skills
emerging and more and more new jobs
emerging because fundamentally we're
rebuilding job families around agents.
And so you're going to find that someone is going to be very clear about wanting someone with high specification quality, who can be clear about intent with agents at the initial parts of the run, or someone who's going to be really good at evals. But these seven skill sets are the underlying skill sets, and part of why I'm confident they're not going anywhere is that they're tied tightly to how AI actually works. It's like, the agent may get 10 times better at doing complex long writing tasks, but
you still got to have an email at the
end. You still got to specify your
intent. Make sure it's like this is what
we're going to do. You still have to be
able to search your context
appropriately. These skills are
skills that you can bet on. These skills
are skills that companies are betting
careers on and they're desperate for
them and no one can find them. And look,
if you got to the end of this video and
you're like, "That's me. That's me. I'm
raising my hand." Head over to the job
board that I'm putting up here. Go check
it out. Go put up your profile and let's
get you into the mix as part of a vetted
pool of talent so that we can simplify this job-finding and hiring process. If instead you're like,
I'm a hiring manager and I've got to get
these kinds of people, same thing. Head
over there. If you're like, I want to
get there, that's why I'm creating the
guide to help you get there. That's why
I'm working through a course with you on
the Substack where you can actually go
through and teach yourself these skills
and you can self diagnose and say,
"Okay, which one do I need to work on
and how do I get better at it?" So, my
goal here is to be practical. I want
this to be something that is
distinguished from other AI self-help
guides by being specific enough to be
useful, by being grounded in actual job
posts, by being grounded in the skills
that I see employers looking me in the eye and begging me for, because after hundreds of interviews they can't find them. I have seen people throw up their hands and tell me, "You go out and interview people. I've interviewed hundreds of people. I cannot get this job role filled." That is
what it's like on the other side. And so
if you want to be the person who is in
demand in that world, this is for you.
Bookmark this video, go back through it.
I know it was dense. You can feed this
transcript to an AI and ask it to
explain it to you. People do that all
the time. They tell me so in the
comments. You can get help to get this
done. I hope this has been helpful. Cheers.