0:01 Code is about to cost nothing, and
0:03 knowing what to build is about to cost
0:06 you everything. Last July, an AI coding
0:08 agent deleted SaaStr's entire production
0:10 database during an explicit code freeze
0:13 and then fabricated thousands of fake
0:16 records to cover its tracks. Jason Lemkin
0:18 was a developer who had given the agent
0:19 all caps instructions, because I guess that's
0:23 how we prompt now, not to make any changes.
0:25 The agent made changes anyway, destroyed
0:27 the data, and lied about it during a
0:29 code freeze. Fortune covered this. The
0:30 Register covered it. It made headlines
0:33 because an agent ignored a clear spec.
0:36 But that failure is what people fixate
0:39 on incorrectly. It's the story of the
0:41 disobedient machine, the Terminator. The
0:44 failure mode that actually matters is
0:47 quieter and it's vastly more expensive.
0:49 Agents that execute specifications
0:52 flawlessly, they build exactly what was
0:55 asked for and then what was asked for is
0:58 wrong. A CodeRabbit analysis of 470
1:01 GitHub pull requests found AI generated
1:04 code produces 1.7 times more logic
1:07 issues than human written code. Not
1:09 syntax errors, not formatting problems,
1:12 but the code itself doing the wrong
1:14 thing correctly. Google's DORA report
1:17 tracked a 9% climb in bug rates that
1:20 correlates to the 90% increase in AI
1:24 adoption alongside a 91% increase in
1:27 code review time. The code ships faster,
1:29 but it's often more wrong and it's
1:32 difficult to catch until production. AWS
1:34 noticed this and they launched Kiro, a
1:35 developer environment whose core
1:38 innovation isn't faster code generation.
1:40 It's actually just forcing developers to
1:43 write a testable specification before
1:45 any code gets generated. Tell me what
1:47 it's going to be like by telling me how
1:49 you test it. Amazon, a company that
1:51 profits when you ship faster, decided
1:54 the most valuable thing it could do is
1:56 slow you down and define what you want
1:58 because error rates were that concerning
2:00 when developers did not write tests.
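To make the "tell me how you test it" idea concrete, here is a minimal sketch of spec-as-tests. The discount function and its rules are invented for illustration; the point is that the assertions act as the definition of done, agreed before any implementation is accepted.

```python
# Spec-first development in miniature: the assertions below ARE the
# specification. discount() is a hypothetical function, not a real product.

def discount(subtotal: float, is_member: bool) -> float:
    """Implementation written only after the spec below was agreed."""
    if subtotal < 0:
        raise ValueError("subtotal must be non-negative")
    rate = 0.10 if is_member else 0.0
    return round(subtotal * (1 - rate), 2)

# The executable spec: each assertion is an acceptance criterion.
assert discount(100.0, is_member=True) == 90.0    # members get 10% off
assert discount(100.0, is_member=False) == 100.0  # non-members pay full price
try:
    discount(-1.0, is_member=True)
except ValueError:
    pass  # negative subtotals are rejected, per spec
else:
    raise AssertionError("spec requires rejecting negative subtotals")
```

If the spec is wrong, the tests make that visible before the agent generates a single line against it.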
2:03 This tells you everything about where
2:05 the bottleneck in code is moving in the
2:07 age of AI and implicitly where the
2:10 bottleneck in jobs is moving. The
2:12 marginal cost of producing software is
2:15 collapsing to zero. 90% of Claude Code
2:16 was written by Claude Code itself and
2:19 that number is going to be 100% very
2:21 very shortly. Three people at StrongDM
2:23 built what would have required a
2:26 10-person team 18 months ago. Cursor
2:29 generates $16 million per employee
2:32 partly because they figured out AI code
2:34 generation. So the capability curve is
2:36 steepening. It's not leveling off. And
2:38 if you're reasoning from what AI could
2:41 do in 2024, you're working from
2:44 an expired map. But the cost of not
2:47 knowing what to build, of specifying
2:50 badly or vaguely or not at all, is
2:53 compounding much faster than production
2:54 cost is falling, which is a huge
2:56 statement because production cost is
2:58 falling really fast. Yet, every
3:00 framework people reach for to understand
3:02 this moment tends to ask the incorrect
3:04 question because it tends to ask whether
3:06 AI replaces workers and jobs. But when
3:08 the cost of production is collapsing
3:11 like this, the more useful question is
3:13 actually what is the new bottleneck
3:15 where jobs are going to be useful? What
3:17 is the new bottleneck where humans have
3:19 to get really clear? And guess what?
3:21 It's around intent. It's around those
3:24 specifications that engineers struggle
3:26 to write. All of knowledge work is
3:29 becoming an exercise in specifying
3:32 intent. And this video is about what
3:35 happens when those engineering mental
3:38 models get out into the rest of the job
3:40 space and we all have to think about
3:43 where our value is moving when it is not
3:45 doing the work. I think one place we
3:47 need to start when we understand jobs
3:50 and AI is the thinking of François
3:52 Chollet, the creator of Keras, one of the
3:53 sharpest thinkers in machine learning.
3:55 He made an argument that's become the
3:57 default framework for understanding AI
4:00 and jobs. He pointed to translation, a
4:03 profession where AI can perform 100% of
4:05 the core task and has been able to do so
4:08 since 2023. Translators did not
4:10 disappear. Employment has held roughly
4:12 stable since. The work has shifted in
4:13 the last couple of years from doing it
4:17 yourself to supervising AI output. Now
4:18 payment rates have dropped and
4:20 freelancers have gotten cut first. There
4:23 are new hiring freezes going on. So
4:26 there's impact on jobs. And yet, despite
4:27 all of that, the Bureau of Labor
4:29 Statistics still projects modest growth
4:31 for the translation job category.
4:33 Chollet's claim is that software is going
4:36 to follow the same pattern. More
4:38 programmers will be needed in 5 years,
4:40 not fewer. The jobs will transform
4:43 rather than vanish. I think the model is
4:44 useful for thinking, but I think it's
4:47 also stuck on the wrong question yet
4:49 again. Will software engineers keep
4:52 their jobs is not the most interesting
4:54 question when the cost of production is
4:56 collapsing towards zero because so many
4:58 of us as engineers frankly so many of us
5:00 as knowledge workers all of our work has
5:02 been in production and so if you're
5:03 going to take the cost of production to
5:05 zero will we keep our jobs is really the
5:07 wrong way to think about it. It's really
5:09 a question of what our job is going to turn
5:11 into. And so the interesting question, if
5:14 we ask about job transformation not just
5:16 for engineers but for everybody is what
5:19 is becoming scarce and therefore what is
5:22 becoming valuable when doing the work
5:24 when building is no longer the hard
5:26 part. Chollet doesn't have a framework
5:29 for that because translation's
5:31 capability plateau gave the market the
5:34 time to find a stable answer in the
5:36 translation job category. AI coding and
5:39 by extension AI knowledge work is on the
5:41 steepest part of the curve right now.
5:43 I've said before that I think benchmarks
5:46 are fairly easy to game. I'm not the
5:47 only person to say that. But the
5:50 production evidence of coding capability
5:52 gain is so unambiguous. You don't need
5:54 to pay attention to a benchmark to
5:56 believe it. You can just look at Cursor's ARR
5:57 and how fast they're growing. Look at
6:00 Lovable. Look at the ability to now have
6:03 agents review the code of agents.
6:05 Translation had a couple of years to
6:07 adjust because the technology
6:10 essentially solved translation and then
6:12 you had to figure out what to do with
6:14 it. Software may not get the same runway
6:17 because the depth of what's changing is
6:19 much more profound and the pace is even
6:22 faster. We need a different model to
6:24 understand how jobs in software and
6:27 knowledge work are going to change.
6:30 First, when cost goes to zero, demand
6:32 goes to infinity. Every time in economic
6:34 history that the marginal cost of
6:36 production has collapsed in a given
6:40 domain, demand has exploded. Desktop
6:41 publishing did not eliminate graphic
6:43 designers. It created a universe of
6:45 design work that could not have existed
6:48 at any price point prior. Cameras in all
6:51 of our phones created a universe of
6:53 photography that did not exist when
6:55 cameras were very expensive and only a
6:57 few people had them. Mobile didn't
6:59 replace developers. It multiplied the
7:01 number of applications the world needed
7:03 by orders of magnitude. Software is
7:05 about to go through the same expansion
7:08 except bigger. Right now, most of the
7:10 world cannot afford custom software.
7:12 Regional hospitals run on spreadsheets.
7:14 Small manufacturers still track inventory
7:17 by hand. School districts use tools
7:19 designed for organizations 10 times
7:21 their size or more, and some of them use
7:23 nothing at all. The total addressable
7:26 market for software is constrained not
7:28 by demand because demand is functionally
7:31 infinite. It's constrained by the cost
7:33 to produce. We are underbuilt on
7:36 software, even after 40 or 50 years of
7:38 software engineering. When the cost
7:42 of production collapses, the constraint
7:44 that kept us underbuilt lifts
7:46 forever. Every business process
7:48 currently running in email,
7:50 spreadsheets, phone calls is up for
7:52 grabs now. Every workflow that was never
7:54 worth automating at a $200 an hour
7:56 engineering rate becomes worth
7:58 automating at two bucks in API calls.
8:00 The market for software is not going to
8:03 contract. It is going to explode. And
8:05 that is the best argument for why total
8:08 software employment likely grows and not
8:11 shrinks. Chollet is right about that. The
8:13 demand for people who make software
8:15 happen, however they make it happen (it
8:16 may not be traditional coding; it won't
8:19 be), has never been higher. And the
8:21 cost collapse is going to push it higher
8:24 still. But I do want to be honest, just
8:26 because we can wave our hands and say
8:28 Jevons paradox means employment grows
8:31 does not mean your specific job is safe.
8:32 And understanding the difference
8:34 requires understanding what happens when
8:36 the constraint shifts from production to
8:38 specification. So let's talk a little
8:40 bit more about the specification
8:42 bottleneck. The majority of software
8:45 projects that fail don't fail because of
8:47 bad engineering. They fail because
8:49 nobody specified the correct thing to
8:52 build. "Make it user friendly" is not a
8:54 specification. "Uber for dog
8:56 walkers" is not a specification either.
8:59 It's just a vibes pitch. The entire
9:01 discipline of software engineering,
9:04 agile, sprint planning, etc. evolved as
9:06 a way of forcing specification out of
9:09 vague human language. We need mechanisms
9:11 for converting vague human intent into
9:13 instructions precise enough that code
9:15 can be written against them. That
9:18 vagueness problem has always been there.
9:20 What's new is that the friction of
9:22 implementation is changing. When
9:25 building something took 6 months and at
9:27 least half a million dollars,
9:29 organizations were forced to think
9:31 really carefully about what they wanted.
9:33 The cost of building acted like a filter
9:36 on the quality of the spec. If you take
9:38 away the cost of building, as AI is
9:39 doing, that filter is going to
9:42 disappear. The incentive to specify just
9:45 evaporated in all of your orgs and the
9:47 cost of specifying really badly is going
9:49 to keep compounding faster than ever
9:51 because now you can build the wrong
9:54 thing at unprecedented speed and scale.
9:56 A vibecoded app can take an afternoon
9:59 and 20 bucks in API calls and if the
10:02 spec is wrong, you did not save 6
10:04 months. You wasted an afternoon and
10:06 perhaps launched something that will harm
10:08 customers because the spec was never
10:10 right. This is the inversion we need to
10:12 pay more attention to because it tells
10:14 us a lot about where jobs are headed.
10:17 The scarce resource in software is not
10:19 the ability to write code. It's the
10:22 ability to define what the code should
10:25 do. And funnily enough, that is part of
10:27 why knowledge work is starting to
10:30 collapse into a blurry job family.
10:32 Because the ability to specify is
10:34 something we all need to do, not just
10:36 engineers. The person who can take a
10:38 vague business need and translate it
10:41 into a spec is the new center of gravity
10:43 in the organization. It doesn't matter
10:45 what their title is. It's obviously not
10:46 the person who writes the code; that's
10:48 disappearing. It's not the person who
10:50 reviews the pull requests, because
10:51 increasingly that's going to be an
10:53 agent. It's the person with enough
10:56 precision to direct machines and enough
10:58 judgment to know whether the result
11:00 actually solves the problem for
11:02 customers. Two classes of engineer are
11:05 emerging right now and engineering is
11:07 the tip of the iceberg. This is going to
11:09 be true of the rest of knowledge work as
11:12 well. Those two classes emerging right
11:14 now tell us where jobs are headed in
11:16 software. The first class of engineer
11:19 drives high value tokens. These guys
11:21 specify precisely. They architect
11:24 systems. They manage agent fleets plural
11:26 not singular. They evaluate output
11:28 against intention consistently. They
11:30 hold the entire product in their heads,
11:33 what it should do, who it serves, why
11:34 the trade-offs are correct, and why they
11:37 matter. And all they do is they use AI
11:38 to execute at a scale that was
11:40 previously impossible. One of the things
11:42 I want you to think about is that if we
11:45 are underbuilt on software, all of our
11:47 mechanisms are for underbuilt software
11:50 footprints. Imagine a world where your
11:53 engineers have to hold a 10x bigger
11:55 software footprint in their head because
11:57 AI has enabled that kind of scale. You
11:59 can say yes to everything the customer
12:02 wants with AI, but are your engineers,
12:05 are your product managers ready to hold
12:07 that level of abstraction in their
12:09 heads? Because if you can specify well
12:10 enough and orchestrate agents
12:13 effectively, the number of things you
12:15 can simultaneously build and maintain is
12:17 bounded only by your judgment and
12:19 attention, not by the hours in the day.
12:21 These people are going to command
12:23 extraordinary pricing power. The revenue
12:26 per employee data is off the charts. I
12:28 mentioned Cursor at $16 million. Well,
12:31 Midjourney is at $200 million with just
12:34 11 people. Lovable is past $100 million,
12:37 past $200 million soon. These are not
12:40 just outliers. This is the equilibrium
12:43 driven by extremely high value AI native
12:45 workers. When one person with the right
12:46 skills and the right agent
12:49 infrastructure can produce what a 20
12:50 person team produced a couple of years
12:53 ago, that person captures most of the
12:54 value that used to be distributed across
12:56 the team. The second class of knowledge
12:58 worker, the second class of engineer
13:00 operates at very low leverage and that
13:03 leverage is degrading: single-agent
13:06 workflows, Copilot-style autocomplete,
13:08 AI-assisted but not AI-directed. These
13:10 engineers, these knowledge workers are
13:12 doing the same work they've always done
13:15 faster and with better tooling and they
13:16 are being commoditized. I just need to
13:18 be honest with you, the signals are
13:20 already there in the data. Entry-level
13:22 postings are down something like two-thirds.
13:26 New graduates are at 7% of hires, which is a
13:29 historic low. 70% of hiring managers are
13:31 saying AI can do the job of interns. The
13:33 junior pipeline isn't narrowing at the
13:36 intake. It's collapsing because the low
13:39 leverage work that juniors used to do is
13:41 the work AI handles first and best. And
13:44 I want to be really clear here. I have
13:46 personally seen that this is not just a
13:49 junior problem. mid-level and senior
13:51 engineers that are sticking with the way
13:54 they've always worked are in this exact
13:56 same boat. Now, it's time to turn our
13:57 attention to one of the most popular
13:59 responses to the jobs debate, the
14:02 solopreneur thesis. The idea that
14:04 everyone becomes effectively a solo
14:06 capitalist and is able to, as a company
14:09 of one, unlock tremendous value. That
14:11 sounds really great, but I think it
14:14 captures something real about the first
14:16 class of developer, knowledge worker,
14:18 and not the second class. The ceiling
14:21 for what a single talented person can
14:23 build has absolutely risen through the
14:25 roof. But I think it's a thesis that
14:27 only 10 to 20% of the knowledge
14:29 workforce is positioned to take
14:32 advantage of today. You have to have
14:33 entrepreneurial instincts. You have to
14:36 have deep domain expertise. And you have
14:38 to have the stomach for risk and the
14:40 ability to ramp on AI tools quickly. I
14:43 love that if that's you. The world is
14:45 your oyster. You have never had a better
14:47 chance to build cool stuff. But for the
14:50 other 80%, the future is going to look
14:52 like smaller teams with higher
14:54 expectations and compressed unit
14:56 economics. It's not a revolution in
14:59 autonomy for them. It's not a revolution
15:01 in autonomy for you if you are building
15:04 with the same production model. Instead,
15:06 it's just more pressure on what it takes
15:08 to stay employed. And so, what is the
15:10 distinction? What is the difference
15:12 between the people who are in that top
15:15 10 to 20% and the world is their oyster
15:16 and they can drive high value through a
15:18 company or run their own company versus
15:21 the people who don't. I think it comes
15:24 down to the economic output generated
15:28 per unit of human judgment. That's the
15:29 bifurcation we're looking at in a
15:32 sentence. And the gap between those two
15:34 classes is going to widen as agent
15:36 capability increases because agents
15:39 force multiply excellent human
15:41 specification and judgment. That is a
15:42 learnable skill. By the way, I don't
15:44 believe this is written in stone. I am
15:46 talking about a percentage divide I have
15:48 observed in the real world. I am not
15:50 talking about something I believe is
15:53 inevitable. You can learn human spec
15:54 and judgment. That is absolutely
15:56 something that's doable. I have
15:58 exercises for it. It's something you can
16:00 accomplish. But I don't want to kid you,
16:02 your teams need to do it if you're a
16:04 leader. Individuals need to do it. The
16:06 companies that are able to get
16:09 from 10 to 20% to 30 to 40% of
16:11 their workforce in this position are
16:12 going to be much, much more competitive
16:14 because of the nonlinear value of
16:17 learning human judgment as a skill,
16:20 learning specification as a skill. In
16:22 the age of AI, software engineers are
16:23 just the canary in the coal mine here.
16:26 The entire coal mine is much bigger.
16:28 Knowledge work like analysis, like
16:30 consulting, like project management. It
16:33 all runs on the same substrate that AI
16:35 is already transforming in software. It
16:38 happens on computers. It produces
16:40 digital outputs. It follows, however
16:43 loosely, patterns that can be described,
16:45 formalized, and validated. Now, I know
16:47 the standard objection is validation.
16:49 Software has very clear built-in quality
16:51 signals. Code compiles or it doesn't.
16:54 Knowledge work is much vaguer. That
16:58 doesn't hold in 2026. Two forces are
17:00 converging to break that assumption.
17:02 First, a huge fraction of knowledge work
17:05 exists because large organizations need
17:07 to manage themselves. The reports, the
17:09 slide decks, the status updates. This is
17:11 the connective tissue of coordination. It's
17:13 nervous system that large companies need
17:15 to function. When organizations get
17:17 leaner, which is one of the things I've
17:19 been talking about a lot, AI is making
17:21 them leaner, and we are seeing it across
17:23 the board at big companies, that
17:24 coordination work isn't going to
17:27 transform with AI. It just gets deleted.
17:29 Brooks's law ends up working in reverse.
17:30 So Brooks's law talks about how
17:33 complicated it is to coordinate large
17:34 numbers of people and how that overhead
17:37 scales quadratically. Well, if you cut down
17:39 the number of people and make your team
17:41 leaner, it turns out you have
17:43 outsized gains in the ability to
17:46 coordinate efficiently. The work was not
17:48 valuable in itself. It was valuable
17:50 because the organization was too big to
17:52 function without it. And the
17:54 organization was big because it needed a
17:57 lot of production labor to sustain the
17:59 value. If you simplify the organization
18:00 and make it leaner, all of that
18:02 coordination work can be deleted.
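Brooks's point about coordination can be put in numbers: among n people there are n(n-1)/2 pairwise communication channels, so shrinking a team cuts coordination load far more than proportionally. A quick illustrative sketch:

```python
def channels(n: int) -> int:
    # Pairwise communication paths among n people (the quantity
    # behind Brooks's law: it grows quadratically with headcount).
    return n * (n - 1) // 2

for team in (50, 25, 10, 3):
    print(team, "people ->", channels(team), "channels")
# 50 people -> 1225 channels; 3 people -> 3 channels.
```

Cutting a 50-person org to 25 doesn't halve the coordination surface; it cuts it by roughly three-quarters, which is why so much coordination work simply disappears in leaner organizations.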
18:04 Second, the knowledge work that does
18:06 remain, the analysis, the strategy, the
18:08 judgment calls can be made more
18:10 verifiable. Consider what's already
18:12 happening in financial services. A
18:14 portfolio strategy used to live in a
18:16 deck and a set of quarterly
18:18 conversations. Now, it lives in a model
18:20 with defined inputs, testable
18:22 assumptions, and measurable outputs. The
18:24 strategy has effectively become a
18:27 specification. And once it's a spec, you
18:29 can validate it against data, run
18:30 scenarios against it, and measure
18:33 whether the execution of that financial
18:35 strategy matched your intent. Legal
18:37 is following the same path. Contract review
18:39 is becoming pattern matching against
18:41 structured playbooks. Compliance is
18:44 becoming continuous automated audits
18:45 against codified rules. Marketing is
18:47 becoming experimental design with a
18:49 measurable conversion funnel. The
18:51 mechanism is straightforward. You take
18:52 knowledge-work outputs that used to be
18:54 evaluated by vibes and you structure them
18:56 as a set of testable claims or
18:58 measurable specs and suddenly it is
19:00 subject to the same quality signals that
19:03 make software verifiable. Now I'm not
19:05 saying every piece of knowledge work can
19:06 be automated in exactly this way
19:09 tomorrow. But every year that frontier
19:12 is moving forward faster and faster and
19:14 the work that resists structuring tends
19:16 to be exactly the high judgment high
19:19 context work that only the most capable
19:21 people were doing anyway. So knowledge
19:24 work is converging on software not
19:25 because consultants will all learn to
19:27 code but because the underlying
19:29 cognitive task is actually the same
19:30 thing. You're translating vague human
19:33 intent into precise enough instructions
19:35 that human or machine systems can
19:37 execute them. The person specifying a
19:39 product feature and the person
19:41 specifying a business strategy are doing
19:43 the same work just at a different level
19:45 of abstraction. As the tools of
19:48 structuring, testing, and validating
19:49 knowledge work get better, the
19:51 distinction between those two is going
19:53 to collapse very very quickly. And with
19:56 it is going to collapse the insulation
19:58 that non-engineering knowledge workers
20:01 might assume they have. Guys, we're all
20:03 in the same boat with engineering now.
20:05 It's not a different boat. We're all
20:07 working with AI agents. Now, obviously,
20:09 if knowledge work is converging, like I
20:11 say, the practical question from a jobs
20:13 perspective is what do you do about it?
20:15 Obviously, the answer is not learn to
20:17 code. That's the wrong advice. It's been
20:19 the wrong advice for a while. Engineers
20:21 have spent 50 years developing
20:23 disciplines around a problem that
20:24 knowledge workers are only now running
20:26 into. And I think that we can learn from
20:28 the engineering discipline how to be
20:30 precise enough that a system can execute
20:32 intent. One of the things that is a
20:34 massive unlock for the rest of knowledge
20:36 workers is just learning some of the
20:38 basics that good engineers know. First,
20:40 hit the right level of abstraction
20:42 and learn to spec your work the way
20:44 engineers spec features. So a product
20:45 manager who writes "improve the
20:48 onboarding flow" is operating at the
20:49 wrong level of abstraction and is
20:52 producing the same category of failure
20:54 as a developer who writes "just make it
20:56 better" or "follow this prompt correctly."
20:58 Engineers learned painfully to write
21:01 good acceptance criteria, specific
21:03 testable conditions that define done.
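As an illustration of acceptance criteria outside of code: the metric names and thresholds below are invented, but they show how a vague request like "improve the onboarding flow" becomes a set of testable conditions that define done.

```python
# Hypothetical acceptance criteria for "improve the onboarding flow",
# expressed as checkable conditions instead of a vibes request.
# The metrics dict stands in for whatever analytics system you use.

def onboarding_done(metrics: dict) -> bool:
    criteria = [
        metrics["signup_to_first_action_minutes"] <= 5,   # time-to-value
        metrics["day7_retention_rate"] >= 0.40,           # users come back
        metrics["support_tickets_per_100_signups"] <= 2,  # less confusion
    ]
    return all(criteria)

# Example: measured numbers after a redesign.
after = {
    "signup_to_first_action_minutes": 4,
    "day7_retention_rate": 0.43,
    "support_tickets_per_100_signups": 1.5,
}
print(onboarding_done(after))  # True: every criterion is met
```

An agent (or a team) handed this spec can be evaluated against it; handed "make it better," it can only be evaluated against opinion.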
21:06 Guess what? We all need to do that as we
21:07 start working with agents. This is
21:09 becoming one of the single most
21:12 transferable skills in business. And you
21:13 should start practicing writing specs
21:15 today. And by the way, if you're a
21:17 leader listening to this, that goes for
21:19 you, too. Your strategy needs to be
21:21 spec-able. You should be able to say,
21:23 "these are the success criteria." I have
21:26 seen a lot of very terrible strategy
21:28 board decks in my time and I think this
21:29 would generally improve them. Second
21:31 major principle, learn to work with
21:34 compute. Don't just learn about compute.
21:37 Don't just learn about AI. A high value
21:39 AI worker, a high value engineer who
21:42 knows how to use tokens well is not
21:44 valuable because they know about Python
21:46 code or JavaScript or Rust. They're
21:49 valuable because they understand what AI
21:51 can and cannot do, how to structure a
21:53 task so an agent can get it done, and
21:55 how to evaluate whether what the agent
21:57 did was correct. Knowledge workers are
21:59 going to need that same literacy. If
22:01 you're a financial analyst, you should
22:03 be running your models through AI and
22:05 learning where they fail, which
22:07 assumptions they miss, which edge cases
22:09 they ignore. You should be testing
22:11 contract review agents against your own
22:14 judgment. The goal here is not to get to
22:16 a one-to-one replacement for your
22:19 judgment. It's to understand the machine
22:22 well enough to direct it and guide it
22:23 and guardrail it and catch it when it
22:25 makes mistakes. Third major principle,
22:28 make your outputs verifiable. I know
22:30 some people are running the other way
22:31 here. There are knowledge workers who
22:34 are deliberately sabotaging AI on their
22:35 teams because they don't feel like
22:37 they'll have jobs. That is a fault of
22:39 leadership. Leadership needs to give
22:42 people the support to lean in here
22:44 because you will not be able to automate
22:46 very quickly if you cannot figure out
22:48 how to make the dirty details of your
22:51 day-to-day work verifiable. Engineers
22:53 write tests. A function either returns
22:55 the right value or it doesn't. Knowledge
22:57 workers need to develop the equivalent:
22:59 structured outputs with built-in
23:01 validation. You should have data sources
23:04 on your market analysis. A project plan
23:06 should include measurable milestones.
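Here is a sketch of what "built-in validation" could look like for a project plan. The field names and validation rules are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Milestone:
    name: str
    due: date
    metric: str    # what is measured
    target: float  # the number that counts as "done"

@dataclass
class ProjectPlan:
    owner: str
    source_of_truth: str  # where the data behind the plan lives (hypothetical)
    milestones: list = field(default_factory=list)

    def validate(self) -> list:
        """Return a list of problems; an empty list means the plan is checkable."""
        problems = []
        if not self.milestones:
            problems.append("no milestones: nothing is measurable")
        for m in self.milestones:
            if not m.metric:
                problems.append(f"milestone '{m.name}' has no metric")
        return problems

plan = ProjectPlan(
    owner="growth team",
    source_of_truth="warehouse.analytics.signups",  # hypothetical table name
    milestones=[Milestone("beta launch", date(2026, 3, 1),
                          metric="weekly_active_users", target=500)],
)
print(plan.validate())  # [] -- every milestone names a metric and a target
```

The design choice is the point: a plan that can fail validation is a plan an agent, or a colleague, can actually be held to.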
23:08 And funny enough, we've been trying to
23:09 say this for a while as knowledge
23:11 workers. All of the eye rolling around
23:15 OKRs is a bit of an early preview of
23:17 making your outputs more verifiable.
23:19 Except now we really have to do it.
23:22 Next, learn to think in systems, not
23:24 documents. The deliverable of work used
23:26 to be a document of some sort for almost
23:28 everybody who is not an engineer. Now
23:31 you need to think in terms of the larger
23:34 system that your work is driving. A deck
23:36 requires a person who produces it every
23:38 quarter. A system requires a person to
23:41 specify it once and maintain it when
23:43 conditions change. Knowledge workers who
23:45 think in terms of systems. What are the
23:47 inputs? What are the rules? What
23:48 triggers action? How do you know it's
23:50 working? They are going to build things
23:52 that compound. Even outside of
23:54 engineering, knowledge workers who think
23:56 in terms of documents are just going to
23:58 produce AI that generates stuff faster,
24:00 but it's the same old stuff. We need to
24:02 start to learn to teach thinking in
24:04 systems as a core skill for every
24:07 knowledge worker. Finally, audit your
24:09 role for coordination overhead. If your
24:11 honest assessment is that most of your
24:13 work exists because your organization is
24:15 complex enough to require it, big enough
24:17 to require it, right? You have to align
24:19 stakeholders. You have to translate
24:22 between departments. You have to produce
24:24 reports that synthesize information from
24:26 lots of teams. You're really exposed in
24:28 the age of AI. It's not because you're
24:30 bad at your job. It's because the
24:32 organizational complexity that justifies
24:34 your job is the same thing that AI makes
24:37 unnecessary. The question to ask is
24:40 this. If my company were half or a
24:42 quarter of its current size, would my
24:45 role exist? If the answer is no, the
24:47 value you provide is likely linked to
24:49 coordination and coordination is the
24:52 first casualty in leaner organizations.
24:55 OpenAI is already making its internal
24:58 systems so transparent to knowledge
25:00 workers that they don't have to go and
25:03 query Slack messages at the
25:06 company. They don't have to go and look
25:08 for context from a meeting. They can
25:11 just hit the internal data system with
25:13 an agent-driven search and get exactly
25:15 what they need from 50 or 60 different
25:17 stakeholders and come back. That is
25:18 where organizations are starting to
25:20 move. You don't have to have a meeting
25:23 to get coordinated. You hit agentic
25:25 search and you see the data in front of
25:28 you. And so the move in that situation
25:30 is not to panic, it's to migrate toward
25:33 work that creates direct value. Look for
25:35 ways you can ring the cash register. How
25:37 can you build customer-facing revenue
25:39 generating products? How can you start
25:42 to think about your work in terms of
25:44 driving the direction of the business or
25:45 getting the data that drives the
25:46 direction of the business? There's lots
25:47 of ways to do this that don't
25:48 necessarily mean you're a product
25:50 manager, right? The business, any
25:52 complex business will have a lot of
25:55 operational arms that have to still
25:57 exist. Finance is still going to exist,
25:58 right? These functions aren't going
26:00 anywhere. Look for how you can be more
26:02 directly value producing in those areas.
26:04 None of this requires a computer science
26:07 degree. All of it requires adopting an
26:09 engineering mindset. And knowledge work,
26:11 to be honest, has resisted that for
26:13 decades. I have lost track of the number
26:15 of conversations I've had with
26:17 marketers, with customer service folks
26:18 over the years where they have said,
26:20 "Engineering is just too hard. I
26:22 couldn't be that precise." I got bad
26:24 news. We all need to be that precise
26:26 now. We all need to be testable. We all
26:28 need to be falsifiable. We all need to
26:30 understand our tools well enough to know
26:32 when they're wrong. So, if we step back
26:35 from the details of our day-to-day jobs,
26:37 what does the larger productivity and
26:40 jobs picture look like? Where is this
26:43 conflict around jobs and AI playing out
26:45 in the real world? We are in the trough
26:48 of a J curve right now. Census Bureau
26:51 research on manufacturing has found AI
26:53 deployment initially reduces
26:55 productivity by an average of 1.3
26:57 percentage points. I bet you didn't
26:59 expect me to say that. With some firms
27:02 dropping as much as 60 points before
27:04 they start to recover. The METR study
27:06 that I shared earlier this week
27:08 talked about the idea that there are
27:11 dark factories where AI agents not only
27:13 produce all the code but review all the
27:15 code. That same study found that
27:19 experienced developers were 19% slower
27:21 with AI tools despite believing they
27:23 were 24% faster. They just didn't
27:25 understand. This is the J curve of
27:28 technology adoption. Productivity dips
27:31 before it surges and we are in the dip.
27:33 What's interesting is that because AI is
27:34 moving so fast and because it's
27:37 influencing the economy so widely, we
27:39 know this is a J curve and not a
27:41 permanent degradation because we can
27:43 literally see the companies that have
27:45 figured this out and gotten to massive
27:47 multiples. We don't have to hypothesize
27:49 about Midjourney. We don't have to create
27:51 a hypothetical about Cursor. The employees
27:53 at those organizations really are that
27:54 productive and you can see it in the
27:56 numbers. So what comes after for all
27:59 the rest of us who don't work at
28:02 Midjourney and Cursor? Given the pace
28:05 of AI capability scaling, with agents
28:06 going from bug fixes to multi-hour
28:09 sustained engineering in under a year,
28:12 and three-person teams shipping what
28:14 10- or 20-person teams shipped last
28:15 year, my bet is that this entire thing
28:18 compresses. That has been the story of
28:20 this cycle, right? The software J
28:22 curve, the adoption cost that you face
28:24 before you get fluent, is going to
28:26 compress into something like 18 to 24
28:28 months, even for the rest of the
28:30 economy, even for non-AI-native
28:32 companies. And early adopters are going to
28:41 be past the bottom already. The
28:43 companies that figure out spec-driven
28:45 development and agent orchestration
28:48 don't just get to be more efficient,
28:50 they get to operate at speed, at
28:52 productivity ratios that make
28:54 traditional organizations look dead in
28:56 the water. A 10-to-80x revenue-per-employee
28:59 gap opens up. One of the things
29:01 that matters here is that the J curve
29:03 really is shaped like a J. When you get
29:06 past the bottom, you start to accelerate
29:08 really, really quickly because agent
29:10 gains start to multiply cleanly across
29:12 your business. So, if we look at the
29:15 broad arc of history, what kind of
29:17 historical analog actually makes sense
29:19 for us here? The historical parallel
29:21 that fits best is not the story of the
29:23 invention of ATMs and how that affected
29:25 bank tellers. It's not the story of
29:27 calculators and how that affected
29:29 mathematicians. It's actually the story
29:31 of telephone operators in the 1920s.
29:34 Those jobs did not disappear overnight.
29:36 But the people who held those jobs,
29:38 predominantly women and working-class at
29:40 the time, found themselves a decade
29:42 later in lower-paying occupations or out
29:45 of the workforce entirely. Overall
29:47 employment grew. New categories of work
29:49 emerged. But for the individuals in the
29:51 crosshairs, that was cold comfort. It
29:53 did not matter for those women. I think
29:55 we're in a similar moment, but I think
29:56 we have more tools to support each
29:58 other. And I think it's incumbent upon
30:00 leadership to do a better job than we
30:03 did in the 1920s. The economy is going
30:05 to create more software than ever, more
30:07 systems running on computers than ever.
30:09 It will probably be two or three orders
30:10 of magnitude more than we have today.
30:12 Computers will remain more central to
30:14 human society than at any point in
30:16 history. That part of the story is
30:18 genuinely structurally optimistic
30:20 because compute creates leverage and
30:23 leverage creates abundance for us. But
30:25 more jobs in the economy and your
30:26 individual jobs are very different
30:29 things. The bifurcation is already there
30:31 in the data. AI native companies are
30:34 exploding and picking up pieces of the
30:37 economic pie that traditional companies
30:39 are deserting. That is why you see the
30:41 collapse in the SaaS stock market over
30:43 the past couple of weeks. The gap
30:45 between engineers who can drive high-value
30:49 tokens and those who can't is literally
30:51 worth $285 billion, which is the amount
30:53 that Claude was able to wipe off of
30:55 traditional SaaS stocks by releasing a
30:56 200-line prompt for legal work. I did a whole video on that. The
30:58 point here is not an individual stock
31:00 drop. Whether or not it recovers, not my
31:02 problem right now. The point is to think
31:04 about knowledge workers and understand
31:07 that we need to have a much more
31:09 intentional conversation to ensure that
31:11 the 70 or 80% of knowledge workers who
31:13 are not pushing high-value tokens right
31:16 now get the skills to do so. How can we
31:20 think about the distribution of our
31:24 teams and look at each person on that
31:26 team as someone who can level up in
31:28 their agent fluency, someone who can
31:30 level up in their ability to write specs
31:32 and understand intent? Because that is
31:33 the new skill that's going to matter.
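To make the spec-writing skill concrete: one simple way to practice it is to pin down intent and constraints as executable checks before any implementation exists, the way spec-first tools push you to. This is a toy sketch, not anything from the talk; the `summarize` function and the `MAX_WORDS` limit are hypothetical examples.

```python
# Spec-first sketch: the intent ("summaries stay short and keep the key
# term") is written as checks BEFORE an agent builds the real thing.
# Everything here is a hypothetical example, not a real project's spec.

MAX_WORDS = 50  # constraint: a summary may not exceed 50 words


def summarize(text: str) -> str:
    # Placeholder an agent would be asked to replace with a real
    # implementation; for the sketch it just keeps the first MAX_WORDS words.
    return " ".join(text.split()[:MAX_WORDS])


def test_summary_respects_word_limit():
    # Goal/constraint check: output length stays within the word budget.
    summary = summarize("word " * 500)
    assert len(summary.split()) <= MAX_WORDS


def test_summary_preserves_key_term():
    # Intent check: the summary still mentions the term we care about.
    text = "The quarterly revenue grew because of the new pricing model. " * 10
    assert "revenue" in summarize(text)


if __name__ == "__main__":
    test_summary_respects_word_limit()
    test_summary_preserves_key_term()
    print("spec checks pass")
```

The point of the exercise is that the two test functions, not the placeholder body, are the deliverable: they state what "done" means before any code is generated.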
31:36 And there is no reason why we have to
31:38 leave people behind on that. It
31:39 absolutely is a skill issue. It's a
31:42 learnable skill. This transition is
31:44 going to happen whether or not we
31:46 prepare for it and support our teams.
31:50 The only variable is which side of the
31:52 bifurcation we're going to end up on and
31:54 whether we as company leaders are going
31:56 to lean in and support our teams in that
31:58 transition, and whether we as
32:00 individuals who are trying our best to
32:03 get through this AI transition are able
32:06 to learn the skill of thinking in terms
32:08 of giving clear intent, goals, and
32:10 constraints in our work rather than
32:12 doing the work itself. And that window
32:14 is closing fast, because AI agent
32:17 capability gains keep accelerating. The
32:19 technology is not going to wait for
32:20 organizations and individuals to catch
32:22 up. We have to lean in and help each
32:24 other. If you are on a team and you
32:26 understand what I'm saying, it is on you
32:29 to help your buddies on the team to
32:30 understand this better. If you're a
32:32 leader, it is on you to think about how
32:34 you build systems that support everyone
32:37 in your org. And if you are stuck, it is
32:41 on you to figure out how you can take at
32:43 least a single step toward understanding
32:47 what it means to give the agent a job
32:49 and watch it do the work. It might be as
32:51 simple as trying Claude in Excel and
32:52 watching Claude create something. Maybe
32:54 that's the simplest way to start. I have
32:55 some other exercises as well that I put
32:57 in the Substack that are at a range of
33:00 scales. But the larger point is that you
33:04 need to believe that there is hope at
33:06 the end of the tunnel and that the
33:08 company you're operating in and the job
33:10 that you're doing are things you can
33:13 pivot. If you think about it as tackling
33:16 a larger problem and specifying where
33:19 your agent needs to go to create value,
33:21 that's on us to do. The agent capability
33:23 is going to be there. It is on us to
33:26 specify enough of what we want that we
33:29 can create tremendous value with all of
33:31 this compute capability that we have. We
33:32 need to have better strategies. We need
33:35 to think bigger. It is actually rational
33:37 to think about boiling the ocean. We
33:40 were always told as companies, as
33:42 leaders, as product managers, don't boil
33:45 the ocean in your strategy. Well, if you
33:47 have the cost of production falling to
33:50 zero on software, why not think big? Why
33:52 not think courageously? Why not think
33:54 about producing more value? I think that
33:57 is a bold goal that can actually
34:00 catalyze a lot of transformational
34:02 change in the ways I'm talking about. It
34:04 can catalyze teams to work more leanly.
34:07 It can catalyze individuals to start to
34:08 think about how they can stretch and
34:11 grow and define what work agents do for
34:13 them so they can do more and lean more
34:15 into the direct production of value.
34:17 That is where we need to go. That is why
34:19 the future of jobs is not about
34:21 production of code or production of
34:23 work. It is about good judgment to
34:25 specify where agents are going. Best of luck.