0:02 AI is collapsing futures and most of us
0:04 are missing what that really means. We
0:06 think collapsing as in destroying.
0:07 That's not what I mean here.
0:10 Collapsing as in compressing is what
0:12 people are missing because AI is
0:14 collapsing multiple different dimensions
0:17 of our work lives into a single thread
0:19 pointing to the future. And we're
0:21 missing the deeper implications of that.
0:23 The first collapse is horizontal.
0:25 Engineer, product manager, marketer,
0:27 analyst, designer, ops lead. These used to
0:29 be distinct career paths with very
0:31 distinct skill sets. They're all
0:34 converging now very quickly into a
0:37 single meta competency orchestrating AI
0:40 agents to get work done. If you cannot
0:43 do that, none of the rest of the domain
0:44 knowledge is going to matter in late
0:48 2026. And yes, I don't want to lose the
0:50 fact that we still have folks who have
0:52 10 years, 15 years experience in these
0:54 individual domains: in front-end design,
0:57 in being an operational lead, in doing
1:00 deep back-end engineering. But you don't
1:03 have value there unless you can do the
1:05 orchestrating AI agents piece certainly
1:08 by late 2026, early 2027. That is how
1:10 fast this space is developing and I
1:12 don't think most of us are ready for it.
1:15 The second collapse is temporal: the
1:16 leverage you thought you could build
1:18 over the next five years. The way we've
1:20 been trained to think about career
1:22 ladders as these steady steps: wait two,
1:23 three years, next promotion. That
1:26 timeline is compressing into months. The
1:28 rate of AI capability improvement nearly
1:31 doubled in the last year and it's just
1:33 going to keep going faster. Both
1:36 collapses point to one conclusion. Now
1:38 is what matters. Not your 5-year plan,
1:40 not your eventual intention to get up to
1:43 speed on AI because the future keeps
1:45 arriving faster. Preparation means
1:47 engagement. And I'll add one more piece
1:50 here that I think is absolutely true
1:52 across everyone who engages with AI productively.
1:57 This is an art you learn by doing. You
2:00 do not get to learn to ride a horse by
2:02 reading a book as a friend of mine
2:05 called out. You do not get to learn to
2:07 swim by sitting in a deck chair and
2:09 watching the ocean. You just got to get
2:11 in. And that's very true of AI because
2:13 it's an experiential technology. Let me
2:15 go a bit deeper into the differentiation
2:17 between knowledge work roles because I
2:19 think a lot of times when you hear, "oh,
2:21 the knowledge work roles are collapsing,
2:22 they're all going to be the same," it
2:24 feels like a big claim. It feels like
2:26 it's overhyped.
2:28 Gartner's predicting that close to half
2:29 of enterprise applications will
2:31 integrate task specific AI agents by the
2:34 end of 2026. That's up from less than 5%
2:37 in 2025. It's absolutely exploding. It's
2:39 an eight-fold increase in just over a
2:43 year. 57% of companies, as of 2025,
2:45 claim to have AI agents in production.
2:47 Now those can have varying degrees of
2:49 competency, but the direction is
2:52 clear: it's exploding. So what this
2:54 means is that specific domain
2:56 expertise is going to be mediated
2:58 through these universal AI skills. The
3:01 differentiation is going to be whether
3:03 you can apply your marketing skills,
3:05 your engineering skills, your finance
3:07 skills, whatever it is in an AI
3:10 agent-shaped way. Think about what a
3:12 product manager does today versus two
3:14 years ago. The job used to require
3:17 synthesizing customer feedback, writing
3:18 specs, coordinating with engineering,
3:20 managing stakeholders, and now
3:22 increasingly the job involves just
3:25 prompting models to draft specs and using
3:27 AI to analyze customer data. And you're
3:29 often now using agents to update
3:31 tickets. You're using agents to directly
3:34 build in production. Your entire job is
3:36 radically different. And that pattern
3:38 repeats across every function. Legal
3:40 teams using AI to review contracts are
3:42 compressing jobs that took weeks into
3:45 hours. Finance teams can now use Claude
3:47 in Excel to build projections that used
3:50 to take days. Customer success teams can
3:53 run AI agents that handle 80% of initial
3:56 inquiries, or 90%, or 95%. There is going to
4:00 be a fundamental turnover of skills
4:03 across every one of these job families.
4:04 What used to be 50 different
4:06 specializations is going to converge
4:09 into variations on a single theme.
4:11 Humans directing AI with good knowledge
4:14 and good software-shaped intent toward
4:16 an outcome. I've talked about
4:17 software-shaped intent before. I think it's one
4:20 of the biggest skills we're missing when
4:22 we direct agents. We need to think in
4:25 terms of what agents can deliver within
4:27 the technical ecosystem they occupy.
4:30 Where is the agent's tool set? Where is
4:31 the agent's memory? Where is the agent's
4:33 workflow? When I direct the agent to do
4:35 something, is it going to look
4:38 software-shaped? As in, is it going to be
4:40 an interface that adequately reads and
4:43 writes data so that I can solve the
4:45 problem? Software is leverage expressed
4:48 in silicon. Fundamentally, if you know
4:50 how software works, and so much of
4:52 software is just reading and writing
4:54 data and presenting it in a way that's
4:56 useful, if you start to think in those
4:58 terms, you're going to be able to apply
5:00 the specific domain knowledge you have
5:02 in design, in finance, in customer
5:04 success, and you're going to be able to
5:07 use AI agents more effectively. Even if
5:09 your job isn't building software, this
5:10 used to be a product only thing or an
5:13 engineering only thing. The idea that we
5:16 now work with agents is becoming
5:17 universal. And the idea that we have to
5:20 think in software terms is coming out of
5:21 the technical box. It's coming out of
5:23 engineering. It's coming out of product.
5:25 It's coming for all of us. And I want to
5:27 be clear, your expertise doesn't
5:29 disappear here. It just becomes
5:31 foundational rather than differentiating
5:34 by itself. You need to have great domain
5:35 knowledge to direct AI effectively. It's
5:38 part of how seniors compete in a world
5:40 where everyone has access to the same AI
5:42 tools. But you have to be able to
5:45 leverage that through AI. And I think
5:47 most people think of that still in terms
5:49 of their specific domain. We have this
5:51 sort of single lane focus. And what I'm
5:52 calling out is that we have a giant
5:54 bottleneck on skilling. Like all of our
5:56 skills are starting to converge around
5:59 this one gigantic meta skill of driving
6:01 AI agents. The second collapse I want to
6:03 talk about, I mentioned temporal
6:05 collapse. This is really important and
6:08 we keep missing it. Career leverage is
6:10 compressing into the present moment
6:12 because AI is accelerating time.
6:14 Consider even just the SWE-bench coding
6:17 benchmark. AI systems could solve 4% of
6:19 problems in 2023 and they've essentially
6:21 solved the entire benchmark 2 years
6:23 later. I don't know exactly what it's
6:24 going to be when you see this video, but
6:26 it's around 90 to 95%.
6:29 SWE-bench is saturated, and the fact
6:31 that we saturated it is not even the
6:34 most important thing. The doubling time
6:36 to get that number up is shrinking. AI
6:39 progress is accelerating. Traditional
6:40 career planning assumed you had the
6:42 time. Learn a skill, apply it for years,
6:44 build expertise, get promoted,
6:47 eventually take that expertise and
6:49 figure out how to leverage it in
6:51 leadership. That timeline gave you a
6:53 sense that you could plot out your
6:55 growth over time and get some breathing
6:57 room. You could be strategic about when
6:59 to invest your learning energy. And that
7:01 assumption, if you take it at face value
7:04 like you could in the 2000s and 2010s,
7:06 that's now catastrophically wrong
7:08 because you have to assume a career path
7:11 where AI is gaining speed ever more
7:13 rapidly. And this creates a really tough
7:15 dynamic for career planning. I don't
7:17 want to sugarcoat that. The skills that
7:20 will matter in 2027 are being defined
7:23 now by people engaging now. If you wait
7:25 until the tech settles down, you're
7:26 going to find that the early adopters
7:28 have already built the workflows,
7:30 established the norms, and captured the
7:32 opportunities that you were waiting for.
7:34 They'll have two years of compound
7:35 learning while you're still figuring out
7:37 the basics. So, I cannot promise
7:39 you that this will settle down. This
7:42 is a chaotic period. There is no mature
7:44 state to wait for. There is only a
7:47 continuously steepening curve and it's
7:48 going to reward folks who can climb in
7:51 early and go faster. I compare AI to
7:54 riding a bike. If you are going slow on
7:56 a bike, it's really hard to balance and
7:58 you feel like you're never going to
8:00 catch up. But experientially, when you
8:03 go faster on a bike, the steadiness
8:06 increases. Balancing gets
8:08 easier. And kids have so much trouble
8:09 learning this. They think if they go
8:11 slower, they'll be safer. But they're
8:14 actually safer going faster. And that is
8:16 what you have to learn with AI. You're
8:18 actually safer leaning in and going
8:21 faster than you are going slower because
8:23 slower forces you to constantly think
8:25 about braking and stopping and slowing
8:27 down and figuring out how you can adjust
8:29 and work this into your existing
8:31 workflow. And I see so many of us acting
8:33 like kids on a bike for the first time.
8:34 We're just trying to figure out how to
8:37 go very slowly. I got to say AI is going
8:39 too fast for that. You got to get on the
8:40 bike and go as quick as you can because
8:42 that's the easiest way to balance. Like
8:44 people ask how I keep up. It's because
8:46 I'm going pretty fast on the bike and it
8:48 feels really steady. The old career
8:50 model assumed your expertise appreciated
8:52 over time. You would learn something
8:54 valuable. It would stay valuable and
8:55 gradually it would compound. The new
8:56 model is really different. Your
8:59 expertise atrophies. It depreciates
9:01 unless you continuously update it. And
9:03 the depreciation rate is accelerating
9:06 because AI progress is going faster. I'm
9:08 not trying to argue for panic here. It's
9:10 an argument for continuous engagement.
9:13 The people who are thriving now are not
9:16 the ones who just go to an AI class and
9:18 master it once and then coast. They're
9:20 the ones who develop the meta skill of
9:22 continuously learning and adapting as
9:25 the tech evolves. The half-life of any
9:27 given piece of specific AI knowledge is
9:29 short and it's getting shorter. The
9:33 half-life of the learning habit around AI
9:35 is getting longer and more durable. If
9:36 you doubt the magnitude of what's
9:38 happening, follow the money. This is the
9:40 biggest capex project in human history.
9:42 Big tech's combined AI capital
9:45 expenditure was close to half a trillion
9:47 dollars in 2025. It's going to be well
9:50 over half a trillion in 2026. And in
9:53 total, the big five, Amazon, Microsoft,
9:55 Google, Meta, and Oracle, plan to add
9:59 at least $2 trillion in AI-related
10:02 assets in the next four years. This is a
10:04 tremendous amount of operational
10:06 investment in what these companies
10:08 believe is the future. The money is
10:10 committed. AI is happening and is going
10:12 to define the next era of computing so
10:15 thoroughly that we have got to
10:17 understand that there is no other way
10:19 there. The only way out and through is
10:21 AI. And that's what I mean when I say
10:23 it's collapsing timelines and
10:25 compressing career trajectories. There
10:27 is no other way through the career path
10:29 that does not include AI. And things get
10:31 uncomfortable at that point. Many people
10:33 are resisting and you don't have time to
10:36 resist. If you tried ChatGPT in 2022
10:37 and said it hallucinated and you just
10:39 left it, you don't have time for that
10:41 anymore. You don't have time to say I'll
10:43 wait until it matures. That's
10:44 like sitting by the bike and saying,
10:46 I'll wait till it gets steadier. It's
10:48 not going to get steadier. You don't
10:50 have time to say my job is immune. I got
10:52 news for you. It's not immune. Anytime
10:54 you are touching a computer, you are
10:57 touching AI. That's how pervasive it's
10:59 going to be in the next year. And to be
11:01 clear, for people who are saying, I want
11:04 to exit the ride. I want to stop a tech
11:06 career. I have respect for that. And I
11:08 know folks who have said, I had my
11:10 career. I think I'm done. I want to open
11:13 a bookshop. I want to go and start doing
11:16 carpentry. That's fine. That's great.
11:17 That is something that you can choose to
11:20 do intentionally. I think that's a much
11:22 more productive choice than trying to
11:24 stay in the industry that is converging
11:27 on AI and trying to resist that. That's
11:28 just not going to go well and it's going
11:30 to make everybody including you kind of
11:33 miserable. And so if you really think AI
11:34 is not for you, I think the best thing
11:36 you can do is pick that alternate career
11:38 path that takes you away from the screen
11:39 because if you're going to stay in
11:41 fields touched by AI, which is
11:42 increasingly everything to do with a
11:44 computer, you're going to have to
11:46 engage. I want to close by giving you
11:48 some encouragement. It is easy to look
11:51 at this and to be doom and gloom. It is
11:52 easy to say, I did not make the choice
11:55 for AI. I would argue none of us did.
11:57 The industry as a whole made that choice
11:59 and we are all living through this
12:01 moment together. We did not choose to
12:04 compress timelines. We did not choose to
12:06 compress career paths. It's happening
12:09 for us. And I have seen over and over
12:12 again that when people recognize that
12:14 and when they choose to say even though
12:16 I didn't get to pick this, I am going to
12:18 choose to engage with AI with curiosity.
12:19 I'm going to choose to engage AI and
12:21 learn to ride the bike. I'm going to
12:23 lean in as far as I can lean in even if
13:26 I'm not quite sure. That is going to get
12:28 you so much farther. It is going to get
12:30 you an accelerated rate of learning.
12:32 You're going to be less overwhelmed.
12:34 Curiosity literally opens up your brain.
12:38 And we need openness to this AI world if
12:40 we want to be able to shape it in a way
12:42 that works for us. And I've seen
12:44 dozens and dozens of
12:47 examples where people have chosen that
12:49 positive path. They've chosen to lean in
12:52 in widely differing fields: healthcare,
12:55 tech, finance, engineering, product.
12:57 I've even seen folks lean in on like
13:00 small town community building in AI. And
13:02 without exception, that choice to
13:05 positively lean in has taken them
13:06 farther. And so, if I can leave you with
13:09 anything in the middle of a timeline
13:11 that feels like it's increasingly wild
13:13 and unpredictable, it's just an
13:15 invitation to get on the bike with AI.
13:17 You got to go faster. And you've got to
13:20 be able to believe that if you lean in
13:23 and try, if you jump in and you say,
13:25 "All right, I'm going to
13:26 try something new. I'm going to try
13:27 Claude Code." Whatever it is that's new
13:29 to you, right? I'm going to try Lovable.
13:30 I'm going to try a different way of
13:32 working with my chatbot. Great. And then
13:34 do the next thing. And then lean in a
13:35 little farther. And then lean a little
13:36 farther. And you're going to go faster
13:39 and faster and faster and faster. And
13:40 it's going to feel steadier over time
13:43 because you're going to pick up how AI
13:46 works across all of these systems in
13:48 your unconscious brain. The patterns
13:49 will start to solidify and you're
13:51 basically learning to work with this new
13:53 piece of technology in a way that feels
13:55 very stable over time. And so if I can
13:57 encourage you with anything, it's that
13:59 going faster is safer and less scary.