0:02 10 days. That's how long it took
0:04 Anthropic to build and ship Claude
0:06 Co-work after they noticed something
0:07 their product team was not expecting.
0:09 Developers were using their own coding
0:12 tool to organize expense receipts. And
0:13 really that story of the timeline
0:15 matters more than anything else about
0:17 the launch of Claude Co-work this week.
0:19 It's not the expense receipts that are
0:20 interesting. It's that the timeline
0:23 reveals how Anthropic and AI-native
0:25 organizations operate and how that
0:27 operational velocity is becoming as much
0:29 a competitive advantage as the models
0:31 themselves. Here's what happened. Claude
0:34 Code launched as a terminal-based agentic
0:36 coding tool. Engineers used it to write
0:38 software, debug production issues,
0:40 refactor legacy code bases. The tool sat
0:42 in the terminal because that's where
0:44 developers live. And it worked because
0:47 the underlying architecture, a sandbox
0:49 agent that could read files, write
0:51 files, execute plans, and loop humans in
0:53 on progress, turned out to be a
0:55 genuinely reliable model for production
0:58 work. And so, Anthropic's internal data
1:02 shows that they saw a 67% increase in
1:04 merged pull requests per engineer per
1:06 day. Engineers don't inflate those
1:08 numbers for fun, guys. If engineers were
1:10 using it, it was because it was useful.
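The loop described here — read files, draft a plan, get sign-off, execute, and report progress — can be sketched in a few lines. This is purely illustrative; the function and parameter names are mine, not Anthropic's actual architecture or API.

```python
from pathlib import Path

def run_agent(workdir: str, goal: str, approve) -> list[str]:
    """Minimal agent loop sketch: read files, propose a plan,
    loop the human in for approval, then execute step by step."""
    files = [p for p in Path(workdir).rglob("*") if p.is_file()]
    # Plan: one illustrative step per file (a real agent would
    # ask a model to draft this plan from the goal).
    plan = [f"process {p.name} toward: {goal}" for p in files]
    if not approve(plan):          # human-in-the-loop checkpoint
        return []
    done = []
    for step in plan:
        done.append(step)          # a real agent would edit files here
        print(f"[{len(done)}/{len(plan)}] {step}")
    return done
```

The shape matters more than the details: concrete inputs, a visible plan, and progress the human can watch.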
1:13 But then the Claude Code product team
1:15 noticed something in the usage patterns.
1:18 People were not just writing code. They
1:20 were pointing Claude code at folders
1:22 full of receipts, full of other things,
1:24 full of photos, and asking it to produce
1:25 expense spreadsheets, or to categorize the
1:27 photos from the family vacation. You get
1:29 the idea. They were asking it to
1:31 organize messy downloads directories.
1:33 They were using a coding tool for
1:35 research synthesis, for transcript
1:37 analysis, for file management, anything
1:39 that could be expressed as "here are some
1:42 files, here's what I want, make it happen."
1:44 Now, it's easy to think that a PM would
1:47 treat this as scope creep, right? Instead,
1:50 Anthropic shipped the same underlying
1:52 agent architecture you get with Claude
1:54 Code, now wrapped in a UI that
1:56 doesn't require anyone to be technical
2:00 at all. So, 10 days from observation to
2:02 launch. But here's what makes this more
2:04 interesting than pure speed: people have
2:06 been asking for exactly this capability
2:09 for a while. And the moment Claude Code
2:11 demonstrated what agentic AI could do in
2:13 a terminal, non-technical users started
2:15 saying, "I'd love to get access to
2:17 something similar; I'm not a coder." But
2:19 demand alone doesn't tell you whether
2:21 the capability is actually going to work.
2:24 And so what Anthropic was looking for
2:27 was validation, and they got it, both from
2:28 their own product data from developers
2:31 already using Claude Code for those tasks
2:34 and from what they saw over the
2:38 holidays, with people using general-purpose
2:40 Claude Code agents to do everything from
2:42 growing their tomato plants to building
2:45 sensors for their homes to writing and
2:48 shipping production software to writing
2:50 and shipping their own to-do lists,
2:52 right? Things that would help you brief
2:54 and get ready for your day. And so when
2:56 they saw all of those different use
2:58 cases emerging, it became undeniable
3:01 that what they were sitting on was
3:03 perhaps the first truly general purpose
3:05 agent. Now compare their speed of
3:08 response to classic enterprise software
3:09 timelines. I mean, this is a big
3:11 company, right? Claude Code is running
3:13 at billions of dollars in run rate. A
3:14 feature request would typically go
3:17 through months of reviews before anyone
3:19 wrote a line of code, and obvious market
3:21 demand would have to be approved and
3:23 docs would have to be written. It's just
3:24 not like that. They turned around and
3:26 said, we're going to build it. They used
3:27 Claude Code to build it, and then they
3:29 built co-work in a matter of like a week
3:31 and a half or so. This matters because
3:34 the AI race is no longer just about
3:36 models. It's about who can observe user
3:38 behavior, recognize what's actually
3:41 working, and rapidly ship responses
3:43 before competitors jump in and grab the
3:45 market. Now, if you were anywhere near
3:48 tech Twitter over the 2025 holidays,
3:49 Claude Code was all over your timeline.
3:51 Engineers were posting about their
3:52 productivity gains. Founders were
3:54 building entire products in a weekend.
3:56 There was an entire Google principal
3:57 engineer thread that hit five and a half
4:00 million views because Jaana Dogan said that
4:03 she had prototyped the product that she
4:06 spent an entire year on with her team at
4:08 Google in one coding session with Claude
4:10 Code. Helen Lee Cup, a mom who voice-records
4:13 ideas during morning walks, not
4:14 a developer, was writing about how she
4:16 figured out how to use Claude Code anyway
4:18 to build what she wanted. So, it's not
4:20 that Claude Code was a secret. It's that
4:23 the story was getting out and people
4:24 were figuring out how to use the
4:27 terminal despite themselves. And that's
4:29 that's exactly the problem.
4:31 Non-technical users could see the
4:32 capability. They could watch engineers
4:34 accomplish in hours what used to take
4:36 days. They could read the threads, but
4:39 it takes a special kind of non-technical
4:41 user to jump into the terminal, look at
4:42 the blinking cursor, not get
4:45 intimidated, and just go with the text.
4:47 The capability was really visible in
4:49 testimonials from all kinds of people,
4:51 but the access was not. And so what
4:53 gradually emerged over the last month or
4:56 two is a conviction that what was
4:57 special about Claude code wasn't the
4:59 code part at all. The underlying
5:01 capability, an AI that can read your
5:03 files, understand your instructions,
5:05 make a plan, and execute a multi-step
5:08 workflow, works for almost anything
5:10 expressible as a task with inputs and
5:12 outputs. The "code" part ended up being a
5:14 branding constraint, an
5:16 insistence on a framing that isn't true
5:19 for general-purpose work. And so Co-work
5:21 keeps all the best of Claude Code, the same
5:23 architecture, and puts it in a friendlier
5:25 package. You can point it at a folder
5:27 using an interface, right? You just
5:29 click and point. You can describe what
5:31 you want in a chat and walk away. It
5:34 makes a plan. It shows you the plan. It
5:35 executes the plan autonomously. It loops
5:38 you in on the progress just like Claude
5:39 Code does, but you're not in the
5:41 terminal. You can queue up multiple
5:43 tasks and let Claude work through them
5:45 in parallel, which feels less like a
5:46 conversation and more like leaving
5:48 multiple messages for a co-worker. I
5:50 think this is a very 2026 experience.
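That queue-of-tasks interaction can be sketched with a thread pool: several independent tasks submitted at once, results collected as each finishes. This is a toy illustration of the interaction shape, not Co-work's implementation; `worker` stands in for whatever actually executes a task.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_tasks(tasks, worker):
    """Queue several independent tasks and let them run in
    parallel, like leaving multiple messages for a co-worker."""
    with ThreadPoolExecutor() as pool:
        # One thread of work per queued task; results are collected
        # as each finishes, not in a back-and-forth conversation.
        futures = {name: pool.submit(worker, name, spec)
                   for name, spec in tasks.items()}
        return {name: f.result() for name, f in futures.items()}
```

The point is delegation: you hand over six things at once and review the outputs, instead of trading one prompt for one reply.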
5:52 Instead of saying, I'm going to have a
5:54 long-running iterative chat and I'm going
5:55 to try and prompt everything exactly
5:57 right, it's going to look more like: I
5:59 have six different things I want to do.
6:00 I'm going to type in six different
6:02 messages and get six different threads
6:04 going. And the agent is going to work on
6:06 all of them at once. And here's where
6:08 the strategic picture gets interesting.
6:10 Microsoft Copilot is a coding agent.
6:12 It lives in the browser, in the cloud.
6:14 Google Workspace AI lives in the browser
6:16 and the cloud. There are other tools.
6:18 Do Anything is a great example of a new
6:21 tool that came out in 2026. It lives in
6:23 the browser. The interaction surface is
6:25 web applications. The value proposition
6:28 is we navigate websites on your behalf.
6:30 Co-work is different because it operates
6:33 at the file system level and can also
6:35 use the browser. And so the interaction
6:37 surface is the folders on your local
6:39 machine plus anything it can touch on
6:41 the web. And so the value proposition is
6:43 that it processes the work artifacts
6:45 that are already in your world and
6:47 anything you can touch on the web.
6:49 That's pretty powerful. In a sense,
6:51 these aren't directly competing
6:54 paradigms. They're complementary. And
6:56 I think Anthropic knows that: Co-work
6:58 integrates with Claude in Chrome
7:00 precisely in order to bridge those
7:02 modes. And the file-system-first design
7:04 reflects a specific thesis about where
7:06 your leverage as a worker
7:09 actually lives. So browser agents are
7:11 really constrained by the adversarial
7:13 nature of the web. The web is designed
7:15 for humans, right? Sites can block them.
7:17 CAPTCHAs can stop them. Login flows
7:19 break them all the time. Every
7:21 interaction ends up being mediated by
7:23 interfaces that are designed for us, for
7:25 people, maintained by companies that are
7:27 interested in selling to people, and
7:29 that have really no particular interest
7:31 at this time in making life easier for
7:32 AI agents, although that may soon
7:35 change. The error surface is enormous
7:36 because you're navigating systems that
7:39 you can't control. Now, I will say these
7:41 web agents have made enormous progress
7:43 in getting more accurate at navigating
7:45 the web and in reliably asking you to
7:46 intervene. I see that across not just
7:49 Claude in Chrome but across the
7:51 Atlas browser, across Comet, and across
7:53 others as well. On the other hand, file
7:55 system agents operate in territory that
7:58 is entirely yours. Your files don't have
8:00 bot detection. Your folders don't
8:01 require authentication, do they? Most of
8:03 them. The agent can read, it can write,
8:05 it can execute with permissions that you
8:08 explicitly grant. The environment is
8:11 cooperative rather than adversarial. And
8:13 that's a huge difference. The strategic
8:16 implication is simple, but it kind of
8:18 pops out once you look at it. Browser
8:21 agents will always be a little bit
8:23 brittle for high stakes tasks because
8:26 the web fights back. The web is
8:28 adversarial because it needs to be from
8:30 a security perspective. File system
8:33 agents can be robust because your local
8:35 machine is not adversarial. Your local
8:38 machine is friendly. And so Anthropic's
8:41 bet is that long-term most valuable
8:42 knowledge work ends up living in your
8:44 files. It lives in your docs, your
8:46 spreadsheets, your notes, your receipts,
8:48 your recordings, stuff that gets on your
8:50 hard drive or in your Google Docs. And
8:51 that processing these artifacts is where
8:54 the real productivity leverage sits long
8:57 term. Now, of course, they added in web
9:00 and you can use web browsing in co-work.
9:03 I tried it. It works really well. All you
9:06 have to do is ask Co-work to do a task
9:07 and make sure that you provide it the
9:10 appropriate login directly in Chrome.
9:12 You'll see a handy little yellow tab
9:14 group that belongs to Claude and you're
9:17 off to the races. And so it's not like
9:19 Claude is limiting web access. It's more
9:23 that Claude is recognizing that the
9:25 leverage that you see comes from owning
9:29 a friendly place where work happens,
9:31 which is your file system. It's a
9:33 non-adversarial space and Claude can
9:35 touch it really easily. This may force
9:37 Microsoft's hand. Neuron Daily came out
9:39 with a prediction that Microsoft will
9:41 have to launch a desktop native general
9:44 agent to compete. And I actually think
9:45 they're underselling it. I think
9:47 everybody is going to launch a desktop
9:50 native general agent in 2026. This is
9:52 the year of the desktop native general
9:55 agent wars because everybody is going to
9:58 get disintermediated
10:02 by this handy little tool that is effectively an inbox
10:04 where you can do work. Wouldn't you
10:06 rather be in one place and say, "Hey,
10:09 get me my briefing for the day. Hey, get
10:11 me these three metrics I care about from
10:13 my dashboards. Hey, make sure my
10:15 presentation is ready and give it a
10:17 final polish." And it's all done in one
10:18 place. You don't have to switch between
10:20 PowerPoint and Tableau and whatever else
10:23 you're doing. And Claude for the first
10:25 time offers that kind of promise with
10:27 co-work. That's why this is such a huge
10:31 deal. This is a cruise missile aimed at
10:33 the heart of knowledge work. Everything
10:35 you do as a knowledge worker is about
10:38 file-ins and file-outs. It's about
10:40 modifying information. And for a long
10:44 time in 2024 and 2025, you chatted with
10:46 something and then you had to take those
10:47 inputs and outputs and put them
10:50 somewhere else. Well, not anymore. You
10:51 can actually directly interact with
10:54 them. Now, the immediate question that I
10:57 have and I bet you have is how does that
10:59 relate to the concerns about sloppy
11:00 work? We've had a lot of concerns,
11:02 especially in late 2025, about people
11:05 just throwing AI work that they didn't
11:07 check and didn't pay attention to kind
11:09 of over the wall and saying, "Good luck,
11:12 y'all." And that's not good citizenship.
11:14 It's not good for
11:16 building community. It doesn't help you
11:19 in your career. It's slop and it's bad.
11:21 And so, the interesting thing about
11:24 co-work is that it's designed to be
11:26 anti-slop. It doesn't mean you can't
11:29 misuse it. You can, but it's designed to
11:30 be more thoughtful. And this deserves
11:32 some unpacking because the anti-slop
11:34 thesis is much more interesting than I
11:36 first thought. And the more I dug into
11:38 Co-work, the more I saw that
11:39 thoughtfulness underneath. Ultimately,
11:41 the work slop crisis isn't about AI
11:43 being bad at writing. It's about AI
11:46 making it frictionless to produce very
11:48 passable-looking output that shifts the
11:51 cognitive burden, the real thinking
11:53 you need to do, downstream. And
11:55 so the person receiving the AI-generated
11:58 memo now has to do the thinking the
12:00 sender skipped. If you generate your PRD
12:02 and don't look at it, the engineer has
12:03 to think about it instead of the PM. And
12:06 the result is communication that looks
12:09 like work but functions as a tax on
12:12 attention. In fact, a study by BetterUp
12:14 quantified this at nearly 2 hours spent
12:16 per piece of work slop received, which
12:19 adds up to a lot of lost productivity.
12:20 And so Co-work's design makes several
12:22 specific bets against this pattern.
12:26 First, unlike a chat, the core output of
12:29 this tool is an artifact, not a text
12:32 blob. When you ask Co-work to process,
12:33 say, your expense receipts into a
12:36 spreadsheet, it produces an Excel file
12:38 with working VLOOKUP formulas and
12:41 conditional formatting, not a CSV that
12:43 you then clean up, not markdown you have
12:44 to copy paste. The output is the
12:47 deliverable. This matters because work
12:49 slop typically lives in the gap between
12:51 the AI generated draft and the usable
12:53 work product. Co-work tries to close
12:55 that gap by producing files that don't
12:57 require the human cleanup pass.
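The "artifact, not text blob" idea can be illustrated with a toy version: emit a finished spreadsheet file with the total already computed, rather than text the reader has to clean up and paste somewhere. A CSV stands in here for Co-work's Excel-with-formulas output, just to keep the sketch stdlib-only; the function name is mine.

```python
import csv

def write_expense_report(rows, path):
    """Produce the deliverable itself: a spreadsheet file with the
    total already filled in, not a draft someone must clean up."""
    total = sum(amount for _, amount in rows)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["item", "amount"])
        writer.writerows(rows)
        writer.writerow(["TOTAL", total])
    return total
```

The output file is ready to open and use; no copy-paste pass sits between the AI's draft and the work product.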
12:59 Essentially, if you can define your
13:02 intent well enough, Claude Code, now
13:04 dressed up as Claude Co-work, is able to
13:07 do a good enough job that it will get it
13:08 all the way done. And of course, that
13:10 depends on your ability to define intent
13:12 well, which is one of the key skills of
13:14 2026. The second thing to call out here
13:16 is that the architecture is borrowed
13:18 from a context where slop is immediately
13:21 fatal. So Claude Code users are typically
13:23 writing software, often production
13:25 software. If the output required
13:27 constant cleanup, engineers just would
13:29 drop it. And yes, there's a lot of
13:31 talk about how, as you ship more and more
13:33 code, you ship more and more bugs. But
13:37 at the end of the day, you can still use
13:40 AI tooling to review large masses of AI
13:42 produced code and get very high quality
13:45 code results in late 2025, early 2026.
13:47 Anthropic's thesis is that the same architecture
13:50 that produces trustworthy code can
13:52 produce trustworthy knowledge work,
13:53 anti-slop knowledge work. And so
13:55 software engineers who already trust
13:57 Claude Code enough to ship what it
14:02 produces are going to be okay using
14:04 Claude Co-work for knowledge work. And,
14:07 more importantly, the rest of us will too,
14:08 because even if we haven't had the
14:10 experience of shipping code with Claude
14:13 Code, we can understand the idea that the
14:15 difference between slop and not-slop is
14:18 about work quality, and we can appreciate
14:19 the finished and polished quality of the
14:21 artifacts you tend to get out of
14:23 co-work. The third anti-slop element is
14:25 subtle but important. Co-work keeps
14:27 you in the steering loop rather than the
14:29 editing loop. So the interface is
14:32 designed around task delegation with
14:34 very visible progress. You
14:35 literally see check marks down the side,
14:38 right? It's not about prompt response
14:39 cycles. You don't just prompt it and see
14:41 more text appear. It's very different.
14:44 You describe an outcome and Claude makes
14:46 a plan. You see the plan. You can
14:48 redirect mid-execution. One of the nice
14:50 things that Claude added here is that
14:52 you can send a message to the agent in
14:54 the middle of executing and just hit a
14:56 button marked "Queue," and the agent
14:58 will pick up your piece of context and
15:00 add it into the long-running work without
15:03 interrupting itself. This fixes a
15:04 major blind spot that I've seen in a lot
15:06 of AI tooling where you have to either
15:08 interrupt a valuable piece of work or
15:10 wait for it to finish to add an
15:12 important piece of context. Not with
15:13 Claude Co-work: as long as you can
15:16 describe an outcome, Claude can write
15:18 the plan. You can see the plan. You can
15:20 redirect it. And the cognitive work that
15:22 we're describing here is on you, but it
15:23 happens at the top. It's the steering
15:25 work. It's articulating what you want.
15:28 It's not downstream cleaning up what you
15:30 got. As long as you can tell Claude
15:33 co-work about what you want to build,
15:34 whether that's expense reports or
15:37 whether that's give me specific feedback
15:39 on my day ahead or give me a
15:41 productivity review based on looking at
15:44 my calendar, or please help me prepare a
15:46 presentation for this upcoming meeting,
15:48 Claude Co-work can do it. I will
15:51 also say that the file system sandbox
15:53 forces specificity, and this is a safety
15:54 feature with co-work that I really like.
15:57 You cannot vaguely ask co-work to help
15:59 with your expenses. You must point it at
16:01 real folders that contain real files.
16:03 You manually touch the mouse and say,
16:05 "Please add expenses folder." And this
16:08 constraint means that AI must operate on
16:10 real work artifacts rather than just
16:12 generating content randomly in a vacuum.
16:14 And so the input is really concrete and
16:16 the output has something that it can
16:17 attach to and be faithful to. This is
16:19 going to reduce hallucination, right?
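Pointing the agent at a real folder typically means mounting a copy of it in a sandboxed working area, a mechanism the transcript returns to in the safety discussion later. A rough sketch of that copy-into-sandbox flow (purely illustrative, not Anthropic's implementation):

```python
import shutil
import tempfile
from pathlib import Path

def mount_in_sandbox(folder: str) -> Path:
    """Copy a real folder into a throwaway sandbox directory, so the
    agent operates on the copy and mistakes are low-consequence;
    approved changes would be synced back to the original separately."""
    sandbox = Path(tempfile.mkdtemp(prefix="agent-sandbox-"))
    dest = sandbox / Path(folder).name
    shutil.copytree(folder, dest)
    return dest
```

The agent still works on your real artifacts (concrete inputs, concrete outputs), but inside a contained area rather than against the originals directly.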
16:20 And there's a fifth element that's easy
16:24 to miss. The task queue model changes the
16:26 social dynamics of AI assisted work.
16:29 I'll get into that. In chat-based AI,
16:31 you're constantly prompting. You're
16:32 evaluating. You're prompting. You're
16:34 evaluating. You go back and forth. The
16:35 rhythm encourages fast and shallow
16:37 interactions. It's like batting a tennis
16:39 ball back and forth. You prompt. You get
16:41 text. You prompt again. Co-work's
16:43 design fundamentally encourages deeper
16:45 thought. And I love that: with
16:47 deeper thought about what you want,
16:49 deeper thought about what you're willing
16:51 to step away from and let Claude co-work
16:54 on for a while. The AI is not waiting
16:57 for your next message anymore. It's
16:59 executing a plan. And this shifts the
17:02 cognitive load from "well, what do I
17:04 prompt next? Do I remember the right
17:06 prompt?" to "what do I actually need done?"
17:08 Which is by far the more interesting
17:09 question. And that requires
17:11 thoughtfulness. And thoughtfulness is
17:12 anti-slop. Now, will all of this
17:15 actually solve work slop? Look, it's too
17:16 early to tell. It just came out this
17:19 week. But I will say this is the kind of
17:21 anti-slop architecture we need to see
17:23 more of. And I think the critical piece
17:26 to call out is that we are seeing
17:29 finally a jump into general purpose
17:32 agents for non-technical mainstream
17:34 users. We are going to see a lot more of
17:37 these in 2026. Clearly Claude got out in
17:39 front with their initial release here. I
17:42 expect releases from OpenAI soon. I
17:43 expect releases from Google soon. I
17:45 expect releases from Microsoft. And that
17:47 brings us to a safety piece. How safe
17:49 are these? I get asked this a lot. I
17:51 think Anthropic's safety disclosure is
17:52 worth looking at a little bit more
17:54 closely because it's unusually direct
17:56 and the implications cut in multiple
17:59 directions. Anthropic warns about prompt
18:01 injections right up front. And prompt
18:03 injections are attempts by attackers to
18:05 alter Claude Co-work's plans through
18:07 content it might encounter on the
18:09 internet. Right. And what they share is
18:10 that they've built defenses against
18:13 prompt injections, but that they cannot
18:15 promise that it will always be safe. One
18:16 of the things that's really interesting
18:19 is it looks like they've built an
18:20 intermediation
18:24 summary zone, or summary workflow stage,
18:28 between raw internet input received and
18:29 what the agent gets to complete the
18:32 task. And if that's the case, it gives
18:35 us a sense of how the anthropic team is
18:36 thinking about multi-layered defenses
18:38 against prompt injection. You can imagine
18:39 it as a series of walls and you're
18:41 trying to keep hostile bots and hostile
18:44 actors out. Now, in the short term,
18:46 cautious enterprises may decide that
18:48 having anything that has any kind of
18:50 prompt injection warning is too risky.
18:52 But to be honest with you, I kind of
18:55 doubt it because the promise of
18:58 accelerating tasks that used to take
19:02 days into hours or less is so great that
19:05 people are willing to make that trade. And in
19:07 practice, as someone who has used Claude
19:09 Code a fair bit and now Claude Co-work,
19:12 the instincts that that AI has are
19:14 pretty solid. It asks you for permission
19:17 when it wants to touch website pages
19:19 and interact with them. It does not tend
19:21 to take actions like login or payment
19:24 unless you specifically authorize it.
19:26 And even then on high consequence
19:27 actions like payments, it usually says
19:29 you've got to do this, I can't do this. And
19:32 so the constitutional AI principles that
19:35 the Anthropic team
19:38 built into Claude help it make
19:40 good common-sense choices in the wild
19:42 and woolly world of the internet. And
19:44 the file system sandbox also helps. If
19:48 you are mounting files locally, you are
19:51 putting in copies rather than granting direct file
19:52 access. So I want to be clear, if you're
19:54 not a technical person: a sandbox is a
19:57 safe and secure container. You can put a
19:59 copy of a file in it. Let's say I have
20:01 my receipts; the actual receipts live in
20:03 my receipts folder on my hard drive.
20:06 If I copy that folder into my sandbox, I
20:08 can manipulate it. I can do things on it
20:10 and it's very low consequence because
20:12 it's a copy in a secure container and
20:14 I'm not touching the core folder. Now,
20:16 this doesn't mean that Claude can't touch
20:17 your folder. So, just because it mounts
20:19 it in a sandbox and containerizes the
20:22 folder doesn't mean that it doesn't
20:24 touch your hard drives. It does. It can
20:25 make changes in your files. That's part
20:28 of the value. But the idea that you are
20:30 securely containerizing the area of
20:33 operation matters a lot when you are
20:36 building with a tool that is even
20:38 potentially vulnerable. Let me dive just
20:40 a little bit more into a story I
20:42 mentioned briefly earlier about Jaana
20:44 Dogan, who is a Google principal engineer
20:46 and who posted the post that got 5 and
20:47 a half million views. What she said
20:49 is I'm not joking and this isn't funny.
20:51 We've been trying to build distributed
20:53 agent orchestrators at Google since last
20:55 year. There are various options. Not
20:57 everyone's aligned. I gave Claude Code a
20:59 description of the problem. It generated
21:01 what we actually built last year in an
21:04 hour. Now, it turned out that what
21:06 Claude Code built was a prototype. It
21:07 wasn't the full production code. So, I
21:09 don't want to overstate the promise. But
21:11 the idea that Claude Code could look at
21:13 the problem set, independently derive
21:15 the correct solution, and begin to
21:17 prototype that quickly should not be
21:19 underestimated. That is still a very
21:21 meaningful step toward what we would
21:23 typically describe as artificial general
21:26 intelligence. This same power is now
21:29 available in co-work. Co-work is just a
21:31 nice user interface dressed up over
21:33 Claude Code. And so if you've had
21:35 friends that are telling you that you
21:37 ought to use Claude Code and you've been
21:38 resisting, you've been like, I'm not in
21:40 the terminal. I'm not a terminal person.
21:42 Use Claude Co-work now. It's in
21:44 the Max plan for now. And that's only
21:45 available for individuals. It's an
21:47 alpha. I get all of that. It's in the
21:49 expensive plan. But Anthropic
21:51 historically brings that down market. It
21:53 brings it into enterprise. It brings it
21:55 into teams quickly. I am trying to give
21:56 you a sense of what you can actually do
21:58 with it so that you can understand it.
22:00 At the end of this video, I'll go ahead
22:03 and share my screen and show you what
22:05 Claude Co-work is like so that you can get
22:07 a look for yourself. But before we do
22:09 that, I want to get a little bit at
22:13 where this tells us we're going in 2026.
22:15 First, I think that this is showing us
22:17 that the chatbot was a transitional
22:19 form. It existed because LLMs could
22:22 generate text before they could reliably
22:24 execute plans. I don't think that's true
22:26 anymore. Claude Code has proved that
22:28 agentic execution works for not just
22:30 software engineering, but for everything
22:32 else. And if that hypothesis holds,
22:33 several things follow, each with
22:35 implications that go much deeper than
22:37 you might think at first. One, I think
22:39 task queues are going to start to replace
22:42 chat interfaces in 2026. And that's much
22:44 more than a UX change. The co-work model
22:46 where you queue up tasks, you let Claude
22:48 work through them in parallel, you get
22:50 notified on completion, is closer to
22:52 like an email or a ticketing system than
22:55 a conversation. But the deeper shift is
22:56 in the relationship between the human
22:59 and the AI. So chat interfaces position
23:01 the AI as a respondent. You ask, it
23:04 answers, you ask again. With task queues,
23:05 you're positioning the AI as your
23:07 worker. You're delegating, it executes,
23:10 and you're reviewing. So this is not
23:12 about asynchronous versus synchronous
23:14 interaction. It's about whether you're
23:15 having a conversation with the AI or
23:17 whether you're managing it like an
23:19 employee. And the management framing
23:21 changes what kinds of tasks feel
23:23 appropriate to delegate like how much
23:25 context you provide up front, how you
23:27 evaluate the output. People manage
23:29 workers differently than they converse
23:30 with their advisers. And as AI
23:32 interfaces shift toward the management
23:34 model, I would expect behaviors, the way
23:36 we use AI, to shift accordingly. I will
23:38 also call out that verification is going
23:41 to continue to be a scarce skill because
23:43 the second order effects on
23:45 organizational structure of everybody
23:47 having Claude Co-work have not been at all
23:50 thought through. When AI can execute
23:52 multi-step workflows in parallel across
23:54 multiple threads across the whole
23:56 organization, the bottleneck shifts to
23:58 knowing whether the output is correct
24:00 and whether you formed the task
24:02 correctly. And so what Jaana Dogan was
24:04 talking about applies more broadly. The
24:06 tool amplifies people who already know
24:08 what they're doing while potentially
24:10 misleading people who don't. This is why
24:12 I think AI fluency is such a critical
24:14 piece in 2026. Consider what this means
24:16 for how teams are structured. Junior
24:17 roles have traditionally served as
24:19 execution layers. You give them
24:21 well-defined tasks. They complete them
24:23 and senior people review them. If AI
24:25 handles execution, we're going to
24:27 continue to see pressure on junior roles
24:29 where firms that are not creative are
24:31 going to say they don't need juniors and
24:32 firms that are more creative are going
24:34 to say we need AI native juniors who can
24:36 teach us new patterns of work.
24:37 Organizations that figure out how to
24:40 develop domain expertise and anti-slop
24:42 mechanisms in an AI augmented
24:43 environment are going to have a very
24:45 significant competitive advantage over
24:47 those that accidentally eliminate their
24:49 career development pipeline by
24:51 overindexing toward killing their junior
24:53 roles. And that's going to be a
24:55 temptation because the power of this
24:57 system is addictive. It's hard
24:59 to step away from. You can do so much
25:01 with the co-work interface. I do think
25:03 the file system and browser convergence
25:05 is inevitable, but I think the way we
25:07 get there matters. So co-work plus
25:09 browser automation covers most knowledge
25:11 work in principle. The next step is
25:13 going to be seamless handoffs. How do
25:15 you start with files, push to web
25:17 services, pull results back to files,
25:19 share with a colleague? And so the
25:21 integration points between file system
25:23 agents, browser agents, things are going
25:25 to break there, right? I know that my uh
25:28 Google calendar has trouble recognizing
25:30 Claude even when I give it a login. It
25:31 works sometimes, it doesn't work other
25:33 times. I think that might be intentional
25:35 on Google's part. Whoever is able to
25:36 solve these integration problems is
25:38 going to be able to get a unified
25:40 execution layer in place that is going
25:42 to unlock a ton of productivity. My
25:44 guess is that this will probably take a
25:45 little bit longer than people expect
25:47 because the hard part isn't actually
25:49 making any type of agent work in
25:50 isolation. It's making them work
25:53 together reliably enough that users
25:55 don't have to think about what mode
25:56 they're in. If I were looking to the
25:58 future, I'd watch for two big signals
26:00 coming up. The first is how quickly does
26:03 Microsoft or OpenAI or Google respond?
26:06 If any of them ships something quickly
26:08 in the next 2 to 3 weeks, the next
26:10 month, my sense is not only does the
26:12 competitive picture remain open, but
26:14 everyone is seeing signals on the ground
26:15 that this is so clearly the future of
26:17 work that we have to pay attention. The other
26:19 thing I would look at is unit economics
26:21 and pricing. We are in a world where we
26:23 are blessed with so many models. Do we
26:26 start to see Claude Co-work come down
26:28 into more economical price tiers?
26:30 Perhaps with a dumber model, perhaps
26:31 with a limited number of max queries,
26:33 whatever it takes. But ultimately, I
26:35 think the incentive to give everyone
26:38 these kinds of tools is very very high.
26:42 As long as users like us can show that
26:44 we use those tools to produce useful
26:46 products, and as long as companies can be
26:48 confident that the touchpoints on the web
26:50 and the integrations with the rest of
26:52 corporate systems are secure enough that
26:55 the work can be usefully done, saved,
26:58 and secured, I fully
27:00 expect those kinks to be worked out as
27:02 Anthropic inevitably pulls this over
27:03 into their teams and enterprise
27:05 products. I'll close with a deeper
27:08 question. What happens when a product
27:10 team can observe a user behavior on
27:12 Monday and ship a fullyfledged product
27:14 on Thursday? That's the thing that keeps
27:16 sticking with me. I started with that
27:18 and that's what I keep thinking. This
27:20 took 10 days to build, and now I'm going
27:21 to show you. They
27:23 built it with Claude Code. What does it
27:26 look like? This is Claude Co-work. All
27:27 right. So you see that they're giving
27:28 you affordances right away. And by
27:30 affordances, I mean they're giving you
27:32 suggestions. You can create a file, you
27:34 can crunch data, you can make a
27:36 prototype, you can send a message. Yes,
27:38 it will really send a message. You
27:40 can prep for the day or organize files.
27:42 That's just a preview. This progress bar
27:44 is where you'll see actual plans getting
27:46 made. The artifacts are where you'll see
27:48 artifacts getting made. Let me give
27:50 you an example of what we could do here.
27:53 Please produce a full PowerPoint
27:58 describing the launch of Claude Co-work.
28:01 Conduct all the necessary research you
28:05 need to do so.
28:09 And when it's complete, please place it
28:12 in my downloads folder as a PPTX file.
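Worth knowing why a coding agent can build a deck from scratch at all: a .pptx is just a ZIP archive of XML parts, so generating one is ordinary file manipulation. As a small stdlib sketch (my own illustration, not Anthropic's skill code, and the file path is hypothetical), you can even sanity-check a delivered file yourself:

```python
import zipfile

def looks_like_pptx(path: str) -> bool:
    """Cheap sanity check: a .pptx is a ZIP archive whose slide
    content lives in XML parts under ppt/slides/."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return any(name.startswith("ppt/slides/") for name in zf.namelist())

# Hypothetical usage:
# looks_like_pptx("Downloads/claude-cowork-launch.pptx")
```

This only confirms the container shape, not that the slides render; PowerPoint itself is the real test.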
28:14 Then I go work in files. I'll choose a
28:16 different folder. This is my downloads
28:18 folder. I'll just stick it in there. I'm
28:20 going to allow Claude Code to change it.
28:22 And that's it. I can just tell it to go.
28:23 And you see how it's starting to get
28:24 into this. And you're going to start to
28:27 see a plan and progress bar being
28:29 made here. Notice that it's using those
28:30 Claude skills that we've talked about
28:33 before. Now we have a plan. It's already
28:35 researched Claude co-work details.
28:38 Check. I can ask a question or recommend
28:39 a change right here. If I want to change
28:42 that, I can read the PPTX skill
28:44 documentation. So I can change the way
28:46 the PowerPoint skill works. And it's
28:48 now designing a
28:49 presentation structure and aesthetic. I
28:50 can give it feedback on the aesthetic
28:52 right here. You see how different this
28:55 is from the chat. Like before in chat, I
28:57 would have to say, wait, stop. I want it
28:59 to be like a modern presentation or
29:01 whatever. Not anymore. I can just adjust
29:03 it. It's giving me a suggested slide
29:06 structure. I'm going to say, please add
29:08 a note on non-obvious
29:11 insights and implications to the
29:12 presentation. And it's right in the
29:13 middle of the work. I'm just going to
29:15 throw it in. You can see where it's
29:17 working. It's got a shared CSS file it's
29:19 working on here. You can see the context
29:20 it's got. It's now starting to create
29:23 the slides. It's using these skills. I
29:25 love the transparency here. And if you
29:27 want to do something else, you can
29:29 immediately just slip over here, open up
29:32 a new task and say, can you please look
29:36 at my Google Calendar and give me an
29:40 assessment of how busy I am and what would
29:43 be the most useful shift to my daily
29:47 ritual to prepare more effectively for work.
29:48 And this is all happening in the
29:50 background, right? Claude is
29:51 still working on the other presentation.
29:53 So, I can just start this one off. And I
29:54 have my Google calendar open in my
29:56 browser. And so, it's looking through.
29:58 It's going to continue doing its
30:00 analysis. We can go back. Now, we're
30:03 going to check back in on all of the
30:05 work that Claude is doing here. So, you
30:07 see I have multiple agents running,
30:08 right? Like Claude is doing research on
30:10 the one hand for Claude Co-work to build
30:12 me a slide presentation. The same Claude
30:15 Co-work is also working to analyze my
30:17 schedule. And you can do five, six,
30:18 seven of these. Now, I asked it to be a
30:20 little bit impersonal here, so I don't
30:22 reveal people's private information, but
30:23 it talks about how I'm busy, how I need
30:26 to defend my breakfast block, how I need
30:29 to defend my wake window, and having a
30:31 time to work out every day is a good
30:33 thing. Now, I will be honest with you,
30:34 these are not absolutely groundbreaking
30:35 assessments. The thing that's
30:37 significant is I can do this in parallel
30:39 looking at the calendar, come back,
30:40 it'll give me assessments all at the
30:42 same time that it's working on my
30:44 PowerPoint deck. And that's the thing I
30:45 want you to grab a hold of. And yes,
30:46 it's still working on the PowerPoint
30:48 deck. And you can actually see all of
30:50 the different artifacts it's created
30:52 along the way. Let's start a new task.
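That fan-out of independent tasks is, at its core, ordinary concurrent orchestration. A rough stdlib sketch of the pattern (my illustration, not Anthropic's implementation; `run_agent_task` is a hypothetical stand-in for a real model API call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent_task(prompt: str) -> str:
    """Hypothetical stand-in for dispatching one agent task; a real
    system would call the model API and stream progress back."""
    return f"done: {prompt}"

tasks = [
    "research Claude Co-work and build a slide deck",
    "analyze my calendar for schedule improvements",
    "find duplicate files in Downloads",
]

# Each task runs independently; pool.map returns results in task order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent_task, tasks))
```

The point of the sketch is only the shape: the tasks share nothing, so nothing stops you from running five, six, or seven at once.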
30:53 Now I'm looking for duplicate files in
30:55 my downloads. Where have I got extra files?
30:57 It has access to the downloads
30:58 folder because I gave it access to that at
31:00 the beginning of the task, and it's just
31:01 running. Still working on creating
31:03 slides. I'll go back to the downloads.
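Duplicate detection itself is a classic small script, and plausibly the kind of thing Claude writes behind the scenes for a task like this. A stdlib sketch of my own (not Claude's actual code):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(folder: str) -> list[list[Path]]:
    """Group files under `folder` (recursively) by the SHA-256 of
    their contents; return only groups with more than one member."""
    by_hash = defaultdict(list)
    for path in Path(folder).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Hashing every byte is fine for a downloads folder; on a huge tree you would pre-group by file size first and only hash the size collisions.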
31:05 This is what the future of work looks
31:07 like. It looks like jumping back and
31:09 forth between these different tabs.
31:11 You can see what it's running here now.
31:13 It's copying the PowerPoint to the
31:15 downloads folder. Look at that. It gives
31:17 me my sources, all the things it looked
31:19 at. And it's going to give me a handy
31:20 little button to open in PowerPoint. And
31:22 yes, it really did make the PowerPoint.
31:24 It made it from scratch. You can go
31:26 through and see the key features.
31:29 You can see how it works, real-world use
31:33 cases, availability and pricing, non-obvious
31:35 insights, which it added in the middle,
31:37 and the bigger picture. This was all done in the
31:38 middle of doing three or four other
31:40 things. This is what I mean by the
31:43 future is here. So, if you're not using
31:45 co-work, you are missing out on the
31:46 future of work. I've got a whole guide
31:50 on it up on Substack. This is by far the
31:52 most exciting thing that I have seen
31:54 come out of AI in the last few months.
31:56 And I know that people will accuse me of
31:58 being hypey, but the thing that makes
32:00 this a breakthrough is that it's not for
32:02 technical people. Everybody can use
32:04 this. There was no code in what I
32:07 described. It was just asking the AI
32:09 agent to do stuff for you, and it did
32:11 it. And I know not everybody has the Max
32:12 plan, so I wanted to give you that
32:13 hands-on look so you can see it for
32:15 yourself. Good luck out there and get on