Anthropic's rapid development and release of Claude Co-work, a general-purpose AI agent, demonstrates a new paradigm in operational velocity and product development, shifting the competitive advantage from AI models alone to the ability to quickly observe user needs and deliver solutions.
10 days. That's how long it took
Anthropic to build and ship Claude
Co-work after they noticed something
their product team was not expecting.
Developers were using their own coding
tool to organize expense receipts. And
really that story of the timeline
matters more than anything else about
the launch of Claude Co-work this week.
It's not the expense receipts that are interesting. It's that the timeline reveals how Anthropic and AI-native organizations operate, and how that operational velocity is becoming as much a competitive advantage as the models themselves. Here's what happened. Claude Code launched as a terminal-based agentic coding tool. Engineers used it to write software, debug production issues, and refactor legacy code bases. The tool sat in the terminal because that's where developers live. And it worked because the underlying architecture, a sandboxed agent that could read files, write files, execute plans, and loop humans in on progress, turned out to be a genuinely reliable model for production work. Anthropic's internal data shows a 67% increase in merged pull requests per engineer per day. Engineers don't inflate those numbers for fun. If engineers were using it, it was because it was useful.
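That architecture, an agent that plans, executes file operations inside a sandbox, and reports progress back to a human, can be pictured in a few lines. This is a hypothetical sketch of the loop pattern only; every class and function name here is invented, not Anthropic's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SandboxedAgent:
    # Illustrative only: a toy version of the plan/execute/report loop.
    progress_log: list = field(default_factory=list)

    def plan(self, task: str) -> list:
        # A real agent would ask the model for a step list; we stub one.
        return [f"read inputs for: {task}",
                f"transform inputs for: {task}",
                f"write outputs for: {task}"]

    def execute(self, task: str) -> list:
        for step in self.plan(task):
            # File reads/writes/commands would run here, in the sandbox.
            self.progress_log.append(f"done: {step}")  # loop the human in
        return self.progress_log

agent = SandboxedAgent()
log = agent.execute("organize expense receipts")
```

The point of the shape is that the same loop works whether the steps touch source code or expense receipts, which is exactly the observation the rest of this story turns on.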
But then the Claude Code product team
noticed something in the usage patterns.
People were not just writing code. They
were pointing Claude code at folders
full of receipts, full of other things,
full of photos, and asking it to produce
expense spreadsheets to categorize the
photos from the family vacation. You get
the idea. They were asking it to
organize messy downloads directories.
They were using a coding tool for research synthesis, for transcript analysis, for file management: anything that could be expressed as "here are some files, here's what I want, make it happen." Now, it's easy to think that a PM would treat this as scope creep. Instead, Anthropic shipped the same underlying agent architecture you get with Claude Code, now wrapped in a UI that doesn't require anyone to be technical at all. So: 10 days from observation to launch. But here's what makes this more interesting than pure speed. People have been asking for exactly this capability for a while, and the moment Claude Code demonstrated what agentic AI could do in a terminal, non-technical users started saying, "I'd love access to something similar. I'm not a coder." But
demand alone doesn't tell you whether
the capability is actually going to work
And so what Anthropic was looking for was validation, and they got it: both from their own product data, from developers already using Claude Code for those tasks, and from what they saw over the holidays, with people using general-purpose Claude Code agents to do everything from growing their tomato plants, to building sensors for their homes, to writing and shipping production software, to writing and shipping their own to-do lists, things that would help you brief and get ready for your day. And so when they saw all of those different use cases emerging, it became undeniable that what they were sitting on was perhaps the first truly general-purpose
agent. Now compare their speed of response to classic enterprise software timelines. This is a big company: Claude Code is running at billions of dollars in run rate. A feature request would typically go through months of reviews before anyone wrote a line of code; obvious market demand would have to be approved, and docs would have to be written. It's just not like that here. They turned around and said, "We're going to build it." They used Claude Code to build it, and they built Co-work in a matter of a week and a half or so.
and a half or so. This matters because
the AI race is no longer just about
models. It's about who can observe user
behavior, recognize what's actually
working, and rapidly ship responses
before competitors jump in and grab the
market. Now, if you were anywhere near tech Twitter over the 2025 holidays, Claude Code was all over your timeline. Engineers were posting about their productivity gains. Founders were building entire products in a weekend. There was a Google principal engineer thread that hit five and a half million views, because Jaana Dogan said that she had prototyped the product that she spent an entire year on with her team at Google in one coding session with Claude Code. Helen Lee Cup, a mom who voice-records ideas during morning walks, not a developer, was writing about how she figured out how to use Claude Code anyway to build what she wanted. So it's not that Claude Code was a secret. It's that the story was getting out, and people were figuring out how to use the terminal despite themselves. And that's exactly the problem.
Non-technical users could see the
capability. They could watch engineers
accomplish in hours what used to take
days. They could read the threads, but
it takes a special kind of non-technical
user to jump into the terminal, look at
the blinking cursor, not get
intimidated, and just go with the text.
The capability was really visible in
testimonials from all kinds of people,
but the access was not. And so what gradually emerged over the last month or two is a conviction that what was special about Claude Code wasn't the code part at all. The underlying capability, an AI that can read your files, understand your instructions, make a plan, and execute a multi-step workflow, works for almost anything expressible as a task with inputs and outputs. The "code" part ended up being a branding constraint, an insistence on something that isn't true for general-purpose work. And so Co-work keeps all the best of Claude Code, the same architecture, and puts it in a friendlier package. You can point it at a folder
using an interface, right? You just
click and point. You can describe what
you want in a chat and walk away. It
makes a plan. It shows you the plan. It
executes the plan autonomously. It loops
you in on the progress just like Claude
Code does, but you're not in the
terminal. You can queue up multiple
tasks and let Claude work through them
in parallel, which feels less like a conversation and more like leaving multiple messages for a co-worker. I think this is a very 2026 experience.
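That "several messages for a co-worker" pattern is easy to picture as a small worker pool draining a task queue. A hedged sketch, with a stand-in function rather than any real Co-work API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(description: str) -> str:
    # Stand-in for a full agent run: plan, execute, produce an artifact.
    return f"artifact for: {description}"

# Queue several tasks like messages to a co-worker, not chat turns.
tasks = [
    "draft my daily briefing",
    "pull three metrics from my dashboards",
    "give my presentation a final polish",
]

# Workers run the tasks in parallel; each comes back with its own result.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_task, tasks))
```

The design choice worth noticing is that nothing here waits on your next message: you describe the outcomes up front and collect the artifacts when they're ready.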
Instead of saying, I'm going to have a
long running iterative chat. I'm going
to try and prompt everything exactly
right. It's going to look more like I
have six different things I want to do.
I'm going to type in six different
messages and get six different threads
going. And the agent is going to work on
all of them at once. And here's where
the strategic picture gets interesting.
Microsoft Copilot, it's a coding agent.
It lives in the browser in the cloud.
Google Workspace AI lives in the browser
and the cloud. There are other tools; Do Anything is a great example of a new tool that came out in 2026. It lives in the browser. The interaction surface is
web applications. The value proposition
is we navigate websites on your behalf.
Co-work is different because it operates
at the file system level and can also
use the browser. And so the interaction
surface is the folders on your local
machine plus anything it can touch on
the web. And so the value proposition is
that it processes the work artifacts
that are already in your world and
anything you can touch on the web.
That's pretty powerful. In a sense, these aren't directly competing paradigms. They're complementary, and I think Anthropic knows that: Co-work integrates with Claude in Chrome precisely in order to bridge those modes. And the file-system-first design reflects a specific thesis about where your leverage as a worker actually lives. So browser agents are
really constrained by the adversarial
nature of the web. The web is designed
for humans, right? Sites can block them.
Captchas can stop them. Login flows
break them all the time. Every
interaction ends up being mediated by
interfaces that are designed for us, for
people, maintained by companies that are
interested in selling to people, and
that have really no particular interest
at this time in making life easier for
AI agents, although that may soon
change. The error surface is enormous
because you're navigating systems that
you can't control. Now, I will say these
web agents have made enormous progress
in getting more accurate at navigating
the web and in reliably asking you to
intervene. I see that across not just Claude in Chrome, but across the Atlas browser, across Comet, and across others as well. On the other hand, file
system agents operate in territory that
is entirely yours. Your files don't have
bot detection. Your folders don't
require authentication, do they? Most of
them. The agent can read, it can write,
it can execute with permissions that you
explicitly grant. The environment is
cooperative rather than adversarial. And
that's a huge difference. The strategic implication is simple, but it kind of pops out once you look at it. Browser
agents will always be a little bit
brittle for high stakes tasks because
the web fights back. The web is
adversarial because it needs to be from
a security perspective. File system
agents can be robust because your local
machine is not adversarial. Your local
machine is friendly. And so Anthropic's
bet is that long-term most valuable
knowledge work ends up living in your
files. It lives in your docs, your
spreadsheets, your notes, your receipts,
your recordings, stuff that gets on your
hard drive or in your Google Docs. And
that processing these artifacts is where
the real productivity leverage sits long
term. Now, of course, they added in web
and you can use web browsing in co-work.
I tried it. It works really well. All you have to do is ask Co-work to do a task and make sure that you provide it the appropriate login directly in Chrome.
You'll see a handy little yellow tab
group that belongs to Claude and you're
off to the races. And so it's not like
Claude is limiting web access. It's more
that Claude is recognizing that the
leverage that you see comes from owning
a friendly place where work happens,
which is your file system. It's a non-adversarial space, and Claude can touch it really easily. This may force
Microsoft's hand. Neuron Daily came out
with a prediction that Microsoft will
have to launch a desktop native general
agent to compete. And I actually think they're underselling it. I think everybody is going to launch a desktop-native general agent in 2026. This is the year of the desktop-native general agent wars, because everybody is going to get disintermediated by this handy little effective inbox where you can do work. Wouldn't you
rather be in one place and say, "Hey,
get me my briefing for the day. Hey, get
me these three metrics I care about from
my dashboards. Hey, make sure my
presentation is ready and give it a
final polish." And it's all done in one
place. You don't have to switch between
PowerPoint and Tableau and whatever else
you're doing. And Claude for the first
time offers that kind of promise with
co-work. That's why this is such a huge
deal. This is a cruise missile aimed at
the heart of knowledge work. Everything you do as a knowledge worker is about files in and files out. It's about modifying information. And for a long
time in 2024 and 2025, you chatted with
something and then you had to take those
inputs and outputs and put them
somewhere else. Well, not anymore. You
can actually directly interact with
them. Now, the immediate question that I
have and I bet you have is how does that
relate to the concerns about sloppy
work? We've had a lot of concerns,
especially in late 2025, about people
just throwing AI work that they didn't
check and didn't pay attention to kind
of over the wall and saying, "Good luck,
y'all." And that's not good citizenship.
It's not good to
build a community. It doesn't help you
in your career. It's slop and it's bad.
And so, the interesting thing about
co-work is that it's designed to be
anti-slop. It doesn't mean you can't
misuse it. You can, but it's designed to
be more thoughtful. And this deserves
some unpacking because the anti-slop
thesis is much more interesting than I
first thought. And the more I dug into Co-work, the more I saw that thoughtfulness underneath. Ultimately, the work slop crisis isn't about AI being bad at writing. It's about AI making it frictionless to produce passable-looking output that shifts the cognitive burden, the real thinking you need to do, downstream. The person receiving the AI-generated memo now has to do the thinking the sender skipped. If you generate your PRD
and don't look at it, the engineer has
to think about it instead of the PM. And
the result is communication that looks
like work but functions as a tax on
attention. In fact, a study by BetterUp
quantified this at nearly 2 hours spent
per piece of work slop received, which
adds up to a lot of lost productivity.
And so Co-work's design makes several specific bets against this pattern. First, unlike a chat, the core output of this tool is an artifact, not a text blob. When you ask Co-work to process,
say, your expense receipts into a
spreadsheet, it produces an Excel file
with working VLOOKUP formulas and
conditional formatting, not a CSV that
you then clean up, not markdown you have
to copy paste. The output is the
deliverable. This matters because work
slop typically lives in the gap between
the AI generated draft and the usable
work product. Co-work tries to close
that gap by producing files that don't
require the human cleanup pass.
Essentially, if you can define your intent well enough, Claude Code, now dressed up as Claude Co-work, is able to do a good enough job that it gets it all the way done. And of course, that
depends on your ability to define intent
well, which is one of the key skills of
2026. The second thing to call out here
is that the architecture is borrowed
from a context where slop is immediately
fatal. Claude Code users are typically writing software, often production software. If the output required constant cleanup, engineers would just drop it. And yes, there's a lot of talk about how, as you ship more and more code, you ship more and more bugs. But at the end of the day, you can still use AI tooling to review large masses of AI-produced code and get very high-quality results in late 2025 and early 2026.
Anthropic's thesis is that the same architecture that produces trustworthy code can produce trustworthy knowledge work: anti-slop knowledge work. And so software engineers who already trust Claude Code enough to ship what it produces are going to be okay using Claude Co-work for knowledge work, and, more importantly, the rest of us will too. Even if we haven't had the experience of shipping code with Claude Code, we can understand the idea that the difference between slop and not-slop is about work quality, and we can appreciate the finished, polished quality of the artifacts you tend to get out of Co-work. The third anti-slop element is subtle but important. Co-work keeps you in the steering loop rather than the editing loop. The interface is designed around task delegation with very visible progress. You
literally see check marks down the side,
right? It's not about prompt response
cycles. You don't just prompt it and see
more text appear. It's very different.
You describe an outcome and Claude makes
a plan. You see the plan. You can
redirect mid-execution. One of the nice things that Claude added here is that you can send a message to the agent in the middle of executing, just hit a button marked Q, and the agent will pick up your piece of context and add it into the long-running work without interrupting itself. This fixes a major blind spot I've seen in a lot of AI tooling, where you have to either interrupt a valuable piece of work or wait for it to finish before adding an important piece of context. Not with Claude Co-work: as long as you can describe an outcome, Claude can write the plan, you can see the plan, and you can redirect it. The cognitive work we're describing here is on you, but it happens at the top. It's the steering work. It's articulating what you want; it's not downstream cleanup of what you got. As long as you can tell Claude Co-work what you want to build, whether that's expense reports, or specific feedback on your day ahead, or a productivity review based on your calendar, or help preparing a presentation for an upcoming meeting, Claude Co-work can do it. I will
also say that the file system sandbox
forces specificity and this is a safety
feature with co-work that I really like.
You cannot vaguely ask co-work to help
with your expenses. You must point it at
real folders that contain real files.
You manually touch the mouse and say, "Please add expenses folder." And this constraint means the AI must operate on real work artifacts rather than generating content in a vacuum. The input is concrete, and the output has something it can attach to and be faithful to. This is going to reduce hallucination, right?
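One way to picture the sandbox constraint (an assumption about the general pattern, since Co-work's internals aren't documented here beyond what's described): the agent works on a copy of the folder you point it at, so a bad edit can't clobber the original until changes are deliberately synced back.

```python
import shutil
import tempfile
from pathlib import Path

def mount_into_sandbox(real_folder: Path) -> Path:
    """Copy a real folder into a throwaway sandbox directory."""
    sandbox = Path(tempfile.mkdtemp(prefix="agent-sandbox-"))
    working_copy = sandbox / real_folder.name
    shutil.copytree(real_folder, working_copy)  # agent edits the copy
    return working_copy

# You must point it at a real folder that contains real files.
src = Path(tempfile.mkdtemp()) / "expenses"
src.mkdir()
(src / "receipt1.txt").write_text("coffee $4.50")

copy = mount_into_sandbox(src)
# The agent's changes land on the copy; the original stays intact.
(copy / "receipt1.txt").write_text("coffee $4.50 [categorized: meals]")
```

The function name and mechanism are illustrative only; the point is that working on a contained copy makes the blast radius of any mistake small.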
And there's a fifth element that's easy to miss. The task queue model changes the social dynamics of AI-assisted work. In chat-based AI, you're constantly prompting and evaluating, back and forth. The rhythm encourages fast, shallow interactions; it's like batting a tennis ball back and forth. You prompt, you get text, you prompt again. Co-work's design fundamentally encourages deeper thought, and I love that: deeper thought about what you want, deeper thought about what you're willing to step away from and let Claude co-work on for a while. The AI is not waiting
for your next message anymore. It's
executing a plan. And this shifts the cognitive load from "What do I prompt next? Do I remember the right prompt?" to "What do I actually need done?", which is by far the more interesting question. And that requires thoughtfulness. And thoughtfulness is
anti-slop. Now, will all of this
actually solve work slop? Look, it's too
early to tell. It just came out this
week. But I will say this is the kind of
anti-slop architecture we need to see
more of. And I think the critical piece
to call out is that we are seeing
finally a jump into general purpose
agents for non-technical mainstream
users. We are going to see a lot more of
these in 2026. Clearly Anthropic got out in front with their initial release here. I expect releases from ChatGPT soon. I expect releases from Google soon. I expect releases from Microsoft. And that
brings us to a safety piece. How safe
are these? I get asked this a lot. I
think Anthropic's safety disclosure is worth looking at a little more closely, because it's unusually direct and the implications cut in multiple directions. Anthropic warns about prompt injections right up front. Prompt injections are attempts by attackers to alter Claude Co-work's plans through content it might encounter on the internet. And what they share is that they've built defenses against prompt injections, but that they cannot promise it will always be safe. One
of the things that's really interesting
is that it looks like they've built an intermediation layer, a summary workflow stage, between the raw internet input received and what the agent gets to complete the task. And if that's the case, it gives us a sense of how the Anthropic team is thinking about multi-layered defenses
from prompt injection. You can imagine
it as a series of walls and you're
trying to keep hostile bots and hostile
actors out. Now, in the short term,
cautious enterprises may decide that
having anything that has any kind of
prompt injection warning is too risky.
But to be honest with you, I kind of
doubt it because the promise of
accelerating tasks that used to take
days into hours or less is so great that
people are willing to trade that. And in
practice, as someone who has used Claude
Code a fair bit and now Claude Co-work, the instincts that the AI has are pretty solid. It asks you for permission when it wants to touch website pages and interact with them. It does not tend to take actions like login or payment unless you specifically authorize it.
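Those two defenses, summarizing untrusted web content before the planner sees it and gating high-consequence actions on explicit approval, can be sketched together. The risk tiers and function names below are my assumptions for illustration, not Anthropic's actual design.

```python
# Actions assumed to be high-consequence for this sketch.
HIGH_CONSEQUENCE = {"payment", "login"}

def sanitize_web_content(raw_page: str) -> str:
    # A real system would have a model summarize the page; the point is
    # that the planner never sees raw, possibly injected, instructions.
    return f"[summary of {len(raw_page)} chars of untrusted content]"

def gate_action(action: str, human_approved: bool = False) -> str:
    # High-consequence actions require the human to explicitly say yes.
    if action in HIGH_CONSEQUENCE and not human_approved:
        return f"refused: {action} needs explicit authorization"
    return f"executed: {action}"
```

You can think of these as two of the walls in the series-of-walls picture: one filters what comes in from the web, the other limits what can go out as an action.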
And even then, on high-consequence actions like payments, it usually says, "You've got to do this. I can't." And so the constitutional AI principles that the Anthropic team built into Claude help it make good common-sense choices in the wild and woolly world of the internet. And
the file system sandbox also helps. When you mount files locally, you are not giving direct file access. So I want to be clear: if you're not a technical person, a sandbox is a safe and secure container into which you can copy a file. Let's say I have my receipts; the actual receipts live in a folder on my hard drive. If I copy that folder into my sandbox, I can manipulate it, I can do things to it, and it's very low consequence, because it's a copy in a secure container and I'm not touching the original folder. Now, this doesn't mean that Claude can't touch your folders. Just because it mounts a folder in a sandbox and containerizes it doesn't mean it never touches your hard drive. It does; it can make changes in your files. That's part
of the value. But the idea that you are
securely containerizing the area of
operation matters a lot when you are
building with a tool that is even
potentially vulnerable. Let me dive just a little more into a story I mentioned briefly earlier, about Jaana Dogan, a Google principal engineer whose post got five and a half million views. What she said is: I'm not joking and this isn't funny.
We've been trying to build distributed
agent orchestrators at Google since last
year. There are various options. Not
everyone's aligned. I gave Claude code a
description of the problem. It generated
what we actually built last year in an
hour. Now, it turned out that what
Claude Code built was a prototype. It
wasn't the full production code. So, I
don't want to overstate the promise. But
the idea that Claude Code could look at
the problem set, independently derive
the correct solution, and begin to
prototype that quickly should not be
underestimated. That is still a very
meaningful step toward what we would
typically describe as artificial general
intelligence. This same power is now
available in Co-work. Co-work is just a nice user interface dressed up over Claude Code. So if you've had friends telling you that you ought to use Claude Code and you've been resisting, you've been saying, "I'm not a terminal person," use Claude Co-work now. It's in the Max plan for now, and that's only available for individuals. It's an alpha. I get all of that. It's in the expensive plan. But Anthropic
historically brings that down market. It
brings it into enterprise. It brings it
into teams quickly. I am trying to give
you a sense of what you can actually do
with it so that you can understand it.
At the end of this video, I'll go ahead and share my screen and show you what Claude Co-work is like, so that you can get a look for yourself. But before we do
that, I want to get a little bit at
where this tells us we're going in 2026.
First, I think that this is showing us
that the chatbot was a transitional
form. It existed because LLMs could
generate text before they could reliably
execute plans. I don't think that's true
anymore. Claude Code has proved that agentic execution works not just for software engineering, but for everything else. And if that hypothesis holds, several things follow, each with implications that go much deeper than you might think at first. One, I think task queues are going to start to replace chat interfaces in 2026. And that's much more than a UX change. The Co-work model
where you queue up tasks, you let Claude
work through them in parallel, you get
notified on completion, is closer to
like an email or a ticketing system than
a conversation. But the deeper shift is
in the relationship between the human
and the AI. So chat interfaces position
the AI as a respondent. You ask, it
answers, you ask again. With task queues, you're positioning the AI as your worker. You're delegating, it executes, and you're reviewing. So this is not
about asynchronous versus synchronous
interaction. It's about whether you're
having a conversation with the AI or
whether you're managing it like an
employee. And the management framing changes what kinds of tasks feel appropriate to delegate, how much context you provide up front, and how you evaluate the output. People manage workers differently than they converse with their advisers. And as AI interfaces shift toward the management model, I would expect behaviors, the way we use AI, to shift accordingly. I will
also call out that verification is going
to continue to be a scarce skill because
the second-order effects on organizational structure of everybody having Claude Co-work have not been at all thought through. When AI can execute
multi-step workflows in parallel across
multiple threads across the whole
organization, the bottleneck shifts to
knowing whether the output is correct
and whether you formed the task
correctly. And so what Jaana Dogan was talking about applies more broadly. The
tool amplifies people who already know
what they're doing while potentially
misleading people who don't. This is why
I think AI fluency is such a critical
piece in 2026. Consider what this means
for how teams are structured. Junior
roles have traditionally served as
execution layers. You give them
well-defined tasks. They complete them
and senior people review them. If AI
handles execution, we're going to
continue to see pressure on junior roles
where firms that are not creative are going to say they don't need juniors, and firms that are more creative are going to say, "We need AI-native juniors who can teach us new patterns of work."
Organizations that figure out how to develop domain expertise and anti-slop mechanisms in an AI-augmented environment are going to have a very significant competitive advantage over those that accidentally eliminate their career development pipeline by over-indexing toward killing their junior roles. And that's going to be a temptation, because the power of this system is addictive. It's hard to step away from. You can do so much
with the co-work interface. I do think
the file system and browser convergence
is inevitable, but I think the way we
get there matters. So co-work plus
browser automation covers most knowledge
work in principle. The next step is
going to be seamless handoffs. How do
you start with files, push to web
services, pull results back to files,
share with a colleague? And so the integration points between file system agents and browser agents are where things are going to break, right? I know that my Google Calendar has trouble recognizing Claude even when I give it a login. It
works sometimes, it doesn't work other
times. I think that might be intentional
on Google's part. Whoever is able to
solve these integration problems is
going to be able to get a unified
execution layer in place that is going
to unlock a ton of productivity. My
guess is that this will probably take a
little bit longer than people expect
because the hard part isn't actually
making any type of agent work in
isolation. It's making them work
together reliably enough that users
don't have to think about what mode
they're in. If I were looking to the
future, I'd watch for two big signals
coming up. The first is how quickly does
Microsoft or OpenAI or Google respond?
If any of them ships something quickly
in the next 2 to 3 weeks, the next
month, my sense is that not only does the competitive picture remain open, but everyone is seeing signals on the ground that this is the future of work and we have to pay attention. The other
thing I would look at is unit economics
and pricing. We are in a world where we
are blessed with so many models. Do we start to see Claude Co-work come down into more economical price tiers, perhaps with a less capable model, perhaps with a limited number of queries, whatever it takes? Ultimately, I think the incentive to give everyone these kinds of tools is very, very high, as long as users like us can show that we use those tools to produce useful products, and as long as companies can be
confident that the touches on the web
and the integrations with the rest of
corporate systems are secure enough that
the work can be usefully done and
usefully saved and secured. I fully
expect those kinks to be worked out as
Anthropic inevitably pulls this over
into their teams and enterprise
products. I'll close with a deeper
question. What happens when a product
team can observe a user behavior on
Monday and ship a fully-fledged product on Thursday? That's the thing that keeps sticking with me. I started with that, and that's what I keep thinking: this took 10 days. And now I'm going to show you. It took 10 days to build, and they built it with Claude Code. What does it look like? This is Claude Co-work. All
right. So you see that they're giving
you affordances right away. And by
affordances, I mean they're giving you
suggestions. You can create a file, you
can crunch data, you can make a
prototype, you can send a message. Yes, it will really send a message.
can prep for the day or organize files.
That's just a preview. This progress bar
is where you'll see actual plans getting
made. The artifacts panel is where you'll see artifacts getting made. Let me give you an example of what we could do here: "Please produce a full PowerPoint describing the launch of Claude Co-work. Conduct all the necessary research you need to do so. And when it's complete, please place it in my downloads folder as a PPTX file."
Then I go to work in files. I'll choose a different folder; this is my downloads folder. I'll just stick it in there, and I'm going to allow Claude to change it. And that's it. I can just tell it to go.
And you see how it's starting to get
into this. And you're going to start to see a plan and progress bar being made here. Notice that it's using those
Claude skills that we've talked about
before. Now we have a plan. It's already
researched Claude co-work details.
Check. I can ask a question or recommend
a change right here. If I want to change
that, I can read the PPTX skill documentation. So I can change that; I can change the way the PowerPoint skill works. And it's now designing a
presentation structure and aesthetic. I
can give it feedback on the aesthetic
right here. You see how different this
is from the chat. Like before in chat, I
would have to say, wait, stop. I want it
to be like a modern presentation or
whatever. Not anymore. I can just adjust
it. It's giving me a suggested slide
structure. I'm going to say, please add
a note on non-obvious insights and implications to the
presentation. And it's right in the
middle of the work. I'm just going to
throw it in. You can see where it's
working. It's got a shared CSS file it's
working on here. You can see the context
it's got. It's now starting to create
the slides. It's using these skills. I
love the transparency here. And if you
want to do something else, you can
immediately just slip over here, open up
a new task and say, can you please look
at my Google calendar and give me an assessment of how busy I am and what would
be the most useful shift to my daily ritual to prepare more effectively for work.
And this is all happening in the background, right? Claude is still working on the other presentation.
So, I can just start this one off. And I
have my Google calendar open in my
browser. And so, it's looking through.
It's going to continue doing its
analysis. We can go back. Now, we're
going to check back in on all of the
work that Claude is doing here. So, you
see I have multiple agents running,
right? Like Claude is doing research on
the one hand for Claude Co-work to build
me a slide presentation. The same Claude
Co-work is also working to analyze my
schedule. And you can do five, six,
seven of these. Now, I asked it to be a
little bit impersonal here, so I don't
reveal people's private information, but
it talks about how I'm busy, how I need
to defend my breakfast block, how I need
to defend my wake window, and having a
time to work out every day is a good
thing. Now, I will be honest with you,
these are not absolutely groundbreaking
assessments. The thing that's
significant is I can do this in parallel
looking at the calendar, come back,
it'll give me assessments all at the
same time that it's working on my
PowerPoint deck. And that's the thing I
want you to grab a hold of. And yes,
it's still working on the PowerPoint
deck. And you can actually see all of
the different artifacts it's created
along the way. Let's start a new task.
Now I'm looking for duplicate files in downloads: where have I got extra files? And it has access to the downloads folder because I gave it access to that at the beginning of the task, and it's just running. It's still working on creating
slides. I'll go back to the downloads.
This is what the future of work looks
like. It looks like jumping back and forth between these different tabs. You can see what it's running here now.
It's copying the PowerPoint to the
downloads folder. Look at that. It gives
me my sources, all the things it looked
at. And it's going to give me a handy
little button to open in PowerPoint. And
yes, it really did make the PowerPoint.
It made it from scratch. You can go through and see the key Co-work features: how it works, real-world use cases, availability and pricing, the non-obvious insights (which it added in the middle), and the bigger picture. This was all done in the
middle of doing three or four other
things. This is what I mean by the
future is here. So, if you're not using
co-work, you are missing out on the
future of work. I've got a whole guide
on it up on Substack. This is by far the
most exciting thing that I have seen
come out of AI in the last few months.
And I know that people will accuse me of being hypey, but the thing that makes
this a breakthrough is that it's not for
technical people. Everybody can use
this. There was no code in what I described. It was just asking the AI agent to do stuff for you, and it did it. And I know not everybody has the Max plan, so I wanted to give you that hands-on look so you can see it for yourself. Good luck out there.