0:01 The most interesting thing about
0:04 OpenClaw is not the agent, it's the web.
0:07 The web is forking in the age of agents,
0:09 and nobody's talking about it enough.
0:11 Last Tuesday, three things happened
0:12 within hours of each other. Coinbase
0:14 launched Agentic Wallets, which are
0:16 crypto wallets designed not for people,
0:18 but for agents. Cloudflare shipped
0:20 Markdown for agents, a feature that
0:22 automatically converts any website into
0:24 agent readable markdown when an AI
0:26 system requests it. And then OpenAI
0:28 published a developer blog post about
0:30 skills and shell tools that let agents
0:31 install software dependencies, run
0:34 scripts, and write files inside hosted
0:36 containers. None of these companies
0:38 coordinated their announcements. They
0:40 didn't need to. They're all building
0:43 toward the same future. They all see the
0:45 OpenClaw phenomenon, and that future is
0:47 arriving faster than any of them or most
0:50 of us expected. In the last few videos,
0:52 I've covered Open Claw's chaotic launch,
0:54 the emergent behaviors that made
0:57 researchers rethink agent capability and
0:59 what thousands of community-built skills
1:01 reveal about what people actually want
1:04 from their AI agents. This video is
1:06 about something bigger than Open Claw.
1:08 It's about the infrastructure layer
1:10 that's forming under it and underneath
1:12 every agent that comes after it. It's
1:15 about a new kind of web. Every major
1:16 infrastructure company on the internet
1:19 is now simultaneously building a
1:21 different piece of what amounts to an
1:24 entirely new way for commerce and
1:26 interaction to get done across the
1:28 internet. And those pieces are snapping
1:30 together faster than most of our mental
1:32 models can track. Let's start with the
1:33 money. Agents can't do much on the web
1:35 if they can't pay for things. Coinbase's
1:38 Agentic Wallet solved this on the crypto
1:41 side using a protocol called x402 that's
1:43 already processed over 50 million
1:45 machine-to-machine transactions. Yes, you
1:47 heard that right, 50 million. The
1:49 wallets come with programmable spending
1:51 limits, session caps, and gasless
1:53 trading on Coinbase's Base network.
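Those programmable limits are worth making concrete. Here's a minimal sketch of what per-transaction limits and session caps look like in code. This is my own illustration of the concept, not Coinbase's actual SDK, and all names are hypothetical:

```python
class SpendGuard:
    """Illustrative spending guardrail for an agent wallet:
    a per-transaction limit plus a cumulative session cap."""

    def __init__(self, per_tx_limit: float, session_cap: float):
        self.per_tx_limit = per_tx_limit
        self.session_cap = session_cap
        self.spent = 0.0  # running total for this session

    def authorize(self, amount: float) -> bool:
        """Approve a payment only if it fits both limits; track session spend."""
        if amount <= 0:
            return False
        if amount > self.per_tx_limit:
            return False  # single transaction too large
        if self.spent + amount > self.session_cap:
            return False  # would blow through the session cap
        self.spent += amount
        return True


guard = SpendGuard(per_tx_limit=5.00, session_cap=20.00)
print(guard.authorize(3.00))   # True: within both limits
print(guard.authorize(10.00))  # False: over the per-transaction limit
```

The point of enforcing this outside the agent, like the key material sitting in secure hardware, is that a compromised or confused agent can't talk its way past the cap.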
1:55 Developers can spin one up in under 2
1:56 minutes with a command line tool. And
1:58 the wallets use non-custodial
1:59 architecture, which means that even if
2:02 the agent is compromised, the keys
2:05 themselves sit in secure hardware that
2:07 the agent cannot access. So the agent
2:10 can't leak those keys. Within 24 hours
2:12 of this launch, a wave of
2:14 new AI agents registered wallets on
2:16 Ethereum. That's not developer
2:19 experimentation. That's an ecosystem of
2:21 agents with wallets forming in real
2:23 time. The use cases that Coinbase
2:25 highlighted tell you where Coinbase
2:27 thinks this is going. Agents that
2:30 autonomously rebalance DeFi portfolios,
2:32 agents that pay for API calls as they
2:35 make them. Agents that purchase compute
2:37 on demand and participate in creator
2:39 economies. Brian Armstrong's pitch is
2:41 quote, "The next generation of agents
2:43 won't just advise, they'll act." Which,
2:45 like, duh, that's what Open Claw is all
2:47 about. But this is clearly where he's
2:49 going. What he did not say is that the
2:51 architecture implies that agents with
2:53 wallets will become real economic
2:56 entities, that can earn, that can spend,
2:58 and that accumulate capital
3:00 independently of the humans who created
3:03 them. That's a category of software that
3:05 has never existed before. And that is a
3:07 whole mess of legal problems that we
3:08 have not encountered yet. Stripe is
3:10 solving the same problem on the
3:12 traditional payment side. Their Agentic
3:14 Commerce suite, which was launched in
3:16 December, allows businesses to connect a
3:18 product catalog and start selling
3:20 through AI agents with a single
3:22 integration. They built a new payment
3:25 primitive called shared payment tokens,
3:28 scoped, time constrained credentials
3:30 that let an agent initiate a purchase
3:32 using a buyer saved payment method
3:34 without ever seeing the card number.
3:36 Stripe's fraud detection system, Radar,
3:39 had to be retrained from scratch because
3:42 the old signals were all calibrated for
3:44 human shopping behavior. Think about
3:46 what that means. Decades of fraud
3:48 detection machine learning built on
3:50 patterns like mouse movement
3:53 variability, browsing time, session
3:56 behavior, device fingerprinting, all of
3:59 it became useless when the buyer is
4:02 software. Agent traffic doesn't move a
4:04 mouse. It doesn't browse. It doesn't
4:06 exhibit the behavioral variability that
4:08 distinguishes a legitimate shopper from
4:10 a bot. Stripe had to build an entirely
4:13 new fraud model for a client that is by
4:16 any prior definition a bot. And yet now
4:18 bots are purchasers. Brands including
4:20 Urban Outfitters, Etsy, Coach, Kate Spade, and
4:23 Revolve are all already onboarding.
4:25 Google is getting in on the action, too.
4:26 They launched their agent payments
4:28 protocol back in September. PayPal and
4:31 OpenAI partnered on instant checkout in
4:33 ChatGPT. Visa debuted its Trusted Agent
4:36 Protocol at NRF 2026, a retail
4:38 conference, in January. Google
4:39 announced the Universal Commerce
4:41 Protocol, an open standard for
4:43 agent-to-commerce interaction, and
4:46 Stripe's ACS immediately added support for
4:47 it, meaning merchants who integrated
4:49 Stripe's agent tools are already
4:51 compatible with Google's agent shopping
4:53 infrastructure without writing one more
4:55 line of code. The industry consensus, as
4:57 a decrypt analyst put it, is quote,
4:59 "Agents that can't spend money are
5:02 fundamentally limited," which is true,
5:03 but there's a whole lot down the road
5:06 once you do that. Nevertheless, every
5:08 major payment company reached this
5:10 conclusion independently within the same
5:12 couple-of-months window. But payments
5:13 aren't the whole story.
5:15 Let's go over to content access. The web
5:18 is made of HTML, and HTML is designed
5:21 for human browsers, not language models.
5:23 Pages are bloated with scripts, tracking
5:26 pixels, navigation menus, and ads. When
5:29 an agent needs to read a web page, it
5:30 has to strip all of that stuff that we
5:32 humans like out of the way and convert
5:34 it into something useful. Usually,
5:36 that's markdown. This is such a common
5:38 step that an entire category of
5:41 companies like Firecrawl or Exa exists
5:43 just to do that conversion. Now,
5:46 Cloudflare's Markdown for agents cuts
5:47 out that middleman. When an AI agent
5:50 requests a page from any
5:53 Cloudflare-enabled site, it sends an Accept header
5:55 and Cloudflare intercepts the request,
5:57 fetches the HTML from the origin server,
6:00 converts it to markdown on the fly, and
6:02 serves it back. The response even
6:04 includes an x-markdown-tokens header
6:07 with the estimated token count, so the
6:10 agent can manage its own context window.
6:12 No scraping anymore, no conversion
6:15 libraries, no wasted compute. The agent
6:17 just asks for markdown and gets
6:19 markdown. This matters a lot more than
6:21 it might sound. Cloudflare serves
6:24 roughly 20% of the web. When they decide
6:27 agents are first-class citizens of the
6:29 web, which is what they just did, and
6:31 that agents are not to be
6:33 blocked but rather served as
6:35 clients in their preferred format,
6:38 markdown, Cloudflare is making an
6:40 infrastructure level commitment to a
6:43 world where software reads websites as
6:46 routinely as humans do. And Cloudflare
6:48 isn't stopping at markdown conversion.
6:50 They launched three companion features
6:53 in the same release. First, llms.txt
6:55 and llms-full.txt, which are
6:57 standardized machine-readable site maps
6:59 that tell agents what's on a site and
7:02 how to navigate it, just like robots.txt
7:04 told search engine crawlers the exact
7:07 same thing two decades ago. Second,
7:10 Cloudflare launched AI index. It's an
7:12 opt-in search index where sites can make
7:14 their content discoverable to agents
7:16 directly through Cloudflare's MCP server
7:19 and search API. And that means they can
7:22 bypass Google entirely. Third and most
7:25 telling, Cloudflare is including built-in
7:28 x402 monetization support. So site owners
7:31 can charge agents for content access
7:34 using the exact same protocol as
7:37 Coinbase's wallets. Cloudflare isn't
7:39 just making the web readable for agents.
7:42 They're building an economic layer for a
7:45 web where agents pay to access content.
7:47 Then there's search. Google search is
7:49 optimized for humans, obviously. 10 blue
7:52 links, ads, featured snippets, knowledge
7:53 panels. Recently, they added AI
7:56 summaries. None of that is useful to an
7:58 agent that needs to programmatically
8:00 find specific information and then come
8:03 back with structured data. Exa.ai built
8:04 a search engine from scratch
8:06 specifically for agents, their own
8:08 index, their own neural retrieval
8:09 models, their own embedding
8:12 infrastructure. Their API returns raw
8:14 URLs and content, not search engine
8:16 result pages. Their research endpoint
8:18 chains multiple searches together,
8:20 agentically parallelizing across output
8:23 fields to minimize latency. It scores
8:26 95% on SimpleQA, a benchmark for
8:29 factual accuracy. For comparison,
8:31 Perplexity scores lower. So, if you're
8:33 thinking, is this going to be a new bar
8:35 for accurate agentic search? You would
8:37 be right. But the benchmark results are
8:39 much less interesting than what this is
8:41 all implying about the future of
8:43 internet market structures. Google built
8:45 a search engine for humans and spent
8:48 decades perfecting it. Now there's a
8:50 parallel need search for machines and
8:53 Google's architecture is the wrong shape
8:56 for that. The companies that build agent
8:58 native search from first principles have
9:01 an actual structural advantage, not just
9:03 a marketing one. An independent
9:05 benchmark from AIMultiple tested the
9:07 major agent search providers
9:09 head-to-head. Brave Search led on a
9:11 composite agent score; Firecrawl, Exa,
9:13 and Parallel Pro were statistically tied
9:16 behind it. But the latency spread tells
9:17 you where the real differentiation is
9:20 starting to live. In an agent workflow,
9:22 Brave returned results in 669
9:24 milliseconds, which is about 2/3 of a
9:29 second. Parallel Pro took 13.6
9:31 whole seconds. In an agent workflow
9:34 where each search is one step in a long
9:37 chain, that latency difference compounds
9:39 into minutes really, really fast. The
9:41 providers that own their own
9:43 infrastructure and their own agentic
9:45 index rather than wrapping Google's API
9:48 have a structural speed advantage that
9:50 grows much more valuable as agent
9:52 workflows get more complex. And guess
9:54 what? They're going to in 2026. And then
9:57 there's execution. Openai's blog post on
9:59 skills, shell, and compaction reads like
10:01 a road map for turning agents from
10:04 advisors into workers. Skills are
10:06 reusable, versioned instruction bundles.
10:07 We've heard about them from Claude
10:09 before. I've talked about them a fair
10:11 bit. Think of them as standard operating
10:13 procedures for AI for a particular task.
10:15 An agent can load them on demand,
10:16 immediately learn the skill, and get
10:19 going. The shell tool gives agents a
10:20 real terminal environment where they can
10:23 install dependencies, run scripts, and
10:25 write output files. Compaction manages
10:27 the context window automatically so that
10:30 long-running agent workflows don't crash
10:32 when they hit token limits. The details
10:34 matter here because they reveal OpenAI's
10:36 bet about what agent architecture
10:38 actually is going to look like in
10:41 production. Skills aren't prompts.
10:43 They're versioned. They're mountable
10:45 instruction packages. They look more
10:47 like Docker images than chat templates.
10:49 An organization can build a Salesforce
10:51 skill, test it, lock down the version,
10:53 and deploy it across every agent in the
10:55 company with a guarantee that every
10:58 agent follows the same procedure. When
11:00 the procedure changes, you just update
11:02 that skill version and every single
11:03 agent will follow. You don't have to
11:05 mess with system prompts or anything else.
11:07 That's the difference between artisanal
11:10 prompt engineering and actual software
11:12 engineering applied to AI operations.
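The Docker-image comparison suggests what deployment looks like in code. A hypothetical sketch of version-pinned skills, emphatically not OpenAI's actual API, just the shape of the idea:

```python
# Hypothetical skill registry: procedures are versioned artifacts,
# and every agent resolves the same pinned version.

SKILLS = {
    ("salesforce-update", "1.2.0"):
        "Steps: 1) locate record 2) validate fields 3) write, then log.",
    ("salesforce-update", "1.3.0"):
        "Steps: 1) locate record 2) validate fields 3) write, log, notify owner.",
}


def load_skill(name: str, pin: str) -> str:
    """Resolve an exact version, like installing a pinned dependency."""
    try:
        return SKILLS[(name, pin)]
    except KeyError:
        raise LookupError(f"{name}=={pin} not published") from None


# Rolling the org forward is one change: update the pin, and every
# agent that resolves it follows the new procedure.
PINNED = {"salesforce-update": "1.3.0"}
procedure = load_skill("salesforce-update", PINNED["salesforce-update"])
print(procedure.endswith("notify owner."))  # True
```

This is the same discipline as a lockfile: the behavior of the fleet changes atomically at the pin, not agent by agent.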
11:14 The shell tool is equally telling. It
11:16 gives agents a real Linux environment,
11:19 not a sandbox playground, but a terminal
11:21 where they can write files to disk and
11:24 type commands like install, curl, and
11:26 grep. The pattern OpenAI describes,
11:28 installing dependencies, fetching
11:30 external data, producing a real
11:32 deliverable, is functionally
11:35 identical to how a human freelancer
11:37 works today. Human freelancers read the
11:39 brief, set up the tools, do the
11:41 research, and deliver the artifact. So
11:44 do agents. The difference is the agent
11:46 can now do it inside a container in just
11:49 a few seconds. And skills ensure that it
11:52 follows the same procedure every single
11:54 time. Glean, an enterprise search
11:55 company and an early skills
11:58 customer, saw accuracy on
12:01 Salesforce-related tasks jump from 73 to
12:03 85% with a single well-structured
12:05 skill. At the same time it got faster
12:07 because the agent wasn't thinking about
12:09 what to do and they saw about an 18%
12:12 decrease in time to first token which
12:15 matters when every single query counts.
12:17 The gains come from moving stable
12:19 procedures out of the system prompt and into
12:22 versioned modular instruction bundles
12:24 which is frankly again just software
12:26 engineering applied to AI workflows.
12:28 We're not reinventing the wheel here.
12:30 Nothing revolutionary. Everything that
12:32 is revolutionary comes from second order
12:35 effects. All we're doing is a classic
12:37 enterprise deployment except we're doing
12:40 it with AI. We now have version control,
12:42 testing, roll back. That part isn't new.
12:44 The part that's new is that we're doing
12:47 all of this for autonomous AI agents.
12:48 Last but not least, they launched
12:50 compaction, which is not a particularly
12:52 flashy feature, but which is super
12:54 important to support long-running
12:56 workflows. Any agent running for a while
12:59 accumulates pages of search results, API
13:01 responses, calculations, conversation
13:03 history, and the context window gets
13:06 dirty. It fills up. The agent starts to
13:08 forget earlier steps or drift. The agent
13:10 may crash. Compaction handles all of
13:13 this server side and automatically
13:15 summarizes and compresses the context to
13:17 keep the agent operational across
13:18 workflows that would otherwise be
13:20 impossible. It's the kind of feature
13:23 that makes agents viable for tasks that
13:25 take longer, like hours instead of just
13:27 a few minutes. And that kind of
13:29 sustained multi-step work at scale
13:32 redefines how easily you can roll out
13:34 agents across an enterprise environment.
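The mechanics are simple to sketch, even though OpenAI runs this server-side. A toy version under two stated assumptions: a word-count stand-in for a real tokenizer, and a placeholder function standing in for an LLM summarization call:

```python
def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per word."""
    return len(text.split())


def summarize(messages: list[str]) -> str:
    """Stand-in for an LLM summarization call."""
    return f"SUMMARY({len(messages)} earlier steps)"


def compact(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """When the transcript exceeds the token budget, collapse everything
    but the most recent steps into a single summary message."""
    total = sum(count_tokens(m) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent


history = ["search results " * 50, "api response " * 50, "step three", "step four"]
compacted = compact(history, budget=40)
print(len(compacted), compacted[0])  # 3 SUMMARY(2 earlier steps)
```

The recent steps survive verbatim while the bulk gets compressed, which is why a long-running agent can keep acting coherently instead of crashing at the token limit.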
13:36 So let's step back. What happens when
13:38 you combine all of the different
13:40 primitives I have been talking about
13:42 here? An agent that has a wallet, search
13:45 capabilities, content access, payment
13:47 rails, and an execution environment is
13:50 more than an assistant. It is an
13:52 economic actor. Consider what a
13:54 developer calling himself chat app
13:56 demonstrated on X this week. He
13:59 connected OpenClaw to Seedance 2.0, which
14:01 is a video generation model inside an
14:03 app called Chatcut. Then he sent the
14:06 agent an Amazon product link. The agent
14:08 crawled the Amazon page, extracted
14:11 product info and photos, identified
14:13 which assets were suitable for video
14:15 generation, fed them into Seedance,
14:17 which is an incredible video model, and
14:20 produced a user-generated content style
14:22 product video. The kind of content that
14:25 brands pay creators a thousand bucks to
14:28 produce. No human touched any step
14:30 between paste this link and here's your
14:33 video. I watched it. It looks pretty
14:35 good. That is the emergent web. Not
14:38 agent doing a task, but agents chaining
14:41 capabilities together across services to
14:44 produce outputs that previously required
14:46 multiple humans and multiple tools. The
14:49 Amazon page wasn't designed for agents.
14:52 Seedance 2.0 actually wasn't designed to
14:55 receive input from web crawlers. Chatcut
14:57 wasn't designed as an orchestration
15:00 layer, but because each piece exposes
15:02 its capabilities through APIs and
15:03 structured data, the agent can
15:04 stitch them together
15:07 into a workflow that no individual
15:09 company planned. This is the pattern
15:11 that the infrastructure convergence
15:14 makes inevitable. When content is
15:16 available as markdown, search returns
15:18 structured data, execution happens in
15:20 containers, and payment flows through
15:22 tokenized protocols, the agent
15:24 doesn't need anybody to build an
15:26 integration between A and B. It can read
15:29 both services, understand both, and
15:31 chain them together on the fly. The
15:33 emergent web is therefore not a platform
15:35 that any one person is going to build.
15:38 It's what happens automatically when the
15:40 primitives exist and the agent is smart
15:42 enough to combine them together. And the
15:44 agents increasingly are. The
15:46 implications for the creator economy
15:49 alone are staggering. The UGC product
15:50 video would have cost, you know, a
15:52 thousand bucks and the agent can
15:54 replicate that workflow from one link,
15:56 maybe not with human creative judgment,
15:59 but at a cost that approaches zero and a
16:00 turnaround time measured in a couple of
16:03 minutes. If you multiply that by every
16:05 content type that follows a repeatable
16:06 pattern, like product descriptions,
16:09 social media posts, email campaigns,
16:12 comparison articles, you start to see
16:14 why the infrastructure companies are
16:15 building for a scale that isn't there
16:18 yet. They are seeing a world where this
16:21 kind of emergent agent behavior is the
16:24 norm, the default, not just a weird demo
16:26 from a guy on X. Polymarket provides
16:28 the most provocative case study of where
16:30 this goes. The prediction market
16:34 platform processed $12 billion in volume
16:37 in January 2026 alone. Researchers from
16:40 IMDEA Networks Institute analyzed 86
16:42 million bets and found that algorithmic
16:45 traders extracted roughly $40 million in
16:47 arbitrage profits over a 12-month
16:49 period. The top three wallets placed
16:53 over 10,000 bets combined. Only half a percent
16:56 of all Polymarket users earned more
16:59 than $1,000. The rest were effectively
17:01 just providing liquidity for bots to
17:03 extract value. And here's where it gets
17:05 even more interesting. Polymarket
17:08 itself tweeted in early February of this
17:11 year that quote autonomous AI agents are
17:14 now trading on Polymarket in an attempt
17:16 to subsidize their token costs. Agents
17:18 are trying to earn money to pay for
17:21 their own compute. The loop is closing.
17:23 Meanwhile, the data on how well agents
17:25 are doing is mixed but illuminating.
17:28 Olas protocol's PolyStrat agents, among
17:30 the most sophisticated autonomous
17:32 prediction market systems that are being
17:35 publicly tracked, achieve maybe 55 to 65%
17:37 win rates over time with performance
17:39 varying really dramatically by domain.
17:41 Agents tend to be better at predicting
17:43 things that follow from data rather than
17:44 things that follow from culture, which
17:46 is not surprising. It tells you the kind
17:48 of economic activity that agents are
17:49 really well suited for versus the kind
17:51 that maybe humans are well suited for.
17:53 I'm not sure we'll see an agent doing
17:55 the Met Gala anytime soon. The
17:58 cumulative volume of AI trades on
18:01 Polymarket is continuing to grow, and it's
18:03 only going to accelerate when you have AI
18:05 agents by the thousand registering wallets
18:07 and trying to get into the market.
18:09 This is where the scam also lives.
18:11 TikTok right now is flooded with videos of
18:13 people claiming to turn, I don't know,
18:15 50 bucks into 3,000 bucks in a couple of
18:17 days. These videos get thousands of
18:19 likes. These videos get thousands of
18:22 bookmarks. People are clearly hungry for
18:24 the words AI and make money in the same
18:27 video. The reality is considerably less
18:30 glamorous. The bot that famously turned
18:33 $313 into $438,000
18:36 in a month was running latency
18:38 arbitrage, exploiting a millisecond gap
18:40 between when Bitcoin moved on Binance
18:43 and when Polymarket odds adjusted. That
18:45 kind of algorithmic trading is not what
18:47 your open claw bot is going to be able
18:49 to do. That is highfrequency trading
18:51 which has been known in finance circles
18:53 for a long time and is just being
18:55 applied to Polymarket as the market
18:58 matures. It requires really fancy setups
19:00 like colocated infrastructure with sub-
19:03 10 millisecond latency. It requires
19:05 capital that is a whole lot larger than
19:07 any TikTok video would suggest. And if
19:09 you try and do it with something like an
19:12 OpenClaw agent, you're going to run into
19:14 real costs. One developer who actually
19:16 built and tested an autonomous
19:17 Polymarket agent reported that
19:20 Cloudflare blocks API requests from data
19:23 center IPs and requires custom bypass
19:25 infrastructure just to place orders.
19:26 Another one found that running the bot
19:29 for just a couple of days racked up 200
19:32 bucks in API fees alone. So yes,
19:34 sophisticated autonomous trading agents
19:38 can generate returns on Polymarket. No,
19:40 you cannot replicate this with your
19:43 OpenClaw agent by feeding it a TikTok tutorial.
19:45 The infrastructure requirements, the API
19:47 costs, and the competitive dynamics make
19:49 this a game for well-capitalized tech
19:52 operators, not retail experimenters. But
19:54 the underlying premise, the thing we've
19:56 been talking about all video, the idea
19:58 that agents can participate in economic
20:00 activity and generate revenue, that is
20:02 not a scam. That is the direction that
20:05 Coinbase, Stripe, Google, PayPal, Visa,
20:07 and OpenAI are all aggressively building
20:10 toward simultaneously with billions of
20:12 dollars in infrastructure investment.
20:14 The question isn't whether agents will
20:17 be able to transact autonomously. The
20:18 question is whether guardrails will be
20:21 built fast enough to prevent very
20:23 predictable disasters. I covered
20:26 OpenClaw's security nightmare in detail in
20:28 my first video. The one-click remote
20:30 code execution, malicious skills
20:32 disguised as crypto tools, Cisco's
20:34 research team finding data exfiltration
20:36 in a third party skill. I'm not going to
20:39 rehash all of that. What I want to focus
20:41 on instead is the structural problem
20:43 that those incidents illustrate because
20:45 it scales with the infrastructure for
20:47 agent commerce. Every primitive that
20:50 makes agents more capable also makes
20:52 them more dangerous. An agent with a
20:54 wallet can pay for APIs or get drained
20:56 by a malicious skill. An agent with
20:58 shell access can install dependencies or
21:00 execute arbitrary code injected through
21:03 a prompt. An agent with search can find
21:06 information or be redirected to
21:08 adversarial content designed to
21:10 manipulate its behavior. And last but
21:11 not least, an agent with Cloudflare
21:14 served Markdown can read websites or
21:16 consume poison content at machine speed.
21:19 It's kind of your choice. The security
21:21 community is already responding to the
21:23 threats that come with these new
21:24 primitives. And the responses are
21:26 instructive because they reveal what
21:28 serious people think the real attack
21:29 surface is going to look like for
21:32 agents. IronClaw is a Rust-based
21:35 re-implementation of OpenClaw by NEAR.ai
21:38 co-founder Illia Polosukhin, and it
21:40 sandboxes every single tool that
21:43 OpenClaw uses into isolated WebAssembly
21:45 environments, the assumption being that any
21:47 tool an agent touches is a potential
21:50 compromise vector. OpenAI's shell tool
21:52 meanwhile includes org-level and
21:54 request-level network allow lists, domain
21:56 secrets that prevent credential leakage,
21:58 and container isolation. The assumption
22:00 being that agents will run untrusted
22:02 code and the environment must contain
22:05 the blast radius. Coinbase's agentic
22:07 wallets use enclave isolation for
22:09 private keys and programmable spending
22:11 guardrails. The assumption there being
22:13 that the agent itself cannot be fully
22:15 trusted with the assets it manages.
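Each of those mitigations reduces to a deny-by-default check. Here's the network allow-list idea from the shell tool in miniature, assuming exact-host matching for simplicity, with illustrative hostnames:

```python
from urllib.parse import urlparse

# Org-level egress policy (hosts here are illustrative).
ALLOWED_HOSTS = {"api.github.com", "pypi.org"}


def egress_allowed(url: str, allowlist: set[str] = ALLOWED_HOSTS) -> bool:
    """Deny-by-default outbound check: an agent's request only leaves
    the sandbox if the destination host is explicitly allow-listed."""
    host = urlparse(url).hostname
    return host is not None and host in allowlist


print(egress_allowed("https://pypi.org/simple/requests/"))  # True
print(egress_allowed("https://evil.example/exfil"))         # False
```

The crucial property is the default: an unknown destination, including a malformed URL injected through a prompt, is denied rather than allowed.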
22:18 Notice the pattern across all of these.
22:20 Every serious security approach treats
22:23 the agent as a potential adversary. That
22:25 is the correct approach. It does not
22:28 treat the agent like a trusted employee.
22:29 That is the right mental model for where
22:32 we're at at this point in 2026. And it's
22:34 one that most of the TikTok
22:37 tutorial crowd has not internalized.
22:39 Look, agents have existed for a while
22:42 now. APIs have existed for decades. The
22:43 concept of software transacting with
22:46 software predates the web itself. What's
22:49 new is all of these factors converging
22:51 to make the agentic web. In the span of
22:53 just a few months, every layer of the
22:56 stack went from concept to production to
22:59 infrastructure. Money, content, search,
23:03 execution, identity, all in production
23:05 now simultaneously. The web is starting
23:08 to fork. There's the human web, the one
23:09 you're reading right now or listening to
23:12 a video on right now with fonts and
23:14 layouts and images and scroll
23:16 animations. And at the same time, in
23:18 parallel, on another fork, there's the
23:21 agent web, a parallel layer of APIs,
23:23 structured data, markdown content,
23:25 payment protocols, and execution
23:27 environments designed for software that
23:29 will never open a browser. These two
23:31 webs run on the same physical
23:33 infrastructure, the same servers, the
23:35 same CDNs, the same payment rails, but
23:37 they serve fundamentally different
23:39 clients with different needs. A human
23:41 wants a beautiful product page. An
23:43 agent wants a JSON payload with the
23:45 price, the availability, and the payment
23:47 endpoint. A human might want search
23:49 results they can browse. An agent just
23:51 wants structured data to act on. You get
23:53 the idea. A human wants a checkout flow
23:55 with trust signals. The agent just wants
23:57 tokenized payment primitives and will be
23:59 getting on with its day. The analogy
24:01 that keeps coming to mind for me as I
24:04 look at this is the early mobile web. In
24:06 2007 when the iPhone launched, the web
24:09 already existed. It worked on phones
24:11 technically, but it was designed for
24:13 desktops, and the experience, I can
24:16 testify, was terrible. What followed was
24:18 a decade long rebuild for the mobile
24:21 web: responsive design, mobile-first
24:23 frameworks, app stores, push
24:26 notifications, GPS-aware services, tap to
24:28 pay. The underlying infrastructure was
24:31 the same, but the interface layer forked
24:33 completely. The companies that
24:35 recognized the fork early, that built
24:37 for the new client instead of trying to
24:39 make the old interface work on the new
24:41 device, those were the ones that built
24:43 the dominant platforms of the next era.
24:45 We are at the same inflection point
24:48 today except the new client isn't a
24:50 smaller screen. It's not a screen at
24:53 all. It's software that reads, decides,
24:56 pays, and acts. The interface it needs
24:58 isn't visual. It's structured. It's
25:00 programmable. It's transactional. And
25:02 the companies building that interface
25:04 right now, they're not the startups that
25:06 are hoping to get lucky. They're the big
25:08 boys. They're Coinbase, Stripe,
25:10 Cloudflare, Google, OpenAI, Visa,
25:11 PayPal, companies with the
25:14 infrastructure, scale, and distribution
25:17 to make their design decisions into de
25:19 facto web standards. The mobile fork
25:21 created trillion-dollar companies,
25:22 right? It created Uber, Instagram,
25:24 WhatsApp, Snap. They would not have
25:26 existed on the desktop web. Not because
25:29 the desktop web lacked capabilities, but
25:31 because it lacked the interface
25:32 primitives that mobile clients really
25:35 needed. It lacked real-time location,
25:37 always on connectivity, camera first
25:40 interaction, push notifications, tap to
25:42 pay at physical registers. The agent
25:46 fork is going to do the same thing again
25:48 in the 2020s. The businesses that emerge
25:50 from it will be the ones that could not
25:52 have existed on the human web, not
25:54 because the human web lacks information,
25:56 but because it lacks the interface
25:59 primitives that agent clients really
26:01 need: structured data, tokenized
26:03 payments, machine readable content,
26:04 programmatic search, execution
26:07 environments. In my last video on Open
26:10 Claw, I talked about the 70/30 rule. The
26:12 idea that people consistently want to
26:15 maintain maybe roughly 70% human control
26:17 of agent delegated tasks. That's the
26:19 demand side. That's the human side of
26:21 the story, right? This video has really
26:24 been about the supply side of the story,
26:26 the agent side of the story. And that
26:30 side doesn't care about our 70/30 split
26:31 or what kind of control we want to
26:33 maintain. The infrastructure that is
26:35 being built right now that I have spent
26:39 this video talking about assumes a 0/100
26:41 world: fully autonomous agents with
26:43 their own wallets, their own search
26:45 capabilities, their own execution
26:47 environments, and their own economic
26:49 relationships with the services they
26:52 use. The gap between the infrastructure
26:54 being built and the trust people are
26:57 willing to extend to agents is the
26:59 central tension of the next few years in
27:02 AI. Every company in the agent stack is
27:05 betting that trust will catch up to the
27:07 capabilities that are being built today.
27:10 And every security incident that we see,
27:12 especially with the open claw story,
27:14 things like ClawHavoc, like the
27:17 500-message iMessage disaster, like production
27:19 databases being wiped by unsupervised
27:22 agents, those kinds of stories push the
27:25 timeline of trust back even though they
27:27 don't stop people trying agents. For
27:29 now, the agent web is really small.
27:31 Developers running OpenClaw on Mac minis
27:33 and VPS instances, AI shopping
27:35 assistants placing orders through
27:37 Stripe's ACP. But I want to call out
27:40 that small now does not mean small
27:43 later. Because of how quickly OpenClaw
27:45 is growing and how much venture
27:48 funding is going into agents in 2026, we
27:51 are likely to see explosive growth in
27:55 this new branch of the web in 2026. I
27:58 don't know if a fully realized agentic
28:01 web arrives in 3 months, 3 weeks, or 2
28:04 years. That's an open question. That
28:07 it's being built is not a question, and I
28:09 increasingly have no doubt we are headed
28:11 toward a world where agents are as
28:14 ubiquitous on the web as people. It is
28:17 up to us to shape those web standards so
28:19 they work well for both agents and
28:21 people. And it's up to us to make sure
28:23 the primitives that we build like
28:25 payments, like security, are robust
28:28 enough that we actually can trust agent
28:30 operations and agent economics, the way
28:32 we've learned to trust other humans for
28:35 commerce over the web. Without that base
28:37 layer of trust, the future of the
28:40 agentic web may be stillborn. And that
28:41 is the thing I want to leave you with.
28:45 What is going to build trust in the
28:47 agentic web? And as much as these
28:49 companies are investing in primitives,
28:52 the primitive of trust is something that
28:54 we are going to have to see realized
28:58 over time by good faith actors who are
29:00 building for a future where both humans
29:03 and agents work on the web together. If
29:05 you know of someone building in that
29:07 space, pop them in the comments. I'd
29:09 love to see them. Best of luck out there
29:11 and uh enjoy the wild agentic web. It's