0:01 Right now there's a battle playing out
0:03 at the heart of the agent world and it's a
0:05 battle between titans, right? Nvidia's
0:07 on one side with Nemo Claw, OpenAI and
0:09 Anthropic are on the other side. If
0:11 you're telling me Nate, no, no, no,
0:12 they're all building agents, I'm the
0:14 first to agree with you. That's not the
0:17 point. The point is that Anthropic and
0:21 OpenAI spent a year in 2025 figuring
0:24 out that the companies they work with
0:26 did not have the expertise to actually
0:28 apply the solutions they were giving
0:30 them. So they would launch cool stuff
0:32 like Codex and Claude Code and see it
0:34 suffer in production when they could not
0:36 figure out how to get actual teams at
0:38 actual businesses to adopt them in ways
0:40 that they themselves were using
0:42 internally, right? Anthropic ships, I swear,
0:44 every 8 hours, right? And OpenAI ships
0:46 very, very fast as well. But they weren't
0:47 seeing those speedups at other
0:48 companies and they could not figure out
0:51 why. And so now, because of that year of
0:54 failures, OpenAI and Anthropic are very
0:57 publicly tying up with big consulting
0:58 firms and they're doing that because
1:01 they know that they need to find ways to
1:03 work with services firms to get their
1:07 actual content, their actual code into
1:10 the hands of people in a way that's
1:13 accessible to them. It turns out that AI
1:15 doesn't teach itself, at least not for
1:17 most people. And I think that's a bitter
1:19 lesson that Anthropic and OpenAI have
1:21 learned. I don't know that Nvidia agrees
1:23 because on the other side of this,
1:26 Nvidia just launched Nemo Claw and the
1:28 backstory there is very very different.
1:31 Nemo Claw came from the OpenClaw
1:34 moment, right? Jensen walked out onto
1:36 the stage and he said this is the
1:39 future, right? The future is open claw
1:41 because the future is an agentic
1:42 operating system. And that's what he
1:44 saw. And so regardless of what you think
1:47 about OpenClaw the piece of software
1:50 that Peter Steinberger coded, OpenClaw
1:52 the system, OpenClaw the paradigm,
1:54 OpenClaw the idea, that's what Jensen
1:56 was talking about. And he wanted to take
1:59 that idea and bring it securely to the
2:01 enterprise. Because of course the big
2:02 thing with OpenClaw if you're in
2:04 business is it's not secure. It's not
2:06 something you can lock down well.
2:08 There's lots and lots of issues with
2:10 giving your agent access to your stuff
2:12 and the open internet. And so Nemo Claw
2:14 is designed to be a lot more locked
2:17 down. So what makes Nemo Claw tick? Nemo
2:19 Claw is actually an add-on to OpenClaw.
2:21 It's not that it replaces it entirely.
2:23 It's that it's designed to run in
2:25 OpenShell, which is Nvidia's proprietary
2:28 runtime environment. And that ensures
2:30 that Nvidia is able to wrap the OpenClaw
2:32 instance in a way that's secure. So
2:34 it has policy-based guardrails, which are
2:37 YAML declarations which the agent has to
2:38 follow. It has model constraints which
2:41 do two jobs. Job one is ensuring that
2:43 Nvidia can validate the safety, but
2:45 really job two is ensuring that Nvidia
2:47 gets to serve the model because one of
2:49 Jensen's larger moves here is to go from
2:51 just managing the chip layer to moving
2:53 into the agentic world, because in his
2:56 business he needs to go from just
2:58 selling chips to selling more
3:00 of the value chain. And he's convinced
3:02 agentic is a big piece of it, and hence
3:04 Nemo Claw. Nemo Claw also runs on
3:07 local-first compute. And yes, as you'd expect,
3:08 there's an Nvidia play there because
3:10 Nemo Claw is designed to run safely and
3:13 efficiently on Nvidia chips that run
3:15 locally. Nemo Claw is very much a
3:16 strategic play for Jensen because what
3:19 Jensen is trying to do is he's trying to
3:22 figure out how to pivot into an
3:25 ecosystem play where everybody who has
3:27 all of this energy around OpenClaw will
3:30 be indirectly contributing to value in
3:32 Nemo Claw, which he can then sell to
3:34 the enterprise. Like, that's the line he's
3:35 trying to walk here. And by the way, if
3:37 you're a contributor to OpenClaw and
3:38 that makes you annoyed, I get it. This
3:40 is just part of how corporate works. And
3:42 so the long and the short of it is that
3:44 Jensen is bolting on enterprise-grade
3:47 compliance and security solutions as a
3:49 patch, as a layer over the top of
3:52 OpenClaw to make it something with an
3:54 open framework that runs on Linux that
3:56 enterprises can pick up and use. Whether
3:59 or not you find that believable, I want
4:01 you to step back and look at how this
4:04 assumes competence on the part of
4:06 enterprises. Remember, we started this
4:08 video and we talked about the story
4:10 Anthropic and OpenAI have been telling
4:12 themselves where they recognized very
4:15 publicly over the last year or so that
4:18 their solutions were too complicated to
4:20 successfully roll out to engineering
4:23 teams at enterprises. Now, here comes
4:25 Jensen onto the stage and he says, "You
4:27 know what? You developers are smart. You
4:29 developers can figure this out. People
4:31 are already using OpenClaw by the
4:34 hundreds of thousands. You guys got
4:36 this. Let me just roll out this
4:37 open-source framework and we're good to
4:39 go." And you know what? I think one of
4:41 the things I notice about Jensen's
4:43 approach is that it's not necessarily the
4:45 corporate strategy here. It's actually
4:47 the fact that a lot of what he focuses
4:50 on are basics that we have known in data
4:52 backend engineering for a long time. And
4:55 this is something that I keep coming
4:57 back to and thinking about as I go
5:00 through change management processes with
5:02 companies. I recognize that in many many
5:05 ways what consultants are making
5:07 complicated today is actually the
5:10 age-old practice of good data
5:12 engineering that turns out to be super
5:14 useful in the age of AI. And I can't
5:17 help but wonder: if OpenAI and Anthropic
5:19 changed their tune a little bit, and
5:22 instead of saying "AI, AI, AI, isn't it
5:24 amazing" and complexifying it for people,
5:25 they actually came in and said, let's
5:28 talk about what we've always known as
5:30 developers, let's talk about how data
5:31 actually works and the principles of
5:34 development, and then let's talk
5:37 about how AI ladders onto that data
5:39 backend in ways that are really useful.
5:41 Maybe the process of change would be
5:43 easier. And I think in a way Jensen
5:45 understands that. Just for fun, let's go
5:47 all the way back to Rob Pike's five
5:48 rules of programming. If you don't know
5:51 who Rob Pike is, you should, because he's
5:53 one of the creators of Go and a Bell Labs Unix veteran. He's
5:56 an absolutely legendary developer. Rob
5:58 Pike's five rules are things that get
6:00 taught in computer science. They're things
6:02 that senior engineers teach to juniors.
6:05 They're sort of written in the stars if
6:07 you're in the discipline. Rule number
6:09 one, you can't tell where a program is
6:11 going to spend its time. Bottlenecks
6:14 occur in surprising places. So don't try
6:15 to second-guess and put in a speed
6:17 hack until you've proven that's where
6:20 the bottleneck is. I cannot tell you how
6:21 many times I've used that rule when
6:23 debugging systems. It actually works. It
6:25 is very hard to tell until you run a
6:27 system where the bottlenecks are going
6:29 to happen. That is true for agentic
6:31 systems, people. That rule didn't go out
6:33 of style. And by the way, yes, I'm going
6:35 through all five of these because I
6:36 don't think we talk about them enough.
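To make rule one concrete, here's a minimal sketch of profile-before-you-optimize using Python's standard-library profiler. The example function and names are mine, not from the video; the point is that you measure where the time goes before touching anything:

```python
import cProfile
import io
import pstats


def build_report(parts):
    # A naive implementation that might (or might not) be the bottleneck.
    out = ""
    for p in parts:
        out += p
    return out


def profile_call(fn, *args):
    """Run fn under the profiler and return (result, stats text)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(fn, *args)
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()


result, stats = profile_call(build_report, ["x"] * 10_000)
print("function calls" in stats)  # → True (the stats header reports total calls)
```

Only once the stats show a real hot spot do you earn the right to put in a speed hack.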
6:38 And I don't think we realize amidst all
6:40 the hype and all the change that some of
6:42 these ancient engineering practices
6:45 still hold true. Rule two, measure.
6:48 Don't tune for speed until you've
6:51 measured. And even then, don't do it
6:53 unless one part of the code overwhelms
6:55 the rest. In other words, if you aren't
6:57 measuring and baselining your
6:58 performance, it's really hard to
7:00 optimize. Do we see that with agentic
7:03 systems? We sure do. How many times do
7:05 people tell me they don't like an
7:07 individual LLM response and I have to
7:09 tell them maybe you should baseline it?
7:12 Maybe you should measure before you make
7:14 big assumptions and changes. Rule number
7:16 three is, kind of, just don't get fancy. Or,
7:19 more precisely, it's: fancy algorithms are
7:21 slow when your number is small, and your
7:23 number is usually small, in computer
7:26 science terms. Fancy algorithms have
7:28 big constants. Fancy algorithms
7:31 usually only work at scale. Until you
7:32 know that your number is frequently
7:36 going to be large, don't get fancy. This
7:38 is true for agentic engineering as well.
7:41 If you're trying to build agentic systems,
7:44 simple scales well. And in fact, I would
7:46 add there's probably a corollary here.
7:49 Simple scales better than complex. And
7:52 this is something that may have shifted
7:54 with agentic engineering because we did
7:55 find for a while if we were writing
7:58 algorithms that there were times at
7:59 large scales when you had to have a
8:02 fancier algorithm. Now I think we're
8:04 abstracting a lot of that edge case
8:06 complexity to LLMs and that requires us
8:08 to have very stable simple architectures
8:10 that scale. So that's one that I have
8:12 some interesting nuance around but
8:14 fundamentally it's true right don't get
8:16 over fancy especially when the system is
8:18 small. Rule number four, fancier
8:21 algorithms are buggier than simple
8:22 algorithms. This was the era, by the
8:24 way, when Rob had to write his
8:26 algorithms by hand. I know that most
8:28 people don't remember that anymore because
8:30 we all just prompt our LLMs. But this
8:32 was handwritten stuff, right? Use simple
8:34 algorithms for simple data structures.
8:36 That's the heart of rule number four.
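As an illustration of rules three and four together, here's a hedged little sketch of mine: the "dumb" linear scan and the "fancy" binary search agree on results, and until your number is large, the simple one is usually all you need:

```python
from bisect import bisect_left


def linear_contains(items, target):
    # Simple algorithm, simple data structure: scan a plain list.
    return target in items


def binary_contains(sorted_items, target):
    # Fancier: binary search. Only worth it once the list is large
    # and you can afford to keep it sorted.
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target


data = sorted([7, 3, 11, 2, 9])
for probe in (9, 4):
    assert linear_contains(data, probe) == binary_contains(data, probe)
print(linear_contains(data, 9), binary_contains(data, 4))  # → True False
```

The fancy version has more moving parts to get wrong, which is exactly the rule-four point about bugs.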
8:38 And this is a corollary to rule three
8:39 because if rule three talked about
8:42 simplicity and scale, rule four talks
8:44 about simplicity and bugs. It is very
8:48 very hard to debug complex agentic
8:50 systems. You're like, is it the prompt?
8:51 Is it all of this context that I'm
8:54 pulling in? What's the problem? As much
8:56 as you can simplify because the more
8:59 that you simplify, the better off you're
9:01 going to be, the better off you're going
9:03 to be debugging, the better off you're
9:05 going to be maintaining the system, etc.
9:08 Rule number five, data dominates. If
9:10 you've chosen the right data structures
9:12 and if you've organized things well, the
9:14 algorithms will almost always be
9:16 self-evident. In other words, write dumb
9:18 code and have smart objects in your data
9:20 system. Right? That's the short version.
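Here's what "data dominates" looks like in miniature; pick the right container and the algorithm collapses to a couple of lines. This is a sketch of mine with made-up log lines, not anyone's production code:

```python
from collections import Counter


def top_status_codes(log_lines, n=3):
    # With a Counter as the data structure, the "algorithm" is trivial:
    # count the first token of each non-empty line, then rank.
    codes = Counter(line.split()[0] for line in log_lines if line.strip())
    return codes.most_common(n)


logs = ["500 /api/a", "404 /img", "500 /api/b", "200 /", "500 /api/c"]
print(top_status_codes(logs))  # → [('500', 3), ('404', 1), ('200', 1)]
```

The dumb code falls out of the smart data structure, which is the rule.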
9:22 This cannot be more true in the age of
9:26 AI. Data engineering is the key to
9:29 having good smart agentic systems. And I
9:31 think we miss that. This is not new at
9:33 all. This is decades old. Every time we
9:35 go through hype cycles, and I've been
9:37 through a bunch of them, right? I've
9:39 been through the cloud hype cycle. I've
9:40 been through the mobile hype cycle. Now
9:43 I'm in the AI hype cycle. And we forget.
9:46 We think it's all new. And we forget
9:47 little things like the fact that we
9:49 should keep structure simple, that data
9:51 dominates, that we should build data
9:53 structures that enable us to do more
9:55 complicated things in ways that are
9:58 sustainable. This is what Jensen is
10:00 arguing for when he wants a simple set
10:03 of primitives to build an open-source
10:05 ecosystem for agents. In a way, I think
10:08 Nvidia's engineers understand this
10:10 better than a lot of the other engineers
10:12 in the AI ecosystem right now. And that
10:14 may be because they have to be so close
10:17 to the kernel and so close to the metal
10:19 all the time. You have to have good
10:20 principles when you're trying to
10:23 optimize for GPUs. And when you optimize
10:25 for GPUs over time, you build an
10:27 engineering culture that demands
10:29 excellence and adherence to good best
10:31 practices. And I see that written all
10:35 over Nemo Claw. And I think that if we
10:38 look at the story of how much trouble
10:40 organizations are having adapting to AI
10:42 and if we ask ourselves, is it the
10:44 message itself that's the problem or is
10:46 it the way it's presented, I would kind
10:48 of argue it's been the way it's
10:50 presented. Because I
10:52 have seen so many consultants peddling
10:54 complexity as if it was a good thing
10:57 with AI like presenting some kind of
10:59 complicated agentic mesh and saying this
11:02 is the way or presenting a really
11:04 complicated change management paradigm
11:06 or presenting lots and lots and lots of
11:09 very hard-to-read docs and saying go dig
11:10 into this. These are your prompting
11:13 tools. Simpler scales. We need simpler
11:16 approaches that enable people to
11:17 understand what we're saying. And
11:21 ironically, if we go back to the way we
11:23 always engineered systems, we're going
11:26 to find that a lot of those truisms like
11:30 Rob Pike's rules still work. They're not
11:32 out of style. And that brings me to one
11:34 of my favorite examples in the age of AI
11:36 because I want to make this more
11:37 updated. Yes, there's new things, new
11:40 changes, but we have to understand how
11:42 these old structures are informing new
11:45 ways we work. I think factory.ai has a
11:47 wonderful example here. Their agent
11:50 readiness framework evaluates code bases
11:52 against eight different technical
11:54 pillars: style and validation, build
11:56 systems, testing, documentation, the dev
11:58 environment, code quality,
12:00 observability, and security and governance.
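You could imagine automating a crude version of that kind of audit. The sketch below is mine, not factory.ai's actual framework; the file names are illustrative signals only:

```python
from pathlib import Path

# Hypothetical readiness signals, loosely inspired by the pillars above.
SIGNALS = {
    "lint config": ["pyproject.toml", ".eslintrc.json"],
    "documented build": ["Makefile", "README.md"],
    "dev container": [".devcontainer/devcontainer.json"],
    "agent instructions": ["AGENTS.md", "agents.md"],
}


def readiness_report(repo):
    """Report which readiness signals a repo checkout contains."""
    repo = Path(repo)
    return {
        name: any((repo / f).exists() for f in files)
        for name, files in SIGNALS.items()
    }


print(sorted(readiness_report(".")))  # → ['agent instructions', 'dev container', 'documented build', 'lint config']
```

The output is just a map from pillar to present/absent, which is enough to start fixing the environment rather than blaming the agent.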
12:03 And what they find is that consistently
12:05 speaking, the agent isn't the broken
12:08 thing. The environment is, which goes
12:10 back to that data insight. If you can
12:12 fix your data structures, like linter
12:14 configs, like documented builds, like
12:17 dev containers, like an agents.md
12:19 file, agent behavior then becomes
12:21 self-evident. It's effectively a
12:23 corollary to what Pike was talking about
12:25 years and years and years ago. And so
12:28 Factory's data shows that getting these
12:31 fixes right compounds in exactly the way
12:33 we would expect it to following good
12:35 software engineering principles. If you
12:37 have better environments, you make your
12:40 agents more productive, which frees time
12:42 to make your environments better, which
12:44 in turn feeds the loop and your agents
12:45 get more productive over time. And
12:47 there's a convergence here around
12:49 Agentic best practices that I want to
12:51 call out and name explicitly. So I'm
12:53 talking about Factory's best practices,
12:55 Nvidia's best practices, but also some
12:57 of the way Anthropic organizes things,
12:59 some of the way Microsoft organizes
13:02 things. There are essentially a whole
13:05 set of agentic rules of the road that we
13:08 are publishing that are Pike's rules
13:10 rediscovered by people who know their
13:13 fundamentals. And I want to name the
13:15 primitives that are emerging because I
13:16 think that we should understand these
13:18 rules of the road that underlie best
13:20 practices across a bunch of different
13:22 companies and recognize their old roots
13:24 cuz I think it will help us to change
13:26 more effectively. So with that, I want
13:29 to walk you through the five hard
13:31 problems that I've seen in production
13:33 agent deployment. I'm going to go
13:34 through each one in detail because the
13:36 distribution of difficulty here tells
13:38 you about where people are spending
13:39 money, where people are expecting
13:41 engineers to solve it internally and
13:43 really what best practice looks like.
13:46 The first one is context compression. So
13:48 long-running agent sessions fill up
13:50 context windows. They just do. Even
13:52 million-token context windows or
13:53 10-million-token context windows, they all
13:56 fill up. And every compression strategy
13:58 is lossy. It always loses something. So
14:00 factory tested three different
14:02 production approaches to see which was
14:03 best. They had their own method which
14:05 they call anchored iterative
14:08 summarization. Big words. It maintains a
14:10 structured and persistent summary with
14:13 explicit sections for session intent for
14:15 file modifications for decisions made
14:17 and for next steps. When the compression
14:20 triggers, the newly truncated span gets
14:22 summarized and then merged with the
14:24 existing summary. So the structure
14:26 essentially forces preservation. You
14:28 can't break the previous summary. Right?
14:30 Now, they compared this approach against
14:32 OpenAI's compact endpoint, which
14:34 produces a very opaque output. You can't see
14:36 inside the black box; it just
14:38 gives you compressed representations
14:40 that are optimized to be reconstructed
14:42 faithfully. That's a fancy way of saying
14:44 it's compressed very highly, and
14:46 you can't read the output to verify what
14:49 was preserved because OpenAI famously
14:50 doesn't expose any of that. And then
14:52 they tested it against Anthropic's
14:54 built-in compression through the Claude
14:55 software development kit, which
14:57 generates very detailed structured
14:59 summaries, but regenerates the full
15:01 summary every time rather than doing it
15:03 incrementally. That difference starts to
15:05 matter across repeated compression
15:07 cycles because you're regenerating the
15:09 whole summary. You're playing telephone
15:12 again. The results were clear. Factory's
15:14 approach of incremental summarization
15:17 scored the highest, but all three
15:19 struggle with tracking artifacts. So if
15:21 you're naming and remembering particular
15:23 files, all three struggle with that a
15:25 bit. And the mitigation here is pretty
15:27 simple. You have to think about your
15:30 project in terms of milestones and make
15:31 sure that the milestones can be
15:33 compressed in ways that allow the agent
15:35 to continue to work. And that, if you
15:38 cannot do that, you have multi-agent
15:41 frameworks that allow the agent to pick
15:44 off and address big pieces of work and
15:46 then die and refresh the context window
15:48 with a new agent without losing that
15:49 context, so that you get these
15:51 long-running tasks. That's how you get
15:53 these multi-week agent runs and don't
15:55 overstuff the context window. You see
15:56 how it all comes back to data? Like
16:00 these are real 2026 agentic problems,
16:02 but they come back to underlying
16:04 principles around how we handle data and
16:07 complexity that aren't new. Codebase
16:08 instrumentation, that's another one.
16:10 Gee, does that come back to Pike and
16:12 measuring? It sure does. This isn't even
16:14 an agent problem, right? This is a
16:16 software hygiene problem. We have always
16:18 had challenges when we've been doing
16:20 engineering projects, especially where
16:22 we've been in a rush. It's been hard to
16:25 be disciplined and measure. Making the
16:27 codebase agent ready is partly about
16:29 being able to measure stuff and we
16:31 should not forget it. I don't want to
16:33 belabor this one too long. If you are an
16:35 engineer and you're like, I need to be
16:37 able to make a contribution to AI, one
16:39 of the simplest things you can do is
16:41 just do the measuring. It's decades old.
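For instance, a baseline can start as small as a script like this. It's a minimal sketch of mine, with a stub standing in for your real model or agent call:

```python
import statistics
import time


def baseline_latency(call, golden_prompts, runs_per_prompt=3):
    """Time a callable over a golden prompt set and summarize latency."""
    samples = []
    for prompt in golden_prompts:
        for _ in range(runs_per_prompt):
            start = time.perf_counter()
            call(prompt)
            samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "n": len(samples),
        "median_s": statistics.median(samples),
        "p95_s": samples[int(0.95 * (len(samples) - 1))],
    }


# A stub in place of a real LLM or agent invocation.
report = baseline_latency(lambda p: p.upper(), ["hello", "world"])
print(report["n"])  # → 6
```

Run it once, file the numbers away, and every future "is it faster/better now?" conversation has something to stand on.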
16:43 it's not new, but figuring out how to
16:44 say this is our current baseline
16:46 performance maybe with our LLM chat
16:47 window, maybe with our agent, whatever
16:49 it is, and you can measure it
16:51 effectively because you understand this
16:53 is the baseline. This is what latency
16:56 looks like. This is what a good set of
16:57 responses looks like and I have a nice
17:00 golden data test set and I can true that
17:01 up against what's in production. You
17:03 have done a tremendous service to your
17:05 business and you don't get appreciated
17:07 enough probably, but it's really
17:09 important and it's not new. It's just
17:11 that we have to take it seriously
17:13 because we are giving these autonomous
17:16 agents a lot of power and we're not
17:17 really measuring them if we're not
17:19 disciplined. Problem number three in
17:22 agentic coding work is around linting.
17:23 Now, if you don't know what linting is,
17:24 I'm not talking about the stuff in your
17:27 couch cushions. Linting is when you are
17:29 doing static analysis of the code.
17:31 You're not making changes. You're just
17:33 checking it for small style issues, for
17:35 inconsistencies, for potential bugs at
17:37 runtime, and you're coming up with a
17:40 report. Linting rules are how we make
17:43 linting work. And one of the ways that
17:45 you can detect issues with agentic code
17:47 is by getting very very strict with your
17:49 linting so that you are insistent on
17:52 extremely clean code. This isn't new,
17:54 right? This is about enforcing simple
17:57 structures. The factory team has this
17:59 lengthy series of blog posts about all
18:02 of the obsessive linting rules they have
18:03 that basically put the code in a
18:06 straitjacket and say it must adhere
18:07 to best practices all the time. Now
18:09 individual developers, if they're the
18:12 ones in charge of linting, may say, ah, I
18:13 don't know, I'm tired. I don't really
18:15 want to write all my linting rules. But
18:16 in a good healthy engineering
18:18 organization you have some common core
18:20 around linting where you say okay this
18:22 is what good looks like for us. We're
18:23 going to insist on it. And that's
18:24 especially important when you have
18:26 agents involved because the agents are
18:28 by definition just trying to get the job
18:30 done. They are lazy developers that are
18:33 happy just to kind of throw it off their
18:34 plates and not listen. And so if you
18:36 don't have a strict linter that is going
18:39 to go through and insist on simplicity,
18:40 you are going to be in trouble. Again,
18:43 not a new thing. It's just a common
18:45 thing that we are now applying in the
18:47 world of agents. An ancient engineering
18:49 piece of wisdom, if you will. Problem
18:51 number four, how you handle multi-agent
18:53 coordination. I've talked about this in
18:55 other videos. We're converging around a
18:58 rule where we say planners and executors
19:00 are the way to do long-running multi-agent
19:02 coordination and that makes sense
19:04 because we're not over complicating it.
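A planner/executor loop really can be that simple. Here's a minimal sketch where stub functions stand in for the LLM calls; all the names are mine, not a particular framework's:

```python
def plan(goal):
    # Planner: break the goal into ordered subtasks (stubbed; in
    # production this would be a model call).
    return [f"{goal}: step {i}" for i in range(1, 4)]


def execute(task):
    # Executor: handle one subtask with a fresh, small context (stubbed).
    return f"done({task})"


def run(goal):
    # The whole coordination pattern: plan once, execute in order.
    return [execute(task) for task in plan(goal)]


print(run("ship feature")[0])  # → done(ship feature: step 1)
```

Start here, measure it, and only add routing, retries, or parallelism when the measurements demand it.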
19:06 And one of the things that Pike has
19:09 called us to remember is hey you don't
19:11 need to optimize something prematurely.
19:12 You don't need to optimize it if you
19:14 can't measure it. And so when we've
19:16 actually tried to overoptimize and
19:18 overcomplicate, and there are engineering
19:20 teams at many orgs that try and do this,
19:22 I just encourage folks to say, you know
19:24 what, let's not overcomplicate it. Build
19:26 the simplest possible version of this
19:28 agentic development pipeline, and then we
19:31 can always add more value by
19:33 complexifying it if we really have to.
19:34 But we don't need to optimize
19:36 prematurely if we can't even measure
19:38 whether it does the job yet. Again, not
19:40 new. And if you're wondering why I am
19:42 taking time to talk about what isn't new,
19:44 it's really simple: I think consultants
19:46 often like to sell this as all new
19:48 because it drums up business. I would
19:50 prefer to tell the truth and say these
19:53 are ancient data engineering practices.
19:55 These are old software engineering best
19:57 practices that we can apply in ways that
19:59 are new to build these systems, but the
20:01 practices and principles aren't that
20:03 new. And I think that helps us with our
20:05 change management. The last challenge is
20:07 the hardest one. It's around
20:09 specifications and fatigue. What I find
20:12 in practice is that teams really, really
20:15 struggle with a skill of defining a spec
20:17 clearly upfront. It's a lot of work.
20:18 There are some people who claim it can't
20:20 be done or if it's so much work, we
20:22 should just code the thing. I've seen
20:24 real speedups, but it does require you
20:26 to be very precise and crystal clear in
20:28 your thinking. And you also have to be
20:30 very good at writing emails at the end.
20:32 And you have to be disciplined about not
20:34 taking shortcuts. And so if you are
20:36 going to give an agent a context window,
20:38 you have to be disciplined about making
20:40 sure your context graph is really clean
20:42 so the agent can go search and get the
20:45 context it needs cleanly by navigating a
20:47 hierarchy rather than just stuffing it
20:48 all in the context window and hoping and
20:50 praying because you're lazy. In other
20:53 words, we humans have to be less lazy if
20:55 we want the agents to do good work for
20:57 us. And I know that is counterintuitive
20:59 because you are often sold a world where
21:01 humans should just sit back and we just
21:03 go and get coffee and then we're done.
21:05 That's not how it actually works. And
21:06 that's never how good engineering
21:08 worked. It shouldn't be new. It
21:10 shouldn't be a surprise. And I think
21:12 sometimes we're sold agents as, like,
21:13 labor savers. And that's just
21:16 disingenuous. It's just not true. So why
21:18 does all this hype exist? I went through
21:20 five problems. I showed you how they're
21:21 critical now in the world of agents. I
21:23 showed you how they rest on old
21:25 engineering best practices. I think if
21:27 we messaged them that way, it would be
21:28 useful to us. I think it would be easier
21:30 to understand. I think that Anthropic
21:32 and OpenAI would have fewer issues
21:34 communicating to developers. I think
21:35 it's something that Nemo Claw started to
21:39 get right. Part of why as an industry we
21:41 have not done this well is that the
21:43 chaos is worth a lot of money.
21:45 Consultants coming in and peddling
21:48 their wares and saying this study shows
21:49 that it's really hard helps them earn
21:52 business. And it is hard, right? But
21:54 it's hard in a way consultants typically
21:56 don't help you with. It's hard in a
21:58 roll-up-your-sleeves, get-into-the-code,
22:01 co-build-with-me, dig-in,
22:03 help-me-understand-the-principles way. And so many
22:04 times consultants don't want to get
22:06 their shoes dirty, right? They want
22:07 to come in and just do a PowerPoint
22:09 deck. Ah, they want to deliver a great
22:11 deck and then move on. That's not how it
22:14 works, right? If you're going to do real
22:16 change management, if you're going to
22:18 help engineers and product managers and
22:20 designers figure out how their roles are
22:22 changing because their whole jobs are
22:24 changing, you can't do it with a
22:26 PowerPoint deck. It's not going to work
22:28 that way. You have to go back and anchor
22:29 in things that we all understand and
22:31 have built on. And, as I've shown,
22:33 you can do that. And then you have to
22:35 walk forward and say, here's how this
22:37 applies today. That's why I've walked
22:39 through these problems. That is much
22:41 more specific than I have seen in any
22:43 standard run-of-the-mill consultant
22:46 deck, which so often, like, lives up at a high level
22:48 and talks fluffily about how great
22:50 AI is. It doesn't help you get the work
22:52 done. And this is what I think we're
22:54 missing when we look at launches like
22:57 Nemo Claw. Because Nemo Claw as a launch
22:59 is interesting. Nemo Claw as a play for
23:01 Nvidia, definitely interesting. They're
23:03 trying to move beyond chips. But
23:06 Nemo Claw is a way of saying to the
23:08 industry, you've got this. You can figure
23:10 this out. We've got good engineering
23:13 best practices that we can rely on and
23:16 use to do real agent work. Now, that's
23:18 interesting. And that's something that I
23:21 wish we did more of. And I think if we
23:25 worked more on that piece as a
23:28 discipline, we would have less need for
23:30 these tie-ups that we see between
23:32 consulting firms and big companies like
23:35 OpenAI and Anthropic. Because I think at
23:36 the end of the day, in a sense, when
23:38 you're outsourcing the change
23:40 management, you are losing control of
23:42 the narrative. And one thing Anthropic
23:45 and OpenAI probably don't want to do is
23:47 lose control of the AI change narrative
23:50 in their target companies. It is already
23:51 fraught enough. There are already enough
23:54 people producing half-true rumors,
23:56 sometimes completely false rumors about
23:59 what AI can and cannot do, what AI will
24:00 and will not do. And by the way, it is
24:02 both. I see lots of false rumors about
24:04 what AI can do. I see lots of false
24:06 rumors about what it can't. I think it's
24:09 helpful if we go back and we say this is
24:11 just computing. We've known about
24:14 computing for a long time. We understand
24:16 how computing works. The fundamentals
24:19 aren't changing, but we have a new level
24:21 of abstraction to put over the top and
24:23 we should talk about it concretely and
24:26 explain in a detailed way how our old
24:29 principles of engineering have actually
24:31 evolved. And that's what I tried to do
24:32 in this video. That's what I laid out
24:35 for you so you could understand we're
24:37 not doing new stuff here when we design
24:39 Agentic Systems. We're relying on good
24:41 engineering practices we've already had.
24:43 And in a way, a lot of what I'm doing on
24:46 this channel is actually teaching good
24:48 data engineering practices to a lot of
24:50 people who didn't come up and do data
24:52 engineering in school. Because it turns
24:54 out if you want to build these systems
24:57 yourself, you have to know just enough
25:00 about data engineering to build systems
25:01 that work. And it turns out it's not
25:03 scary. It turns out you can learn these
25:05 principles. You don't have to go and get
25:07 a CS degree. And that's really
25:10 empowering and that's really cool and
25:11 that's really fun for me because I'll be
25:13 honest, I didn't get a CS degree either.
25:15 I taught myself. I was building
25:17 computers. I had fun. And I think what's
25:19 interesting is LLMs are essentially a
25:22 teachable moment. LLMs are giving so
25:25 many more people access to compute.
25:28 We're all coming to this with fresh eyes
25:29 because when we look at change
25:31 management in orgs, I've talked about
25:32 engineers, but to be honest with you,
25:34 it's not just engineers, right? It's
25:36 product managers, it's sales, it's CS.
25:38 Shopify was shocked when they first got
25:40 cursor because there were so many CS
25:41 people who wanted cursor, right? They
25:43 were coding under the desk. Coding under
25:46 the desk is a massive 2026 phenomenon
25:48 that is by definition not engineering
25:50 related. And if you want the coding
25:53 under the desk to work, you got to make
25:55 sure that we have a little bit of a
25:57 sense of how best practices work. And if
26:00 we understand that, we're going to be
26:02 able to take tools like Nemo Claw and
26:04 actually put them to work effectively.
26:06 So hats off to Nvidia for believing in
26:08 us a little bit, right? For saying we
26:10 could roll our own. We can build stuff
26:13 that works. We can understand how good
26:15 data engineering best practices, old
26:18 computer science best practices that age
26:20 well are still applicable today, evolve
26:22 them appropriately and tackle good
26:24 agentic engineering challenges. I want
26:26 more of that and I hope you do too. Chips.