vibe coding [ __ ] is that you don't have
to feel bad throwing it away. If you
were to have the average developer build
really early, realize the product was
bad, and scrap it, they'd feel like
[ __ ] But if you're letting an AI do it, or you're letting Theo do it, I love when my code is thrown away. It feels great to know it's being replaced by something that makes more sense, or is being deprecated because it didn't make sense. I love that. Vibe coding out this part doesn't give us any of those benefits and comes with even more negatives. It just kind of sucks. Like, I
think we can all agree with this. The
hard part of one of these like product
specs isn't even writing it. Writing one
of those things is not that hard.
[ __ ] forcing yourself to sit and read
it is the worst thing in the world. And
I don't know if this is my ADHD
speaking. Correct me if I'm wrong, chat,
but just sitting there and reading
through these specs is the worst thing
ever. It's so much worse than code
review, or more importantly, so much
worse than just coding yourself. And I
would way rather be building prototypes
and demos and testing these things
myself than I would be sitting there
reading a spec trying to keep myself
from losing interest in order to find
the actual problems in it. Personally,
the things I do like fit into that
general mindset. I like code. I like
conversations. I like playing with new
ideas and solutions. And personally, I
like deprecating things that don't work.
Also simplifying where I can. These are
all things that I do like doing. And
here are things that I don't like: code reviewing, reading long specs, convincing a PM to end a project that they've been planning for 8 to 12 months, sitting in meetings with 10 plus people
where nobody is paying attention because
it's boring as hell. I hate these
things. Okay, I actually do kind of like
code review. I need to. I say it's important to like code review. No one
really loves it, but it is an important
process and I owe my team so many
reviews now that it makes me literally
sick. But yeah, the point being here, if
the tools you're introducing make it
easier for people to spend their time
doing this part and harder to spend
their time doing this part, then your
tools [ __ ] suck. So look at something
like Kiro, which is AWS's AI coding IDE.
It makes it so I don't have to code as
much. Makes it so I don't talk with
other people as much because I'm just in
my editor waiting for things to finish.
It doesn't make it easier to play with
new ideas because everything has to be
baked into those giant markdown files
that it generates. It doesn't
necessarily make deprecating things
easier or harder. It's kind of a no-op
there, but you have no incentive to
remove things or simplify. You just let
it do the work. So, it takes away the
incentives for these bottom parts and it
doesn't let you do the top parts
anymore. It's replacing these. What
about the things I don't like? It means
I have to review way more code. It means
I have to spend half my time in my
editor reading specs. I go to my editor
to escape reading specs. Why are you
making it my job? It doesn't help with
convincing PMs to end projects because
now they're convinced they can go build
it themselves using this. Or when I tell
them it's a bad idea, they're like, why
don't you go use Kiro and whip it up and
then we'll see how you feel about it.
And now that I'm not spending my time
coding, we have so much more time to sit
in these useless meetings with a bunch
of people. Yeah, the problem I have is that a lot of these tools take away the parts that we actually enjoy, the things that we can actually have fun doing and potentially improve the product while we do, and replace them
with more of the things that we don't
actually enjoy. And not in a way that
makes us more productive, just in a way
that shifts the effort around. We've
replaced writing code in our editor by
hand with reviewing the code. We've
replaced making prototypes with letting
AI write this giant spec for us. That's
not a good trade. And if we want to
benefit from these tools, we have to go
all the way back up here and rethink our
process. The old process doesn't work well
with these new tools. It effectively
just kills the fun parts and makes the
shitty parts way easier to generate way
too much text for. But I saw this chart
forever ago. I don't feel like finding
it right now, so I'm not going to. But
the chart was like a hilarious curve
that had a couple points on it of like
points in time where the length of the
average law was relatively short if you
just counted by words and then over time
it started to spike pretty meaningfully
and then suddenly spiked again. And then
it spiked way harder recently. And these
spike points were things that make
a lot of sense. The first one was the
introduction of the typewriter. Suddenly
laws started getting way longer really
fast. Then we had the word processor
where you could copy and paste. There
was a time before copy and paste for laws,
and I don't even want to think about
doing that job. But then what we have
now with AI generation has made laws skyrocket in length because once you
make something way easier to do, we end
up with way more of it. But if the
problem wasn't the amount of the thing,
it was the quality of the thing, this is
bad. The problem with laws in the US or
wherever else is not that there aren't
enough words in the law. I hope we can
all agree that more words does not make
a better law. I know we can similarly
agree that more code does not equal
better apps. If anything, I would argue
this often means the opposite. There's
an inverse correlation here. And these
new technologies made the wrong part
easier. They made writing lots of words
easier, but they didn't make writing
good legislation easier. And the result
is that it is harder to ship good
legislation because now you have to
parse through way more words of [ __ ]
before you can get to the point. And if
that's where we end up with AI code, we
are [ __ ] If our job is now just code reviewing
gigantic piles of slop that don't follow
patterns or practices because some other
random person told us to go generate the
[ __ ] That's going to suck. So we need
to rethink our process if we want to
benefit from these things. We end up in
a situation where code is more
straightforward to produce but more
complex to verify which doesn't
necessarily make teams move faster
overall. Absolutely. It would probably
make us move slower. The review process,
the spec process, all of that would be
slower if we had more code to deal with
throughout it. Not a new challenge.
Developers have long joked about
copy-paste engineering, but the velocity and scale that LLMs enable have amplified those copy-paste habits. Absolutely
agree. You could argue that all of this LLM and background agent stuff is just Google searching Stack Overflow and copy-pasting code on steroids. Understanding
code is still the hard part. The biggest
cost is understanding it, not writing
it. Absolutely agree. LLMs reduce the
time it takes to produce code, but they
haven't changed the amount of effort
required to reason about behavior,
identify subtle bugs, or ensure
long-term maintainability. That work can
be even more challenging when reviewers
struggle to distinguish between
generated and handwritten code or
understand why a particular solution was
chosen. Absolutely agree. And to again
emphasize the points that I was making
before, it almost feels like there are two types of code: throwaway code and production code. On the left, you have things where your goal is to figure out what the product is, or whether the thing can work, and you don't care if it all goes away. On the right is code
that is expected to be maintained for a
long time and keep doing the thing it's
supposed to do for even longer.
Throwaway code is stuff like my shitty scripts for benchmarks, or sandboxes, or prototypes for some new feature. Whereas the production code side is core infra for the main product, or a Rust library powering millions of apps, or a new feature being built by 12 devs.
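Just to make that distinction concrete, here's a rough sketch of the kind of throwaway benchmark script I mean. Everything in it is made up for illustration (the endpoints, the run counts, all of it), and that's the point: it's hard-coded, it has no error handling, and nobody should ever have to review it.

```ts
// throwaway-bench.ts: quick-and-dirty latency check.
// The URLs and run counts are made-up placeholders; this is the kind
// of script that answers one question and then gets deleted.
const ENDPOINTS = [
  "https://example.com/api/old-search",
  "https://example.com/api/new-search",
];

async function timeOnce(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url); // not even checking the status code; don't care
  return performance.now() - start;
}

async function main() {
  for (const url of ENDPOINTS) {
    const runs: number[] = [];
    for (let i = 0; i < 20; i++) {
      runs.push(await timeOnce(url));
    }
    runs.sort((a, b) => a - b);
    const p50 = runs[Math.floor(runs.length / 2)];
    console.log(`${url} -> p50 ~${p50.toFixed(1)}ms`);
  }
}

main();
```

The production version of the same idea would want warmup runs, proper stats, error handling, CI integration, and a reviewer. This one just has to answer a question and then disappear.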
The problem was that up until recently
the cost difference between these two
things was not that big. It seems
obvious now that like things on the left
are very easy to build and things on the
right aren't. But we're still fresh off
an era where people were sincerely
saying that we should write everything
in Rust, even small side projects,
because people were so unable to
perceive the gap between these two types
of code. The thing that made me special
was that I was good at distinguishing
between the two, knowing when it was
best to just write throwaway code,
writing this part really fast, and then
figuring out what subset of it should be
productionized to go to the other side.
I would rapidly rotate between these two
sides with projects that I was working
on. I would make a demo on the throwaway
code side. We would iterate with users,
figure out what features they actually
want. I would then go build a more
production version and then I would run
into some weird tech problem with it and
I would go back to prototype different
tech implementations. I wasn't using
code exclusively as a thing that's put
up on GitHub for someone else to approve
and then ship to users. I was using code
for a lot of other things, too. I would
use it for figuring out what to build. I
would use it for testing out different
tech implementations. I would use it for
processing my emails to figure out which
reports we were getting more often. I
would use it for all sorts of things
where the code itself didn't matter and
I didn't want anyone to review it
because that wasn't the point. If there
was something worth reviewing in the
code on that side, I would copy-paste it
out, put it in a real PR or in a gist
somewhere and send it to my team to look
at. But this distinction wasn't one that
many engineers could or would make
because most engineers write code the
same way regardless of what they're
doing. I can't tell you how many side
projects I've seen by people who work at
FAANG companies. And these FAANG interns
working on side projects are spinning up
the equivalent of like Facebook or
Netflix's stack for their personal to-do
list. And people's inability to
distinguish between these and find
patterns that work for different parts
and learn things from each side is a
very real problem. And as a result,
these AI tools are going to confuse a
lot of people. Because if you're looking at something like Lovable, but you see throwaway code and production code as the same thing and you write them the same way, and Lovable, to be frank, falls under the throwaway code side, as most of these agent code tools do. If that's what the tool is built to do, but you don't meaningfully distinguish these two in your head, you see this as bad and this as good, not as different values for different purposes, and you end up seeing the tool as shitty and useless. But
if you think about this as a way to make
your production code simpler and better
because you don't have to prototype the
same way that you write the real thing,
you start benefiting a lot from these
same tools. That's the thing I want to
really emphasize here is if you took the
code that you write in something like
Lovable or even the code that I write when I'm in this prototype stage and
you throw it at your team to review,
they're going to hate you because that
code wasn't meant to be reviewed. That
code wasn't meant to be read. That code
was meant to figure out something. This
code is meant to be maintained. This
code is meant to solve a problem,
usually a knowledge gap. Teams still
rely on trust and shared context.
Absolutely. Software engineering has
always been collaborative. It depends on
shared understanding, alignment, and
mentoring. However, when code is
generated faster than it can be discussed
or reviewed, teams risk falling into a
mode where quality is assumed rather
than ensured. That creates stress on
reviewers and mentors, potentially
slowing things down in more subtle ways.
Absolutely agree here as well. If by the
time your mentee is caught up on the
codebase, half the shit's changed
because something filed a PR destroying
all the stuff that was there before,
good luck. If you don't have the right
mindset to mentor this person because
you think you can just go AI generate
the work that they're going to take
longer to do anyways, you're [ __ ] If
you let them go AI generate something
and it looks just like the code you
normally AI generate, so you hit merge,
but it doesn't actually work, you're
[ __ ] These problems happen if we
don't distinguish between throwaway and
production code. And the final point in
this article, LLMs are powerful, but they
don't fix the fundamentals. There's real
value in faster prototyping,
scaffolding, and automation. Oh, look at
that. We agree fully right at the end. I
had a feeling. But yeah, this is the
thing that I'm excited about is
previously most devs weren't capable of
doing the throwaway version much faster.
Like, I honestly believe that if you asked the people on some of my teams at Twitch to just make this thing to see if the feature worked, where the production-ready version was going to take nine months, and you asked them how fast they could do a demo version for us to play with, they would say probably 2 to
3 months. You can do it in 2 to 3 days.
There are very very few products that
you can't build a usable version of in a
few days unless they're like deep tech
bets, but that's not the case for your
[ __ ] CRUD app. Stop pretending. If
you honestly think the best you can trim
your production process is from many
months to a few months, then get out of
the way of the people who can do it
faster so they can figure out what you
should build. If it takes you that long
to build anything, then don't build
things that are uncertain. If it takes
you 6 months to build a prototype,
you're not building prototypes. You
shouldn't be calling them that because
you're going to feel so bad throwing
that 6 months of work away. If I spend
two weeks making a demo version or three
days making a demo version and we throw
the whole thing away, I don't give a
[ __ ] That's the point. We still learned
the lesson. As the author says, LLMs
don't remove the need for clear
thinking, careful review, and thoughtful
design. That can be your problem after
we figure out what to build. It's not
going to replace those parts. But if you
can't acknowledge the gap here, then you
just sound dumb because this is the
problem I see a lot: somebody
whose life is on this production code
side. So, some principal engineer. Let's
say some exec or some PM goes to this
principal engineer. I'll just make a
fake Slack thread down here. CEO: Hey @principal, think we could use Lovable to test out new ideas? Could be useful so we stop building bad features, lol. Principal: Seriously, @CEO, you think some vibe coding BS is better than your best-in-class edge team? Do I have to
remind you that I'm Azure certified?
That we just took a great class on agile
last week? What more do you want? This
is the problem. Because this CEO, or even if they're not a CEO, let's say they're just some random PM, says this, and that product manager really wants to figure out which features are good or bad ideas earlier because they're tired of
wasting so much time doing random
[ __ ] But as sarcastic as I made
this look, this is a very real thing. It
would probably look more like this. I
don't think we can get meaningful info
from something that took less than 3
months to build. Lovable's for non-devs
making personal apps, not for real work.
I've absolutely seen messages like this from PMs, and if you don't think this really happens, watch this: change the word Lovable to something a bit different,
Electron or React or JavaScript or
Supabase or Convex or any of the
technologies that we like to talk about
here, and you'll realize this is a very real
message people like this send all the
time. And the fact that Lovable is one
of those things shows what I mean. The
point here is that the principal
engineer is so unable to think outside
of their perspective. Because to them, the meaningful info isn't "is the product useful." What they mean by this point here is much different. What they mean is meaningful info about "does the tech spec work." They assume the product
already works. They don't hear the PM
for where they're at. All they can think
about is whether or not they can figure
out how to build the thing through
building the thing. And Lovable is not
going to give them any unique insights
on how to engineer the thing. So,
they're going to ignore that path and
that possibility entirely, but the PM is
going to see other people building
useful stuff with it. They're going to
realize that the problems that they want
to solve can absolutely be solved with
these tools. They're going to build a
deep frustration because they're talking
past each other. And that's how you end
up with these adversarial relationships.
I legitimately believe that these types
of conversations are going to start
happening a lot and that the PMs and the
principals are going to start fighting a ton, because these guys just cannot see the value of a prototype, and these guys don't understand that all the principal engineer is thinking about here is the technical details, not whether the feature is worth building in the
first place. One of the coolest moments
I had at Twitch was when my designer
started hitting me up with random
questions about HTML and JavaScript
stuff. And I was really confused because
none of that had to do with her role and
it was not questions about things that
we used at Twitch. So I just asked her
what she was working on. She sent me the
screenshot and it turned out she was
trying to build a prototype version of
Mod View and it was really compelling in
terms of how it looked, but nothing
worked in it at all. It wasn't even
using real data. It was meant to be a demo, like just showing what this
could be. So she could put that in front
of moderators and ask how this looked to
them, how useful this would be to them.
Someone like that, like Iris, one of the
best designers I've ever worked with,
would benefit so much from these tools.
Not because she'll be filing a whole
bunch of PRs, but because she could
figure out in a much more iterative
process what the right thing is. This is
the big thing I want to drive home.
Improving time to next realization is a
very good thing. We should be optimizing
for insights. How quick can you go from
an assumption to a new learning? If I
have an assumption that users want a
thing, what's the shortest path to
figure out if they do or not? If I think
this button should work this way, what's
the shortest path to figure out if it
should or not? How quickly can we get to
the next learning, the next insight, the
next understanding, the next moment
where we understand something we didn't
before? And if we can use these tools to
get more insights, to have more aha
moments as we figure out [ __ ], they're
useful. But if we use them to speedrun
through the process without getting any
insight throughout, this sucks. And I
don't think a lot of these tools are
being pitched this way. I think these
tools are being pitched as "developers are slow, replace them and make it faster," and not "we can figure out what our users want faster, we can iterate more effectively, we can make tighter iteration loops and get more feedback faster." That's the magic of this. The
cost of writing code has indeed dropped,
but the cost of making sense of it
together as a team has not. That's still
the bottleneck. Let's not pretend it
isn't. I absolutely agree. Team
understanding is a massive bottleneck
for most things. The additional point I
would drive home here is this. If you
have a giant spec, we'll just say this rectangle is the spec. And this was
built primarily by the product lead, the
design lead, and some tech lead coming
up with all the pieces that are going to
be in here. Generously speaking, maybe
this much of it, maybe, is actually
good and correct. And then the rest,
whatever is left in this box here, that
all is garbage that needs to be thrown
away. I would argue that the ratio here
is often quite a bit worse. I can't tell
you how many times I would read the
spec, the whole team would be bought
into the spec and then the thing we
shipped looked nothing like the spec
because the spec was bad and had a lot
of bad assumptions in it. What if
instead of the spec, we have the
prototype and instead of it being this
giant thing, the prototype
is this small thing and maybe the ratio
is way worse. Maybe the prototype is
more bad than good. But we get some good
insights here. Maybe the whole prototype
is bad and now we have the insight this
idea was bad in the first place. It
takes a lot less time to do this part to
build this thing. And if we build this
thing, which is way simpler, the
communication gets easier, too, because
it's way easier for two to three people
to communicate about a small prototype
than it is for 20 plus people to talk
about this giant spec. And the
likelihood that you learn things and
catch things is significantly higher
when you start this way. Maybe you have
a second prototype after that, one that takes
these good parts and extends them. And
now you have more good parts and you
make something slightly bigger, but it
also has more bad parts. And you make
another prototype where the good parts
are even smaller because you had some
bad assumption and you got screwed over
and now you have a ton of bad parts. But
the bad parts are good because you just
learned a bunch of lessons. Now all of
these bad parts can be thrown away
forever and never done again. And now
the next version ends up being twice as
big, significantly better, and it still
has some rough edges, but you know
exactly what those are. And you learned
a bunch of lessons not long ago that are
applicable here. You can trim those off
and it turns out the product you're
building is way smaller than that other
spec might have suggested. And as people
in chat are saying, this also makes the
job way more fun. It's not fun to be
slaving away at [ __ ] Jira tickets. I
don't believe you if you say it is. It's
so much more fun to be trying new
things, playing with new solutions,
iterating with a team of people, trying
to solve real problems. It's so much
more fun. And these tools make it so
that doing each of these steps is way
cheaper than it used to be. If it took a
team three months to build this and nine
months to build this, I would understand
the hesitation. But if it takes 20 devs
9 months to build this and it takes one
dev 3 days to build this, you are really
stupid if you're still building this way
initially when you don't actually know
if the product is needed or not. Really
dumb. And that's why I am excited about
these AI tools. The way I kind of think
about it is suddenly way more teams have
their Theo that can get this prototype
out way faster, iterate heavily on it to
find the real thing you want to solve
and then kick it off to the traditional
product lead PM principal engineer
people to spec out and do the right way.
But if we have way fewer pieces as we get
there, we can communicate way better
throughout it and everyone has a much
better understanding. Three people
trying to understand this will always be
a better experience than 20 to 30 people
trying to understand this. I think that's
pretty obvious. Huge shout out to Pedro
for writing this article. It was
absolutely awesome and inspired what's
been one of my favorite rants in a
minute. So, thank you for putting this
out. Appreciate you a ton as well as the
permission to react to this. This was
great. Give his blog a look if you
haven't yet and give him a follow on
Twitter. Thank you all for watching this
rant. I had a lot of fun with this.
Clearly, I had some things to get off my
chest. I'm curious how y'all feel
though. Was this a good one? Was this
misguided? Do you think the split between prototypes and more traditional building for production is weird? Let me know.