YouTube Transcript: Anthropic has weird vibes
Developers love Anthropic, and there are good reasons why. They build models that make it way easier for us to write code, and Claude Code is also really, really good. All that said, I don't consider them good stewards of the open source and software development world. It almost seems like they fell into it, and they're getting away with things that I don't think they necessarily should: from cutting off access to everyone from Windsurf to OpenAI directly, to sending DMCA requests to people who analyze Claude Code too hard on GitHub, to just generally weird vibes about how they present themselves and talk about things. I have feelings about Anthropic.
I wish I had more good things to say. I have found myself in a position where I've had to defend them a lot in the past, and I've been saying for even longer that I don't really like Anthropic. Today is the day I finally sit down and explain all of the things I don't like about Anthropic. Please know that I'm speaking from experience here: as someone who paid them $31,000 last month, I have been through it with Anthropic. If we're going to keep offering T3 Chat at eight bucks a month, we need to make some money, so here's a quick word from today's sponsor, and then we'll dive right in.
Coding's never been more fun. I can do the parts I like, I can have AI do the parts I don't, and I can think about my systems a whole lot more because we can write so much more code. Code review, however, seems to be the cost: now that we're shipping more code, we also have to review more code. And I am so tired of the tedious process of making sure every syntactic thing is handled and that we're not doing things that will introduce weird bugs; I just want to know what the PR actually is. Today's sponsor, CodeRabbit, is here to bring AI to our code reviews, and it has fundamentally changed how we do code review. I was so skeptical going in; I really didn't think something like CodeRabbit would be for me, and I'm so thankful I tried it. It summarizes your PRs. It leaves diagrams for what's actually going on: what happens, where, when, and why, plus a change summary of each file and what was affected in it. But most importantly, it leaves actual comments and feedback on your PR, and those have caught dozens of bugs that we would have shipped to production. I can tell you with 100% confidence that there are at least 40 bugs I personally wrote that didn't make it into T3 Chat because CodeRabbit caught them. Would my other teammates have caught them? Possibly. A decent chance for some, less of a chance for others. There have definitely been ones the team missed that CodeRabbit hit. But more importantly, the team no longer has to. Their role in code review isn't going through line by line making sure I got my syntax correct anymore; CodeRabbit does all that stuff so my team can focus on the details.
The coolest thing is it learns how you and your team work. If it gets something wrong or does something different, you can tell it, "@CodeRabbit, no, we don't do it that way." You can add rules and other things, too. You can enforce specific behaviors you want in your codebase by putting a rule in for CodeRabbit, and now it will call that out in review. Like here: we don't use process.env, we use the T3 Env package so that we have type-safe environment variable access. And here, someone made a change that used process.env, and it yells at them for it and tells them, "No, go use this instead." This is the type of review comment that I feel bad leaving because I don't want to block the PR; CodeRabbit will do it for you. And since it does these reviews so quickly, there's a good chance you'll get these comments before anyone else on your team even starts doing the work. You are no longer inconveniencing your team with your [ __ ] code; you're inconveniencing CodeRabbit. It's an AI. Who cares? If you want to stop bothering your team with [ __ ] code reviews and want to focus on the details that matter, check them out today at soyv.link/codrabbit.
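As an aside, for anyone unfamiliar with the rule in that example: here's a minimal sketch of the kind of type-safe environment access the T3 Env package provides, using the @t3-oss/env-core flavor and Zod. The variable names are hypothetical.

```ts
// env.ts: a minimal sketch of type-safe env access with @t3-oss/env-core.
// The variable names here are hypothetical examples.
import { createEnv } from "@t3-oss/env-core";
import { z } from "zod";

export const env = createEnv({
  server: {
    // Validated once at startup; a missing or malformed value throws
    // immediately instead of failing somewhere deep in the app.
    DATABASE_URL: z.string().url(),
    ANTHROPIC_API_KEY: z.string().min(1),
  },
  // createEnv reads process.env once, behind a typed facade, so the
  // rest of the codebase never touches process.env directly.
  runtimeEnv: process.env,
});

// Elsewhere: env.DATABASE_URL is a typed string, and a review rule like
// "never use process.env directly" becomes mechanically enforceable.
```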
These are the core pieces I want to cover: the weird ways they cut off access, their bad open source behavior, their holier-than-thou attitude, and the fact that their pricing is, to put it frankly, absurd. They're like the one company that has never really lowered prices and seems to only be slowly raising them. But let's start with the cutting off access to competitors, because this is a recent drama.
I'm going to start with this fun quote from an OpenAI employee. It's a meme quote, but you get the idea: "As part of our effort to democratize the fruits of AI and ensure the singularity is helpful, harmless, and honest, we've decided to deny others working on the singularity the ability to chat with our helpful, harmless, and honest model." This is going to be fun. Historically, Anthropic has tried to position themselves as this really positive, good, kind, working-for-the-best-of-humanity company. Every time someone does that, I get a little bit skeptical, and in their case I find the contradictions pretty absurd. There's a recent leaked memo from the CEO, who by the way used to work at OpenAI: "Unfortunately, I think 'no bad person should ever benefit from our success' is a pretty difficult principle to run a business on," wrote Dario in a note to staff about the possibility of pursuing Gulf State investments. Yeah.
This is the way Anthropic feels like it works right now. On one hand, they're trying to be the cool, nice, thoughtful company building these models, but in reality they're yet another business, and if anything, of the current major AI labs, they're the one contributing the least back. Sure, they have MCP, which is a cool standard, but at the same time they have Claude Code, which is the only one of the modern CLI coding tools that isn't open source. All of the others are. Not only is Claude Code not open source, but as I hinted at before, they have been DMCAing people for trying to reverse engineer it and share what they found. And there's a very, very cringe video where the guy who created Claude Code dropped some quotes that, uh, I have feelings about.
So, we released Claude Code in February.
Yeah.
So, it's been a little over three months. What's it been like? What's the reaction been from the community?
Yeah, just insane. It's so unexpected, you know. But before we released it, we weren't sure if we wanted to release it. It was this tool that internally just makes our engineers and researchers so productive, and we were having this debate: is this, you know, secret sauce? Are we sure we want to give it to people? Because this is the same tool everyone at Anthropic uses every day.
Yeah, they considered not releasing Claude Code because it's "secret sauce" and they didn't want to give away the thing that makes them so productive. God, it just feels so gross. I'm trying not to be straight-up rude about this, but the arrogance necessary to even start thinking this way is absurd to me. Anthropic almost seems like they know how weak their edge is.
I'm going to make a bold statement: the only reason Anthropic is relevant right now is their early lead on tool calling. They beat everyone else to getting that right. So if you wanted to build your own tools or products around AI, if you're building your own agents, your own thing that runs in the terminal, or an AI editor like Cursor, Windsurf, or whatever else, Anthropic's models were way better at taking a format that describes some work. Like, if you want it to be able to look up the weather, you tell it there's a tool named "weather" that takes a value like a zip code, and that it can call this tool to get weather information. It will make the call, get the weather information back, and actually use it.
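For the unfamiliar, this is roughly what that contract looks like. Here's a minimal sketch in the shape of Anthropic's Messages API tool definitions; the get_weather tool and its schema are made-up examples.

```ts
// A minimal sketch of a tool definition in the shape the Anthropic
// Messages API expects. The weather tool itself is a hypothetical example.
const tools = [
  {
    name: "get_weather",
    description: "Look up the current weather for a US zip code",
    input_schema: {
      type: "object",
      properties: {
        zip: { type: "string", description: "5-digit US zip code" },
      },
      required: ["zip"],
    },
  },
];

// When the model decides to use it, it replies with a structured
// tool_use block rather than prose, e.g.:
//   { type: "tool_use", name: "get_weather", input: { zip: "94103" } }
// Your code runs the tool and returns the output in a tool_result block,
// and the model folds that result into its answer.
```

Getting models to emit that structured block reliably, instead of just describing what they would do, is exactly the thing Anthropic nailed first.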
Their models had this first, and it gave them a huge benefit, because until recently there weren't many other options that would call tools reliably at all. Even Gemini 2.5 Pro likes to tell you what it's going to do with the tools and then not do it, and I've consistently found it gets nerfed randomly with updates, so it gets even worse when you're trying to use it for real agentic tasks. I still find the Anthropic models to be the best by far for this. That said, they're now facing competition. GLM-4.5 and Kimi K2 are the biggest risk Anthropic has ever faced. These models are really good at tool calling, and they're open-weight, which means not only can we use them for free however we want, we can use them to generate data to train other models to be good at tool calling as well. These are two open-weight models that anyone can go download and use, and they do the thing that made Anthropic so special a year ago, back when only they could do it. Claude 3.5 Sonnet was a game-changer, and it took almost a year for anyone outside the Anthropic bubble to catch up, but the catch-up has happened. There are lots of other models that can do this now. The Horizon models currently available on OpenRouter, which are probably from a big lab, handle tooling really well. Newer OpenAI models like GPT-4.1 do tool calls really well. These open-weight models do tool calls really well. They're losing their edge.
They were briefly better at UI, too. For a while, the Anthropic models were the ones that were best at Tailwind. I had Claude 4 Opus generate an image studio for my video on the Horizon models, and this is what it looked like. It did a fine job. But when you compare that to what Horizon generated, it's a night-and-day difference. They're about to lose their edge on UI. And Horizon Beta is a non-reasoning model, by the way; it doesn't have reasoning, and it was still able to make something this much better than the most expensive model Anthropic currently offers.
Their edge is going fast, and the way they've historically operated as a business has been very protective of that edge. They claim that companies like Windsurf and OpenAI are trying to use their models to make their own models better, to steal that secret sauce, the thing we just heard about in that clip. No, that's a silly way of thinking about it. In a world where people can't stop publishing their research, sharing their learnings, releasing open weights, and more, it's kind of insane that Anthropic is the only company that's gotten all of this wrong. Anthropic is the only company that released its CLI closed source. Both Google with Gemini CLI and OpenAI with Codex have fully open sourced their CLIs, and they even take contributions adding support for other models. The day Codex came out, there was a PR filed that added Claude model support, and it was merged the same day. They're also the only company that doesn't release open-weight models. I know, I know, OpenAI hasn't either, but I have a feeling there's one coming very, very soon. Hell, by the time this video is published, it might already be out. They're trying really hard to get it right. Anthropic seems to have no interest whatsoever in that.
They're also the only big lab that never drops prices or does anything to make things cheaper. Even Gemini overhauled their cache layer, which was an immediate drop in costs for us. Anthropic doesn't.
There are things they do that are good; I don't want to just sit here and rail on them. Their system cards, for one. Anthropic is to be credited for SnitchBench: I wouldn't have made that bench if it wasn't for the detailed system card they published with Opus and Sonnet 4. Those system cards gave us really good ideas, and they were also heavily misinterpreted by a bunch of lunatics on Twitter that I had to prove wrong by making a benchmark that's now relevant and possibly has research being published about it. Crazy. They also kind of make standards; MCP, as much as I hate it, is okay, and there really wasn't a way for them to do that without just releasing it as a standard. But, uh, yeah, I'm trying to think of other good things to say, and I'm honestly kind of struggling. I just have rough feelings.
You need to account for my bias and the fact that Anthropic is one of the biggest bills I encounter at any given time. The amount of my money these guys are taking is absurd. And the discount they offered us was 5%, if we commit to a minimum of $20K a month in spend. So if I hand them 60 grand, because of the minimum three-month commitment, they'll be kind enough to give me 5% off until we hit the $20,000 of spend, and then we're back to the normal rate again. What? It's insane. It's absolutely absurd.
But I want to dig more into the access cutoff stuff, because I think this is particularly [ __ ] The Windsurf one was extra funny, because the Windsurf acquisition never even went through; Anthropic was just being aggressive early, because they're kind of arrogant pricks. There was no reason for them to do this. Their chief science officer, Jared Kaplan, was interviewed and asked about this, and he said on stage, "I think it would be odd for us to be selling Claude to OpenAI." Windsurf got cut off hard from Anthropic, but there's something really funny about how that works.
Anthropic models can be used on GCP and AWS. Go to OpenRouter, go to Claude Sonnet 4, and you'll see that not only can you use Google Vertex and Amazon Bedrock for their models, those deployments are actually faster and often more reliable. It's at the point where it's hard to justify using Claude models through the official API alone, because OpenRouter makes things way more reliable: when Anthropic goes down, which, believe me, they will, a lot, you can fall back on Amazon Bedrock or the many different Vertex deployments instead.
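To make that fallback concrete, here's a minimal sketch using OpenRouter's provider routing. The field names follow their provider-routing docs, but treat the exact model slug and provider labels as assumptions.

```ts
// A minimal sketch: routing a Claude request through OpenRouter with
// Bedrock/Vertex preferred. Exact provider labels are assumptions.
const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "anthropic/claude-sonnet-4",
    messages: [{ role: "user", content: "Hello" }],
    provider: {
      // Try these providers in order...
      order: ["Amazon Bedrock", "Google Vertex"],
      // ...and fall through to anything else serving the model if both fail.
      allow_fallbacks: true,
    },
  }),
});
```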
So when Anthropic cuts somebody off, they're not even cutting them off from access to the model; they're forcing them to restrict access to Anthropic's own infrastructure specifically. You can still use Claude in the Windsurf editor. They're just being petty.
The only reason they would do this is because they are angry and they want to vent about it publicly. And the reason they would do that is they want people to see how they feel, which is what we're doing here. And we're going to make fun of them for it, because it's petty, narcissistic, [ __ ] [ __ ] behavior, and I can't believe a real billion-dollar company is doing this. These [ __ ] are worth $60 billion, and they are arbitrarily restricting access and calling [ __ ] stupid shots out on stage because they're so insecure. It's insane. I can't believe this is just continuing to happen. The reason we're filming this today is that Anthropic and OpenAI just confirmed that OpenAI has been cut off from Claude access as well. Why would OpenAI need access? They need to run [ __ ] benchmarks. That's why. This is an attempt to keep OpenAI from being able to run benchmarks comparing the Anthropic models to their own. But as I mentioned before, they can just use them from the other providers. So this is purely petty. This is purely spite. This is purely pathetic, and they deserve to be called out for their [ __ ] This is something an insecure product manager does at a company, not something a big multi-billion-dollar company does. At the very least, they don't do it without being made fun of, which is what we're all here for today, because this is [ __ ] pathetic. The more I read into this one, the more [ __ ] stupid it gets.
Here is the statement from an Anthropic spokesperson: "Claude Code has become the go-to choice for coders everywhere, so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service." Their terms ban customers from using the service to build a competing product or service, including to train competing AI models, and from reverse engineering or duplicating the service. This is pathetic.
I just realized this means we can't use Claude Code. Apparently it's against the terms of service for me to use Claude Code, because we're building T3 Chat, a chat app that competes with Anthropic. Are they going to cut off access to the model for our website, too? Am I going to have to stop offering Anthropic models because they're that insecure? Is it really coming to that point? Are they really that pathetic? Are they going to cut off Google, too, because Google has better models? I can't help myself but poke the bear a little bit.
If you haven't already tried T3 Chat: we've poured our heart and soul into this app, and it's really, really good. For eight bucks a month, you get access to every model, versus twenty bucks a month for access to three. Both are limited, but their limits are way less transparent, and their app also sucks; I have a long video describing all the ways claude.ai is a miserable thing to use. We built something where we slaved over the details so it isn't miserable to use, and we support all the Claude models and all of the other really good models as well, for eight bucks a month. If you haven't subscribed yet, I'll give you the first month for $1. Use code WEIRDVIBES at checkout for that discount.
Anyways, is Google no longer going to be allowed to offer Anthropic models on their cloud? I wouldn't be surprised if that happens. I legitimately think there's a moment coming where Google and Anthropic have a big, explosive breakup, because Google is now making better models than Anthropic. This is going to get messy real fast. Let it be known that I called this one first: Anthropic's going to get way pettier and way messier and start doing things that are obviously stupid, because the things they're doing right now are, in my opinion, obviously stupid. Much like DMCAing somebody because they're fond of the work you did.
This one's insane. Dave Schumer dug into the Claude Code source, and he had a fun time with this. Again, Anthropic is really insecure, so they went out of their way to restrict access to the Claude Code source even though there were already existing open source tools when they released it, including Aider, a really good open-source agentic CLI coding tool built in Python. Dave was curious how Claude Code's implementation differed from Aider's, so he went to look and got a gigantic, messy, minified file grabbed from the npm package. You can see some Sentry stuff in there, so they're using Sentry within the CLI. Interesting. But they did ship with source maps enabled, which means that when you scroll down, you can find the source map URL. Apparently, though, an update was pushed while he took a quick break, and in that update the source map URL string was gone. The folks at Anthropic realized they had made a huge oopsie and pushed an update that removed the source map.
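For anyone curious what "finding the source map" actually involves, here's a minimal sketch. It assumes a bundle that contains a sourceMappingURL comment, and the file paths are illustrative.

```ts
// A minimal sketch of recovering original sources from a published bundle.
// Assumes the bundle contains a "//# sourceMappingURL=..." comment.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

const bundle = readFileSync("cli.js", "utf8");

// Bundlers append either a path ("cli.js.map") or an inline base64 data: URL.
const match = bundle.match(/\/\/# sourceMappingURL=(\S+)/);
if (!match) throw new Error("no sourceMappingURL comment found");

const url = match[1];
const json = url.startsWith("data:")
  ? Buffer.from(url.split("base64,")[1], "base64").toString("utf8")
  : readFileSync(url, "utf8");

// A source map lists the original file names and, when the build had
// sourcesContent enabled, the full original source text of each one.
const map = JSON.parse(json) as { sources: string[]; sourcesContent?: string[] };
map.sources.forEach((source, i) => {
  const content = map.sourcesContent?.[i];
  if (!content) return;
  const outPath = join("recovered", source.replace(/^(\.\.\/)+/, ""));
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
});
```

That sourcesContent field is why this was such an oopsie: with it enabled, the map doesn't just name the original files, it embeds them.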
No matter, he figured, he'd just grab the earlier version. Oh, they removed that, too. Do you know how hard it is to get npm to remove an old version of a package? You pretty much cannot do it. Someone had to pull a lot of strings to get a specific version of a package taken down from npm; that almost never happens. So there's a really funny blend here: the incompetence to ship the source map URL, the insecurity to care enough to obfuscate the code in the first place, and the connections, and frankly bullying behavior, needed to actually get it taken off of npm. That is pathetic.
Thankfully, Dave still had it on his machine, so he was able to follow the source map URL, break it down, and recover all of the source code: it confirmed that the CLI is built on Ink, that the animation is just some ASCII characters, and that there's no secret sauce hiding in the system prompt. Remember the secret sauce they were so concerned about? There's jack [ __ ] of it in there. The recovered files will probably go out of date pretty dang quick, since the Anthropic team is actively developing the tool; they're already up to 2.19, and this was from 2.8. So the reward for publishing that code was a DMCA strike.
Thankfully, GitHub, unlike Anthropic, is a relatively transparent company, so all the DMCA requests they receive get published in their DMCA repo. We can see that Anthropic did this, and we can see it's far from the only time they've done it. They really like to DMCA people trying to rehost Claude Code. Look at all those repos they took down over their own mistake.
The harsh reality is that if it wasn't for the recent innovations in tool calling across other models, we would kind of be stuck with this and have to deal with it. Thankfully, there's been a lot of innovation in other models catching up to and surpassing Anthropic. This seems to have been a focus for OpenAI for much of this year. As I talked about in the GPT-4.1 drop video and the o4-mini drop video, OpenAI seems to really want to win devs back, because it turns out it doesn't matter how smart your model is on a benchmark: if developers aren't using it and building things around it, you have a real chance of losing in the long term. Anthropic's edge has been that developers like them and build things around them. That edge is waning fast.
If they were a reasonable company, they would recognize that and course correct. They would open source Claude Code. They would drop all their models as open-weight. They would lower their prices a bit. They'd go on a PR-repair spree trying to fix all of these things. Instead, Anthropic is by far the hardest company for me to talk to and get information from. I can get our rate limits increased on Gemini via a DM or a text message. I don't hit limits on OpenAI because we're already cleared through that; in fact, OpenAI hits me up to let me know about things early. They're awesome. OpenAI has been so nice to work with, as an influencer and as a builder building on top of their stuff. With Anthropic, I had to pull so many teeth to get a slight bump in rate limits on their infrastructure. It was miserable getting Claude 4 working on T3 Chat day one.
I wouldn't wish that on our worst enemies, or even on our competition. The best way I can put it is what I put in the title: I just get really weird vibes from them. My experience working with Anthropic, building on top of Anthropic, covering Anthropic, looking into their practices, finding all these things: every time I look more into Anthropic, it just feels weird. And it sucks, because some of my really smart friends work there, and they're doing awesome things. Some of their research is genuinely incredible. The level of detail in their system cards is super useful. Their models are making awesome stuff happen. But the way they run their company is cringe at best, problematic or damaging at worst, and we need to talk about it more. I'm tired of pretending that Anthropic's models being smart at writing Tailwind means we can forgive them for everything, when in reality they are the worst company in the modern model market when it comes to working with them as a developer. We'll see how long it takes for them to cut off my access now that I've published this video. Fingers crossed that they won't, but at the very least, we'll get some good content if they do. I've got nothing else to say. Thank you for watching.