0:02 I use around 10 AI tools for 90% of my
0:04 work, and each one excels in one
0:07 specific area. But figuring out which
0:09 tool works best for what task usually
0:12 takes months of trial and error. So,
0:14 I'll share the one thing each tool does
0:16 better than alternatives, so you walk
0:18 away with a clear mental model for when
0:20 to use what. I've grouped these tools
0:22 into four categories across a two-part
0:24 series. There's just too much to cover.
0:26 This video covers everyday and
0:29 specialist AI, while part two covers the
0:31 remaining two categories. Let's get
0:33 started. Kicking things off with
0:35 everyday AI. These are your general
0:37 purpose chatbots: ChatGPT, Gemini, and
0:38 Claude. And while they seem
0:40 interchangeable, their "moats," the
0:43 specific things they do best,
0:45 have actually become quite distinct.
0:47 Starting with the OG, ChatGPT. While
0:49 Gemini and Claude are arguably just as
0:52 capable in raw power, ChatGPT still
0:54 holds the crown in one area. It's the
0:57 most obedient model. In plain
0:59 English, ChatGPT drops fewer balls when
1:02 you hand it a complex checklist. Other
1:04 models might be just as smart, but give
1:05 them a lengthy set of instructions, and
1:08 they'll sometimes skip a step or decide
1:10 they know better. If you want proof of
1:12 this, just ask each model to optimize a
1:14 rough prompt for itself. ChatGPT will
1:16 generate a noticeably longer and more
1:19 detailed prompt because it knows it can
1:20 handle the complexity. And if you run
1:23 that optimized ChatGPT prompt through
1:25 both ChatGPT and Gemini, for example,
1:28 you'll notice two things. First, ChatGPT
1:30 thinks longer because it's actually
1:32 checking every requirement and it
1:35 follows each instruction to the letter.
1:37 Gemini on the other hand often takes
1:40 shortcuts. Pro tip, I share the exact
1:42 prompt optimizer in the essential power
1:44 prompts template linked below, but you
1:45 can test this yourself with something as
1:47 simple as "optimize this prompt for
1:50 ChatGPT [insert model number here]. Here's
1:52 my rough prompt." Diving into a real
1:54 world example, I gave both ChatGPT and
1:56 Gemini the same complex prompt, a hiring
1:59 rubric with a dozen requirements. ChatGPT
2:01 delivered every single one. Gemini's
2:03 output looked right at first glance, but
2:05 when I checked it against my original
2:07 list, it had quietly dropped a few
2:10 rules. That's the key difference.
2:11 ChatGPT doesn't decide which
2:13 instructions matter. It just follows
2:16 them. Here's a second simpler example.
2:17 Sometimes when you explicitly tell
2:19 Gemini to search the web, it just
2:21 doesn't, which is wild since Gemini and
2:24 Google search are both Google products,
2:26 right? Whereas with ChatGPT, when you
2:28 enable web search, it performs the web
2:31 search every single time. I know
2:32 this is a small example, but it's
2:35 downstream from ChatGPT's core
2:37 superpower. Obedience means you can
2:39 trust the behavior you ask for. So, as a
2:41 rule of thumb, if a task has a lot of
2:43 moving parts, and getting one wrong
2:45 breaks the whole thing, start with
2:48 ChatGPT. Next up, Gemini. Where ChatGPT
2:50 wins on obedience, Gemini wins on
2:53 multimodality. In plain English, Gemini
2:55 is able to process a massive amount of
2:57 mixed media, video, audio, images, and
2:59 text natively. Taking a look at this
3:01 table, we see that only Gemini can
3:04 handle all four types of media natively.
3:06 It's able to "listen" to
3:08 audio and "watch" videos,
3:11 while ChatGPT and Claude use roundabout
3:13 ways to access that information. What's
3:16 more, Gemini's massive 1 million token
3:18 context window means it can handle large
3:20 video recordings, hour-long audio
3:23 recordings, full slide decks, all at
3:25 once, inputs that would choke
3:27 other models. If you watched my latest
3:29 Gemini video, you'll remember the use
3:31 case where I screen recorded a messy
3:33 walkthrough of myself completing a task,
3:35 uploaded that video to Gemini, and
3:36 asked Gemini to turn it into a
3:40 ready-to-use SOP with perfect formatting,
3:42 which is an example of Gemini ingesting
3:45 video and turning it into text. Now,
3:47 let's take that a step further. Imagine
3:48 you just finished a weekly meeting. You
3:50 have a video recording of the call, a 20
3:52 slide deck, and a photo of a messy
3:54 whiteboard session. You can upload all
3:56 three and ask Gemini to summarize what
3:58 was discussed, pull out the key
4:00 decisions, and draft the follow-up
4:03 email. Gemini is the only tool that can
4:05 synthesize all three in one go. All that
4:07 said, I have to point out that Gemini's
4:10 raw reasoning capabilities sometimes
4:13 feel slightly behind ChatGPT's. But when
4:15 the task involves video, audio, or
4:17 massive files, the trade-off is
4:19 obviously worth it. Speaking of matching
4:20 the right tool to the task, today's
4:22 sponsor HubSpot put together a free
4:24 guide called the AI productivity stack
4:27 that covers 50 tools organized by use
4:29 case. Here's why I like it. While this
4:31 video focuses on my personal favorites,
4:33 your workflow probably needs something
4:35 different. Maybe you're in marketing and
4:38 need SEO specific tools or you manage a
4:39 team and want to build automated
4:42 workflows with reliable AI. This guide
4:43 breaks down tools across business
4:45 functions like research, design, and
4:47 marketing. And for each tool, it shows
4:49 you the best use case, key features,
4:51 pricing, and a step-by-step workflow.
4:53 What I found most useful is the decision
4:56 logic at the end of each section. So,
4:58 for example, the research category tells
5:00 you exactly when to use Perplexity
5:03 versus Claude versus Humata based on
5:05 what you're actually trying to do. It's
5:07 a great way to quickly understand what
5:09 each tool does. Well, I'll leave
5:11 a link to this free guide down below.
5:12 Thank you, HubSpot, for sponsoring this
5:14 video. Rounding out the everyday AI
5:16 category: Claude. Claude's superpower is
5:18 producing higher quality first drafts
5:21 than the other models. In plain English,
5:23 that means Claude's first attempt is
5:25 usually closer to done. This superpower
5:28 shows up in two areas. First, coding.
5:30 Here's a fun fact. The latest version of
5:32 Gemini beat the older version of Claude
5:36 in every single benchmark score except
5:39 for the coding one, which is crazy. So
5:41 obviously Anthropic has figured out
5:44 something about coding that the others
5:47 haven't. And in practice, developers
5:49 universally agree that Claude writes
5:51 functional code on the first try more
5:54 consistently than alternatives. Here's a
5:56 real world example. I needed to bulk
5:58 export conversations from a customer
5:59 service platform, but their support team
6:01 said only developers could do it. I
6:03 described the problem and Claude not
6:05 only gave me step-by-step instructions
6:08 but also wrote a script in Go that
6:10 worked on the first try. I don't even
6:12 know what Go is, nor can I write code.
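For the curious, the shape of that kind of bulk-export script is easy to sketch. Here's a minimal Python version of the same idea (the script Claude actually wrote was in Go, the platform isn't named in the video, and the paging callback below is entirely hypothetical):

```python
import json

def export_conversations(fetch_page, out_path):
    """Page through a hypothetical conversations API and dump
    everything to one JSON file.

    fetch_page(cursor) -> (list_of_conversations, next_cursor or None)
    """
    conversations, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        conversations.extend(page)
        if cursor is None:  # no more pages to fetch
            break
    with open(out_path, "w") as f:
        json.dump(conversations, f, indent=2)
    return len(conversations)

# Stand-in for real HTTP calls: a fake two-page API.
def fake_fetch(cursor):
    pages = {None: ([{"id": 1}, {"id": 2}], "p2"),
             "p2": ([{"id": 3}], None)}
    return pages[cursor]

count = export_conversations(fake_fetch, "conversations.json")
print(count)  # 3
```

The real version swaps fake_fetch for authenticated HTTP calls to the platform's API, but the loop-until-no-cursor structure is what most export scripts boil down to.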
6:14 Another example, I asked all three
6:16 models to turn a static image into an
6:18 interactive chart and Claude performed
6:20 the best on the first try. So basically,
6:21 anything that requires generating
6:24 working code tends to favor Claude. Pro
6:26 tip, when it comes to diagrams, you can
6:28 ask Claude to generate Mermaid code, which
6:31 you can then paste directly into tools
6:33 like Excalidraw to get clean visuals in
6:36 minutes. Area two, polishing copy.
6:38 Beyond code, Claude produces written
6:40 drafts that sound human and need fewer
6:42 revisions. When you need to tighten an
6:44 argument or match a specific voice,
6:46 Claude just gets it. Put simply, it's
6:49 exceptionally good at style matching.
6:51 Once you share examples of your existing
6:53 work, it replicates your tone almost
6:55 perfectly. When I was in corporate, I'd
6:57 share previous documents so Claude
6:58 could replicate that voice across
7:00 presentations and performance reviews.
7:02 And now, as a creator, I feed it my
7:04 existing YouTube scripts to help refine
7:06 new drafts. At this point, you might be
7:07 wondering how I use all three everyday
7:09 AI tools together. In a nutshell,
7:12 ChatGPT or Gemini usually handles the
7:14 beginning of my work, ideation,
7:16 research, drafting the outline of a
7:18 presentation. Claude then handles the
7:21 last mile, turning that rough output
7:22 into something I'm ready to present or
7:24 publish. Quick note on Grok. A lot of
7:25 people ask why I don't use it. It's
7:27 actually very simple. Grok's
7:29 superpower is its direct access to the
7:31 Twitter/X firehose, right? So it's the
7:33 best option for people who need to
7:35 analyze breaking news events in real
7:37 time. I never needed that. And as a rule
7:39 of thumb, we should never use tools just
7:41 for the sake of using tools. We should
7:43 only add them to our toolkit when they
7:45 solve an actual problem we have. Here's
7:46 a quick recap of the three models and
7:47 when to use them. And if you're
7:49 wondering whether you need all three,
7:51 the short answer is no. Most people
7:52 should stick with the paid version of
7:55 ChatGPT and get really good at it. But
7:57 if you can afford multiple subscriptions
7:59 and your workflow can take advantage of
8:01 their individual superpowers, mix and
8:03 match as needed. Fun fact, according to
8:05 this study on OpenRouter data, models
8:07 from different labs, like ChatGPT and
8:10 Gemini expand the pie of AI use cases
8:12 precisely because they excel at
8:13 different things. Onto the second
8:15 category, specialist AI. Before diving
8:17 in, let's clear up a very common
8:20 misconception. Tools like Perplexity are
8:22 not foundational models. Here's a simple
8:25 visual. OpenAI, a Frontier AI lab,
8:28 develops the GPT family of models. They
8:31 also created ChatGPT as the user-friendly
8:34 app layer. Perplexity is different. It
8:36 fine-tunes existing foundational models
8:39 for speed and accuracy and is optimized
8:42 for search. Their own Sonar model, for
8:44 example, is just a fine-tuned version of
8:47 Meta's open-weight Llama model. So, on
8:49 that note, Perplexity's superpower is
8:52 finding accurate information fast. In
8:53 plain English, the general purpose
8:55 chatbots are built for reasoning. You
8:57 use them to help you think, brainstorm,
9:00 or write a draft. Perplexity is built
9:02 for fetching. You need a specific fact,
9:04 and you need it now. Starting off with a
9:07 simple real-life example, I used ChatGPT
9:09 to plan a trip to Japan with my brother
9:11 because that is a creative task. It
9:13 requires weighing trade-offs, building a
9:14 narrative, and for that kind of task,
9:16 I'm happy to wait while the model
9:18 thinks. But when I need grab-and-go
9:20 information, like whether a specific
9:21 restaurant is foreigner friendly because
9:23 we don't speak Japanese, I'd want
9:25 Perplexity to give me accurate and
9:27 up-to-date information within seconds.
9:29 Second example, going back to how I use
9:31 the three everyday AI tools, let's say
9:33 Gemini or ChatGPT helps me brainstorm
9:35 and structure my newsletter. Claude
9:38 produces the final draft. Perplexity in
9:40 this case is the search scalpel that
9:42 verifies information like whether
9:44 Gemini's context window is 1 million or
9:46 2 million tokens. In case you're
9:48 curious, consumers get 1 million,
9:50 enterprises get 2 million. Pro tip, you
9:52 can use Google-style search operators
9:55 like site:reddit.com to
9:56 narrow your results to a specific
9:58 source. I have an entire video
9:59 on the most useful Google search
10:01 operators, so I'll link that down below.
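To make the operator tip concrete, here are a few standard Google-style queries of the kind the video is describing (the specific sites below are just examples):

```text
site:reddit.com notebooklm honest review     (only Reddit results)
"context window" site:ai.google.dev          (exact phrase, one site)
gemini pricing -site:youtube.com             (exclude a domain)
```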
10:03 As a rule of thumb, think of Perplexity
10:05 as a replacement for Google AI Mode,
10:07 since they're both for fetching information,
10:09 and not as a replacement for general
10:11 purpose chatbots. Actually, let me know
10:12 if you want an entire video breaking
10:15 down the AI search apps like Perplexity,
10:17 Google Search, Google AI overviews,
10:19 Google AI mode, because they're all made
10:20 for different things. Rounding out
10:23 specialist AI: Notebook LM's superpower is
10:24 that it only answers from the sources
10:27 you give it, meaning it won't make
10:29 things up. Think of it like a walled
10:31 garden. You upload your sources and
10:33 Notebook LM answers questions using only
10:35 those documents. It can't really
10:37 hallucinate because it has no outside
10:39 knowledge to draw from. Going back to
10:41 the visual around how Perplexity is
10:44 optimized for search, Notebook LM uses a
10:46 fine-tuned Google Gemini model that
10:48 minimizes hallucinations. For instance,
10:50 when I was at Google before publishing
10:52 marketing materials, I would upload the
10:54 final draft alongside the source
10:56 documents and ask Notebook LM if the
10:58 draft made any claims that contradicted
11:00 the sources and it would catch these
11:03 tiny discrepancies other AI might have
11:04 missed. I use a similar workflow today
11:07 for my videos. Before I start filming, I
11:09 upload my script and all my research
11:11 into Notebook LM and ask it to flag
11:13 anything not directly supported by the
11:16 source material. The obvious caveat here
11:18 is that the output is only as good as
11:20 the sources we give it. So if the
11:23 sources are incorrect, Notebook LM is
11:25 going to be confidently incorrect. So as
11:27 a rule of thumb, if accuracy matters
11:29 more than creativity and you have source
11:31 materials to check against, use Notebook
11:33 LM. There are a few more specialist AI
11:35 tools I use that didn't make this list
11:36 because I don't use them every day. But
11:38 to quickly go through them, Gamma for
11:40 presentations, ElevenLabs for voice
11:43 cloning, Zapier and n8n for automation,
11:45 and Excalidraw and Napkin AI for quick
11:47 visuals. As a reminder, I'll cover the
11:49 remaining two categories in part two, so
11:51 keep an eye out for that. See you on the
11:53 next video. In the meantime, have a