0:00 Large language models are rewriting the
0:02 playbook. And if you're not riding the
0:03 wave, you're stuck on the shore. But
0:05 who has time to test-drive every single
0:07 model out there? That's why we've done
0:09 the heavy lifting for you. This video
0:11 will speed-run the hottest LLMs, break
0:13 down what they're best at, show you
0:15 exactly how to prompt them, and drop
0:17 insider tips you won't hear anywhere
0:18 else. By the end, you will know which
0:20 model fits your style, and how to make
0:22 it sing. This is the last LLM crash
0:24 course you'll ever need.
0:28 ChatGPT is the GOAT. Everyone knows
0:30 that. Take any person's laptop and you
0:32 will find ChatGPT open in a separate
0:34 tab. It was the first one to hit the
0:36 market, and all other LLMs were modeled
0:38 after it. The interface, the features,
0:41 the internal logic. All other LLMs get
0:44 new features only after ChatGPT gets
0:46 them. And in this channel, we have tons
0:48 of videos already about everything you
0:50 might want to know about ChatGPT. And I
0:51 suggest you watch those. Here's our full
0:53 guide to GPT-4. It's a half-hour
0:56 crash course into ChatGPT. Here's
0:58 another one where we go over all the new
1:01 features like canvas task scheduling or
1:04 deep research. And here's our latest
1:05 guide to the new image generation in
1:07 GPT-4o. So if you want to learn ChatGPT,
1:10 watch those videos. And for learning
1:12 generative AI in general, welcome to our
1:14 brand new course into generative
1:17 AI. I know you might imagine taping over
1:20 your webcam and mic whenever you hear
1:22 Meta, but trust me, it's actually pretty
1:24 cool. Meta AI comes in two versions.
1:26 There is a separate website and there's
1:28 an AI that's built right into Meta's
1:31 larger platform. The web version is
1:32 pretty simple and has almost everything
1:34 you need. I'd put it at about the same
1:36 level as Sorro when it comes to replies
1:38 and features. Of course, it can write
1:39 articles or blog posts for you, and its
1:42 writing style is pretty close to
1:44 ChatGPT's. Meta AI runs on a model called
1:47 Llama, and it's been updated like
1:49 crazy: bigger context windows, improved
1:52 ways to handle different kinds of
1:53 content, and better logic. It's honestly
1:56 a good model, and the fact that they let
1:58 people use it for free is a nice move.
2:00 Feature-wise, Meta AI on the web is
2:02 really flexible. It can handle nearly
2:04 anything you throw at it, like images,
2:06 PDFs, text documents, and data files. It
2:09 also has a memory you can update right
2:11 in the chat. I haven't seen a special
2:12 menu for managing or wiping that memory
2:15 like ChatGPT has, but it still
2:17 remembers whatever you tell it. It
2:19 doesn't pull info from different chats
2:21 though, so I'd recommend using that one
2:23 conversation as your go-to place for
2:25 instructions. Just start with "Remember
2:27 that" and then list your preferences.
2:30 There is a canvas mode too, which feels a
2:32 bit like ChatGPT's, but it's more like a
2:34 writing workshop. You can highlight any
2:36 bit of text and ask for a rewrite,
2:38 change the entire text or drop in
2:40 images. It's pretty fun. Just highlight
2:42 a word, tap imagine, and the AI
2:44 generates four images right away for
2:46 that spot in the text. You can even
2:48 tweak the prompt to change what it
2:50 shows. The canvas editing window itself
2:52 is bigger than ChatGPT's, with a smaller
2:55 chat window. It's missing some
2:56 convenient controls, but it does have
2:58 formatting tools. Overall, it's a nice
3:00 feature, especially if you like to
3:02 write. And of course, if you're aiming
3:04 to replace ChatGPT, you need image
3:06 generation. Meta AI has that covered.
3:08 It's sort of like Adobe Firefly. You can
3:10 generate an image in the chat, pick the
3:12 one you like, and then edit it if you
3:14 want. You can also adjust the settings
3:16 for all four images, like changing the
3:18 aspect ratio, lighting, or style, and
3:21 you'll see them update in real time.
3:23 It's not the best image generator out
3:24 there, but it's solid and gets the job
3:26 done. It struggles a bit with putting
3:29 text in images, though. That's the web
3:30 version of Meta AI, but Meta AI has
3:33 really grown into an entire ecosystem.
3:36 You can even use it in Facebook chats
3:38 and comments or through Meta's glasses.
3:40 I'm not the biggest fan of everything
3:42 Zuck does, but you have to admit it's
3:44 neat. It can find info, handle vision
3:46 tasks, all that good stuff. The glasses
3:49 are basically the full Meta AI
3:51 experience, but you don't actually need
3:53 them to use the tool. Even without the
3:54 fancy hardware, Meta AI is a strong
3:57 option. AI is the mind behind all the
4:00 innovations. If you want to master these
4:01 skills and understand how these tools
4:03 are built, or maybe build one, then
4:05 Skillup is the platform to learn these
4:08 skills. They're sponsoring this video
4:09 and Skillup by Simplilearn honestly
4:11 feels like it was designed for all of us
4:13 AI people. Whether you are a student
4:15 who's just dipping your toes into the
4:17 world of tech, a developer trying to level
4:19 up with the latest tools, or even a
4:21 manager looking to stay ahead of the
4:23 curve, Skillup has you covered. They've
4:25 got a massive library of self-paced
4:27 courses on everything you can think of:
4:30 AI, data science, cyber security, cloud
4:32 computing, digital marketing, project
4:35 management, you name it. And these
4:36 aren't just slapped together lectures.
4:38 Each course is built in collaboration
4:40 with industry pros and top tier partners
4:43 like Google, Microsoft, and AWS. So,
4:46 you're not only learning the theory
4:48 behind these topics, you're getting the
4:50 kind of knowledge that's actually used
4:52 out in the real world. One thing I
4:54 absolutely love about Skillup is how
4:56 flexible it is. I mean, I've been
4:58 watching videos on my iPad while waiting
5:00 for a coffee to brew and even sneaking
5:02 in a few lessons on my phone. It's all
5:04 self-paced, so you don't need to worry
5:06 about juggling strict schedules. And
5:09 here's the cherry on top. Once you
5:10 finish a course, you get a free
5:12 certificate to show off your new skills.
5:14 No hidden fees, no annoying strings
5:17 attached. So, if you're looking to pivot
5:19 your career into tech, boost your skill
5:21 set in your current job, or maybe just
5:23 learn something totally new for fun,
5:25 Skillup is kind of a no-brainer. You
5:27 can sign up using the link in the
5:29 description below or the pinned comment.
5:31 And when you do, let me know which
5:32 course you pick.
5:35 Answer this. What is a ChatGPT killer?
5:38 Is it just some tool that does
5:39 everything better? I think it can also be a
5:42 tool that lets you do a bunch of stuff
5:44 in one place, like using different AI
5:46 models under one subscription. That's
5:48 exactly what Poe is. And it's really
5:50 interesting if you know how AI tools
5:52 differ and which ones specialize in
5:54 what. With Poe, you can switch between
5:56 them to get better results. That's the
5:58 whole point. You're not stuck with a
6:00 single model. You basically get them
6:02 all. The list of models is crazy. I
6:04 scroll and scroll and it keeps going.
6:08 Llama, Grok, Gemini, DeepSeek. I can
6:10 even use GPT-3.5 here. Then each
6:13 model has its own chat with most of the
6:16 features you'd expect. Multimodality is
6:18 on board. So I can upload images for
6:20 analysis, PDFs to grab text from, and so
6:23 on. Which features you get depends on
6:25 the model. So if a model can't do something, Poe
6:28 won't magically force it. But all the
6:30 usual prompting tricks still work like
6:32 they normally do for each model. It's a
6:34 great way to explore AI and figure out
6:35 which model fits your needs best. I love
6:38 how flexible Poe is. If a model has
6:41 canvas, Poe will open it without any
6:43 trouble. If a model like ElevenLabs can do
6:45 audio, Poe gives me simple audio tools. If
6:48 the model needs more reasoning, it does
6:50 that too. I can even create my own apps
6:52 because of the built-in cloud
6:54 integration. For instance, I made a
6:56 quick app for removing the background
6:57 from an image. One click and I'm already
7:00 uploading my photo. Poe has a model of
7:02 its own, but it's just okay.
7:04 Fortunately, each response has a button
7:06 underneath to compare that same response
7:09 with another model. So, unlike with
7:11 ChatGPT, you can swap to a better model in
7:13 the middle of the chat if you feel like
7:16 it. Don't say you weren't waiting for
7:18 this one. Gemini 2.5 Pro. For years,
7:22 Gemini has been a must-have in videos like
7:24 this, but this time it really deserves
7:26 its spot. Right now, it's pretty much
7:30 what GPT-5 could be, except it's from
7:30 Google and free to use. You get up to 50
7:33 messages per day with its top model.
7:35 Then it drops to a smaller one that's
7:37 still almost as good as 4o. Until
7:40 GPT-4.1 arrived, the clearest advantage of
7:42 Gemini was its 1 million token context
7:45 window. Now, ChatGPT has caught up to that
7:47 number and Google is only weeks away
7:50 from doubling Gemini's token limit. By
7:52 the time you're watching, this might
7:54 have already happened. Gemini can
7:55 remember a ton of information at once,
7:57 which is a big deal if you like long AI
8:00 chats, maybe you're planning a project,
8:02 summarizing huge sections of text, or
8:04 building on each step as you go. ChatGPT
8:07 does have memory, too, but it might lose
8:09 some details if your conversation
8:11 stretches on too long. Gemini can handle
8:14 more data without forgetting, which
8:15 helps a lot when you're writing lengthy
8:18 essays, digging through piles of
8:20 information, or needing the AI to recall
8:22 earlier points from your discussion.
8:24 When it comes to searching online,
8:26 Gemini often goes a step beyond ChatGPT
8:29 because web search is on by default and
8:32 also works with its best model. It's not
8:34 just about giving a quick answer. Gemini
8:36 can pull info from different places and
8:38 mix it into one detailed response even
8:41 if you didn't specifically ask. Sure,
8:44 there is a deep research mode as well
8:45 and it performs as well as ChatGPT's,
8:48 but most of the time you can get what
8:50 you need without it. One of the best
8:52 things about Gemini is how smoothly it
8:54 ties into Google services. You can
8:56 connect it to Docs, Sheets, Drive, Maps,
8:59 and plenty more. That means you can grab
9:02 notes from Drive, ask for directions, or
9:04 gather travel tips without juggling
9:06 multiple tabs. ChatGPT can do something
9:08 like that with plugins, but those can
9:10 feel like an extra layer that doesn't
9:13 always work perfectly. The only area
9:15 where Gemini lags behind ChatGPT is
9:17 image generation. Don't get me wrong,
9:19 it's solid here, just not as good as
9:22 GPT-4o. Sometimes it messes up text,
9:24 misses a part of your prompt, or isn't
9:26 totally consistent. Still, for a free
9:29 image generator, it's quite good. And to
9:31 your surprise, Gemini is actually better
9:33 than ChatGPT when it comes to
9:34 multimodality. It works with a wider
9:36 range of files. Of course, Gemini
9:38 handles large PDFs, word docs, or text
9:41 logs easily. You can attach these files,
9:44 mention them in your question, and
9:45 Gemini will summarize or analyze them.
9:48 It also works with images, audio, and
9:50 even videos. Yes, videos. Its coding
9:53 mode is also stronger. It's fantastic at
9:55 debugging, supports popular coding
9:58 languages and can even run code right in
10:00 the tool. Look at this game it fixed. No
10:02 plugins needed. Just paste the code and
10:05 it works. And sure, Canvas supports code
10:07 in here, too. Gemini today isn't the
10:09 same Gemini we knew a year ago. Now, it's
10:11 leading the pack, and OpenAI will need
10:13 something really special for GPT-5 to
10:16 catch up to
10:17 Google. Grok was first released a few
10:20 years ago and has changed a lot since
10:22 then. Now, it's a worthy alternative to
10:25 ChatGPT, especially considering how
10:27 much you get for the price compared to
10:29 the features. For starters, it has
10:30 pretty much all the same essential
10:32 abilities as ChatGPT. It can write
10:34 articles and do web searches just as
10:37 well. So, that alone isn't the big draw
10:39 anymore. Grok works a lot like ChatGPT
10:42 and serves as a perfect example of
10:45 generative AI, specifically an LLM. Since
10:48 most LLMs share the same basic principles,
10:50 learning the fundamentals of generative
10:52 AI will let you use almost any LLM
10:54 effectively. That's exactly what we
10:56 focus on in our brand new 101 crash
10:59 course into generative AI at Geek
11:01 Academy, where we show you how AI
11:03 interprets prompts, how to prompt
11:05 properly, and what common mistakes to
11:07 avoid. We're adding new lessons every
11:09 week and the course covers everything
11:11 from the inner workings of AI and the
11:13 logic behind tools like ChatGPT to
11:15 in-depth tips on prompting for image
11:17 generators, complete with concrete
11:19 examples and insights into their popular
11:22 features. We also explore developer
11:24 tools and coding assistants in real-life
11:26 situations, plus essential prompts and
11:29 templates. Beyond that, we dive into
11:31 music and video generation tools, AI
11:33 avatars, text-to-speech options, and so
11:36 many others. Basically, if it has to do
11:39 with generative AI, it's in our course
11:41 guiding you from zero to pro under one
11:44 Geek Academy subscription. And right
11:46 now, we're offering a massive 80%
11:48 discount on a 6-month access to Geek
11:50 Academy. It's a limited time offer, so
11:52 don't miss out. What really stood out to
11:54 me was how convenient and logical Grok
11:56 feels. If I want to research something,
11:58 I have three ways to do it. One is to
12:00 just ask a question normally, which
12:02 triggers a quick simple web search. The
12:04 second option is deep search, which
12:06 takes around a minute and pulls together
12:08 more thorough and concise info. Grok
12:10 gathers data from various articles and
12:13 in another half a minute puts together a
12:15 solid chunk of text with conclusions and
12:17 summaries. The last option is deeper
12:19 search, which is basically Grok's
12:21 version of deep research in ChatGPT,
12:23 but way faster. The request I tried
12:25 would have taken ChatGPT at least 5 to
12:28 10 minutes and Grok did it in three and
12:30 a half complete with links, detailed
12:33 info, and neat formatting. Another cool
12:35 feature is that I can turn on reasoning
12:37 anytime without having to switch models.
12:39 This reasoning mode works about as well
12:42 as o1 or o3, only faster, and it
12:46 doesn't cost anything extra. By now,
12:48 Grok is basically that omni model GPT-5
12:51 is aiming to be: one model that can do it
12:54 all. It writes, handles files, does web
12:57 searches, and even generates images all
12:59 from the same place. Image generation
13:01 here is pretty cool, too. I can upload a
13:03 picture and make edits like adding
13:05 glasses or upscaling. And unlike the
13:08 latest ChatGPT image tools, Grok keeps
13:11 everything consistent. ChatGPT
13:12 sometimes shifts the whole image around
13:14 when you edit it. Granted, Grok might
13:17 not be as sharp as ChatGPT at
13:20 generating text in images, but the
13:21 overall edits work really well. Creating
13:24 new images is super simple and follows
13:26 the same rules as ChatGPT. The editing
13:29 window just works differently. Grok
13:31 doesn't let you pick a specific part of
13:33 the image to edit. Instead, you choose
13:35 between subject, background, or style.
13:37 Once you pick, you don't confirm
13:39 anything. The changes appear almost
13:42 instantly. There is also a prompt for
13:44 bigger tweaks. I really like this image
13:46 generation. It's consistent. It's
13:48 reliable and free. I agree that Grok has
13:51 fewer flashy features than ChatGPT, but
13:54 some parts are just as good or better.
13:56 Consider the workspaces, for instance.
13:58 They're basically the same idea as
14:00 spaces in ChatGPT. You have your own
14:03 files, your own chats, and your own
14:05 custom instructions for every
14:07 conversation in that workspace. There
14:09 aren't a ton of settings to tinker with,
14:12 but you can set your own custom
14:13 instructions or just presets for
14:16 different response styles. You can also
14:17 manually switch Grok into one of the
14:20 suggested roles like specialist, doctor,
14:23 or therapist. It's basically the same as
14:25 typing a prompt beforehand. It's still a
14:28 handy extra. I really like Grok. I
14:30 definitely need to do a full video on
14:32 it. Oh, wait. I'm already working on
14:34 that. So, subscribe if you don't want to
14:36 miss it.
14:38 I always used to wonder: why can't I run
14:41 ChatGPT right on my own laptop? I know
14:44 the model is huge and needs tons of
14:46 resources, but come on, wouldn't that be
14:48 cool? Well, guess what? Now I've got
14:50 ChatGPT on my MacBook through a
14:52 console. The only catch is that it's not
14:55 really ChatGPT at all. It's a tool
14:57 called DeepSeek. The biggest advantage
14:59 is that DeepSeek is totally free.
15:02 ChatGPT hides its best stuff like advanced
15:05 reasoning and unlimited image generation
15:07 behind a monthly fee. DeepSeek doesn't
15:10 charge anything for advanced reasoning
15:12 or any other main feature. You sign up
15:14 and you get everything, with no
15:16 paywall. Right now the tool does lack
15:19 some of ChatGPT's extras like image
15:22 generation, deep research, and canvas,
15:24 but it does have advanced reasoning and
15:26 web search. Even though the web search
15:28 doesn't always work perfectly, that
15:30 reasoning feature is basically a direct
15:32 clone of o1, but it's actually better.
15:36 When you ask it something, it breaks
15:38 down the steps in a little thought
15:39 process panel that you can see. This
15:41 step-by-step method makes the AI's
15:43 answers clearer and more accurate. It
15:46 takes about as long as o1, with results
15:49 just as thorough, clever, and original.
15:52 Then, DeepSeek goes further: it can handle
15:54 files. o1 can work with images that
15:57 have text in them, but it can't tackle an
15:59 Excel data set. DeepSeek is exactly what
16:01 you use if you want to parse data, find
16:04 patterns, or spot correlations, and
16:06 because it runs locally, it's a perfect
16:08 sidekick for data analysis. It can't
16:10 generate graphs yet, but that will probably
16:13 change soon. Those same offline
16:15 abilities also make it great for
16:16 developers. You can ask for a simple
16:19 HTML layout, a small Python script, or
16:22 even a basic JavaScript game. DeepSeek
16:24 doesn't just generate the code, it can
16:26 actually run certain demos right in the
16:29 chat. For example, it might whip up a
16:31 quick snake game and you can play it then
16:34 and there. Think about how much easier
16:36 coding could be with a model like this
16:39 running in the background. Granted, you
16:41 would need a pretty powerful
16:42 computer, but it's still awesome. How do
16:45 you install it? All you need is a single
16:47 app, a bit of storage, and a few clicks.
16:49 Then you're all set and your data stays
16:51 on your machine. You install it using a
16:54 tool called Ollama, which sets up DeepSeek
16:56 for you without a bunch of technical
16:58 headaches. Don't forget to check out our
17:00 full DeepSeek guide if you want detailed
17:02 instructions on how to do it right.
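To give you a rough idea, here is a minimal sketch of that workflow, assuming you've already installed Ollama from ollama.com and that a DeepSeek build like the deepseek-r1 tag is available in its model library (check the library for the exact name and size that fits your machine):

# In a terminal, assuming Ollama is installed:
#   ollama pull deepseek-r1   # downloads the model weights to your machine
#   ollama run deepseek-r1    # opens a local chat session in the console
#
# Or from Python, using the ollama client (pip install ollama):
import ollama

reply = ollama.chat(
    model="deepseek-r1",  # assumed model tag; swap in whatever your Ollama library lists
    messages=[{"role": "user", "content": "Find patterns in this sales data: ..."}],
)
print(reply["message"]["content"])  # the answer is generated entirely on your machine

Everything in that sketch runs locally, which is exactly why DeepSeek works as an offline data-analysis sidekick.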
17:04 DeepSeek also has a mobile app. It's not
17:06 as slick as ChatGPT's official app, but
17:09 it does have the same features as
17:11 DeepSeek on your computer, like
17:13 reasoning, data analysis, and web
17:15 search. It's laid out pretty much the
17:16 same, with a simple chat window, two
17:19 toggles for advanced reasoning and web
17:21 access, plus a button for adding files
17:23 or snapping pictures on the spot for
17:25 OCR. I've only found two real issues
17:27 with DeepSeek. One, the servers can be
17:30 busy a lot because there's so much hype.
17:33 It's not as bad now, but sometimes
17:35 you'll still see a server busy message.
17:38 And second, if you want image
17:40 generation, you've got to use another
17:42 tool like Janus from the same
17:44 developers. Janus doesn't give you a
17:46 ton of control, but it can generate high
17:48 quality images pretty fast with no fuss.
17:50 I really hope they bake that into
17:52 DeepSeek soon. I'm not fully switching
17:55 over yet, but I keep it on my Mac for
17:57 those moments when I'm working with
17:59 private files or stuck without internet
18:03 access. I get this question from friends
18:06 all the time. I don't like switching
18:08 models in ChatGPT. What should I do? And my
18:10 answer is always the same. Use Claude.
18:12 The latest version does all the model
18:14 picking by itself. For easy stuff, it
18:16 uses simpler models. And for more
18:18 difficult tasks, it flips to its reasoning
18:20 model. And by the way, that reasoning
18:22 model is right up there with o1, but
18:25 with o3-level speeds. Claude isn't just a
18:28 ChatGPT clone, and it can really shine in
18:31 areas where ChatGPT might slip up. You
18:33 might notice something about Claude's
18:35 responses: they tend to feel more
18:38 thoughtful, more careful, and sometimes
18:40 more detailed than ChatGPT's. This
18:42 reflective quality isn't just random.
18:44 It's because Claude is trying to weigh
18:46 context very carefully and lean toward
18:49 clarity over confusion. Claude's big
18:51 context windows might sound like a
18:53 technical detail, but they actually make
18:55 a big difference in keeping the AI
18:57 focused. It's better at sticking to
19:00 a single narrative over long
19:01 conversations. I tried copying entire
19:04 chapters of text or big sets of data
19:06 into Claude, then asked specific
19:09 questions about each part. It rarely
19:11 mixed up questions or forgot what it
19:13 read and it really seemed like it was
19:15 actually holding on to older messages
19:17 instead of guessing once things got
19:19 complicated. If I want to compare it to
19:22 ChatGPT for writing, Claude can
19:24 definitely keep up. Its paragraphs
19:26 usually feel more structured and
19:28 cohesive. ChatGPT sometimes loops back
19:30 or jumps around if you throw it complex
19:32 prompts while Claude breaks ideas down
19:35 more directly. Both AIs can produce
19:38 usable text, but Claude's calm and
19:40 organized style can come off more
19:42 purposeful. That said, Claude could use
19:44 an upgrade in a few areas. It needs to
19:46 learn how to generate images and handle
19:49 them better, and its OCR could stand to
19:51 improve. It also needs better
19:52 multimodality. Right now, it doesn't
19:54 support a ton of formats and
19:56 isn't amazing at data analysis. As for
19:59 prompting, Claude is good at
20:00 understanding natural language, but some
20:02 of its prompting practices feel
20:04 outdated, like something from 2020. We
20:07 have a whole guide on our channel about
20:09 Claude, plus some handy PDFs and posts
20:11 over at Geek Academy if you want to dive
20:13 deeper. Another interesting point is how
20:15 Claude deals with writing style
20:17 preferences. If you paste your own
20:19 writing and ask Claude to mimic it, it
20:22 usually does so without sounding forced.
20:24 ChatGPT can do that, too. But sometimes it
20:27 goes too far or misses the subtle things
20:29 in your wording. Claude is better at
20:31 picking up on those little hints like
20:33 how fast or slow your sentences flow and
20:36 the exact tone you're aiming for. That might
20:38 seem like a small edge, but if you need
20:40 the AI to match your personal voice for
20:43 a big writing project, it's really
20:45 helpful. Sometimes Claude will refuse to
20:47 answer certain questions, but that
20:49 doesn't happen much in everyday use. Of
20:51 course, ChatGPT can still be more
20:53 playful or creative if you're just
20:55 messing with it for stories or
20:57 brainstorming. Claude can be
20:59 imaginative, but usually balances that
21:01 creativity with a bit more logic and
21:05 caution. Mistral is a French AI that's
21:08 been quietly improving behind the
21:10 scenes, and now it's finally a solid
21:12 option you might actually want to check
21:14 out. I won't pretend it's flawless. No
21:16 AI is. But what's great about Mistral is
21:19 that it kind of reminds me of ChatGPT
21:21 in its early days before it turned
21:24 into the enormous beast it is now.
21:26 Mistral is simple, almost bare bones,
21:28 and that's part of its appeal. When it
21:30 comes to general writing, it's basically
21:32 on the same level as GPT-4o in terms of
21:35 quality and depth, but Mistral is
21:37 faster. It spits out answers in just a
21:40 few seconds, much faster than ChatGPT.
21:42 It can do web searches, look over your
21:45 code, and generate images. It even has a
21:47 canvas feature. The catch is that using
21:49 Mistral can feel a bit awkward because
21:51 even if you switch on all these tools,
21:53 you still have to actually call them out
21:56 by name. Canvas doesn't pop up
21:58 automatically, so you have to literally
22:00 say use canvas. Still, it's a pretty
22:02 cool mode. You can highlight chunks of
22:04 text to rewrite and you get handy
22:06 controls for length, style, and other
22:09 editing settings. I like how the canvas
22:11 stays in the middle while the chat
22:13 window shifts to the right. Just don't
22:15 try generating images inside canvas. It
22:17 won't insert them the way Meta AI does.
22:20 As for coding, it's decent, but nothing
22:23 too advanced. It can look at code,
22:25 catch bugs, and point out errors, but it
22:28 doesn't have built-in frameworks to
22:30 preview your app. So you can't see a
22:32 live version of your code like you can
22:34 in Gemini. Personally, I wouldn't rely
22:36 on Mistral for coding projects. Image
22:39 generation is okay. Nothing
22:40 earthshattering. It follows prompts well
22:42 enough and the results look all right,
22:44 but it doesn't give you a lot of controls. I
22:46 also notice it's more sensitive to
22:48 detailed prompts than other image
22:50 generators. You really have to specify
22:53 style, framing, composition, that sort
22:56 of thing. So, no, Mistral isn't going to
22:59 replace GPT-4o's new image generator
23:01 anytime soon. One area where it does shine is
23:04 file handling. You can build a little
23:06 library for each chat, making it simpler
23:08 to refer back to those files later on.
23:11 That's a small but handy
23:13 feature. My favorite feature in ChatGPT
23:16 has always been deep research. I'm
23:18 serious. Every video we make starts with
23:21 deep research, but it costs money. So,
23:23 for a free option, I always suggest
23:25 Perplexity. It may not be a ChatGPT killer,
23:28 but it's definitely a deep research
23:30 killer. Perplexity is really a research
23:32 tool at heart. It doesn't try to do
23:34 every single thing. It just does
23:36 research really well. Its results are
23:38 almost as accurate and in-depth as
23:40 ChatGPT's, and it's basically free. It
23:43 doesn't go for ChatGPT's friendly tone
23:45 either. It focuses on delivering factual
23:47 answers in a direct, no-nonsense way. I
23:50 actually appreciate how Perplexity gets
23:53 straight to the point. I agree that file
23:54 handling is better in ChatGPT. It's
23:57 fully multimodal and Perplexity is more
23:59 limited. Perplexity isn't designed to
24:01 process huge data sets, but it's perfect
24:04 for PDFs, short code snippets, or
24:07 smaller word docs. You upload them,
24:09 describe what you need, and the AI
24:11 either summarizes or breaks the text
24:14 into manageable parts. Perplexity also
24:16 lets you group these files into spaces
24:19 which act like folders for your chat
24:22 threads, documents, and even
24:24 screenshots. Going back to research, the
24:26 basic mode is good, but Perplexity also
24:29 has a Pro Search mode which works like the
24:32 deep research we all know and love, but
24:34 instead of ChatGPT's methods, it uses
24:36 its own algorithms. So, you'll usually
24:38 see a different approach. If you turn it
24:41 on, it makes Perplexity dig deeper and
24:44 use more advanced reasoning. And instead
24:46 of hiding footnotes or skipping them
24:47 altogether, Perplexity shows its sources
24:50 right at the top of the reply. This is
24:52 one area where it might actually do a
24:54 better job than ChatGPT for those who
24:56 really want transparency. But there's
24:59 more. Focus is a neat feature that
25:01 tailors the system for different types of
25:03 data. There are several options: web,
25:05 academic, video, social, math, and
25:08 writing. In academic, for example,
25:10 Perplexity pays special attention to
25:12 peer-reviewed articles and well-known
25:14 research. Perplexity's coding skills are
25:17 practical, but they're not the
25:19 centerpiece. You can paste a Python
25:21 script or a C++ snippet, and it will
25:23 point out small bugs or suggest minor
25:26 improvements. A few parts of Perplexity
25:27 might feel niche, like the pages tool
25:30 that turns your chat into an
25:31 article-like format. You might not use
25:33 that every day, but it can be useful now
25:35 and then. I love Perplexity. But to
25:38 really make the most out of it, you
25:39 should stick to research tasks. For
25:42 everyday stuff, I'd still choose
25:44 ChatGPT. Over the years, ChatGPT has become
25:46 such a big name that it's hard to find
25:49 true replacements. Every new tool tries
25:51 to match it or outdo it. Some do manage
25:54 and some don't. GPT-5 will probably be a
25:57 fantastic model and might pull ahead of
25:59 the pack for a while until Google or
26:02 xAI steps in again with an update. So,
26:04 would you really ditch ChatGPT for
26:07 something else, especially after you
26:09 learn how to use it well in our new AI
26:11 crash course? Thanks for watching and
26:14 see you in the next video.