0:03 GPT5 is the best AI model ever
0:05 released. The issue is if you don't
0:07 prompt it correctly, your results are
0:09 going to be horrible. After using GPT5
0:11 non-stop since it came out, I've come up
0:13 with a prompting system that will make
0:15 your results 10 times better. I'm
0:17 telling you, if you steal the prompt I'm
0:19 about to show you, you'll be oneshotting
0:20 things with AI you never thought
0:23 possible. Let's get into it. So, I came
0:25 up with this master GPT5 prompt that I'm
0:26 going to show you in this video. We'll
0:28 go through all five stages of this
0:30 prompt. So, there's five parts of it.
0:32 You'll be able to copy and paste it and
0:33 use it however you want, but I'm going
0:36 to walk you through each one of these
0:38 five stages in the prompt. I'll unblur
0:41 them for you step by step. Last week, after a
0:44 lot of complaining about GPT5, OpenAI
0:46 released an entire cookbook of how to
0:48 use it. This is what that cookbook
0:50 looked like. And it's this huge guide on
0:52 exactly how GPT5 works and how you
0:54 should be talking to it. So, I basically
0:56 went through all of this and I went in
0:59 and I created a master prompt based on
1:01 the recommendations that OpenAI gave.
1:03 This basically takes hours of advice
1:05 OpenAI came up with and puts it into one
1:08 simple prompt. You can copy and paste,
1:09 fill in the template, and you're going
1:11 to start getting incredible results.
1:12 It's really critical you stick through
1:14 this and learn the entire prompt because
1:16 this is going to be the only way you get
1:18 really good results out of GPT5.
1:22 Unfortunately, GPT5 isn't that good if
1:23 you're not prompting it correctly, which
1:25 I understand people don't like. But if
1:27 you do use it correctly and you use this
1:29 prompt I'm about to give you, the
1:30 results you're going to get are way
1:31 better than any other model you can use.
1:33 So, let's get straight into it. Let's
1:35 get into the master prompt. The first
1:38 thing you want to give GPT5 when
1:40 prompting it is the role it's taking.
1:41 Right? Just straight up the role. Very
1:44 simple, one sentence. What role is GPT5?
1:48 The reason why role is so important in
1:52 the prompt is GPT5 only does exactly
1:54 what you tell it to do. Nothing more,
1:56 nothing less. So, if you want it to do
1:58 something really well, you need to give
2:00 it a very specific role. So, the example
2:02 I'm going to be showing you throughout
2:03 this video is we're going to be building
2:06 a master prompt for planning and
2:08 building out an application. This is
2:10 going to be an application for YouTube
2:12 creators. It helps them plan videos and
2:14 make scripts. And so I'm going to build
2:16 out an entire prompt for it, starting with
2:18 the role: "You are a product full-stack
2:20 app planner for indie creators." So
2:22 for you, depending on what you're doing,
2:23 you'd make your own role: maybe you're a
2:25 product full-stack app planner for video
2:27 games, or whatever you want, or you're a
2:29 master researcher for trending news
2:32 items. Whatever task you want to do, you
2:34 have to think about the perfect role for
2:36 the AI. So the first
2:39 line in our master prompt is going to be
2:41 that role, right? We want to make sure
2:43 right away, right off the rip, before
2:45 GPT5 goes into anything else, it knows
2:47 what its role is. So here's where things
2:49 start to get interesting going into the
2:51 second part of the master prompt, we
2:54 have a control panel. And so if you read
2:57 through OpenAI's cookbook for GPT5, one
2:59 of the biggest parts here is the amount
3:02 of options they give you. They talk
3:03 about controlling the verbosity of
3:05 the model. They talk about controlling
3:08 the amount of thinking the model does.
3:10 There are a whole bunch of factors that
3:13 on the negative side, GPT5 isn't
3:15 fantastic at figuring out on its own.
3:17 On the positive side, it's fantastic
3:20 if you control those parameters and tell
3:22 it how much to think and how verbose it
3:23 should be. So, going back into the
3:25 master prompt here, you want to put a
3:28 control panel in all your first main
3:29 prompts, right? You want to control
3:31 reasoning, verbosity, the tools it
3:34 uses, whether it self-reflects, whether
3:35 it metafixes, and I'll go into what those
3:37 two mean in a second. But you want to
3:39 have, when you build your kind of first
3:41 master prompt, when you're taking on a
3:44 major task, this control panel
3:46 where you control all of these
3:48 variables. So, going back into our
3:50 prompt we're building here, I put in the
3:53 control panel. For this use case, I'm
3:54 going to have it do ultra think, so
3:56 think as hard as possible for this
3:58 prompt and planning out this app. I'm
4:00 going to have the verbosity be medium,
4:02 so it's not a ton of extra information.
4:05 I want to have it use most of its tools,
4:08 so web, code, PDF, and images. I want it
4:10 to have self-reflection on, and I want
4:12 it to have metafixing on. So, real quick,
4:14 you can probably figure out what these
4:16 three mean. And for a full list of
4:17 tools, I'll include that down below as
4:20 well for what tools GPT5 can use. But
4:22 what does self-reflect and metafix
4:24 mean? These are two very important
4:26 features of GPT5 that actually make it
4:29 very very powerful. Self-reflection is
4:32 basically GPT5's ability, when it's
4:35 about to execute on a prompt, to
4:37 actually reflect on what it's doing in
4:39 that prompt and improve the prompt
4:41 before it executes. Right? So, if you
4:43 give it a prompt, it's about to execute
4:46 on it, it will actually self-reflect on
4:48 that prompt and figure out ways to
4:50 improve it before it actually does the
4:52 tasks in the prompt. And this is a
4:54 really powerful feature in GPT5 you need
4:56 to be taking advantage of. This is how
4:58 you get really really good results. And
5:00 then there's also metafix. This is
5:02 another really powerful feature. And
5:04 again, these are all things talked about
5:07 in here in the GPT5 cookbook. Metafix
5:10 is actually its ability to reflect after
5:12 it executes. So self-reflect reflects
5:14 before it executes. Metafix reflects
5:17 after it executes. And so after it
5:18 executes on your prompt, it'll actually
5:20 go back, look at the results, and
5:23 improve the results if the results
5:25 didn't match exactly what you were
5:28 looking for, right? And so these two you
5:30 want to have on most of the time. As
5:32 long as you have time to spare waiting
5:34 for the answer, you want to have these
5:36 on. If you're in a rush and timeliness
5:39 and quickness of the model is important,
5:41 you'll go ahead and you can turn these
5:43 off. And I'll go, towards the end of
5:45 the video, into a lot more detail on how
5:47 these two work because there's more to
5:48 this prompt that actually ties into
5:50 these two options. But basically what
5:52 you want to do here is with every one of
5:55 your first prompts you do, right? You
5:57 want to have this control panel and you
5:58 want to control these different
6:00 variables in GPT5, right? So for
6:02 reasoning, you can do think, think hard
6:04 or ultra think. For verbosity, you can do
6:06 low, medium, and high. And for tools,
6:08 here are your different options. If you
6:10 don't want to choose specific tools, you
6:12 can just use auto. But getting into the
6:14 granularity of controlling each one of
6:17 these variables is going to give you way
6:19 way way better results with GPT V. I'm
6:21 not kidding. If you do these things,
6:22 your results are going to be amazing.
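The control panel above is just structured text at the top of the prompt, not an API call. A minimal Python sketch of how you might template it; the knob names and values mirror the video's example, and none of them are official GPT5 parameters:

```python
# Sketch of the "control panel" stage. These knobs are plain prompt
# text the model reads, not official API parameters; the labels and
# values here are illustrative.
control_panel = {
    "reasoning": "ultra think",          # think | think hard | ultra think
    "verbosity": "medium",               # low | medium | high
    "tools": "web, code, PDF, images",   # or "auto"
    "self-reflection": "on",
    "metafix": "on",
}

def render_control_panel(panel):
    """Render the control panel as the block that sits under the role line."""
    lines = ["Control panel:"]
    for name, value in panel.items():
        lines.append(f"- {name}: {value}")
    return "\n".join(lines)

print(render_control_panel(control_panel))
```

Keeping the knobs in a dict like this makes it easy to flip self-reflection and metafix off when you're in a rush, without rewriting the rest of the prompt.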
6:24 Equally as important as this control
6:26 panel is what we are going to go into
6:29 next in this master prompt. So the next
6:32 part of the master prompt is a really
6:34 simple task. So usually what people do
6:36 when prompting GPT5 is they actually do
6:38 only step three and none of the other
6:40 things. They usually just give it a
6:42 task. So this step three is probably the
6:44 one you're most familiar with, which is
6:47 giving the model a really simple one-
6:49 sentence task. The reason why you don't
6:51 need to go into super detail here is the
6:52 rest of your prompt is going into the
6:54 important detail. This is just simply
6:56 what you want the model to do. So for
6:58 this example, when it comes to building
7:00 and planning the app, we're going to say
7:02 plan and scaffold a minimal app called
7:04 YouTube Topic Scout that finds trending
7:06 ideas, scores them, and generates a
7:08 script outline. And so basically we're
7:09 building like a YouTube planner and
7:12 script writer for us. And so the task is
7:14 very simple: just plan and scaffold out
7:16 this minimal app, and we gave a short
7:18 description of what we want this app to do.
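The task stage is just one short sentence in the prompt. A minimal Python sketch of how you might template it, plus an illustrative length check (the word limit is my own heuristic, not anything GPT5 enforces):

```python
# Sketch of the "task" stage: one plain sentence stating what the
# model should do. The length check below is an illustrative
# heuristic, not an official rule.
task = ("Plan and scaffold a minimal app called YouTube Topic Scout "
        "that finds trending ideas, scores them, and generates a "
        "script outline.")

def is_simple_task(text, max_words=40):
    """True if the task reads as a single short sentence."""
    return text.count(".") <= 1 and len(text.split()) <= max_words

print(is_simple_task(task))  # -> True
```

The detail lives in the other stages, so the task itself can stay this terse.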
7:20 We don't need to go into a ton of detail
7:22 here because the rest of our master
7:24 prompt goes into all the other details
7:27 we need. So again just the task just
7:29 exactly what we want the model to do and
7:31 nothing else. So next, the fourth and
7:33 second to last part of this master
7:34 prompt that you really need to learn to
7:37 get incredible results from GPT5 is
7:40 going to be inputs. Right? So this is
7:42 probably the only optional part of the
7:44 prompt. Right? If you haven't done much
7:45 planning or thinking about what you
7:48 want, you don't need to put in these inputs,
7:50 but if you have context you want to put
7:53 in, this is very important. Basically,
7:56 what inputs are is important context you
7:58 want to give GPT5. This is basically
8:00 going to be a list of notes, of links,
8:02 of other data, of other thoughts, of
8:04 other ideas, of comparisons, things like
8:06 that. I'll show you a couple examples in
8:08 a second here. But this is important
8:10 context you want to give to the model
8:13 before it does its execution for the
8:14 task you're asking for. So, for example,
8:16 for building out this app that we're
8:18 planning here, I'm putting in the user.
8:20 So, who I think is going to be using this
8:22 app, the core loop, what I want the
8:24 experience of the app to be, different
8:25 information about how I want them to
8:27 score the inputs of the app and score
8:29 what the topics are that they're
8:32 planning in the app, non-negotiables,
8:34 tech preferences, tone. So, I'm just
8:35 giving as much context of things I
8:37 already thought about for this app that
8:39 I wanted to build. Again, this is
8:41 completely optional, but you want to be
8:43 brain dumping in as much context as
8:44 possible. If you're building out or
8:47 planning an app, you just want to put in
8:49 all the different ideas you have for the
8:51 app already. And this is going to be
8:52 really important context. Maybe if
8:54 you're having the AI build you content,
8:56 what you would put in here is other
8:58 comparable content or other comparable
9:00 content creators you want the content
9:02 to sound like, right? This is going to
9:05 be all the context that's important for
9:07 this prompt we're building for GPT5. As
9:09 you can see, our master prompt here is
9:11 looking pretty good. We're giving it a
9:13 lot of information so far. At this
9:15 point, it's already probably giving you
9:17 way better results than any other
9:19 prompting you've been doing with GPT5.
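The inputs stage is a labeled brain-dump of context. A minimal sketch of how you might template it; the labels mirror the example above (user, core loop, non-negotiables, tech preferences, tone), and the values are illustrative placeholders:

```python
# Sketch of the "inputs" stage: labeled context the model reads
# before executing. Labels mirror the video's example; the values
# are illustrative placeholders.
inputs = {
    "user": "solo YouTube creators planning their next videos",
    "core loop": "pick a niche -> see trending topics -> score them -> outline a script",
    "non-negotiables": "fast setup, nothing paid required for v1",
    "tech preferences": "simple web stack, minimal dependencies",
    "tone": "practical and direct",
}

def render_inputs(context):
    """Render the context brain-dump as one labeled line per item."""
    lines = ["Inputs:"]
    lines += [f"- {label}: {value}" for label, value in context.items()]
    return "\n".join(lines)

print(render_inputs(inputs))
```

If you're prompting for content instead of an app, the same shape works: swap the labels for things like "comparable creators" and "target length".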
9:21 But let's go to the next part of the
9:22 master prompt for
9:24 GPT5. So, the fifth part I want to show
9:26 you in this master prompt, this is
9:27 actually not the last part. I have one
9:28 more thing I want to show you after this
9:30 that's important to include in the
9:31 prompt. So, stick around for that. But
9:33 the fifth part that you actually want to
9:35 customize in this prompt is the
9:37 deliverable. So, this is a list of
9:40 precisely what you want in return from
9:42 GPT5: what you want it to actually
9:44 output to you after it does all its
9:46 self-reflection. So here's what it looks
9:49 like for us in our example of what we're
9:51 building here is when we're planning out
9:54 this app. We wanted to give us a PRD, so
9:56 a product requirements document. We want
9:57 to give it a competitor scan. So we
9:59 wanted to go research all our
10:01 competitors for this app. We wanted to
10:03 give us an architecture, an API spec,
10:06 what the UI looks like, starter code. So
10:08 this is actually a really complex list
10:10 of deliverables. And if we didn't
10:12 include these deliverables, GPT5
10:14 probably wouldn't output this. But
10:17 here's the thing. GPT5 is so powerful
10:19 that it can actually handle doing all of
10:22 these things in one shot. Because we're
10:25 using this master prompt to get the
10:27 output from GPT5, it'll actually be able
10:29 to get us all of these deliverables,
10:30 right? Because we also included what
10:32 tools we wanted to use, the amount of
10:34 reasoning we want to use. Because we
10:36 included those options, it'll be able to
10:39 oneshot a lot of really complex things.
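The deliverables stage is just an explicit checklist in the prompt text. A minimal sketch of how you might template it; the items mirror the example above, and the "Deliverables:" label is my own illustrative formatting:

```python
# Sketch of the "deliverables" stage: a numbered checklist of what
# GPT5 should hand back. The items mirror the example in the video.
deliverables = [
    "PRD (product requirements document)",
    "competitor scan",
    "architecture overview",
    "API spec",
    "UI wireframes",
    "starter code",
]

def render_deliverables(items):
    """Render the checklist as the block that closes the visible prompt."""
    lines = ["Deliverables:"]
    lines += [f"{i}. {item}" for i, item in enumerate(items, start=1)]
    return "\n".join(lines)

print(render_deliverables(deliverables))
```

Numbering the items makes it easy to ask follow-up questions like "redo deliverable 3" in later turns.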
10:41 And so what you want after your inputs,
10:44 after your context, is the deliverables
10:46 of what you actually want out of this,
10:48 the results you want to get from the
10:50 model. And because we're doing all
10:53 these, again, fine-tuning using specific
10:55 variables, we're going to be able to get
10:57 not only all these deliverables, but
10:59 also all these deliverables done really,
11:01 really well. But these deliverables are
11:03 only high quality if you include this
11:06 last part in your prompt, which is going
11:09 to be this little private ops part here.
11:11 So, this is down below as well. I'll
11:12 have this entire example prompt down
11:15 below, but private ops is basically
11:20 describing to GPT5 how the self-reflect
11:22 and metafix work. In the cookbook that
11:25 OpenAI put out, they had things like
11:26 how to score itself when it's
11:28 self-reflecting, right? It talked about
11:30 how you should be using a self-scoring
11:32 rubric. And basically what that means is
11:35 when GPT5 reflects on the prompt and
11:37 reflects on its output, it should score
11:40 its output from like 1 to 7 and then
11:42 improve the output based on that rubric
11:45 scoring. Right? This is how the GPT5
11:48 model was built. And now what we're
11:50 doing is we're basically going in and
11:53 we're defining that for the model. We're
11:55 saying this is how you self-reflect.
11:57 This is how you metafix. This is how you
11:59 build the rubric for yourself.
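The private ops stage can be sketched as a reusable template. The wording and the pass threshold below are my own illustration of the self-scoring idea, not quoted from OpenAI's cookbook:

```python
# Sketch of the "private ops" stage: tells the model how to
# self-reflect (before executing) and metafix (after), using a 1-7
# self-scoring rubric. Wording and threshold are illustrative.
PRIVATE_OPS = """Private ops:
- Before executing: silently restate and improve the prompt (self-reflect).
- After executing: score each deliverable 1-7 for completeness,
  accuracy, and usefulness; if any score is below {threshold},
  revise that deliverable and re-score (metafix).
- Do not show the rubric or the scores in the final answer."""

def render_private_ops(threshold=6):
    """Fill in the pass threshold for the self-scoring rubric."""
    return PRIVATE_OPS.format(threshold=threshold)

print(render_private_ops())
```

Raising the threshold trades speed for quality: a stricter bar means more metafix revision passes before the model settles on an answer.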
12:01 Self-reflect and metafix so that you
12:03 get even better results. Now, I know
12:05 what you're thinking. I got this comment
12:07 on my last GPT5 video, which is why the
12:09 heck do we need to do all these things?
12:11 If this was a good model, we shouldn't
12:14 have to control the verbosity and the
12:16 reasoning and all this. And you know
12:18 what? I agree with you. I agree. You
12:20 shouldn't have to do these things. And I
12:21 think in the future, you're not going to
12:23 have to do these things. I think in the
12:26 future as the model evolves and we get
12:28 GPT6, you're not going to have to
12:31 control all of these things, it's going
12:32 to be able to kind of figure it out on
12:35 its own. But at the moment, you have to
12:37 control these things. And in my opinion,
12:39 it is very much worth it because here,
12:40 I'll show you. I'll hit enter on this
12:43 and we'll start running this prompt. At
12:46 the moment, GPT5 from a raw power
12:48 perspective, just raw power of what it
12:50 can accomplish is the best. It is the
12:53 best model out there from a raw power
12:55 perspective. The outputs you get, the
12:57 things it can accomplish in one shot. It
13:01 is completely unmatched. So, yes, do you
13:03 have to do a lot of work to get good
13:06 results? Yes, you do. Would I rather
13:08 you didn't have to do all this work?
13:10 Right. I would rather not have to do a
13:12 lot of work. You're 100% correct about
13:14 that. But here's the thing. If you put
13:16 in the work, you're going to get
13:18 exponentially better results, which I'll
13:20 show you in a second here as it thinks
13:22 through this, your results are going to
13:23 be way better. So, it's up to you. Do
13:25 you want quick results? That's fine. You
13:27 can use GPT5 like you were before. But
13:30 if you want great results, you want to
13:32 be controlling all these different
13:33 variables. And here's the thing, you
13:35 don't need to do this entire prompt
13:38 every single time you prompt GPT5.
13:39 That'd be ridiculous. If you had to do
13:41 that, I wouldn't be using GPT5, right?
13:42 But if you're starting a new
13:44 conversation, you're starting to go down
13:46 on a very important path, like you're
13:48 building out a new app like we're doing
13:51 here, right? This first prompt, you want
13:53 to have all these variables and options.
13:55 In the first prompt, you have all this
13:57 control. It's almost like the
13:59 planning mode in Claude Code, in a way,
14:01 where the plan controls
14:03 everything else, and everything else
14:05 you can do really quick. Once you've got
14:08 this really good strong first prompt in,
14:09 you get your first results. From there,
14:11 you can iterate very quickly. You don't
14:13 need to use this master prompt every
14:15 single time. But if you're going down a
14:17 path where you're doing something really
14:19 complex, like building out an app, that
14:21 first prompt you want to use, you want
14:23 to use this master prompt. And you're
14:24 going to see why in a second once it
14:26 outputs all this information. Okay, so
14:29 it's all done. It took 2 minutes and 21
14:31 seconds. Let's see what GPT5 came up
14:34 with. First, it has the initial code.
14:36 Okay, so it wrote all the code for the
14:38 app. So we'll be able to take that code
14:40 and we'll have V1 of the app. It also
14:42 has seed data in there. So you can run
14:43 this and it'll have seed data. But let's see
14:46 what else we got here. We have the PRD
14:47 markdown. Oh, and it had a metafix
14:49 applied. So apparently it built the
14:51 product requirements doc and then went
14:52 back and fixed it to make it better
14:54 because it didn't match the scoring on
14:56 the rubric. So here's the product
14:58 requirements doc. It has a goal. Primary
15:00 users, jobs to be done, MVP, what should
15:03 be in the MVP, nice to have, success
15:05 metrics, non-negotiables. Okay, so it
15:06 has a competitor scan. So, it searched
15:10 the web and found four competitors. I
15:11 actually use vidIQ quite a bit. It's a
15:13 pretty good app. Uh, so this is real.
15:15 This is true. These are true competitors
15:17 architecture. So, it has the entire
15:19 architecture. It has the schema for the
15:22 database. It has the API spec, so how
15:24 you would plug in the API to get data
15:27 for the app. It even has wireframes. So,
15:28 let's let's take a look at the
15:30 wireframes. We'll open this up. Here's
15:32 the wireframe. So, you can see what the
15:34 kind of homepage, a little mockup of
15:36 what the homepage would look like. Let's
15:37 just check out what the uh the other
15:39 ones look like here. I'll download this.
15:41 It'll show you some other result cards
15:42 for the different topics it comes up
15:45 with. So, it makes the PNGs. So, we're
15:47 making PDFs, PNGs. We're writing code.
15:49 And then it even has an entire zip file
15:51 for all the code that we can scaffold
15:54 the app out with and an explanation of
15:56 what's inside that zip. So, it did what we
15:59 asked for. That's pretty incredible. All
16:01 in one shot. Now, from here on out, I
16:02 can prompt it just as you normally would
16:04 any other model. We don't need to use
16:06 that huge big prompt every single time.
16:09 Now, it already has enough context where
16:11 we can just go back and forth now and
16:12 tinker with what we have here. Using
16:15 this master prompt is going to get you
16:18 so much better results with GPT5. I
16:20 promise you, if you use that prompt I
16:21 just showed you, you're going to get the
16:23 best results you ever got out of an AI
16:25 model. If you learned anything at all,
16:26 make sure to hit subscribe. Make sure to
16:29 join the free vibe coding community I
16:31 built, link down below. Only free vibe
16:32 coding community on the internet. I
16:33 promise you'll learn a ton. You'll love