0:02 Most people using AI are doing it wrong,
0:04 which is why it's surprisingly easy to
0:07 get ahead of 99% of them. I have spent
0:10 over 20 years in tech and AI as a CEO,
0:12 board member, investor, building
0:15 billion-dollar companies. And here's what
0:17 I'm seeing. The gap between people who
0:19 understand AI and those who don't is
0:22 getting wider faster. In this video,
0:24 I'll give you a clear seven-step roadmap
0:28 to master AI like the top 1%. And the
0:30 best part is you can actually do it in
0:32 just 30 days, even if you're a total
0:35 beginner. Let's dive in. Week one starts
0:37 with learning what I call machine
0:40 English. Most people talk to AI like
0:42 it's a person. And that's a huge
0:44 mistake. Why? Because the generative AI
0:47 systems like ChatGPT don't actually
0:48 understand our language. They predict
0:51 it. And that's where most people get
0:55 stuck. If I said "Humpty Dumpty sat on
0:58 a...", your brain was going to fire
0:59 "wall." You knew what was coming. Your brain
1:01 predicted it. You could have said "Humpty
1:04 Dumpty sat on a roof." That's accurate too,
1:07 but you knew wall was more likely based
1:09 on what you've seen before. Think about
1:12 Google search. It does autocomplete the
1:15 same way. Why? Because it has seen so
1:17 many search queries before. It has
1:19 learned from them and now gives you
1:21 the most likely option. AI models like
1:23 ChatGPT or Gemini work in a similar
1:25 fashion, but they're different from
1:27 search engines because they don't store
1:29 any pre-baked answers. They
1:32 generate the answer on the fly. How do
1:33 they generate it? At a very high
1:36 level, AI breaks your text into smaller
1:40 parts called tokens. Each token is a
1:42 word or sometimes a part of a word.
1:45 Humpty is probably one token. Dumpty
1:47 could be another token. Sat another
1:49 token. Wall another token.
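As a toy sketch, tokenization can be pictured in a few lines of Python. Real models use learned subword schemes such as byte-pair encoding; the whitespace splitting and ID numbering below are invented purely for illustration.

```python
# Toy tokenizer: split on whitespace and assign each new word the next
# free integer ID. Real tokenizers use subword pieces, but the output
# is the same kind of thing -- a sequence of token IDs.

def tokenize(text, vocab):
    """Map each word in `text` to a token ID, growing `vocab` as needed."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

vocab = {}
print(tokenize("Humpty Dumpty sat on a wall", vocab))  # [0, 1, 2, 3, 4, 5]
```

Repeated words reuse their IDs: calling `tokenize("a wall", vocab)` on the same vocab returns `[4, 5]`.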
1:52 Then AI converts each token into a list
1:54 of numbers, also known as a
1:56 multi-dimensional vector. Those numbers
2:00 are placed inside a massive mathematical
2:03 space called an embedding space. And in
2:06 that massive space, similar ideas tend
2:09 to live closer together. The system has
2:11 learned from previous experiences. So,
2:14 it knows that the words Humpty, egg,
2:16 wall, and fall will be close together,
2:18 but they're going to be far from words
2:21 like motorcycle or chocolate. Now, when
2:23 it's time to generate the answer, AI
2:26 looks at the context and predicts the
2:29 most likely next token. So, when it sees
2:32 Humpty Dumpty had a great, it weighs all
2:34 the options. Humpty Dumpty had a great
2:36 party. Humpty Dumpty had a great day.
2:39 Humpty Dumpty had a great chocolate. And
2:41 it sees that the word fall is the most
2:44 likely outcome. So the line is generated
2:47 and finished not from memory, not from
2:50 stored facts, but from probability and
2:54 proximity. That's why AI can feel so
2:56 smart, but also so alien. Now,
2:58 I'm skipping a lot of
3:00 details here, but the important takeaway
3:03 is that when your prompt is vague,
3:04 this guessing machine called
3:08 ChatGPT or Gemini will produce guesses
3:11 that are also vague. And if your prompt
3:14 is sharp and targeted, AI will come back
3:17 to you with sharp and targeted guesses.
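The two mechanics above, closeness in the embedding space and probability over next tokens, can be sketched with toy numbers. Every vector and probability here is invented for illustration; real models use thousands of dimensions and learned values.

```python
import math

# Invented 2-D "embeddings": related words sit closer together than
# unrelated ones in the vector space.
embeddings = {
    "humpty":     (0.90, 0.80),
    "egg":        (0.85, 0.75),
    "wall":       (0.80, 0.90),
    "motorcycle": (-0.70, 0.20),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "humpty" is far more similar to "egg" than to "motorcycle".
print(cosine_similarity(embeddings["humpty"], embeddings["egg"]))
print(cosine_similarity(embeddings["humpty"], embeddings["motorcycle"]))

# Invented next-token probabilities for the context
# "Humpty Dumpty had a great ..." -- greedy decoding picks the max.
probabilities = {"fall": 0.72, "day": 0.15, "party": 0.09, "chocolate": 0.04}
print("Humpty Dumpty had a great", max(probabilities, key=probabilities.get))
```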
3:19 That's what I call machine English. It
3:22 helps AI to compute your intent, not
3:25 just try to comprehend it. So, what does
3:28 a sharper prompt look like? I call it
3:31 AIM. A is for actor. Tell the model who
3:34 it's acting as. I is for input. Give it
3:36 the context and data it needs. And M is for
3:38 mission. What do you want it to do?
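If you script your prompts, the three parts can be assembled by a small helper. The function name and layout below are just one reasonable choice, not an official format.

```python
# Assemble an AIM-style prompt from its three parts:
# Actor (persona), Input (context), Mission (the task).

def aim_prompt(actor, input_context, mission):
    return (
        f"You are {actor}.\n\n"
        f"Context: {input_context}\n\n"
        f"Mission: {mission}"
    )

prompt = aim_prompt(
    actor="the world's most sought-after résumé editor and business writer",
    input_context="my résumé and the job description for a senior product "
                  "manager role at a fintech company (attached)",
    mission="review the résumé and give me a bullet list of 10 specific "
            "ideas to improve clarity, measurable impact, and alignment "
            "with the role",
)
print(prompt)
```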
3:40 Instead of typing, let's say, fix my
3:42 resume, try typing:
3:45 Hey ChatGPT, you are the world's most
3:47 sought-after résumé editor and business
3:50 writer. You've reviewed thousands of
3:52 résumés that led to interviews at top
3:55 tech companies. You've told the AI what
3:58 its persona is, what it's acting
4:02 as. Second line: I'm attaching my
4:04 resume and the job description for a
4:07 senior product manager role at a fintech
4:09 company. That's your input. Third, the
4:12 mission: review it and give me a bullet
4:15 list of 10 specific ideas on how
4:18 to improve clarity, measurable impact,
4:21 and alignment with the role. Your mission is to
4:24 help me build the best resume that gets
4:27 me hired. That's how you take AIM. It
4:30 turns a prompt into a structure the
4:32 model can understand, compute, and
4:35 reason with. You can use this three-part
4:38 structure in almost all prompts. And
4:40 from now on, you will start seeing
4:42 results that are at least five or ten times
4:44 better than before. Only when you learn
4:47 its language does AI finally start
4:50 working for you. Now that you understand
4:52 how to speak to AI, we're going to pick
4:54 your instrument. Here's the thing. Most
4:56 people start their AI journey the wrong
4:59 way. They Google top 50 AI tools. They
5:02 pick 10 and they jump from one to the
5:04 other. They skim through all of them.
5:07 That's a recipe for failure because
5:09 there's so much out there. My
5:12 recommendation, pick one, go deep. Think
5:15 of learning AI the same way you would
5:17 learn an instrument. You know, there is
5:19 a study in Frontiers in Psychology that
5:21 found that drummers pick up guitar
5:23 faster than complete beginners.
5:26 Drumming isn't even about melody, and it
5:28 requires very different physical skills.
5:30 But I personally had the same
5:33 experience. I spent tens of thousands of
5:36 hours as a drummer. And when I
5:39 picked up guitar, it wasn't easy, but it
5:41 wasn't uncomfortable because I already
5:43 knew how to practice and my
5:46 brain was trained to see structures and
5:48 patterns. The deeper you dig
5:50 into one foundational model, the faster
5:52 you will find the rhythm of all the
5:54 others. So, which one do you pick? If
5:57 you want the most mature one, pick
6:00 ChatGPT. If you're deep into Google's
6:03 stack and ecosystem, try Gemini. If
6:06 you want more business- and project-based
6:09 AI, go with Claude. But really, it
6:11 doesn't matter what you pick. In the
6:13 first week, spend time with one of them
6:15 [music] and learn its personality, its
6:19 cadence, its limits, its strengths. The
6:22 goal is to start feeling the
6:24 rhythm. Once you get comfortable, try
6:26 using the AIM framework that we
6:29 talked about. By the end of week one,
6:31 you should be able to write a structured
6:33 prompt without thinking. All right, so
6:36 we've started using AI. Now, let's talk
6:39 about what actually makes your outputs
6:41 smart, and that's context. The
6:43 world's smartest AI will sound clueless
6:46 unless you feed it context. Every answer
6:49 AI gives depends on how it understands
6:51 the question. If you don't give it
6:53 context, it has no grounding. Remember
6:56 that inside these AI models, there is
6:59 nothing but a crazy mathematical space
7:01 filled with billions of numbers.
7:04 Context is the map that helps you
7:07 navigate that space to tell AI where to
7:10 look and what matters. And the best way
7:13 to build that map is with an acronym I
7:17 call MAP. M is for memory: the
7:19 conversation history or the notes that
7:21 carry over from previous chat sessions
7:23 that you've had with the AI. Now, you
7:26 can re-paste the thread or ask the model
7:28 to summarize before starting again.
7:30 That's how you'll start building
7:31 continuity in your
7:34 conversations. A is for assets: the
7:37 files, data, and resources that
7:40 you attach or copy-paste into your prompt.
7:44 These assets help you ground the model
7:48 in reality. The second A is for actions:
7:50 the tools the model can
7:52 call to do work. An action could be
7:56 search the web, scan your drive,
7:59 write this code, or create a Notion doc.
8:01 And P is for prompt: the
8:04 instruction itself. So the better
8:07 you get with memory, assets, and
8:10 actions, the better context you'll give
8:13 AI in the prompt. And the richer the
8:16 context, the better the AI reasoning and
8:18 response. Once you start using these
8:21 frameworks like AIM and MAP, you have
8:25 joined the top 10% of AI users. But if
8:27 you want to hit that absolute expert
8:29 level, there is one more thing that you
8:32 really need. Debug your thinking, which
8:34 is step four. When you're not getting
8:35 the right answer, the problem is not the
8:38 AI, it's your thinking. [music] I
8:40 remember the first time I ever prompted
8:43 an AI. It was one of those earliest
8:47 models from OpenAI. I spent an entire
8:50 day trying to make sense of it, and by
8:52 the end of it I was super frustrated,
8:54 because it was random. It was
8:57 unpredictable. But back then, no one
8:59 understood it. The phrase "prompt
9:02 engineering" didn't even exist yet,
9:04 because prompting isn't typing. It's
9:07 iterating. When the output is weak, I
9:11 assume the fault is mine, because it is.
9:15 Did I give it the right persona? Did I
9:17 provide the right context? Did I give it
9:19 the right goal? And sometimes I even ask
9:21 the model itself, what did you do? And
9:23 why did you choose that answer?
9:25 It will explain its logic. It'll explain
9:28 its chain of reasoning. And that's when the magic
9:31 starts. You're not just using AI, you're
9:34 learning how it thinks. There are three
9:36 cheat codes I use for that. The first is
9:39 the chain-of-thought pattern. When the
9:42 answer seems off, I would say think step
9:45 by step. Show your reasoning. Then give
9:47 me the final concise answer. The second
9:50 is the verifier pattern. I would say to
9:52 the AI, ask me three questions that
9:55 would clarify my intent to you. Ask them
9:57 one at a time, and then combine
10:00 what you've learned and try again. And
10:03 the third is the refinement pattern
10:06 where you're refining your input itself.
10:08 Before answering, propose two sharper
10:10 versions of my question. Ask which one I
10:13 prefer. So AI will tell me how to ask
10:15 the right way. And then we
10:17 continue. And you have to keep iterating
10:20 with these patterns because these loops
10:22 can teach the model how to understand
10:24 you and teach you how to
10:27 understand the model. Test, tweak, tune
10:30 up, push, until you can tell why
10:32 something is working and why something
10:35 is off. That's when it clicks. You're
10:38 not talking at AI anymore. You're having
10:41 an ongoing conversation. You and AI are
10:44 learning together from each other. But
10:46 here's the thing, it's not enough to
10:49 just debug your mind. If your post
10:52 sounds like every other LinkedIn post I
10:54 see that's pasted from ChatGPT,
10:56 you still have a problem. And that's why
11:01 step five is to steer toward experts. When
11:03 you ask ChatGPT a question,
11:05 you're not searching a database of
11:08 answers. You're sampling from millions
11:11 of probable ideas that AI has learned
11:12 over time
11:15 and is storing as billions of numbers.
11:18 Some are brilliant, some are average,
11:20 some are completely made up, and
11:23 some are flat out wrong. If you prompt
11:27 vaguely, like explain how to make a team
11:29 more innovative, the model will give you
11:32 a superficial, generic, blah answer full
11:35 of buzzwords. And you'll read it and
11:38 think, "Yeah, I already knew that." So,
11:40 how do you fix that? You direct
11:42 the model away from the middle and
11:45 toward the sharper edges of its brain.
11:47 So instead of that vague prompt,
11:49 you can say this. Explain how to make a
11:52 team more innovative using ideas from
11:55 Pixar's Braintrust, Satya Nadella's strategy,
11:58 and Harvard's research. Now you
12:02 pull the model from mediocrity into
12:05 mastery by navigating it toward experts,
12:09 frameworks, and depth. What if you want to
12:11 learn about black holes and you don't
12:13 know who the experts are? No
12:17 problem. Ask AI first: list the top
12:19 experts, researchers,
12:22 research papers, and current thinking on
12:25 black holes. Then feed the same thing
12:27 back to the model and prompt:
12:30 using these experts and sources,
12:33 synthesize an original framework that
12:35 fills the current gaps in the science of
12:37 black holes or whatever it is that
12:39 you're after. That's the way you
12:42 make sure AI is not an echo chamber
12:43 anymore. But remember, you're going to
12:46 need to verify what you get. That's our
12:49 step six. Sometimes AI will tell you
12:51 things like 68% of Americans are getting
12:53 divorced. I mean, you know, it's not
12:56 true. But the scary part is AI will
12:59 sound just as confident when it's wrong
13:02 as when it's right. So, you can tell AI
13:06 100 times, stop making stuff up.
13:09 But all models are essentially
13:11 generative by design. Making
13:14 things up is why they exist. So, what do
13:17 you do about that? You simply verify.
13:20 Don't just consume. Critique. There are
13:23 five ways to separate intelligence from
13:27 illusion: assumptions, sources,
13:29 counter-evidence, auditing, and cross-model
13:31 verification. Let's take them one at a
13:34 time. Assumptions: ask, list
13:37 every assumption you made and rank
13:40 each by confidence. Second is sources:
13:41 ask, cite two independent
13:43 sources for each major claim that you
13:46 just made. Include title,
13:48 URL, and a one-line quote. Now you can
13:50 check it yourself. That's the
13:52 scaffolding behind the answer.
13:55 Counter-evidence: push it. Find one credible
13:57 source that disagrees with your
14:00 answer. Explain the discrepancies. That's
14:02 where real reasoning lives. Auditing is
14:04 the fourth one. Ask:
14:07 recompute every figure. Show your math
14:10 or code. You'll be shocked how often the
14:13 numbers change once you make it slow
14:15 down and start auditing. And
14:18 finally, cross-model verification. This
14:21 one's my favorite. I run the same prompt
14:23 in ChatGPT, Gemini, and Claude.
14:26 I take the output from one model
14:28 and ask another to critique it. Or
14:30 I feed the claims of one model
14:33 into the other and say, "Verify this."
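Cross-model verification is easy to script. In this sketch, `call_model` is a stand-in stub so the example runs end to end; in practice you would replace it with real API calls to each provider (OpenAI, Google, Anthropic). No specific SDK is assumed here.

```python
# Ask one model a question, then feed its answer to a second model
# with the instruction to verify it.

def call_model(model_name, prompt):
    """Stub standing in for a real API call -- swap in your provider's SDK."""
    return f"[{model_name}] reply to: {prompt.splitlines()[0]}"

def cross_check(question, answer_model="model-a", critic_model="model-b"):
    """Get an answer from one model and a critique from another."""
    answer = call_model(answer_model, question)
    critique_prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Verify this answer. Flag any claim that is unsupported or wrong."
    )
    return call_model(critic_model, critique_prompt)

print(cross_check("What percentage of Americans are divorced?"))
```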
14:34 That's how you separate noise
14:37 from knowledge. By the end of your third
14:39 week, you'll start feeling more
14:42 in control of your output. But here's
14:45 the problem. The best AI outputs aren't
14:47 the ones that sound the most original,
14:49 they're the ones that sound like
14:52 you. That's why step seven is about
14:55 developing taste. Most people use AI
14:58 like a vending machine. They push a
15:01 button, grab the same junk food output
15:03 everyone else gets, and call it a day.
15:05 If you do that, most people will know
15:08 you just copy-pasted it. But you are
15:10 past that now, right? It's your fourth
15:12 week. It's time to step into the ring.
15:15 Treat AI like your sparring partner.
15:18 Argue with it. Push back. Sharpen your
15:21 thinking. Sharpen its thinking. That's
15:23 where the OCEAN framework comes in. It's
15:26 how you turn generic answers into
15:29 tasteful insights, something that sounds
15:32 like you. O: original. Look at the
15:35 response. Is there a non-obvious idea in
15:39 it? If not, push it. Ask, give
15:41 me three angles no one else has thought
15:44 about. Label one as risky and recommend
15:46 the one you like most. C:
15:49 concrete. Are there names, examples, and
15:52 numbers that make sense? If not, ask.
15:55 Back every claim with one real example.
15:58 E: evident. Is the reasoning
16:00 visible? Is there enough evidence? If
16:02 not, ask. Show your logic in three
16:04 bullets. Provide evidence before
16:08 you provide the final answer. A: assertive.
16:10 Does it take a stance you could agree
16:13 or disagree with? If not, push it again:
16:15 Don't tell me what I want to hear. Pick
16:17 a side. State your thesis, defend
16:19 it, and then address the best
16:22 counterpoint. N: narrative.
16:24 What's the story? Does it flow?
16:26 Is it tight? Guide it: write it like a
16:28 story, with a hook, problem, insight, proof,
16:30 actions, whatever you want in that
16:33 story. So that's the OCEAN framework to
16:36 add taste to your output. Now, as you
16:39 apply this over 30 days, you will start
16:42 noticing something deeper. Every
16:45 prompt you write, every revision you
16:49 push, every judgment you make, you're
16:51 not just training the model, you
16:54 are training you. AI is coming whether
16:57 we like it or not. To some, it
17:00 might be triggering lots of deep fears,
17:04 but I remain a perpetual optimist.
17:07 I think AI is not here to
17:10 replace human work. It's here to restore
17:13 human worth. If you like this video,
17:15 don't forget to subscribe and
17:18 check out my most recent video here.