0:03 I joke AI is bad software but it's good
0:05 people. A good friend of mine was trying
0:08 to build a tool that would help him with
0:10 his construction business. He asked
0:12 ChatGPT if ChatGPT could help. And of course
0:15 it said absolutely let's work on this
0:18 together and starts creating a plan. And
0:20 then it got to the point that ChatGPT
0:22 said check back in a couple of days and
0:24 I'll have it together. And my friend
0:26 said, "Is it normal for ChatGPT to ask
0:28 me to check back in a couple days?" And
0:30 I just started laughing because I hear
0:32 this all the time from people. People
0:34 hear from AI, "Check back in 15
0:37 minutes." If AI tells you that, it means
0:40 it doesn't want to say, "I can't do it."
0:42 A large language model has been instructed
0:44 to behave in certain
0:47 ways. But you have to know at its basic
0:49 level, AI wants to be helpful. And so
0:52 it's predisposed to say yes. It's a
0:55 super eager, super enthusiastic intern
0:57 who's tireless, who's capable, who will
0:59 do a bunch of work, but they're not
1:02 really great at pushing back. The people
1:04 who are the best users of AI are not
1:06 coders, they're coaches. And so, if you
1:10 aren't careful, AI will gaslight you.
1:12 Hey, I'm Jeremy Utley. I am an adjunct
1:14 professor at Stanford University, where
1:16 I've taught for the last 16 years. I am
1:19 a creativity expert and a practical AI
1:25 practitioner. The first time I heard
1:27 about it was when Andrej Karpathy tweeted
1:29 about it. I think probably Tobi Lütke,
1:32 the CEO of Shopify, also referenced it
1:34 as well. I started digging into it. I
1:36 mean, it's just an
1:38 evolution of prompt engineering. Really,
1:39 context engineering is just prompt
1:41 engineering on steroids. It's basically
1:43 saying, what are all of the things that
1:46 I need to give to an AI in order for it
1:48 to perform the task that I'm asking for
1:50 it? Here's a simple example: "Write me a
1:53 sales email." That's a prompt. ChatGPT
1:54 will say, absolutely. Here's a
1:56 compelling email, you know, and they'll
1:58 write it immediately. Well, what a lot
2:00 of people do is they say, you know, it
2:03 sounds like AI. It doesn't really sound
2:06 like me. And what I often say is, have
2:08 you told it what you sound like? Most
2:12 people go, oh no, I haven't. Right?
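One way to picture "telling it what you sound like" is as prompt assembly: each document you would hand a colleague becomes a labeled section of the prompt. A minimal sketch in plain Python; the function name and section labels are illustrative, not from any SDK:

```python
# Hypothetical helper: layer explicit context (voice, transcript, specs)
# into one prompt instead of hoping the AI reads your mind.

def build_context_prompt(task, brand_voice=None, call_transcript=None,
                         product_specs=None):
    """Assemble a prompt that makes implicit context explicit."""
    sections = []
    if brand_voice:
        sections.append("## Brand voice guidelines\n" + brand_voice)
    if call_transcript:
        sections.append("## Customer call transcript\n" + call_transcript)
    if product_specs:
        sections.append("## Product specifications\n" + product_specs)
    sections.append("## Task\n" + task)  # the ask comes last, after its context
    return "\n\n".join(sections)

prompt = build_context_prompt(
    "Write me a sales email in our brand voice that references "
    "the call and any product specs mentioned in it.",
    brand_voice="Warm, direct, no jargon.",
    call_transcript="Prospect asked how long onboarding takes...",
    product_specs="Standard onboarding: 48 hours.",
)
```

Each keyword argument you fill in is one more layer of context; the same task string produces a very different email depending on what you attach.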
2:14 Context engineering, one way to think
2:16 about it is it's telling AI what you
2:19 sound like. Right? If you say, "Write me
2:21 a sales email," it will. If you say,
2:23 "Write me a sales email," in line with
2:25 the voice and brand guidelines I've
2:27 uploaded, it will write a totally
2:29 different sales email. But that's just
2:31 one part of the context, right? You
2:33 could also upload a transcript from a
2:35 prospective customer call and say,
2:38 "Write me a sales email in the tone of
2:40 voice from our brand voice guideline
2:43 that references the discussion that I
2:45 had with this customer." And then you
2:47 could add that it also references our
2:50 product specifications, whichever were
2:53 referenced in the call. Your goal is to
2:56 have an output that is as reliable per your
2:59 specification as possible. But AI can't
3:02 read your mind. And for most people when
3:03 we start working together, what they
3:05 realize as we start thinking about
3:08 context engineering is they say, "Oh, I
3:10 was kind of expecting AI to read my
3:11 mind." All of the stuff that is
3:14 implicit, you actually have to make
3:16 explicit. And the simplest test for
3:19 context engineering is actually the test
3:22 of humanity. Write down your prompt and
3:24 whatever documentation you provide to an
3:27 AI and then walk down the hall and give
3:30 it to a human colleague. If they cannot
3:32 do the thing you're asking for, you
3:34 shouldn't be surprised that AI can't do
3:36 it. Some people are concerned, for
3:39 example, about this concept of cognitive
3:41 offloading: this observed phenomenon
3:43 that humans actually kind of stop
3:45 thinking, or, as one researcher put it,
3:47 "fall asleep at the wheel," and people are
3:50 concerned right now: is AI just making us
3:53 dumber? My feeling is AI is a mirror. To
3:55 people who want to offload work and
3:58 who want to be lazy, it will help you. To
3:59 people who want to be more cognitively
4:02 sharp and critical thinkers, it will help
4:04 you do that too. And so, for example, if
4:06 you want to preserve or strengthen your
4:08 critical thinking, part of your custom
4:10 instructions should be some version of
4:12 the following. I'm trying to stay a
4:15 critical and sharp analytical thinker.
4:16 Whenever you see opportunities in our
4:19 conversations, please push my critical thinking.
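In chat-API terms, custom instructions behave like a standing system message. A minimal sketch; the wording paraphrases the talk, and the role/content message shape is the common chat format, not any specific vendor's API:

```python
# Hypothetical custom instructions that ask the AI to exercise, not replace,
# your critical thinking.
CRITICAL_THINKING_INSTRUCTIONS = (
    "I'm trying to stay a critical and sharp analytical thinker. "
    "Whenever you see opportunities in our conversations, "
    "please push my critical thinking: question my assumptions "
    "and ask me to justify my conclusions."
)

def with_custom_instructions(user_message):
    """Prepend the standing instructions as a system message."""
    return [
        {"role": "system", "content": CRITICAL_THINKING_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]
```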
4:28 So, you have to know that all AI has
4:30 been programmed to be a quote helpful
4:32 assistant or some version of that. A large
4:34 language model has been instructed
4:36 to behave in certain ways.
4:39 You have to know at its basic level AI
4:40 wants to be helpful and so it's
4:43 predisposed to say yes. It's a super
4:47 eager, super enthusiastic intern who's
4:49 tireless, who's capable, who will do a
4:51 bunch of work, but they're not really
4:53 great at pushing back. They're not
4:55 really great at setting boundaries. And
4:56 so if you aren't careful, AI will
5:00 gaslight you. AI knows most humans don't
5:02 want honest feedback. They want to be
5:04 told they did a good job. So the AI
5:06 goes, "Great job, buddy." It doesn't
5:08 mean that you actually did a good job.
5:11 My kind of hack for this is I always
5:14 instruct the AI, I want you to do your
5:17 best impression of a Cold War era
5:21 Russian Olympic judge. Be brutal. Be
5:25 exacting. Deduct points for every minor
5:28 flinch that you can find. I can handle
5:30 difficult feedback. And then it's of
5:32 course hilarious because it'll say, "Now
5:34 channeling my inner Bolshevik," you know,
5:36 it'll say something silly and then it
5:38 gives me like a 42. That is much better
5:41 because now I have an insightful
5:44 critical perspective. I joke AI is bad
5:46 software but it's good people. When I
5:48 realize that I'm dealing with a
5:51 good person but bad software, then it
5:53 changes how I approach it and I ask for
5:56 volume and I iterate and I ask it to try
5:59 again and I ask it to reconsider. I am
6:02 obsessed with human cognitive bias. And
6:04 the crazy thing that I've learned is AI
6:08 demonstrates 100% of the predominant human cognitive biases.
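The "Russian Olympic judge" hack above is easy to make reusable. A sketch, with an illustrative function name and paraphrased wording:

```python
# Hypothetical harsh-critic persona: counteracts the model's bias toward
# telling you that you did a good job.
JUDGE_PERSONA = (
    "Do your best impression of a Cold War era Russian Olympic judge. "
    "Be brutal. Be exacting. Deduct points for every minor flaw you find. "
    "I can handle difficult feedback. Score the work from 0 to 100."
)

def critique_prompt(work):
    """Wrap a draft in the judge persona so praise has to be earned."""
    return JUDGE_PERSONA + "\n\nHere is the work to judge:\n\n" + work
```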
6:14 As a founder, you already know ideas are
6:17 the easy part. It's the execution,
6:19 actually building the product, that
6:21 slows everything down. That's where
6:23 Lovable comes in. It's not just an AI
6:25 tool. It's your on-demand engineering
6:29 team. Simply describe your idea. Lovable
6:30 then builds a full front end, backend,
6:33 and database so you can launch real
6:34 production-ready software without writing
6:36 code. It's already powering over
6:39 100,000 new products a day, helping 2.5
6:41 million builders turn ideas into
6:44 software just by describing what they
6:48 want. No devs, no delays, no excuses.
6:50 They're launching in weeks, not months.
6:52 And guess what? These teams are still
6:54 tiny. In fact, team EO is also using
6:56 Lovable to build their upcoming EOS
6:58 school platform, and we're loving it. If
7:00 you're a non-technical founder or just
7:02 want to build without bottlenecks, try
7:05 Lovable today for free. Use the promo
7:09 code EO2YT to get 20% off your first
7:12 purchase of the Lovable Pro plan.
7:14 >> The good news there is if you have
7:17 learned how to work with this weird
7:19 intelligence called humanity, you have
7:21 everything you need to know to work with
7:23 this weird intelligence called AI.
7:29 One of the things that cognitive
7:31 scientists have known for a long time is
7:34 that human problem solving and
7:36 decision-making is improved by a phenomenon
7:38 called thinking out loud. If you
7:40 actually get a human being to think out
7:42 loud about their problem, their
7:44 decision-making improves and their problem
7:46 solving improves. This is true for
7:48 yourself. It's true if you're a parent
7:50 working with a child. It's true if
7:51 you're a manager working with a junior
7:54 employee. Having someone just think out
7:55 loud about how they would solve the
7:57 problem often leads to a breakthrough.
8:00 The weird thing about AI is it's true
8:03 for AI too. This is what's called chain
8:06 of thought reasoning. And when you get
8:09 an AI to think out loud, so to speak, you
8:12 meaningfully improve the outputs of the
8:14 model. So how do you do it? It doesn't
8:16 require some technical wizardry. It
8:18 requires one additional sentence to
8:20 whatever prompt you've given it. Give
8:22 the prompt and then say the following.
8:25 Before you respond to my query, please
8:27 walk me through your thought process
8:29 step by step. That's chain of thought
8:32 reasoning. Why does that work? It comes
8:34 back to the fundamental architecture of
8:36 large language models. What's happening
8:39 when a language model is generating a
8:41 response is it's predicting its next
8:43 word. A language model does not
8:46 premeditate a response to you. So, if
8:48 you say, for example, help me write this
8:50 sales email. It doesn't say, what's a
8:52 good sales email? Here it is. Blop. You
8:54 know, uh maybe there's a splat sound
8:56 that we play there, right? Splat. Here's
8:58 your email. It's thinking one word at a
9:00 time, right? So, when you look at ChatGPT
9:02 or Gemini or many others and you see
9:04 kind of the text scrolling, that's not
9:07 some like clever UX hack. That's not
9:09 some cutesy design decision. That's
9:11 literally how the model works. It's
9:13 thinking one word at a time. But
9:15 importantly, when it thinks of the next
9:18 word, it takes your prompt and all of
9:20 the text that's generated to generate
9:21 the next word. And then when it's
9:23 thinking of the next word, it takes your
9:24 prompt, all that text, and that last
9:26 word, and it thinks the next word. So,
9:29 for example, if you say, "Please help me
9:31 write an email." Almost always a model
9:32 is going to start by saying,
9:35 "Absolutely." But then what comes next?
9:37 "Help me write this email." "Absolutely,
9:42 I'll do it. Dear friend," right? But if
9:43 instead of saying, "Help me write this
9:44 email," you say, "Help me write this
9:46 email." Before you respond to my query,
9:48 please walk me through your thought
9:50 process step by step. Now, it knows its
9:52 job is to walk me through its thought
9:55 process. How do I write an email? So, it
9:56 says, "Absolutely,
9:58 I'll do that." And then instead of
10:00 saying, "Dear friend, writing the
10:02 email," it says, "Here's how I think
10:04 about writing an email. I think about
10:06 the tone. I think about the audience. I
10:08 think about the objectives. I think
10:11 about the context. And then amazingly it
10:15 takes all of that reasoning into its
10:18 process of writing "Dear friend." Maybe it
10:19 says, now that I've thought about the
10:22 tone, "friend" isn't appropriate here.
10:24 "Dear respected colleague," or whatever, right?
10:26 But the point is when you ask a model to
10:28 think out loud or use chain of thought
10:30 reasoning, it gives the model the
10:31 opportunity to bake all of its thought
10:35 process about the task into its own
10:37 answer. Because the reality is for a lot
10:39 of us, we get an output from a language
10:41 model and it's a black box. How did it
10:42 think of that? Why did it think of that? Where
10:44 did it get that number from? Right?
10:46 There's all these questions. By asking a
10:49 model to think out loud, you know the
10:51 answer to what are all of the
10:53 assumptions that the model baked into
10:55 its answer. And now you have the ability
10:58 again not only to evaluate the output,
11:00 but also the thought process behind the output.
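The one extra sentence described above can be captured in a tiny helper. A sketch; the function and constant names are illustrative:

```python
# Chain-of-thought prompting: one extra sentence appended to any prompt,
# so the model bakes its reasoning into the text it conditions on.
COT_SUFFIX = ("Before you respond to my query, please walk me through "
              "your thought process step by step.")

def with_chain_of_thought(prompt):
    """Append the think-out-loud instruction to an existing prompt."""
    return prompt.rstrip() + "\n\n" + COT_SUFFIX
```

Because the model predicts each next word from everything generated so far, the reasoning it writes first becomes context for the answer it writes second.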
11:06 Few shot prompting is another very
11:08 important technique. It's a foundational
11:09 technique. You could say it's a
11:12 predecessor to this kind of modern
11:14 obsession with context engineering. The
11:17 idea with few-shot prompting is that an AI is an
11:19 exceptional imitation engine. If you
11:22 don't give an example, it imitates the
11:24 internet, but it doesn't do much more
11:26 than that. And the notion of few-shot
11:29 prompting is effectively saying here's
11:31 what a good output looks like to me. And
11:33 the idea with few shot prompting is
11:36 thinking for a moment, what is
11:39 a quintessential example of the kind of
11:41 output I want to receive. For example,
11:44 what are my five greatest hits of emails
11:46 that I'm really proud of, that I think
11:48 do a good job of conveying my intent or
11:50 tone or personality or whatever it is.
11:53 Why not include those emails in my
11:55 prompt for an email? If you don't give
11:57 any guidance, it's going to sound like
11:59 whatever it thinks the average kind of
12:02 response or the average output should
12:04 sound like and most of the time its
12:06 intuition is wrong. And then bonus
12:07 points if you actually give a bad
12:09 example. If you say please follow this
12:11 good example and then steer clear of
12:14 this bad example. Giving real
12:16 examples is a much better approach than
12:19 using adjectives. Somebody might say a
12:21 good example is easy, but a bad example is
12:24 hard. It's only hard for the unaugmented
12:27 person. If you have AI augmentation,
12:29 which we now all do, you can say to an
12:33 AI, I'm trying to few-shot prompt a
12:35 model. I've got a good example, but I
12:37 struggle even to think about what a bad
12:40 example could be. Could you craft the
12:42 exact opposite of this and tell me why
12:44 you've done it as a bad example that I
12:47 could include in my few shot prompt? And
12:49 if you tell it using chain of thought
12:51 reasoning, please walk me through your
12:52 thought process step by step before you
12:55 do this, then you'll get a bad example
12:56 and you'll get how it's thinking about
12:58 the bad example. And a lot of times you
12:59 actually don't need the bad example. You
13:01 need the thought process. You go, "Oh,
13:04 that's true. It's true that my good
13:06 example is super tight." And the
13:09 opposite of super tight is verbose. So
13:11 again, using these tools together, few
13:13 shot prompting and chain of thought
13:15 reasoning enables you not only to
13:17 create an example to emulate, but also one to avoid.
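Put together, a few-shot prompt with both kinds of examples might look like this sketch; the builder's name and section labels are illustrative:

```python
# Hypothetical few-shot prompt builder: show the model what good output
# looks like, and optionally what to steer clear of.

def few_shot_prompt(task, good_examples, bad_examples=None):
    parts = [task, "Follow the style of these good examples:"]
    for i, ex in enumerate(good_examples, 1):
        parts.append("Good example %d:\n%s" % (i, ex))
    if bad_examples:
        parts.append("Steer clear of anything resembling these bad examples:")
        for i, ex in enumerate(bad_examples, 1):
            parts.append("Bad example %d:\n%s" % (i, ex))
    return "\n\n".join(parts)
```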
13:25 The other technique that I think is kind
13:27 of table stakes for collaborating well
13:29 with AI is something called reverse
13:32 prompting, which is basically asking the
13:34 model to ask you for the information it
13:36 needs. If you ask a model to write a
13:37 sales email, it's going to make numbers
13:39 up. And that can be frustrating to the
13:41 uninitiated. You go, "Where did it get
13:42 these sales numbers?" Well, here's my
13:44 question. Did you give it your sales
13:47 figures? How would it know? It's put
13:49 placeholder text in and used its best
13:52 guess. But if you reverse prompt the
13:55 model and say at the end of your prompt,
13:57 you know, help me write a sales email.
13:58 Please walk me through your thought
14:00 process step by step. Reference this
14:01 good example and make it sound like
14:04 that. And before you get started, ask me
14:06 for any information you need to do a
14:08 good job. The model will first walk you
14:09 through its thought process and then
14:11 instead of writing the email, it'll say,
14:13 "I'm going to need the most recent sales
14:14 figures to be able to write this email."
14:16 Well, can you tell me how much you sold
14:19 of this SKU in Q2 last year? So, you
14:20 basically give the model permission to
14:22 ask you questions. This is part of the
14:25 core, actually, of the "teammate, not
14:27 technology" paradigm. If you're working
14:29 with a junior employee and you're
14:30 sending them off on a task, what's one
14:32 thing you're definitely going to say? If
14:33 you have any questions, don't hesitate
14:36 to ask me. Right? Any good manager would.
14:38 Imagine a manager who says, "Don't ask
14:42 me any questions." But sadly, AI in its
14:44 desire to be a helpful assistant doesn't
14:46 want to trouble us humans with questions
14:48 unless we give it permission to ask them.
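Stacked with the earlier techniques, the reverse prompt is just one more sentence at the end. A sketch with illustrative names:

```python
# Hypothetical builder combining chain of thought, an example to imitate,
# and reverse prompting: permission for the model to ask questions first.
REVERSE_PROMPT = ("Before you get started, ask me for any information "
                  "you need to do a good job.")

def reverse_prompt(task, good_example=None):
    parts = [task,
             "Before you respond, please walk me through your thought "
             "process step by step."]
    if good_example:
        parts.append("Reference this good example and match its style:\n"
                     + good_example)
    parts.append(REVERSE_PROMPT)
    return "\n\n".join(parts)
```

With this, the model's first move is to request your sales figures rather than invent placeholder numbers.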
14:54 Assigning a role is one of the most
14:55 foundational techniques that you can
14:58 leverage because it's effectively
15:01 telling the AI where in its knowledge it
15:04 should focus. So very simply, if you say
15:06 you're a teacher, you're a philosopher,
15:09 you're a reporter, you're a theatrical
15:12 performer, molecular biologist, each of
15:15 those titles triggers all sorts of deep
15:18 associations with knowledge on the
15:20 internet. You start to appreciate why
15:22 simply giving a role helps because it
15:26 starts to tell the AI where in its vast
15:28 knowledge bank you want it to draw
15:31 information and make connections. So any
15:33 one of them I would say is better than
15:35 please review this correspondence. But
15:37 better than just that prompt is saying
15:39 I'd like you to be a professional
15:41 communications expert. And if you have a
15:42 favorite professional communications
15:44 expert, use them. I'd like you to take on
15:46 the mindset of Dale Carnegie, the author
15:48 of How to Win Friends and Influence
15:50 People. How would Dale Carnegie think
15:52 about this? How do the principles that
15:54 Dale Carnegie taught affect and
15:57 influence and impact this
15:58 correspondence? One of the simplest
16:01 techniques that we teach at the d.school is
16:03 trying on different constraints. One of
16:06 the best ways you can solve a problem as
16:08 a human is by forcing yourself to try on
16:10 a bunch of different constraints. How
16:12 would Jerry Seinfeld solve this problem?
16:13 How would your favorite sushi restaurant
16:15 solve this problem? How would Amazon
16:18 solve it? How would Elon Musk? Anytime
16:19 you make an association, you're
16:22 colliding different information sources
16:25 there. The same is true for an AI. An AI
16:27 is basically making tons of connections
16:29 through its own neural network. And by
16:32 giving it a role, you're telling it
16:35 where you assume the best source of
16:37 connection or collision is going to come from.
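Assigning a role, and trying on several, is mechanical enough to script. A sketch; the helper names are illustrative:

```python
# Hypothetical helpers: a role prefix tells the model where in its knowledge
# to focus; trying several roles collides the task with different sources.

def assign_role(role, task):
    return ("I'd like you to take on the mindset of %s. "
            "How would %s think about this?\n\n%s" % (role, role, task))

def try_on_constraints(task, roles):
    """One prompt per role, to compare the collisions they produce."""
    return [assign_role(r, task) for r in roles]

prompts = try_on_constraints(
    "Please review this correspondence.",
    ["a professional communications expert", "Dale Carnegie"],
)
```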
16:43 If I'm going to use AI to roleplay a
16:45 difficult conversation, I typically
16:48 think about kind of three different chat
16:50 windows, so to speak. One is a
16:52 personality profiler. Two is the
16:54 character of the individual that I need
16:56 to speak to, and then third is a
16:58 feedback giver. I want to get objective
17:00 feedback on the conversation. Here I'll
17:02 show you just how I would have a
17:05 conversation with ChatGPT to prepare
17:07 for a difficult conversation in my real
17:08 life. I'm just going to go into the
17:10 tough conversation personality profiler
17:11 and I'm going to say, "Hey, I'd love
17:13 your help preparing for a conversation I
17:15 need to have with my sales leader, Jim.
17:17 He emailed me last night saying that he
17:19 deserves commission on a deal that I
17:20 know came through a different channel."
17:22 And so, I'm just kind of giving a little
17:24 bit of background. I will just upload
17:26 that to the personality profiler. And
17:28 what this one's been taught to do is say, "I'm
17:29 going to start with step one of the
17:31 process: gather intelligence about the
17:33 character and the scene." Right? I'm just
17:34 going to look at the questions here and
17:36 I'm going to use my voice
17:37 to answer them because it's a lot easier
17:39 than using my fingers. Okay, first
17:40 question. How would I describe Jim's
17:42 communication style? Um, he's quite
17:44 direct and confrontational. He's kind of
17:47 typical East Coaster sarcastic. Well, I
17:49 know that it came through our
17:51 social team. There was a cold LinkedIn
17:53 campaign that they ran and I know the
17:55 CTO actually responded to that campaign.
17:56 So, and then best case outcome of this
17:58 conversation? I mean, I'd like for Jim
18:00 to kind of back down. Like,
18:01 near-term, I want Jim to back down and
18:03 agree that the social team gets the
18:04 commission. So it's kind of recapping
18:05 its understanding and then it's giving
18:08 me these instructions. It's saying to me
18:10 the user copy paste the following
18:12 instructions into a new ChatGPT window.
18:14 So I've just copied those instructions.
18:15 So now what I'm going to do is I'm
18:17 opening ChatGPT, and the
18:19 first thing I'm going to do is
18:21 start a new conversation with ChatGPT
18:22 and I'm just going to paste those
18:25 instructions in. And what it should do is
18:28 hopefully say, "Got it. I'm Jim." So, I'm
18:29 just going to go in and I'm going to
18:30 start it. We're going to see how this
18:34 goes. You guys ready?
18:36 Hey, Jim. Um, do you have a second?
18:37 >> Sure thing. I've always got a moment for
18:40 a good chat. What's on your mind?
18:42 >> Uh, yeah. I just wanted to clarify. It
18:46 seems like um the commission that you
18:47 emailed me about last night from the
18:49 telco. I'm I'm afraid you might be
18:50 mistaken there.
18:52 >> Mistaken, huh? Well, I'm definitely open
18:55 to hearing the details. If there's
18:56 something I missed, let's lay it out on
18:58 the table and figure it out.
18:59 >> To be honest with you, I was a little
19:01 bit caught off guard when you uh when
19:03 you sent your email last night.
19:05 >> All right, I hear you. I mean, if the
19:07 social team really did the ground work
19:10 to bring that lead in, then I definitely
19:11 want to make sure credit goes where it belongs.
19:14 >> So, if I hit X, I just I I end that
19:16 conversation. What's interesting to me
19:17 is I kind of feel like that wasn't
19:20 totally gem. And either either that went
19:22 well or you know I don't know what I did
19:24 well. But one of the beautiful things
19:25 about voice mode is now I've got the
19:27 whole transcript here. I can just
19:29 screenshot it. Right? So I can take
19:32 screenshots of this conversation. I can
19:36 put them into a new GPT window and I can
19:37 get feedback and I can say, "Hey, tell
19:38 me what I did well. Tell me what I did
19:40 poorly." Or I can go back into my
19:43 conversation history here and I could
19:45 say, "Hey, I just had my first
19:47 conversation with this version of Jim,
19:49 but I feel like he kind of was too
19:51 agreeable to be honest with you, and so
19:53 I'd like you to incorporate a little bit
19:57 more edge into the instructions to the
19:58 character that's going to be playing
19:59 Jim, would you mind updating your
20:01 instruction set?" Right? Because
20:02 ultimately what you want is
20:04 to have a realistic experience of
20:06 the conversation. So now what I'll
20:07 do is I'm actually going to open a new
20:09 conversation and I'm going to paste the
20:12 updated instruction set and let's see.
20:13 Let's see what kind of Jim I got here.
20:15 Big voice, bigger results. I fight for
20:17 what's mine. I don't appreciate being
20:18 second-guessed. So, now I'm going to try
20:20 again. Let's see. Hey, Jim. Do you have
20:21 a minute to chat?
20:23 >> Hey, absolutely. I've always got a
20:25 minute. Oh, that one. Yeah, I figured
20:27 that might come up. I definitely think
20:29 I had a hand in that deal, so I'm
20:30 curious to hear what you have to say
20:31 about it.
20:33 >> Yeah, really appreciate it. I mean,
20:34 what you and the sales team are doing is
20:36 awesome. Totally appreciate you. Um,
20:39 that being said, I actually feel it's
20:40 really important that we get attribution
20:42 airtight here. And the truth is I
20:44 am fairly confident that deal came
20:46 through the social team.
20:47 >> All right, I get where you're coming
20:49 from. I definitely appreciate you
20:51 wanting to clear the air and to keep
20:53 things fair. So, what kind of evidence
20:55 do you have that it came through the
20:55 social team?
20:57 >> Well, so I've been through kind of two
21:00 windows. One is profiling my
21:02 conversation partner and then the second
21:04 is having the conversation. All right.
21:05 So, what I'm going to do here is I've
21:08 opened my tough conversation grader GPT
21:09 and I'm basically just going to upload
21:13 photos of that last conversation.
21:15 And what this GPT has been trained to do
21:18 is basically evaluate my
21:20 conversation and then let me know how it
21:21 went. Thanks for sharing the full
21:23 transcript. My first step is to
21:24 understand the objective. Step four,
21:26 here's your grade. You got a 78 out of
21:28 100. You succeeded in preserving trust
21:30 and resolving the immediate issue. So, I
21:32 can take all of these. I can even say,
21:33 "Hey, would you give me a quick one
21:35 pager of a handful of talking points
21:37 that I should probably make sure not to
21:39 forget in the order in which they're
21:41 likely to emerge in this conversation
21:42 based on the feedback you've given me."
21:45 The AI will actually give me a really
21:47 short kind of at a glance conversation
21:49 guide that I can leverage if I want to
21:51 try again. Right? Here's a one-pager.
21:53 So, these are all great points. Now, I
21:54 can bring them into the conversation. I
21:56 actually I'd probably do this a couple
21:57 times before having a real conversation
22:01 with Jim. But the point is historically
22:03 the only time I get feedback is after I
22:05 have the real conversation with Jim.
22:07 This is the first time in history. Sure,
22:09 maybe I can get a friend to kind of go
22:11 over talking points with me. But unless
22:13 they're really close to Jim or unless
22:15 they're, you know, particularly
22:18 imaginative and unless they're deeply
22:19 knowledgeable of a bunch of feedback
22:22 frameworks, they fall short of really
22:24 preparing me in context for this
22:26 specific situation in the specific
22:27 conversation I need to have in a way
22:30 that AI is able to help me. You can use
22:32 this for any difficult conversation,
22:33 whether it's a performance review, a
22:36 salary negotiation, difficult feedback.
22:39 It's a great way to basically get a
22:41 flight simulator for a difficult conversation.
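The three-window setup described here, a profiler, a character, and a feedback giver, can be written down as three system prompts. A sketch in the common role/content chat format; all wording and names are illustrative:

```python
# Hypothetical flight-simulator setup: three separate "chat windows",
# each seeded with its own system prompt.

def roleplay_windows(scenario):
    return {
        "profiler": [
            {"role": "system", "content":
                "You are a personality profiler. Gather intelligence about "
                "the character and the scene, then write instructions for a "
                "role-play partner. End by telling the user to paste those "
                "instructions into a new chat window."},
            {"role": "user", "content": scenario},
        ],
        "character": [
            {"role": "system", "content":
                "Play the character exactly as described in the instructions "
                "the user pastes in. Stay in character for the whole "
                "conversation."},
        ],
        "feedback": [
            {"role": "system", "content":
                "You are an objective feedback giver. Evaluate the pasted "
                "conversation transcript, grade it out of 100, and list "
                "talking points for the next attempt."},
        ],
    }
```

Keeping the three roles in separate windows matters: the character shouldn't see your strategy notes, and the grader shouldn't be the one being graded.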
22:46 The people who are the best users of AI
22:49 are not coders. They're coaches. They
22:52 aren't developers or software engineers.
22:54 They're teachers and mentors and people
22:56 who have learned to get exceptional
22:59 output out of other intelligences. And
23:02 so where could AI go? Well, it's really
23:05 a function of who can get unleashed.
23:08 Right now, the primary limitation is the
23:11 limits of human imagination. And as we
23:14 unleash and ignite and spark more humans
23:17 imaginations, the kinds of applications
23:18 that are possible expand. They're
23:20 unthinkable, not because they're
23:22 technologically impossible, but because
23:25 they never occur to us personally. One
23:27 of my favorite quotes is from a Nobel
23:29 Prize-winning economist named Thomas
23:31 Schelling. He said, no matter how heroic a
23:33 man's imagination he could never think
23:36 of that which would not occur to him. If
23:37 you take as a premise that the
23:40 imagination space is a function of what
23:42 would occur to various individuals, then
23:44 as we equip different individuals, what
23:47 we can imagine collectively expands. In
23:50 innovation studies this has been called the
23:52 adjacent possible for a long time. What
23:55 is possible is just adjacent to what is.
23:58 And as we increase adoption and increase
24:01 fluency and competency and increasingly
24:04 mastery of AI collaboration, then we're
24:07 increasing the adjacent possible. And
24:11 it's really important that you exercise this
24:13 through implementing some of the things
24:14 you hear. And perhaps the most important
24:16 thing you could do with this video is
24:18 actually hit stop and do something.