0:03 I took Google's 8-hour Gen AI leadership
0:05 course so you don't have to. And once
0:06 we've gone through the condensed
0:08 version, I'll share a bonus at the end
0:11 outlining the exact steps to follow if
0:12 you want to pass Google's official
0:15 certification exam. I'm Ali Salam. I
0:16 currently work as a director in a tech
0:18 company. And on this channel, I'll help
0:20 you turn tech and finance into your
0:23 personal advantage. Let's go. The course
0:26 is structured into five modules. The
0:28 first module explores the full
0:30 capabilities of generative AI. The
0:32 second module unlocks foundational
0:35 concepts of generative AI by defining it
0:37 and differentiating it from AI and
0:39 machine learning. The third module
0:41 provides a comprehensive overview of the
0:44 Gen AI landscape. The fourth module
0:46 teaches you how to use Gen AI apps by
0:48 covering key prompting techniques and
0:50 concepts like grounding and retrieval
0:52 augmented generation to transform your
0:55 work. And the last module explores how
0:57 to build and deploy generative AI
0:59 agents, covering their core components,
1:01 advanced techniques, and outlining a
1:03 plan for transforming your organization.
1:06 Module one is called Gen AI beyond the
1:08 chatbot. Let's kick things off with the
1:11 definition. Generative AI is a specific
1:13 type of AI that focuses on generating
1:16 new content and ideas. It's multimodal,
1:18 meaning that it can work with text,
1:20 images, code, and more. And Google
1:23 groups its capabilities into four
1:25 categories. The ability to create new
1:28 content, summarize information, discover
1:30 information at the right time, and
1:33 automate tasks that used to be manual.
1:35 And at the core of these capabilities
1:37 is something called foundational
1:39 models. Examples would be Google's
1:43 Gemini, OpenAI's GPT, or Anthropic's
1:45 Claude. And if you're wondering how
1:47 foundational models fit into the
1:48 broader landscape of artificial
1:51 intelligence, don't worry about it.
1:53 We'll cover that in module 2. But for
1:56 now, just remember that foundational
1:58 models share three key features. They
2:01 are trained on diverse data, making them
2:04 flexible across many use cases, but at
2:08 the same time adaptable to niche domains
2:10 through targeted training. So trained on
2:12 diverse data, flexible and adaptable.
2:15 And the way we interact with them is
2:17 through something called prompting which
2:19 is essentially what you do when you're
2:22 talking to ChatGPT, either in the chat
2:24 or through your voice. Now since this is
2:26 a Google course there are two key
2:29 products that you need to be aware of
2:31 and first is Gemini which is the
2:34 foundational model behind Gemini the app
2:36 which is Google's equivalent of
2:38 ChatGPT. It also powers Workspace
2:41 integrations like Docs, Gmail, and
2:43 Slides where you can draft emails,
2:45 generate images, or just summarize
2:48 notes. Also, Gemini powers Google
2:50 Cloud, where it helps you write
2:53 and debug code or analyze large amounts
2:55 of data in BigQuery. And the second
2:57 product that you need to be aware of is
3:00 a product called Vertex AI, which is
3:02 Google's unified machine learning
3:04 platform. It gives you access to models
3:07 like Gemini and lets you fine-tune them
3:08 and eventually drop them into
3:10 production. And if you haven't heard
3:12 about Vertex AI, don't worry about it.
3:15 It's mainly an offering that is targeted
3:17 at businesses rather than retail users.
3:19 And lastly on module one are two more
3:21 high-level topics that you need to be aware
3:23 of. The first is that Google calls
3:25 itself an AI first company. That means
3:28 that AI is integrated across their
3:30 ecosystem, built with security and
3:32 ethics at its core and most importantly
3:35 for you is that they advocate for an
3:37 open approach which is awesome. It
3:39 essentially means that you're not locked
3:41 into using models like Gemini. Instead,
3:44 you can plug in other models like GPT,
3:46 Claude or Llama when you're setting up
3:49 your AI workflows in Google's ecosystem.
3:52 And the second high-level topic comes down
3:54 to the strategy of how you apply AI
3:56 adoption in your company. Google
3:59 advocates a combined top-down and
4:01 bottom-up approach where leaders set the vision
4:04 and priority of what AI should achieve.
4:06 Whereas employees on the ground try to
4:09 identify practical applications of AI
4:12 within their workspace and feed them up.
4:14 When done right, these two streams will
4:16 help reinforce each other. Moving on to
4:18 module two, which is called unlock
4:21 foundational concepts. And this module
4:23 is all about connecting all of those
4:25 different terms that you've heard in AI
4:27 and showing how they all fit together.
4:29 And we'll start at the top. Artificial
4:32 intelligence is simply machines doing
4:34 tasks that would normally require human
4:38 level intelligence. Inside AI, you have
4:40 machine learning, which are algorithms
4:43 that learn from data to perform specific
4:45 tasks. And a subset of machine learning
4:47 is called deep learning which uses
4:50 multi-layered neural networks to
4:52 identify complex patterns. And within
4:55 this space you will find generative AI
4:57 which, again, is machine learning that
4:59 focuses on creating new content. And at
5:02 the core are the foundational models
5:04 which are machine learning models that
5:06 are trained to execute a wide variety of
5:09 tasks. And lastly, a subset of those are
5:11 large language models which are
5:13 specifically designed to understand and
5:15 generate human language. Now let's talk
5:18 about the fuel for these models which is
5:21 data. And really there are two types of
5:23 data. You have structured data which is
5:26 clean, organized, oftentimes divided
5:29 into columns and rows. Think about
5:31 databases or spreadsheets. And then you
5:32 have the other type of data which is
5:34 oftentimes referred to as unstructured
5:37 data. This is usually raw, messy data
5:39 that doesn't have a predefined
5:41 structure. Think about data like
5:43 customer emails, social media posts, or
5:46 call transcripts. And our AI models can
5:48 of course work with both. But what
5:50 really matters are two things. And the
5:52 first is quality of the data. As
5:55 famously said by someone very smart,
5:58 garbage in equals garbage out. And the
6:00 second thing is accessibility, meaning
6:02 that the data needs to be available at
6:06 the right time in the right format. Now
6:08 the data can include numbers, dates,
6:12 text, images, even sound. But it needs
6:15 to comply with these two conditions. And
6:18 once we have the right data, models can
6:20 start to learn using one of three
6:22 approaches. The first is called
6:24 supervised learning where models are
6:26 trained on labeled data to predict
6:28 outcomes. And the second approach is
6:30 called unsupervised learning where
6:33 models get trained on unlabeled data
6:35 to try and find complex patterns. And
6:37 the last approach is called
6:39 reinforcement learning where models
6:40 learn through trial and error and
6:42 feedback loops. And by the way, if
6:45 you're a nerd like me, reinforcement
6:47 learning is what powered those Starcraft
6:49 and Dota bots a couple of years ago that
6:51 ended up beating all the pro players.
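If it helps to see the difference in code, here's a toy sketch of the three set-ups; the data and the reward function are invented for illustration, not from the course:

```python
# Supervised learning: labeled pairs (input, known outcome).
labeled = [("email about an invoice", "work"), ("weekend BBQ pics", "personal")]

# Unsupervised learning: raw inputs only; the model must find structure itself.
unlabeled = ["email about an invoice", "weekend BBQ pics", "quarterly report"]

# Reinforcement learning: the agent acts and gets a reward signal back.
def reward(action: str) -> int:
    """Stub environment: +1 for the 'good' action, -1 otherwise."""
    return 1 if action == "good_move" else -1

print(reward("good_move"))  # 1
print(reward("bad_move"))   # -1
```

The point is just the shape of the data: supervised needs labels, unsupervised doesn't, and reinforcement learning replaces labels with a feedback loop.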
6:53 Anyways, I digress. Let's talk about how
6:55 all of this fits into practice. Google
6:57 frames the machine learning life cycle
7:00 into four stages. First you have data
7:03 preparation where you collect, clean and
7:05 transform raw data. From there you do
7:06 your model training which essentially
7:08 builds your model based on the data.
7:10 Third step is deployment where you put
7:12 your model in production. And lastly is
7:15 management where you monitor, maintain
7:17 and improve your model over time. So in
7:20 short, module two connects the dots: what
7:23 AI really is, how data drives it, and
7:25 the way machines learn. Kind of like a
7:27 simple road map that makes everything in
7:30 the gen AI space feel less confusing.
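As a rough sketch, the four stages of the life cycle can be lined up as a tiny pipeline; every function name and the stub "model" here are invented for illustration, not Google's API:

```python
def prepare(raw):                 # 1. data preparation: collect, clean, transform
    return [x.strip().lower() for x in raw if x]

def train(data):                  # 2. model training (a dict stands in for a model)
    return {"vocab": set(" ".join(data).split())}

def deploy(model):                # 3. deployment: expose the model as a predictor
    return lambda text: sum(w in model["vocab"] for w in text.split())

def manage(predictor, sample):    # 4. management: monitor behavior over time
    return {"known_words_in_sample": predictor(sample)}

predictor = deploy(train(prepare([" Hello World ", "", "hello again"])))
print(manage(predictor, "hello world"))  # {'known_words_in_sample': 2}
```

In a real project each stage is a major effort on its own, but the flow from raw data to a monitored production model is exactly this.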
7:32 Module three is called navigating the
7:34 landscape, and it covers two main
7:36 topics. First is what you need to
7:38 consider before starting a Gen AI
7:41 project, and second are the five layers
7:43 of the AI landscape. Before starting any
7:46 Gen AI project, Google says that you
7:49 should assess two areas, needs and
7:51 resources. And it breaks down the needs
7:53 into six categories for evaluation.
7:56 First is scale. And scale refers to the
7:58 overall breadth of the use case across
8:00 the organization such as the number of
8:03 users, data volume, and workflows.
8:05 Second is customization. How tailored
8:07 does the AI need to be in order to fit
8:09 your organizational needs? Are
8:11 general-purpose models enough or do you need
8:13 something that is fine-tuned? And third
8:15 is user interactions. How are people
8:18 going to engage with the AI? Will it be
8:20 through a chat? Will it be embedded into
8:22 certain workflows? Or will it just run
8:23 automatically in the background? And
8:26 fourth is privacy. How sensitive is the
8:27 data that is going to be involved in the
8:30 workflow? Is it public information,
8:32 internal knowledge or regulated data
8:35 like in healthcare or finance? Fifth is
8:38 latency. So how fast does the AI need to
8:40 respond? Is a few seconds okay or do you
8:42 need something that is real time? And
8:44 the last topic is connectivity which are
8:46 the network conditions that the model
8:48 needs to run under. Will it always be
8:50 cloud-connected, or does it need to
8:52 function in low-connectivity environments
8:54 such as factories, fieldwork, or maybe
8:57 even edge devices? Shifting gears into
8:59 the second assessment category, which
9:01 are your resources. This is actually
9:03 super straightforward. It boils down to
9:05 people, money, and time. So, do you have
9:07 access to the right talent such as AI
9:09 expertise? What's your project budget
9:11 and what's the project timeline? And
9:13 really, that's the list to consider if
9:15 you're going to start an AI project.
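One way to keep that checklist handy is to capture it as a structure; the field names mirror the course's six needs and three resources, while the example values are made up:

```python
from dataclasses import dataclass

@dataclass
class GenAIProjectAssessment:
    # Needs
    scale: str             # breadth: users, data volume, workflows
    customization: str     # general-purpose vs fine-tuned
    user_interaction: str  # chat, embedded in a workflow, or background
    privacy: str           # public, internal, or regulated data
    latency: str           # a few seconds OK vs real time
    connectivity: str      # always cloud-connected vs low-connectivity/edge
    # Resources
    people: str
    budget: str
    timeline: str

# Hypothetical internal support bot, for illustration only.
support_bot = GenAIProjectAssessment(
    scale="500 internal users", customization="general-purpose is enough",
    user_interaction="chat", privacy="internal knowledge",
    latency="a few seconds is fine", connectivity="always cloud-connected",
    people="one ML engineer", budget="small", timeline="one quarter",
)
print(support_bot.privacy)  # internal knowledge
```

Filling this in honestly before writing any code is most of the assessment.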
9:16 Let's take a look at the second part of
9:18 the module, which covers the five layers
9:21 of the AI landscape. The first layer is
9:25 Gen AI powered applications. This layer
9:27 you're likely very familiar with. It's
9:29 going to be your Claude, ChatGPT,
9:32 Llama, and Friends. One level deeper are
9:34 the agents. They are autonomous systems
9:37 that use foundational models to reason
9:40 and act. They operate in reasoning loops:
9:42 observing, interpreting, and iterating.
9:45 They use tools to interact with data,
9:47 software, and hardware. And they rely on
9:50 models such as ChatGPT as the brain of
9:52 the system. And an example here could be
9:54 as simple as an AI assistant that
9:57 researches prospects and updates the CRM
9:58 once it's found the relevant
10:00 information. From there, you have
10:01 platforms. These are managed
10:04 environments that provide the tools and
10:06 infrastructure to build, deploy, and
10:09 manage AI. And here's where Vertex AI
10:12 comes in. It will let you do two key
10:15 things. First is Model Garden, which
10:17 lets you pick Google models, third-party
10:20 options, or even open-source models. And
10:22 really, this is a nod to Google's
10:23 approach to openness when it comes to
10:25 AI, which we discussed in the first
10:28 module. The second is model building, either fully custom at scale
10:30 with various machine learning frameworks
10:33 or via something called AutoML, which
10:35 automates the creation and training of your
10:37 models for users with limited technical
10:40 knowledge. The fourth layer is the
10:42 models, the core engines like
10:44 Gemini, and it's important to
10:47 distinguish: Gemini the model powers
10:51 applications while Gemini the app is the
10:53 interface you interact with when you are
10:54 chatting with it in your browser. And
10:57 the last layer is the infrastructure
11:00 layer, the foundational GPUs, TPUs, and
11:03 servers. Most of it runs in the cloud,
11:05 but sometimes you'll hear about edge AI,
11:07 where compute happens locally on the
11:10 device. And a good example use case is
11:12 self-driving cars, which just can't
11:14 afford cloud latency when making split
11:17 decisions. So, navigating the AI
11:19 landscape means checking your needs and
11:21 resources first, then understanding the
11:23 five layers powering the AI landscape
11:25 from the apps that you and I use all
11:27 the way down to the infrastructure.
11:30 Quick pause. I have a favor to ask. If
11:32 you're enjoying the video so far, you
11:34 should consider becoming a part of the
11:37 small but very exclusive group of around
11:40 5% of viewers that have subscribed so
11:42 far. And if you've already subscribed, I
11:44 just want to say thank you. You're the
11:46 reason why this channel can keep growing
11:49 and keep getting better. Next is module
11:50 four, which is called transform your
11:53 work. And module four is about how to
11:56 actually work with Gen AI in practice
11:58 through better prompting, refining
12:00 outputs, and streamlining workflows.
12:03 Let's talk about prompting techniques.
12:06 And again, prompting is simply how you
12:08 talk to the model, like when you use
12:11 ChatGPT or Gemini. And Google highlights
12:15 three key techniques. First is to assign
12:19 a role. So give the model a persona. For
12:22 example, act as a lawyer or act as a
12:26 sales coach. This changes its tone,
12:28 style, and focus. The second technique
12:31 is something called prompt chaining.
12:33 Don't expect a perfect answer in one
12:36 prompt. Instead, treat it like a back
12:38 and forth conversation, refining the
12:40 outputs in a step-by-step manner. And
12:42 the next technique is something called
12:46 zero-, one-, or few-shot prompting. In the world
12:49 of AI, the word shot refers to the
12:52 number of examples that you provide in
12:55 your prompt. So, for example, zero shot
12:58 means no example, and that's great for
13:01 simple tasks. One shot means that you
13:03 provide one example, which is great if
13:05 you want to give your model a bit of
13:08 context. And lastly, few shot means
13:11 multiple examples. And this is great for
13:14 complex tasks. So role assignment,
13:17 prompt chaining, and shot selection.
13:19 Those are essentially your three levers.
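Shot selection in particular is easy to sketch as plain string assembly; the classification task and examples below are invented for illustration:

```python
def build_prompt(task: str, examples=None) -> str:
    """Assemble a prompt with zero, one, or several worked examples."""
    examples = examples or []  # zero-shot: no examples at all
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return (shots + "\n" if shots else "") + f"Input: {task}\nOutput:"

# Zero-shot: fine for simple tasks.
zero_shot = build_prompt("Translate 'cat' to French")

# Few-shot: worked examples give the model context for the pattern.
few_shot = build_prompt(
    "great product, fast shipping",
    examples=[("terrible, broke in a day", "negative"),
              ("love it, works perfectly", "positive")],
)
print(few_shot)
```

Role assignment is just a line prepended up front ("Act as a sales coach."), and prompt chaining is calling the model again with its previous answer folded into the next prompt.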
13:22 Next in module four is model guidance
13:25 and refinement. And the key concept here
13:27 is something called grounding. It
13:29 essentially means reducing a model's
13:32 hallucinations by connecting the AI to
13:35 real verifiable sources of information.
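Conceptually, grounding looks like a retrieve-then-prompt pipeline. Here's a toy sketch with an invented two-document store and a stubbed-out generate step:

```python
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am-5pm CET.",
]

def retrieve(question: str) -> list:
    # Naive keyword match standing in for a real vector search.
    words = question.lower().split()
    return [d for d in DOCS if any(w in d.lower() for w in words)]

def augment(question: str, facts: list) -> str:
    # Stuff the retrieved facts into the prompt as context.
    return "Context:\n" + "\n".join(facts) + f"\nQuestion: {question}"

def generate(prompt: str) -> str:
    # Stub: a real system would send the prompt to an LLM here.
    n = prompt.count("\n") - 1
    return f"(model answer grounded in {n} retrieved line(s))"

print(generate(augment("When are refunds processed?", retrieve("refund timing"))))
```

The model now answers from the retrieved facts instead of its memory, which is exactly what cuts down hallucinations.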
13:37 And the most common method of grounding
13:40 is something called RAG, which is short
13:43 for retrieval-augmented generation. Step
13:46 one is retrieve where the model
13:47 retrieves relevant information from
13:50 external sources. And step two is
13:52 augment where the retrieved information
13:55 is incorporated into the prompt of the
13:57 large language model. And step three is
14:00 generate where the LLM processes the
14:02 prompt and generates a response. And the
14:05 last topic for module four is that
14:07 Google recommends ways to make prompting
14:10 more efficient and repeatable. In
14:12 summary, it comes down to three things.
14:15 First is reusing prompts. Store your
14:18 best prompts as templates. Second is to
14:22 use saved info in Gemini. You can store
14:24 context in the model so you can recall
14:26 it consistently. And the third is to
14:30 explore Gems. This is essentially a
14:33 personalized AI assistant inside Gemini
14:36 that bundles templates, instructions,
14:38 and guided interactions into one
14:42 workflow. So module 4 is all about
14:44 control, prompting well, grounding your
14:46 outputs, and streamlining your workflows
14:49 so that AI becomes a reliable teammate
14:51 instead of just a novelty. And with
14:53 that, let's shift into the last module
14:56 of the course called transform your
15:00 organization. And module five goes one
15:03 level deeper on agents, reasoning,
15:06 tooling, and customer engagements.
15:08 Starting off on the agents, Google
15:10 categorizes them into two main types.
15:13 You have deterministic agents that are
15:16 traditional rule-based systems that
15:19 follow a strict predefined script. They
15:20 are predictable and designed for
15:23 specific tasks with a limited set of
15:26 actions but lack flexibility to handle
15:28 unexpected inputs. Think of simple
15:31 chatbots that only respond to commands like
15:34 "check order status." And the second type
15:36 is generative agents, which are built on
15:39 large language models. These agents use
15:42 natural language and can reason, learn
15:45 and adapt on the fly. Their behavior is
15:47 not hard-coded. Instead, they generate
15:50 responses dynamically, leading to a more
15:53 conversational and adaptive style of
15:55 interaction. Think of an AI assistant
15:57 that can brainstorm ideas or write
16:00 creative stories. And the key
16:02 distinction is that deterministic agents
16:05 follow a rigid script while generative
16:09 agents reason and respond dynamically.
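The contrast is easy to see in a toy sketch; the command script, the lambda standing in for an LLM call, and all the strings below are invented for illustration:

```python
def deterministic_agent(command: str) -> str:
    # Rigid script: only commands it was explicitly programmed for work.
    script = {"check order status": "Your order is in transit."}
    return script.get(command, "Sorry, I don't understand that command.")

def generative_agent(message: str, llm=lambda p: f"[LLM reply to: {p}]") -> str:
    # Dynamic: any natural-language input is handed to the model.
    return llm(f"You are a helpful assistant. User says: {message}")

print(deterministic_agent("check order status"))  # scripted answer
print(deterministic_agent("write me a poem"))     # falls over: unexpected input
print(generative_agent("write me a poem"))        # handled dynamically
```

The deterministic agent is predictable but brittle; the generative one trades that predictability for flexibility.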
16:11 And the enabler of that flexibility in
16:13 generative agents is something
16:15 called a reasoning loop. And the
16:17 reasoning loop is how the agent thinks through
16:19 a problem to find the solution. It's all
16:22 about using different thinking styles to
16:24 get to the right answer. And Google
16:26 highlights three key styles. The first
16:30 one is called ReAct, short for reason
16:33 and act. Think of ReAct as the agent who
16:35 reasons out their next move before
16:37 taking action. For example, if you ask
16:40 an agent to find a good restaurant, it
16:42 first reasons, "I need to find a place
16:45 that is highly rated and nearby." And
16:47 then it acts by using a search tool to
16:51 find one. This loop of reasoning and
16:53 acting helps it tackle really complex
16:56 questions. And the second thinking style
16:58 is something called chain of thought.
17:02 Think of an agent who is thinking out
17:04 its thought process step by step instead
17:06 of just jumping straight to the final
17:09 answer. The agent breaks down a larger
17:11 problem into smaller logical steps. For
17:14 example, given a tricky math problem,
17:17 the agent would first show how it adds
17:19 the numbers, then how it subtracts them
17:22 on the next line, and so on. This approach makes
17:25 the reasoning visible and more accurate.
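A minimal ReAct-style loop, printing its chain of thought at each step, might look like the sketch below; the hard-coded stub_reason stands in for a real LLM call, and the restaurant data is invented:

```python
# A toy tool the agent can call to act on the world.
TOOLS = {"search": lambda q: "Luigi's, rated 4.8, 200m away"}

def stub_reason(goal, observation):
    # Return (thought, action). A real agent would ask the LLM here.
    if observation is None:
        return "I need a highly rated place nearby.", "search"
    return f"Found it: {observation}", "finish"

def react_loop(goal, max_steps=5):
    observation = None
    for _ in range(max_steps):
        thought, action = stub_reason(goal, observation)
        print("Thought:", thought)           # chain of thought made visible
        if action == "finish":
            return thought
        observation = TOOLS[action](goal)    # act with a tool, then observe
    return "gave up"

print(react_loop("find a good restaurant"))
```

The loop is the whole idea: reason about the next move, act with a tool, observe the result, and repeat until the goal is met.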
17:27 And the last thinking style is something
17:29 called meta-prompting. Very advanced
17:32 word. This is the equivalent of an agent
17:34 who tells a junior agent how to do their
17:37 job. And it's using one prompt to guide
17:41 the AI to create, change, or understand
17:43 other prompts. It's a powerful technique
17:46 for fine-tuning the AI's behavior and to
17:48 make sure that it follows a specific
17:51 instruction more precisely. Now, in
17:54 order for an AI to have the ability to
17:57 act, it needs access to tools, bringing
17:59 us into the next key concept within the
18:02 module, which is tooling for agents. And
18:04 Google boils them down into four
18:07 categories. First, you have extensions.
18:09 And an extension could, for example,
18:11 connect the agent to a live weather API
18:14 to get the current forecast. Second is
18:16 function. And a function allows the
18:18 agent to execute a specific action like
18:21 sending a confirmation text. Third is
18:24 data stores. And a data store provides
18:26 the agent with access to a company's
18:28 product catalog for example to answer
18:30 customer questions. And lastly, you have
18:33 the plugins. And a plug-in gives the
18:35 agent a new capability such as
18:37 generating an image from a text
18:40 description. And together they all make
18:43 agents not just conversational but
18:45 actually useful in real workflows. And
18:48 the last piece outlines one of Google's
18:50 core offerings that applies all of this
18:53 in practice and that is the customer
18:55 engagement suite. This suite provides
18:57 tools to help a company effectively
19:00 engage with its customers and can be
19:02 built directly on top of Google's
19:04 contact center as a service. And it
19:07 really has three main features. The
19:09 first is conversational agents. Those
19:12 are AI chatbots that act as first-line
19:14 support for your customers. The second
19:17 is agent assist which is a feature to
19:20 support your live human agents during
19:23 customer interactions. And the last
19:25 piece is something called conversational
19:28 insights which at its core provides
19:31 analytics on your customer communication
19:35 to help you draw deeper insights. So
19:38 module five shows how agents go from
19:41 simple scripts to adaptive systems with
19:44 reasoning tools integration and how
19:47 Google is packaging this into an
19:49 enterprise solution. And that is the
19:52 8-hour course summarized for you in a few
19:55 minutes. Now if you're planning to take
19:58 the certification exam, you will want to
20:00 have a plan in place. I'll tell you what
20:03 I did to pass and what I would do
20:05 differently if I were to do it again.
20:08 The exam was not easy. I wouldn't say it
20:10 was super difficult. I probably put it
20:13 in the moderate difficulty level, but
20:15 you will definitely need to get
20:17 prepared. So, I'll give you a three-step
20:20 approach. Step one is to skim through
20:23 the official course plus the study guide
20:25 and flag the areas where you feel less
20:29 confident. For me, that was definitely
20:31 Google's own offerings. Things like
20:34 Vertex AI and Agentspace, which I
20:36 honestly didn't have that much exposure
20:39 to before. So, I had to spend quite a
20:41 bit of time familiarizing myself with
20:43 them. I'll leave a link to the course
20:45 and the study guide in the description,
20:48 and then I'll also pin it in the chat.
20:50 Step two is to do the tests in each
20:53 course module. That will help you lock
20:55 in the fundamentals. From there, you can
20:57 move over to Google's official mock test
20:59 to get a feel for the full exam. And
21:01 I'll leave a link for it in the
21:03 description and in the chat as well. And
21:06 step three is to build the mileage. So
21:08 Google's module tests and mock exams
21:11 alone won't be enough. You will need
21:14 additional practice content. So find
21:16 online tests to practice. I use
21:20 something called Skillert Pro, which
21:22 costs around 20 bucks for a bunch of
21:25 practice tests. I'm not affiliated. In
21:27 fact, I have never heard about them
21:29 before. It's just a path I took. So, in
21:30 case you want to do the same, I'll leave
21:32 the link for that in the description as
21:34 well. All right, let's shift gears into
21:36 some general good to knows about the
21:39 exam. So, the exam itself is 90 minutes
21:43 long with around 40 to 60 scenario-based
21:45 questions. That means that they will
21:48 come in the format of something like you
21:50 are working for a pharmaceutical company
21:52 that deployed AI agents to summarize
21:55 client data. Analysts say that summaries
21:58 are inaccurate. What do you do about it?
21:59 And from there, you get to choose from
22:03 four options. Here's my top tip.
22:06 Don't just skim the
22:09 answers for whichever option is most likely
22:13 to be true. Instead, try to imagine that
22:16 it's your boss or your customer asking
22:18 the same question. Think about what you
22:20 would instinctively pick as the right
22:23 answer. That way of approaching it is
22:25 much more effective than skimming for
22:28 the most probable test answers. Because
22:30 what I can tell you is that when you do
22:35 the mock exam or the module exams, the
22:36 options that are going to be handed to
22:38 you are actually super obvious. You
22:40 could probably just keep going through
22:41 the course and strike
22:44 the right one anyway. When you're doing
22:46 the real test, the answer is not going
22:49 to be blatantly staring you in the face.
22:51 So, you need to be really prepared to
22:53 figure out which of the potential
22:56 options is the right one. In my case, I
22:58 did the course and then I spent an
23:00 additional two to three hours on the mock
23:06 tests. I passed, but honestly,
23:08 I was not feeling very confident about
23:09 myself going through the test, and I
23:12 would not recommend doing so little. If
23:14 you put in the time and follow the steps
23:18 that I outlined earlier, I'm sure that
23:19 you're going to hit a home run on
23:21 the exam. And if you enjoyed this
23:24 episode, hit like and let me know in the
23:26 comments if you'd like to see more
23:29 summaries like this. And if not, let me
23:32 know that, too. This channel is for you.
23:34 So, your feedback really matters. And as
23:36 always, thank you for trusting me with
23:40 your time. And I'll see you in the next one.