This content is a comprehensive review of generative AI, covering its foundational concepts, practical applications across various industries, implementation details using Google Cloud services, and essential considerations for responsible AI development, governance, and value measurement.
Welcome to Birdsy. We move fast in these
videos to cover as much ground as
possible. If you need more time to
think, just hit that pause button before
the answer is revealed.
Let's get started with a review of
generative AI foundations.
Which core concept describes the process
by which a generative AI model creates
new data similar to its training data?
Is it a classification,
b synthesis, c segmentation
or d regression?
The answer is B synthesis.
Generative AI models learn patterns from
training data and use those patterns to
generate new similar data. This process
is fundamental to generative AI and
distinguishes it from discriminative
models which only classify or predict.
Which statement about generative AI
models is not accurate? Is it A they can
generate text, images or audio? B they
learn patterns from large data sets. C
they always require labeled data for
training. or D they are used in
applications like chat bots and image synthesis.
The answer is C. They always require
labeled data for training.
Why this question may seem a bit tricky.
The use of a negative phrasing not
accurate can mislead test takers into
selecting a true statement. It's
important to carefully evaluate each
option and identify the one that is incorrect.
A generative AI model that produces
outputs indistinguishable from real data
is guaranteed to be unbiased. True or
false? Is it A true, B false, C only for
text data or D only if the model is supervised?
The answer is B false.
Why this question may seem a bit tricky?
The statement uses an absolute,
"guaranteed," which can mislead even if
outputs seem realistic. Biases in the
training data can persist in generated content.
A marketing team uses a generative AI
model to create product descriptions for
a new campaign, but the outputs contain
repetitive phrases. What core concept
should they focus on to improve output
diversity? Is it a sampling strategies,
b data labeling, c model compression or
d feature extraction?
The answer is A, sampling strategies.
Improving output diversity involves
adjusting parameters like temperature or
using techniques such as sampling which
are core to generative AI and help
produce varied less repetitive results.
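The effect of temperature on sampling can be sketched with a toy softmax in Python. The token list and logits below are made-up values for illustration, not output from any real model:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature sharpens the
    distribution, higher temperature flattens it (more diverse samples)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    """Draw one token according to the temperature-adjusted distribution."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["great", "good", "fine", "okay"]
logits = [4.0, 2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, temperature=0.5)
high = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token dominates; at high temperature the
# probability mass spreads out, yielding more varied, less repetitive text.
print(f"T=0.5 top prob: {low[0]:.3f}")
print(f"T=2.0 top prob: {high[0]:.3f}")
```

Raising the temperature is one of the simplest levers against repetitive outputs; other sampling strategies (top-k, nucleus sampling) restrict the candidate set before drawing.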
During a healthcare hackathon, a team
wants to use generative AI to create
synthetic patient data for testing
algorithms without exposing real patient
information. Which core generative AI
concept enables this? Is it A transfer
learning, B reinforcement learning, C
clustering, or D synthetic data generation?
The answer is D synthetic data generation.
Synthetic data generation is a core
concept in generative AI, allowing teams
to create realistic data for testing
while protecting real patient information.
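The idea can be sketched as a toy generator: it produces records with the statistical shape of patient data but no real patient behind them. Field names and value ranges here are illustrative assumptions, not a clinical schema:

```python
import random

def generate_synthetic_patients(n, seed=0):
    """Generate fake patient records that mimic the *shape* of real data
    without containing any real patient information."""
    rng = random.Random(seed)  # seeded for reproducible test data
    conditions = ["hypertension", "diabetes", "asthma", "none"]
    records = []
    for i in range(n):
        records.append({
            "patient_id": f"SYN-{i:05d}",        # clearly synthetic ID
            "age": rng.randint(18, 90),
            "systolic_bp": round(rng.gauss(120, 15)),
            "condition": rng.choice(conditions),
        })
    return records

sample = generate_synthetic_patients(3)
print(sample[0])
```

In practice a generative model learns the joint distribution of real records rather than drawing each field independently, but the privacy benefit is the same: the test data contains no actual individuals.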
How'd you do on the first five
questions? Go ahead and subscribe to
Birdsy now and turn your progress into
exam day confidence. Ready to jump back
in? Let's go.
What is the primary purpose of a
generative AI model in the context of
machine learning? Is it A to classify
input data into predefined categories, B
to generate new data similar to the
training data, C to optimize hardware
performance for AI workloads, or D to
encrypt sensitive information in data sets.
The answer is B to generate new data
similar to the training data.
Generative AI models are designed to
create new data samples that resemble
the distribution of their training data
such as generating text, images, or audio.
Which of the following statements about
generative AI is least accurate? Is it
A. Generative AI can be used to create
synthetic data for training other
models? B. Generative AI models can
learn patterns from large data sets. C.
Generative AI always requires labeled
data for training or D. Generative AI
can produce outputs such as text,
images, or music.
The answer is C. Generative AI always
requires labeled data for training. Why
this question may seem a bit tricky? The
phrasing asks for the least accurate
statement, requiring careful reading.
Generative AI models do not always
require labeled data. Many use
unsupervised or self-supervised learning.
A generative AI model trained on biased
data will always produce fair and
unbiased outputs, true or false. Is it A
true, B false, C only if the model is
supervised, or D only when using
reinforcement learning?
The answer is B. False.
Why this question may seem a bit tricky.
The use of always is an absolute that
can mislead. If a model is trained on
biased data, it is likely to reproduce
or even amplify those biases in its outputs.
Models, modalities, and capabilities.
Which modality is primarily used by
generative AI models like GPT-4 to
process and generate human language? Is
it A text,
B image, C audio or D video?
The answer is A, text.
Text is the primary modality for models
like GPT-4, which are designed to
understand and generate human language.
Other modalities such as images or audio
require specialized models or multimodal architectures.
Which statement about generative AI
models is not accurate? Is it A they can
generate new content based on learned
patterns? B they are capable of
multimodal processing in some architectures.
C they always require labeled data for
training. or D they can be used for
tasks like text summarization and image creation.
The answer is C. They always require
labeled data for training.
Why this question may seem a bit tricky?
The negative phrasing and plausible
distractors can mislead test takers. Not
all generative AI models require labeled
data. Many are trained unsupervised,
learning patterns from raw data without explicit labels.
Have you noticed the code in the bottom
corner? The next time you're stumped,
take that code to birdsy.ai.
You'll get immediate help, complimentary
study sessions, and access to hundreds
of additional questions. It's the best
way to prepare.
A generative AI model that produces
realistic images from text prompts must
be trained exclusively on labeled image
caption pairs to function correctly.
Is it a true labeled pairs are the only
way to train such models? B false models
can use various data sources and
training strategies.
C true but only for text to image
generation not other modalities or D
false because generative models do not
require any data.
The answer is B. False. Models can use
various data sources and training strategies.
Why this question may seem a bit tricky.
The use of exclusively is misleading.
While labeled image caption pairs are
useful, models can also leverage
unlabeled data or other forms of
supervision to learn associations.
A healthcare startup wants to use
generative AI to automatically summarize
patient notes and generate synthetic
medical images for training. Which model
capabilities should they prioritize to
meet both needs? Is it A only text
summarization as image generation is unrelated?
B only image generation since text
summarization can be done manually, C
both text summarization and image generation,
possibly via multimodal models, or D speech
recognition since patient notes are
often dictated.
The answer is C both text summarization
and image generation possibly via
multimodal models.
The startup should prioritize models
with both text summarization and image
generation capabilities, possibly
leveraging multimodal models or
integrating specialized models for each task.
A retail company wants to deploy a
generative AI chatbot that can answer
customer questions, recommend products,
and process images of receipts. Which
modalities should the underlying model
support to fulfill all these functions?
Is it A text only, B image only, C audio
and video or D text and image?
The answer is D text and image.
The chatbot must support both text and
image modalities to handle customer
queries, recommendations, and receipt
image processing.
What is the primary function of a
generative AI model in the context of
content creation? Is it A classifying
existing data into categories,
B producing new data or content based on
learned patterns, C detecting anomalies
in data sets, or D compressing data for
storage efficiency.
The answer is B. Producing new data or
content based on learned patterns.
Generative AI models are designed to
create new data or content such as text,
images, or audio rather than simply
analyzing or classifying existing data.
Which of the following statements is
least accurate regarding the modalities
supported by state-of-the-art generative
AI models? Is it A. Some generative AI
models can process both text and images.
B. Audio generation is a capability of
certain generative AI models. C. All
generative AI models can seamlessly
handle text, image, and audio modalities
without adaptation. Or D. Multimodal
models are designed to work with more
than one type of input data.
The answer is C. All generative AI
models can seamlessly handle text,
image, and audio modalities without adaptation.
Why this question may seem a bit tricky?
The options may all sound plausible, but
only one is clearly inaccurate. Not all
generative AI models can natively
process every modality, and some are
specialized for certain types of data.
A marketing team wants to use generative
AI to create personalized product
descriptions and generate promotional
images for an online campaign. Which
model capabilities are essential to
support both tasks effectively? Is it A
only natural language processing capabilities,
B only image synthesis capabilities,
C audio generation and text summarization
or D multimodal generation supporting
both text and images.
The answer is D multimodal generation
supporting both text and images.
The scenario requires both text and
image generation. So the model must
support multimodal capabilities to
handle both types of content creation.
Google Cloud AI products.
Which Vertex AI feature allows users to
manage the entire machine learning
workflow including data preparation,
training, and deployment within a
unified interface? Is it A Vertex AI
Workbench, B Cloud Functions, C BigQuery
ML, or D Dataflow?
The answer is A, Vertex AI Workbench.
Vertex AI provides an integrated
platform that streamlines the end-to-end
machine learning workflow enabling users
to handle data training and deployment
from a single interface.
Which statement about Vertex AI model
monitoring is not accurate? Is it A. It
can detect data drift in input features.
B, it can alert users when prediction
distributions change. C, it
automatically retrains models when drift
is detected. Or D, it supports
monitoring for both classification and
regression models.
The answer is C. It automatically
retrains models when drift is detected.
Why this question may seem a bit tricky?
The question asks for the incorrect
statement and all options may sound
plausible. Model monitoring in Vertex AI
does not automatically retrain models.
It only detects and alerts on data drift
or prediction anomalies.
When using Vertex AI pipelines, which of
the following is always required for
pipeline execution? Is it A a custom
training container,
B a pipeline definition, C a big query
data set, or D a pre-trained model.
The answer is B a pipeline definition.
Why this question may seem a bit tricky?
The word always can mislead as not all
components are mandatory for every
pipeline. Only a pipeline definition is
always required. Other resources depend
on the specific pipeline.
A retail company wants to automate
hyperparameter tuning for their demand
forecasting model using Vertex AI. Which
Vertex AI feature should their data
science team use to efficiently search
for optimal parameters? Is it A Vertex AI
Feature Store, B Vertex AI Model Registry,
C Vertex AI data labeling, or D Vertex
AI hyperparameter tuning?
The answer is D. Vertex AI
hyperparameter tuning.
Vertex AI hyperparameter tuning
automates the process of searching for
the best model parameters, saving time
and improving model performance for
tasks like demand forecasting.
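The search that a tuning service automates can be illustrated with a plain random search over a toy objective. This is a conceptual sketch, not the Vertex AI hyperparameter tuning API; `validation_loss` is a made-up stand-in for a real training-and-evaluation run:

```python
import random

def validation_loss(learning_rate, batch_size):
    """Toy stand-in for a model's validation loss; in practice each call
    would be a full training run evaluated on held-out data."""
    return (learning_rate - 0.01) ** 2 + (batch_size - 64) ** 2 / 10_000

def random_search(trials=50, seed=0):
    """Try random points in the search space and keep the best one."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(0.0001, 0.1),
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        loss = validation_loss(**params)
        if best is None or loss < best[0]:
            best = (loss, params)
    return best

loss, params = random_search()
print(f"best loss={loss:.5f} with {params}")
```

Managed tuning services improve on this with smarter strategies (e.g. Bayesian optimization) and parallel trials, but the contract is the same: define a search space and an objective, and let the service find good parameters.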
A healthcare startup needs to deploy a
trained image classification model to
serve predictions with low latency for a
mobile app and they want to use Vertex
AI. Which deployment option best meets
their requirements? Is it A Vertex AI
batch prediction, B Vertex AI Workbench, C
Vertex AI online prediction, or D Vertex
AI model monitoring?
The answer is C. Vertex AI online prediction.
Vertex AI online prediction is designed
for real-time, low-latency serving, making
it ideal for mobile app integration
where immediate responses are needed.
Which Vertex AI component is
specifically designed to automate the
process of training and evaluating
machine learning models using structured data?
Is it A Vertex AI Workbench, B Vertex AI
Feature Store, C Vertex AI AutoML
Tables, or D Vertex AI Model Registry?
The answer is C, Vertex AI AutoML Tables.
Vertex AI's AutoML Tables is tailored
for automating the training and
evaluation of models on structured data,
making it easier for users to build
high-quality models without extensive ML expertise.
Which of the following is not a
capability provided by Vertex AI
pipelines for orchestrating machine
learning workflows?
Is it a automated execution of
multi-step ML workflows? B, manual data
labeling for supervised learning. C,
integration with Vertex AI model
registry or D parameterization of
pipeline components.
The answer is B, manual data labeling
for supervised learning.
Why this question may seem a bit tricky?
The options all sound plausible, but
Vertex AI Pipelines does not provide
direct data labeling capabilities. It
focuses on workflow orchestration, not
manual data annotation.
A financial analyst at a bank wants to
ensure that their deployed fraud
detection model in Vertex AI continues
to perform well as new transaction data
arrives. Which Vertex AI feature should
they use to monitor model performance
over time?
Is it A Vertex AI model monitoring, B
Vertex AI Feature Store, C Vertex AI
Workbench, or D Vertex AI Pipelines?
The answer is A. Vertex AI model monitoring.
Vertex AI model monitoring enables
users to track deployed model
performance and detect data drift,
ensuring ongoing reliability in
production environments.
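The kind of check behind drift detection can be sketched with a simple mean-shift heuristic. This is a conceptual illustration only, not how Vertex AI model monitoring is implemented; production systems use richer distribution-distance statistics:

```python
import statistics

def mean_shift_drift(training_values, serving_values, threshold=2.0):
    """Flag drift when the serving mean moves more than `threshold`
    training standard deviations away from the training mean.
    Returns (drift_detected, shift_in_std_devs)."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    shift = abs(statistics.mean(serving_values) - mu) / sigma
    return shift > threshold, shift

training = [100 + (i % 10) for i in range(100)]   # baseline feature values
stable   = [101 + (i % 10) for i in range(50)]    # small shift -> no alert
shifted  = [150 + (i % 10) for i in range(50)]    # large shift -> alert

print(mean_shift_drift(training, stable))
print(mean_shift_drift(training, shifted))
```

The key design point is the same as in the managed service: monitoring detects and alerts on drift, but deciding whether to retrain remains a separate, human- or pipeline-triggered step.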
Gemini and Workspace AI.
Which Google Workspace feature leverages
Gemini to help users draft, summarize,
and refine content directly within
Google Docs? Is it A Smart Compose, B
help me write, C explore, or D voice typing?
The answer is B help me write.
Gemini for Google Workspace enables
generative AI powered assistance in
Google Docs, allowing users to draft,
summarize, and edit content efficiently
within the document interface.
Which statement about Gemini's
integration with Google Workspace is not
accurate? Is it A. Gemini can generate
meeting summaries in Google Meet? B.
Gemini can assist with email drafting in
Gmail. C. Gemini automatically schedules
calendar events based on email content
without user input. Or D. Gemini can
help create presentations in Google Slides.
The answer is C. Gemini automatically
schedules calendar events based on email
content without user input.
Why this question may seem a bit tricky.
The options may all sound plausible, but
only one contains a subtle inaccuracy
regarding Gemini's current capabilities
and integration within workspace applications.
When using Gemini in Google Sheets,
which of the following is always true?
Is it A. Gemini will automatically
correct all data errors? B. Gemini
generates charts without any user input.
C. Gemini replaces all manual formulas
with AI generated ones. Or D. Gemini can
assist with data analysis based on user prompts.
The answer is D. Gemini can assist with
data analysis based on user prompts.
Why this question may seem a bit tricky?
The use of always is meant to mislead as
AI features often have exceptions or
limitations depending on context and
data quality.
A marketing manager wants to quickly
generate a project summary from a
lengthy email thread using workspace AI.
Which Gemini powered feature should they
use in Gmail? Is it A summarize this
email, B schedule meeting,
C smart reply, or D confidential mode?
The answer is A summarize this email.
Gemini and Gmail offers a summarization
feature that can condense long email
threads into concise summaries,
streamlining information retrieval for
users like marketing managers.
During a team brainstorming session in
Google Slides, two designers want to
generate visual ideas based on text
prompts. Which Gemini feature should
they use to create images directly in
their presentation? Is it A insert
chart, B explore,
C help me visualize,
or D voice type?
The answer is C help me visualize.
Gemini for Google Slides includes an
image generation feature that allows
users to create visuals from text
prompts, supporting creative brainstorming.
Which Google Workspace AI feature allows
users to automatically generate meeting
summaries in Google Meet using Gemini
technology? Is it A smart compose in
Gmail? B AI powered meeting summaries. C
voice typing in Google Docs or D
Explorer in Google Sheets.
The answer is B. AI powered meeting summaries.
Gemini powers the automatic meeting
summary feature in Google Meet, enabling
users to receive concise recaps of
discussions without manual note taking.
Which of the following statements about
Gemini's integration with Google
Workspace AI is least accurate? Is it A.
Gemini can help users draft emails
directly in Gmail? B. Gemini can
generate images in Google Slides based
on text prompts. C. Gemini automatically
schedules calendar events based on email
content without user review. Or D.
Gemini can summarize long documents in
Google Docs.
The answer is C. Gemini automatically
schedules calendar events based on email
content without user review.
Why this question may seem a bit tricky.
The options all sound plausible, but one
subtly misrepresents Gemini's
capabilities or integration, requiring
careful reading to spot the inaccuracy.
A project coordinator is using Google
Sheets to analyze survey data and wants
to quickly generate insights and
visualizations using Gemini. Which
workspace AI feature should they utilize
to achieve this? Is it A help me organize,
B Smart Fill, C voice typing, or D
Explore in Google Docs?
The answer is A. Help me organize.
The help me organize feature in Google
Sheets powered by Gemini assists users
in generating insights and
visualizations from data efficiently.
If you want more than just rapid-fire
questions, head over to birdsy.ai.
You can take unlimited practice exams
and get instant help from your personal
AI study partner. It's like having a
tutor 24/7. Try it free. You'll find a
link in the description.
APIs, tools, and developer services.
Which Google Cloud AI product provides
pre-trained models for vision, language,
and structured data tasks, allowing
developers to make predictions without
building models from scratch? Is it A
Cloud Functions, B Vertex AI,
C Cloud Run, or D?
The answer is B. Vertex AI.
Vertex AI offers pre-trained models for
various tasks, enabling developers to
quickly integrate AI capabilities
without extensive model training or expertise.
Which Google Cloud AI API is not
primarily designed for processing
natural language text? Is it A Cloud
Natural Language API, B Dialogflow, C
Vision API, or D translation API?
The answer is C, Vision API.
Why this question may seem a bit tricky?
The options may all sound related to AI,
but only the vision API is not focused
on natural language. It processes
images, not text.
All of the following Google Cloud AI
tools require users to train their own
models except
is it A AutoML,
B Vertex AI custom training, C TensorFlow
on AI Platform, or D Kubeflow Pipelines?
The answer is A AutoML.
Why this question may seem a bit tricky?
The phrasing may lead you to overlook
that AutoML provides pre-trained models
and automated training so users don't
always need to train from scratch.
A retail company wants to automatically
extract product information from
thousands of scanned receipts uploaded
by customers. Which Google Cloud AI API
should their developer use to
efficiently extract structured data from
these images? Is it A Speech-to-Text API, B
translation API,
C vision API
or D document AI?
The answer is D. Document AI.
Document AI is designed to extract
structured data from documents and
images, making it ideal for processing
scanned receipts and similar use cases.
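The goal of structured extraction can be illustrated with a toy parser over receipt text. The regexes below are a hand-written stand-in for demonstration only; Document AI itself uses trained models rather than patterns like these:

```python
import re

def parse_receipt_text(text):
    """Pull a few structured fields out of raw receipt text."""
    total = re.search(r"TOTAL\s*\$?([0-9]+\.[0-9]{2})", text, re.IGNORECASE)
    date = re.search(r"(\d{4}-\d{2}-\d{2})", text)
    raw_items = re.findall(r"^(.+?)\s+\$([0-9]+\.[0-9]{2})$", text, re.MULTILINE)
    # Drop the TOTAL line so it is not double-counted as a line item.
    line_items = [(name.strip(), float(price))
                  for name, price in raw_items
                  if name.strip().upper() != "TOTAL"]
    return {
        "date": date.group(1) if date else None,
        "total": float(total.group(1)) if total else None,
        "line_items": line_items,
    }

receipt = """ACME MART 2024-05-17
Milk $3.49
Bread $2.99
TOTAL $6.48"""
print(parse_receipt_text(receipt))
```

Real receipts vary wildly in layout and quality, which is exactly why a trained parser beats brittle patterns at scale; the value of the service is returning the same kind of structured dictionary reliably across formats.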
A healthcare startup needs to build a
chatbot that can answer patient
questions using medical documents stored
in Google Cloud Storage. Which
combination of Google Cloud AI tools
should the team use to enable document
retrieval and conversational responses?
Is it A vision API and AutoML tables? B
vertex AI search and conversation.
C Cloud Speech-to-Text and Translation
API, or D Dialogflow CX only?
The answer is B. Vertex AI search and conversation.
Vertex AI search and conversation
enables building chat bots that can
search documents and provide
conversational answers integrating with
data stored in Google Cloud.
Which Google Cloud AI service enables
developers to transcribe spoken audio
into text using pre-trained machine
learning models? Is it A Cloud Vision
API, B Cloud Translation API, C Cloud
Speech-to-Text, or D Dialogflow?
The answer is C, Cloud Speech-to-Text.
Cloud Speech-to-Text is a Google Cloud AI
API that converts spoken language into
written text using advanced pre-trained
models. It supports multiple languages
and is widely used for voice recognition tasks.
Which Google Cloud AI tool cannot be
used directly for image classification
tasks without first exporting a trained
model? Is it A. AutoML Vision, B. Vertex
AI workbench, C cloud vision API or D
Vertex AI prediction.
The answer is B. Vertex AI workbench.
Why this question may seem a bit tricky?
The question includes the word directly
and references exporting models which
may confuse those who know several tools
support image classification.
AutoML Vision provides a direct
interface for image classification while
Vertex AI workbench is an environment
for development and requires exporting
models for deployment.
A media company wants to automatically
generate video captions in multiple
languages for their global audience
using Google Cloud AI products. Which
combination of APIs should their
developer integrate to achieve both
transcription and translation? Is it A
cloud vision API and cloud natural
language API?
B Dialogflow and AutoML Tables,
C Vertex AI and cloud video intelligence
API, or D Cloud Speech-to-Text and
Cloud Translation API?
The answer is D, Cloud Speech-to-Text
and Cloud Translation API.
To generate multilingual captions, the
developer should use Cloud Speech-to-Text
for transcribing audio and Cloud
Translation API to translate the
transcribed text into different
languages, enabling efficient captioning
for global viewers.
Which responsible AI principle
emphasizes the need for AI systems to be
understandable and interpretable by
users and stakeholders?
Is it A privacy, B fairness, C transparency,
or D security?
The answer is C transparency.
Transparency is a core responsible AI
principle that ensures AI systems are
understandable, interpretable, and their
decisions can be explained to users and stakeholders.
Which of the following is not typically
considered a core principle of
responsible AI even though it may be
important for general software
development? Is it a fairness,
b scalability,
c privacy or d transparency?
The answer is B scalability.
Why this question may seem a bit tricky?
All options sound important, but only
some are core responsible AI principles.
Scalability is crucial for software but
is not a foundational responsible AI
principle like fairness, privacy or transparency.
If an AI system is highly accurate but
consistently produces biased outcomes
for certain groups, which responsible AI
principle is most clearly being
violated? Is it a fairness,
b security, c reliability
or d privacy?
The answer is A fairness.
Why this question may seem a bit tricky.
High accuracy may distract from
underlying bias. The principle violated
is fairness as the system treats groups
unequally despite overall accuracy.
A health care organization uses an AI
tool to recommend treatments, but
patients and doctors are unsure how
decisions are made. Which responsible AI
principle should the organization
prioritize to address this concern? Is
it a security, b privacy,
c fairness, or d transparency?
The answer is D transparency.
The organization should prioritize
transparency, ensuring that AI decisions
are explainable and interpretable to
build trust among users.
A financial services company is
deploying a generative AI model and
wants to ensure it does not
unintentionally leak sensitive customer
data in its outputs. Which responsible
AI principle is most directly relevant
to this goal? Is it a reliability,
b privacy, c transparency
or d inclusiveness?
The answer is B privacy.
Privacy is the most directly relevant
principle as it focuses on protecting
sensitive data from being exposed or
misused by AI systems.
Which responsible AI principle focuses
on ensuring that AI systems do not cause
harm to individuals or society during
their development and deployment? Is it
A transparency,
B safety,
C inclusiveness,
or D sustainability?
The answer is B safety.
The principle of safety is central to
responsible AI, emphasizing the need to
prevent harm and mitigate risks
associated with AI systems throughout
their life cycle.
Which of the following statements about
responsible AI principles is least
accurate? Is it A. Responsible AI
principles include fairness,
transparency, and accountability?
B. Responsible AI principles aim to
minimize bias and promote inclusivity.
C. Responsible AI principles require
that AI systems always operate without
any human oversight. Or D. Responsible
AI principles encourage ongoing
monitoring and evaluation of deployed models.
The answer is C. Responsible AI
principles require that AI systems
always operate without any human oversight.
Why this question may seem a bit tricky?
The phrasing asks for the least accurate
statement requiring careful reading.
Some options may sound plausible but do
not align with established responsible
AI principles.
All responsible AI principles must be
applied equally in every AI project
regardless of context or risk level. Is
it A true because responsible AI
requires strict adherence to all
principles at all times? B true since
omitting any principle would violate
responsible AI standards. C false
because some principles are optional
depending on the organization.
or D false because the application of
principles should be tailored to the
specific risks and context of each project.
The answer is D false because the
application of principles should be
tailored to the specific risks and
context of each project.
Why this question may seem a bit tricky?
The use of all and equally can mislead
as responsible AI principles are
important but their application is often
context dependent and risk-based.
Data privacy and compliance.
Which principle is fundamental to
ensuring that AI systems comply with
data privacy regulations such as GDPR?
Is it A maximizing data retention, B
sharing data with third parties by
default, C collecting only data
necessary for the intended purpose, or D
encrypting all data at rest only?
The answer is C, collecting only data
necessary for the intended purpose.
Data minimization is a core principle in
regulations like GDPR, requiring
organizations to collect only the data
necessary for a specific purpose,
thereby reducing privacy risks and
ensuring compliance.
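Data minimization can be enforced in code by whitelisting the fields each processing purpose actually needs. A minimal sketch with hypothetical purposes and field names:

```python
# Allowed fields per processing purpose; anything outside the whitelist is
# dropped before storage. Purposes and fields here are illustrative only.
PURPOSE_FIELDS = {
    "order_fulfillment": {"name", "shipping_address"},
    "analytics": {"product_id", "timestamp"},
}

def minimize(record, purpose):
    """Keep only the fields necessary for the stated purpose."""
    allowed = PURPOSE_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Jane Doe",
    "shipping_address": "1 Main St",
    "email": "jane@example.com",   # not needed for fulfillment, so dropped
    "product_id": "SKU-42",
    "timestamp": "2024-05-17T10:00:00Z",
}
print(minimize(raw, "order_fulfillment"))
print(minimize(raw, "analytics"))
```

Encoding the whitelist centrally, rather than trusting each pipeline to drop fields, makes the minimization rule auditable, which is what regulators look for.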
Which of the following is not a
recommended practice for responsible AI
development even if it appears to
enhance security? Is it A regularly
updating access controls, B storing user
consent logs indefinitely, C
implementing audit trails for data
access, or D applying least privilege
principles to user roles?
The answer is B. Storing user consent
logs indefinitely.
Why this question may seem a bit tricky?
The distractor suggests a practice that
could seem beneficial for security, but
storing user consent logs indefinitely
actually violates data minimization and
retention policies required for compliance.
When deploying a generative AI model,
which statement about data anonymization
is least accurate?
Is it a anonymization always guarantees
that individuals cannot be reidentified?
B. Effective anonymization techniques
are essential for privacy compliance.
C. Combining anonymized data with
external data sets can increase
reidentification risk or D.
Anonymization should be regularly
reviewed as techniques evolve.
The answer is A. Anonymization always
guarantees that individuals cannot be reidentified.
Why this question may seem a bit tricky?
The distractors are plausible, but the
correct answer overstates
anonymization's effectiveness.
Reidentification is possible if
anonymization is weak or data is
combined with external sources.
A healthcare startup uses Google Cloud
to train a generative AI model on
patient records and wants to ensure
compliance with HIPAA. Which technical
measure should the data engineering team
prioritize to protect sensitive health
information during model training?
Is it A publishing anonymized data sets
for public use? B allowing unrestricted
access to training data,
C storing data in a non-compliant
region, or D encrypting data both at
rest and in transit.
The answer is D. Encrypting data both at
rest and in transit.
Encrypting data both at rest and in
transit is critical for HIPAA
compliance, ensuring that sensitive
health information is protected
throughout the AI model training process.
A retail company's AI chatbot collects
customer feedback and the compliance
officer notices the bot stores full
names and email addresses. What
immediate action should the officer
recommend to align with privacy best
practices? Is it A increase the
retention period for customer data? B
remove or mask personally identifiable
information from stored feedback, C
share the feedback data with third-party
vendors, or D disable all chatbot
logging features.
The answer is B. Remove or mask
personally identifiable information from
stored feedback.
Removing or masking personally
identifiable information PII from stored
feedback reduces privacy risks and
aligns with data minimization and
compliance requirements.
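A minimal sketch of PII masking, assuming email addresses and a list of known customer names are the fields to redact. Real pipelines typically rely on a dedicated inspection service rather than regexes alone:

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def mask_pii(feedback, known_names=()):
    """Redact email addresses and any known customer names from
    free-text feedback before it is stored."""
    masked = EMAIL_RE.sub("[EMAIL]", feedback)
    for name in known_names:
        masked = re.sub(re.escape(name), "[NAME]", masked, flags=re.IGNORECASE)
    return masked

text = "Great service! - Jane Doe, jane.doe@example.com"
print(mask_pii(text, known_names=["Jane Doe"]))
# -> "Great service! - [NAME], [EMAIL]"
```

Masking at ingestion time, before the feedback reaches storage, is what aligns the chatbot with data minimization: downstream systems never see the identifiers at all.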
Which concept ensures that individuals
can request the deletion of their
personal data from AI systems in
accordance with privacy regulations?
Is it A data minimization,
B data portability,
C right to erasure, or D purpose limitation?
The answer is C, right to erasure.
The right to erasure, also known as the
right to be forgotten, is a key data
privacy principle that allows
individuals to request the removal of
their personal data from systems,
supporting compliance with regulations
like GDPR.
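The mechanics can be sketched as deletion keyed by user ID. This is a minimal in-memory illustration: real systems must also purge backups, caches, and downstream copies, and record the request for audit purposes:

```python
def erase_user_data(records, user_id):
    """Remove all records belonging to a user who exercised the right
    to erasure; returns the retained records plus a count for the
    audit log."""
    retained = [r for r in records if r["user_id"] != user_id]
    return retained, len(records) - len(retained)

store = [
    {"user_id": "u1", "data": "chat log A"},
    {"user_id": "u2", "data": "chat log B"},
    {"user_id": "u1", "data": "chat log C"},
]
store, deleted = erase_user_data(store, "u1")
print(f"deleted {deleted} records; {len(store)} remain")
```

For trained generative models the question is harder, since a user's data may be baked into model weights; that is an active area of work (machine unlearning) rather than a simple delete.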
Which of the following is least likely
to violate data privacy regulations when
developing a generative AI model? Is it
A. training on unencrypted customer emails.
B using synthetic data sets generated
from random values.
C storing user chat logs without consent
or D sharing model outputs that include
personal identifiers.
The answer is B using synthetic data
sets generated from random values.
Why this question may seem a bit tricky?
The phrase asks for the least likely
violation which can mislead test takers
into picking a common violation. Using
synthetic data is generally privacy
compliant while the other options
involve risks of exposing real personal data.
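As a toy illustration of the privacy-compliant option, a dataset can be generated entirely from random values, so no real personal data is ever stored. The field names here are invented for the example.

```python
import random

random.seed(0)  # reproducible example

TOPICS = ["shipping", "pricing", "support", "returns"]

def synthetic_record() -> dict:
    """Build one fully synthetic feedback record; no real user is involved."""
    return {
        "user_id": f"synthetic-{random.randint(1000, 9999)}",
        "topic": random.choice(TOPICS),
        "rating": random.randint(1, 5),
    }

dataset = [synthetic_record() for _ in range(3)]
```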
Which statement about encryption in AI
data pipelines is not accurate? Is it a
encryption alone is sufficient to ensure
full regulatory compliance in AI
systems? B. Encryption helps protect
data both at rest and in transit. C. Key
management is a critical aspect of
secure encryption practices. Or D.
Regulations often require encryption as
part of a broader security strategy.
The answer is A. Encryption alone is
sufficient to ensure full regulatory
compliance in AI systems.
Why this question may seem a bit tricky:
The negative phrasing not accurate can
cause confusion, especially since all
options may sound plausible. Encryption
does not guarantee compliance alone.
Other controls are also necessary.
Prompting and evaluation.
Which prompt design technique is most
effective for reducing ambiguity in user
instructions to a generative AI model?
Is it A. using open-ended questions, B
providing clear and detailed
instructions, C relying on default model
behavior, or D including multiple
unrelated tasks in one prompt.
The answer is B, providing clear and
detailed instructions.
Explicitly specifying requirements in
prompts helps the AI understand exactly
what is expected, minimizing ambiguity,
and improving the quality of generated responses.
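One way to make such requirements explicit is a prompt template; the wording below is an assumed example, not a prescribed format.

```python
def build_support_prompt(customer_message: str) -> str:
    """Compose a prompt that states tone, length, and format explicitly."""
    return (
        "You are a retail customer-support assistant.\n"
        "Tone: empathetic and professional.\n"
        "Length: at most 3 sentences.\n"
        "Format: plain text, no bullet points.\n"
        f"Customer message: {customer_message}\n"
        "Write the reply now."
    )

prompt = build_support_prompt("My order arrived damaged.")
```

Because every call states the same constraints, the model receives far less ambiguity than with default behavior.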
Which of the following is least likely
to improve the consistency of outputs
from a generative AI model when
designing prompts?
Is it A. using structured templates for prompts.
B providing example outputs within the
prompt. C frequently varying the prompt
structure or D specifying the desired
response format.
The answer is C frequently varying the
prompt structure.
Why this question may seem a bit tricky:
The options all sound plausible, but
only one actually does not contribute to
consistency. Consistency is improved by
clear instructions and examples, not by
frequently changing prompt formats.
A product manager at a retail company
wants the AI to generate customer
support responses that are both
empathetic and concise. Which prompt
design technique should they use to
ensure these qualities are reflected in
every response?
Is it A clearly specify tone and length
requirements in the prompt? B allow the
AI to infer the appropriate style from
previous outputs.
C use only generic instructions like
respond appropriately or D rely on
post-processing to edit AI responses.
The answer is A. Clearly specify tone
and length requirements in the prompt.
Explicitly stating the desired tone and
length in the prompt ensures the AI
consistently produces empathetic and
concise responses aligning with the
manager's requirements.
When designing prompts, which approach
is not recommended if the goal is to
minimize model hallucinations?
Is it A requesting sources or citations
for factual outputs?
B. using vague or open-ended prompts, C
providing explicit context and
constraints, or D, limiting the scope of
the prompt to specific topics.
The answer is B, using vague or
open-ended prompts.
Why this question may seem a bit tricky:
The question uses a negative phrasing
and all options seem helpful, but only
one actually increases hallucination
risk: using vague or open-ended prompts.
During a hackathon, a team needs their
AI assistant to generate step-by-step
instructions for assembling furniture,
but the initial outputs are unordered
and incomplete.
What prompt design adjustment should
they make to address this? Is it A,
increase the model's temperature setting?
B ask for more creative responses.
C. Shorten the prompt to a single
sentence. Or D. Explicitly request a
numbered step-by-step list in the prompt.
The answer is D. Explicitly request a
numbered step-bystep list in the prompt.
Instructing the AI to provide a numbered
step-by-step list ensures ordered and
complete instructions directly
addressing the team's issue.
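The contrast can be shown with two prompt variants; both strings are invented examples.

```python
# A vague request often yields unordered, incomplete output.
vague_prompt = "Explain how to assemble the bookshelf."

# Explicitly requesting a numbered step-by-step list constrains the structure.
explicit_prompt = (
    "Provide assembly instructions for the bookshelf as a numbered "
    "step-by-step list. Number every step, begin with unpacking, and "
    "do not skip any step."
)
```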
What is the primary purpose of using
explicit instructions when designing
prompts for generative AI models? Is it
A to increase the randomness of
responses? B to ensure the model
understands and follows user intent accurately,
C to make prompts shorter and less detailed,
or D to allow the model to ignore user constraints.
The answer is B. To ensure the model
understands and follows user intent accurately.
Explicit instructions help guide the AI
model to produce outputs that closely
match user expectations by reducing
ambiguity and clarifying intent.
Which of the following is not a
recommended technique for enhancing
prompt reliability when working with
generative AI models? Is it A. Providing
clear context within the prompt. B,
specifying the desired output format. C,
using vague language to encourage creativity,
or D, including examples of correct responses.
The answer is C, using vague language to
encourage creativity.
Why this question may seem a bit tricky:
The options may all sound beneficial,
but one actually undermines reliability
by introducing ambiguity or
inconsistency, which can mislead test
takers who skim for positive sounding techniques.
When designing prompts, which approach
is least likely to help in reducing bias
in generative AI outputs? Is it A using
ambiguous language to avoid leading the model?
B, providing balanced examples in the
prompt. C, explicitly instructing the
model to avoid stereotypes, or D,
reviewing and refining prompts based on
output analysis.
The answer is A, using ambiguous
language to avoid leading the model.
Why this question may seem a bit tricky: The
question uses a negative phrasing, least
likely, and some options may appear
helpful at first glance, but only one
truly fails to address bias.
Model evaluation and iteration.
Which metric is most commonly used to
evaluate the accuracy of a
classification model during model
evaluation? Is it a mean squared error,
b accuracy,
The answer is b accuracy.
Accuracy is a standard metric for
evaluating classification models
representing the proportion of correct
predictions out of all predictions made.
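That definition is simple enough to write out directly:

```python
def accuracy(y_true: list, y_pred: list) -> float:
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 3 of 4 correct -> 0.75
```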
Which of the following is not a
recommended practice when iterating on a
generative AI model's evaluation
process? Is it A collecting diverse
feedback from end users, B analyzing
failure cases in detail, C changing
evaluation metrics with each iteration,
or D using both quantitative and
qualitative evaluation methods.
The answer is C. Changing evaluation
metrics with each iteration.
Why this question may seem a bit tricky:
The options all sound plausible, but
only one is clearly not recommended.
Changing evaluation metrics frequently
undermines consistency and comparability
of results
when evaluating a model. Which statement
about overfitting is least accurate? Is
it A. Overfitting improves a model's
ability to generalize to new data. B.
Overfitting occurs when a model performs
well on training data but poorly on
unseen data. C. Regularization
techniques can help mitigate overfitting
or D. Monitoring validation loss is a
way to detect overfitting.
The answer is A. Overfitting improves a
model's ability to generalize to new data.
Why this question may seem a bit tricky:
All options mention overfitting, but
only one misrepresents it. Overfitting
does not improve generalization; it harms it.
A product manager and a data
scientist are reviewing user complaints
about irrelevant chatbot responses.
Which evaluation approach should they
prioritize to identify the root cause?
Is it A. Increase the training data set
size immediately.
B, rerun automated accuracy tests only.
C, tune hyperparameters based on
previous experiments.
Or D, conduct qualitative analysis of
user conversations.
The answer is D. Conduct qualitative
analysis of user conversations.
Qualitative analysis of user
conversations helps uncover nuanced
issues in chatbot responses that
quantitative metrics might miss, making
it the most effective first step in this scenario.
During a model evaluation workshop, an
engineering team in a healthcare startup
notices their generative model produces
inconsistent outputs for similar patient
queries. What is the most effective next
step to improve evaluation consistency?
Is it A increase the model's parameter
count, B standardize the prompts used in evaluation,
C switch to a different evaluation
metric, or D deploy the model to
production for live testing?
The answer is B, standardize the prompts
used in evaluation.
Standardizing prompts ensures that model
outputs can be reliably compared which
is crucial for consistent evaluation in
sensitive domains like healthcare.
What is the primary purpose of using a
validation data set during model evaluation?
Is it A to increase the size of the
training set, B to assess model
performance during development, C to
deploy the model in production or D to
store historical predictions.
The answer is B to assess model
performance during development.
A validation data set is used to tune
model parameters and assess performance
before final testing, helping to prevent
overfitting and ensuring the model
generalizes well to unseen data.
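A minimal sketch of holding out a validation set, using a fixed seed for reproducibility; the 80/20 split is an assumed convention, not a requirement.

```python
import random

def train_val_split(data: list, val_fraction: float = 0.2, seed: int = 42):
    """Shuffle a copy of the data and hold out a validation slice."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

train, val = train_val_split(list(range(10)))
print(len(train), len(val))  # 8 2
```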
Which statement about model evaluation
metrics is least accurate when iterating
on a generative AI model? Is it A.
Multiple metrics may be needed to
capture different aspects of model
performance. B. Human evaluation can
complement automated metrics for
generative tasks. C. A single metric is
always sufficient for evaluating
generative model quality.
Or D. Metrics should align with the
intended use case of the model.
The answer is C. A single metric is
always sufficient for evaluating
generative model quality.
Why this question may seem a bit tricky:
The options may all sound plausible, but
one subtly misrepresents how evaluation
metrics should be used, especially
regarding their limitations or
applicability to generative models.
When iterating on a model, which of the
following is not a recommended approach
for addressing poor evaluation results?
Is it A, ignore evaluation feedback and
proceed with deployment? B. Analyze
error cases to identify patterns. C.
Adjust model parameters based on
evaluation findings. Or D. Collect
additional data to address identified weaknesses.
The answer is A. Ignore evaluation
feedback and proceed with deployment.
Why this question may seem a bit tricky:
The options all seem like reasonable
actions, but one is a common pitfall
that does not actually improve model performance.
Did you get a different answer? Or maybe
you have a question about the exam in
general? Drop it in the comments below.
We read and answer them personally.
>> Business use cases.
Which business function is most likely
to benefit from using generative AI for
automating customer support interactions?
Is it A finance, B customer service, C
logistics, or D manufacturing?
The answer is B. Customer service.
Customer support can leverage generative
AI to automate responses, handle routine
queries, and provide 24/7 assistance,
improving efficiency and customer satisfaction.
Which of the following is least likely
to be a direct industry use case for
generative AI in healthcare?
Is it A generating patient discharge
summaries, B synthesizing medical images
for training, C automating equipment sterilization
or D drafting clinical trial documentation?
The answer is C automating equipment sterilization.
Why this question may seem a bit tricky:
All options sound plausible, but only
some are direct use cases. Generative AI
is not typically used for physical
equipment sterilization, which is a
manual or mechanical process, not a
data-driven or generative task.
A retail company wants to improve
product recommendations for online
shoppers by analyzing browsing patterns
and generating personalized suggestions.
Which functional use case does this
scenario best represent? Is it a
personalized recommendation engines? B,
inventory forecasting, C fraud
detection, or D, automated invoice processing.
The answer is A personalized
recommendation engines.
This scenario describes a recommendation
system, a common functional use case for
generative AI in retail to enhance
customer experience and drive sales.
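As a toy sketch of the underlying signal, co-occurrence counts across browsing sessions can rank candidate products; a production engine would use learned models instead, and all item names here are invented.

```python
from collections import Counter

def recommend(sessions: list, viewed: set, top_n: int = 2) -> list:
    """Rank unseen items by how often they co-occur with items the shopper viewed."""
    scores = Counter()
    for session in sessions:
        if viewed & set(session):
            for item in session:
                if item not in viewed:
                    scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]

sessions = [["lamp", "desk"], ["lamp", "chair"], ["desk", "chair"], ["rug"]]
print(recommend(sessions, {"lamp"}))
```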
In which industry is generative AI most
commonly used to automate the drafting
of legal contracts and agreements?
Is it A hospitality,
The answer is D. Legal.
The legal industry frequently uses
generative AI to automate contract
drafting, reducing manual effort and
improving accuracy.
A financial services firm wants to use
generative AI to summarize lengthy
regulatory documents for compliance
officers, aiming to reduce manual review
time. Which industry use case does this
align with? Is it A, automated loan
approval? B, fraudulent transaction
ignore ongoing operational changes,
making them less reliable for ROI in
dynamic settings.
When evaluating the value of a
generative AI solution, which statement
is most misleading if taken at face
value? Is it A. High user adoption
always indicates high business value? B.
ROI calculations should include both
direct and indirect benefits. C. Value
realization may require ongoing
measurement beyond initial deployment or
D. Stakeholder feedback can reveal
qualitative value not captured by
metrics.
The answer is A. High user adoption
always indicates high business value.
Why this question may seem a bit tricky:
The statement may sound plausible, but
equating high user adoption with high
business value ignores whether the
adoption leads to meaningful outcomes or
ROI.
Thanks for watching. If you found this
exam review helpful, be sure to
subscribe for more real world practice
and topic focused study videos. And
visit birdsy.ai to start your free trial
of Birdsy, your AI powered study
partner.