0:01 Right now, the average user treats
0:03 Notebook LM like the usual basic
0:05 chatbot. They upload a file or two, ask
0:07 a question, and maybe click around
0:08 hoping to find something useful. But
0:10 that is a massive underutilization of
0:12 the tool because Notebook LM isn't just
0:14 a chatbot. It is a complete research
0:15 intelligence system. We are talking
0:17 about autonomous sourcing that finds
0:18 data for you, validation methods that
0:20 ensure your research is reliable, and
0:22 even generative workflows that turn that
0:24 data into slide decks and infographics
0:26 instantly. So, in this video, I'm going
0:27 to walk you through the complete
0:29 advanced workflow to get you using this
0:31 tool better than 99% of people. And by
0:33 the end of this, you'll know exactly how
0:35 to use Notebook LM the way it is
0:36 actually meant to be used. Let's get
0:38 started. First, I'll head over to the
0:39 Notebook LM website. And on the
0:41 homepage, you'll see your notebook list
0:43 if you've created any before or an empty
0:44 state if this is your first time. So, we
0:46 can create a new notebook here. But
0:48 before we do, here is a critical detail
0:49 that often gets overlooked. It's that
0:51 notebooks should be topic specific.
0:53 Don't create a notebook called "Research"
0:55 or "General Notes." Create focused notebooks
0:58 like "Competitive Analysis Q1 2025"
1:00 or "AI Video Generation Research." And
1:02 that might seem insignificant, but it
1:04 actually matters because Notebook LM
1:05 performs better when sources are related
1:07 and focused on a single topic or
1:09 project. I'll go ahead and create a new
1:10 notebook here by pressing this plus
1:12 icon. And this immediately takes you to
1:13 the source upload screen. This is where
1:15 the quality of your output gets decided.
1:16 The common mistake here is simply
1:18 uploading one or two files without
1:19 thinking strategically about what
1:21 sources you actually need. Instead, take
1:23 a step back. Think about the complete
1:25 information landscape for your topic.
1:26 What formats does your information exist
1:29 in? PDFs, YouTube videos, websites,
1:31 Google Docs? The power of Notebook LM
1:33 comes from combining multiple formats
1:34 and sources, creating a web of
1:36 information rather than just a stack of
1:38 isolated documents. And later in this
1:40 video, I will show you exactly how to
1:42 professionally execute that multi-format
1:43 strategy. So, make sure you stick around
1:45 for that. For this example, I'm going to
1:47 create a research notebook on AI
1:48 alignment. Let's hit escape, and I'll go
1:50 right up here and title this "AI
1:51 Alignment Research." Now, here's a
1:53 feature that was added quite recently,
1:54 which is the deep research source
1:56 discovery. Previously, there was only
1:58 fast research, which was good but
2:00 limited. But now, you can access a much
2:01 more powerful agent by clicking this
2:03 drop down arrow and selecting deep
2:04 research. So, I'm going to select that
2:06 and write AI alignment and safety
2:08 challenges. Hit submit. And here's what
2:11 happens. Notebook LM launches an agentic
2:13 AI tool that autonomously researches
2:14 your topic. It doesn't just do keyword
2:16 matching like a basic search. It
2:18 actually analyzes the topic, finds
2:20 sources, evaluates them, adapts its
2:22 search strategy to fill gaps, and
2:23 generates a comprehensive research
2:25 report, which means deep research will
2:27 be able to discover around 50 sources
2:29 related to your topic. It generates a
2:30 detailed research report synthesizing
2:32 those sources. Then it selects the most
2:34 relevant sources and imports them
2:35 directly into your notebook. What you
2:37 get is both a curated research report
2:39 and high-quality sources already loaded
2:40 and ready to work with. And that matters
2:42 a lot because most people spend hours
2:44 manually searching for sources,
2:46 evaluating quality, and uploading them
2:48 one by one. Deep Research does that work
2:50 in minutes and often finds sources you
2:52 wouldn't have discovered manually. All
2:53 right, it's finished. I now have a
2:55 curated list of citations imported
2:57 automatically, plus a full research
2:59 report that's also added as a source.
3:00 Scrolling down, you'll notice some
3:02 sources might fail to import if they're
3:04 behind paywalls. There's a remove all
3:05 failed sources button that cleans those
3:07 up in one click instead of deleting them
3:08 individually, which I'm going to do
3:10 right now. Now I have a strong
3:11 foundation of sources to work with.
3:13 Before we start asking questions or
3:14 generating content, here's a step that
3:16 gets skipped nine times out of 10, which
3:19 is source validation. Notebook LM is
3:20 extremely good at reducing
3:22 hallucinations because it grounds
3:23 everything in your sources. But that
3:25 only works if your sources are reliable
3:26 and current. If your sources are
3:28 outdated, biased toward one perspective,
3:30 or mixing primary research with opinion
3:32 pieces, Notebook LM will give you
3:33 answers based on flawed information
3:35 without distinguishing between them. So
3:37 here's the validation framework I use
3:39 for every single notebook. Go to the
3:40 chat interface in the center of the
3:42 screen. Before asking any topic
3:43 questions, I run through these checks.
3:45 First, I ask, "Create a table showing
3:47 each source with its publication date,
3:49 author credentials, and whether it's a
3:51 primary source, secondary analysis, or
3:53 opinion piece." This gives me a clear
3:54 view of what I'm actually working with.
3:56 If I see that most of my sources are
3:58 from 2020 or earlier on a fast-moving
4:00 topic like AI, I know I need newer
4:01 material. If everything is opinion
4:03 pieces with no primary research, that's
4:05 a problem. Let me ask that now. Notebook
4:07 LM is generating a table analyzing all
4:09 of the sources. I can immediately see
4:11 the spread, when these were published,
4:12 who wrote them, and what type of source
4:14 each one is. In this case, I'm seeing a
4:16 good mix of recent academic papers,
4:18 industry reports, and technical
4:20 documentation. Most of them are pretty
4:21 recent, which is what I want for current
4:23 AI alignment research. Second, I ask,
4:25 "Which of these sources are most
4:26 frequently cited or referenced by other
4:28 sources in this notebook?" This shows me
4:29 which sources are foundational to the
4:31 topic versus which ones are peripheral.
4:33 The highly cited sources are usually the
4:35 ones I should prioritize when I'm
4:36 filtering sources later. And third, I
4:38 ask, "Summarize the primary perspective
4:40 or bias of the top five most substantial
4:42 sources." This tells me whether I'm
4:43 looking at this topic from multiple
4:45 angles or whether all my sources share
4:47 the same viewpoint. For controversial or
4:48 evolving topics, you want diverse
4:50 perspectives. For technical
4:52 documentation, perspective matters less.
4:54 These three checks take about 5 minutes
4:55 total, but they give me a complete
4:57 picture of my source quality before I
4:59 build my entire workflow on top of it.
5:00 With our sources validated, the next
5:02 critical step is configuration. This is
5:04 something the vast majority of users
5:05 ignore, but it dramatically improves
5:07 response quality. In the top right
5:09 corner, click right here. This opens
5:10 settings that control how Notebook LM
5:12 responds to you. First, set your
5:13 conversational goal. You have three
5:16 options. Default for general research,
5:18 learning guide for educational content,
5:20 or custom for specific use cases. For
5:21 this research notebook, I'm choosing
5:23 custom, and I'll define the role as
5:25 research analyst focused on AI safety
5:27 and alignment debates. This tells
5:29 notebook LM to frame all responses from
5:30 that perspective instead of giving
5:32 generic answers. Next, choose response
5:35 length. You have default, longer, or
5:37 shorter. For research work, I typically
5:39 choose longer because I want detailed
5:41 analysis, not brief summaries. Click
5:43 save. These settings now apply to every
5:45 chat in this notebook. You set them once
5:46 and forget about them, but they shape
5:48 every interaction from this point
5:49 forward. Most people use
5:51 notebooks in default mode and wonder why
5:53 responses feel generic. Configured
5:55 settings give you targeted, role-specific
5:57 answers optimized for your exact use
5:59 case. Now let's look at how to work with
6:00 sources strategically instead of just
6:02 accepting all of the sources for every
6:04 query. On the left side you'll see your
6:06 source list with these checkboxes next
6:08 to each file. And a very common mistake
6:09 that people make is that they leave
6:10 everything checked all the time. When
6:12 you ask a question with all of the
6:14 sources selected, Notebook LM tries to
6:16 synthesize an answer from every single
6:18 document. This dilutes your results. It
6:20 forces the AI to generalize, giving you
6:22 a vague surface level summary instead of
6:24 a deep answer. So let's say I want to
6:26 focus specifically on existential risk.
6:27 If I leave the mechanistic
6:29 interpretability sources checked, I am confusing
6:31 the model by forcing it to look at
6:32 conflicting topics. So I'm going to
6:34 uncheck everything. Then I will go
6:36 through and select only the three
6:37 technical papers that contain the actual
6:39 code logic. Now effectively the other
6:41 documents do not exist to the AI. It can
6:43 only see what is checked. When I ask,
6:45 how do these agents handle memory
6:47 management? Notebook LM creates the
6:48 answer exclusively from those three
6:50 technical papers. The answer comes out
6:52 sharper, more technical, and completely
6:54 free of irrelevant information. This
6:55 gives you surgical control over your
6:57 research. You can keep one massive
6:59 master notebook with 50 sources, but by
7:01 toggling these check boxes, you can
7:03 instantly turn it into a focused sub-notebook
7:05 for any specific query. All right, now
7:06 let's generate some content from our
7:07 sources. We'll start with an audio
7:09 overview, which is one of Notebook LM's
7:11 signature features. On the right side,
7:12 you'll see the studio panel. Click on
7:14 audio overview. Now, don't just click
7:16 generate yet. Most people just blindly
7:18 hit generate and accept whatever random
7:20 conversation the AI spits out. If you
7:21 want a result you can actually use for
7:23 work, you need to take control of the
7:25 conversation first. In the instruction
7:26 input box below is where you tell
7:29 notebook LM exactly what to focus on,
7:30 what tone to use, and how long the
7:32 overview should be. For this research
7:33 notebook, I don't need a balanced
7:35 overview of all of the sources covering
7:37 every aspect of AI alignment. I need the
7:39 podcast to focus specifically on the key
7:41 debates and disagreements we identified
7:42 earlier. So, I'll write, "Focus
7:44 exclusively on the main disagreements
7:46 between AI safety researchers regarding
7:48 alignment approaches. Explain each
7:49 perspective clearly and keep the
7:51 discussion under 15 minutes. Use
7:53 accessible language, avoiding
7:55 unnecessary jargon." Above the
7:57 instruction box, you have two critical
7:59 settings, which are format and length.
8:01 For format, you aren't limited to the
8:02 standard deep dive option. You can
8:04 switch to brief if you need a quick
8:05 summary, or select critique, which
8:07 essentially turns the AI into a strict
8:09 editor that reviews your material for
8:10 gaps and weaknesses. But since our
8:12 prompt is specifically asking to uncover
8:14 disagreements, I'm actually going to
8:15 switch this to debate. This instructs
8:17 the hosts to actively illuminate
8:18 conflicting perspectives rather than
8:20 just having a friendly chat. For length,
8:22 you can choose short or default. I'll
8:23 keep this on default, which usually
8:25 gives us a solid 10-minute discussion,
8:26 perfect for digging into the details
8:28 without broadening the topic too much.
8:30 Now, click generate. Notebook LM will
8:31 take a few minutes to create a custom
8:33 podcast with two AI hosts discussing
8:35 your sources based on those specific
8:37 instructions. The difference between
8:39 default audio and customized audio is
8:41 massive. The default version covers
8:42 everything equally. The customized
8:44 version becomes a targeted research
8:46 brief focused on exactly what you need
8:48 to understand. And here's a pro tip. Do
8:49 not hesitate to regenerate. Think of the
8:51 first pass as a rough draft. If it came
8:53 out too technical, regenerate it with
8:55 instructions to simplify the language.
8:57 If it wasted time on background history,
8:59 tell it to cut the intro and focus only
9:01 on current debates. Most people generate
9:02 once and just accept whatever they get.
9:04 But the top users iterate on these
9:06 instructions until the output matches
9:08 their specific research goals perfectly.
9:09 While that audio overview is generating,
9:11 let's create visual content using
9:13 another brand new feature on Notebook
9:15 LM, which is the infographic generation
9:17 powered by Nano Banana Pro, which is
9:18 Google's advanced image generation
9:20 model. To access that, click infographic
9:22 in the studio panel. You'll see three
9:24 main settings to configure here. First
9:25 is orientation, where you can choose
9:27 landscape, portrait, or square. Next is
9:29 level of detail, which ranges from
9:31 concise to detailed. And finally, you
9:33 have the custom instruction field. For
9:34 most use cases, I recommend standard
9:36 detail level and landscape orientation.
9:38 The detailed option can introduce minor
9:40 text errors with complex topics, and
9:42 concise sometimes oversimplifies. In the
9:44 instruction field, I'll write, "Create a
9:46 professional infographic mapping the
9:47 different AI alignment approaches and
9:49 the key researchers associated with each
9:51 approach. Use a clean design with a blue
9:53 and gray color scheme." Then hit generate.
9:54 This will take a couple of minutes, and
9:56 what comes back is a fully designed
9:58 infographic pulling information directly
9:59 from your sources, including
10:02 charts, diagrams, text hierarchies, and
10:03 visual layouts: everything you'd
10:05 normally need a designer to create. The
10:07 quality is legitimately publication
10:08 ready. Minor spelling errors can appear
10:11 in detailed mode with complex topics,
10:12 but standard mode is consistently
10:14 accurate. All right, here's the result.
10:16 This is a clean, well-designed visual
10:17 representation of AI alignment
10:19 approaches with key researchers mapped
10:21 to different strategies. The design is
10:22 professional. The information is
10:24 accurate and cited from my sources, and
10:26 this would have taken hours to create
10:27 manually. You can also regenerate this
10:29 with different instructions if you want
10:31 to adjust the style or focus. Next,
10:33 let's create a presentation deck, which
10:34 is the other new Nano Banana Pro
10:36 feature. In the studio panel, click
10:38 slide deck. You'll see two deck types:
10:39 detailed deck, which creates
10:41 comprehensive slides with full text
10:43 suitable for sending as a standalone
10:45 document, or presenter slides, which
10:46 creates clean visual slides with minimal
10:48 text designed to support you while
10:49 speaking. For most presentations,
10:51 presenter slides is better because it
10:53 keeps slides visual and text minimal.
10:55 For length, you have two main choices.
10:57 Short for a 10-slide summary, or default
11:00 for a full 15-to-20-slide deck. I want
11:01 just the key points, so I'm going to
11:03 choose short. In the instruction field,
11:04 I'll write, "Create a presentation
11:06 explaining the three main schools of
11:08 thought in AI alignment for a technical
11:10 audience. Focus on key differences and
11:12 trade-offs." Click generate. This will
11:13 take a few minutes to create a fully
11:14 designed slide deck. While it's
11:16 generating, let me explain why this is
11:18 powerful. Most people spend hours
11:19 building presentations from research.
11:21 They read through sources, extract key
11:24 points, design slides, find or create
11:25 visuals, and structure the narrative.
11:27 Notebook LM does all of that
11:28 automatically. It pulls information from
11:31 your sources, structures it logically,
11:33 designs professional slides, and creates
11:34 supporting visuals. And just like audio
11:36 overviews, you can regenerate with
11:38 different instructions if the first
11:39 version isn't quite right. All right,
11:41 the deck is ready. Let's take a look.
11:43 This is a clean, professionally designed
11:44 presentation. Each slide has a clear
11:46 visual hierarchy, supporting graphics, and
11:48 text pulled directly from my sources
11:50 with proper structure. Slide one
11:52 introduces the topic. Slide two breaks
11:53 down the three main approaches. Each
11:55 slide explores one approach in detail
11:57 with visuals that illustrate the key
11:58 concepts. This is presentation ready
12:00 output that would normally take several
12:02 hours to build manually, generated in
12:04 minutes from your sources. The audio
12:05 overview we generated earlier should be
12:07 ready now. So, let's open it. You'll see
12:09 a standard podcast player with two AI
12:11 hosts discussing AI alignment based on
12:13 our custom instructions. Let me play a
12:13 bit of it.
12:16 >> Welcome to the debate. We're diving into
12:18 what I think is probably the most
12:20 consequential question of our time. How
12:22 do we make sure that these incredibly
12:25 powerful AI systems we're building are,
12:26 you know, fundamentally aligned with
12:27 human values?
12:29 >> Audio is great for understanding the big
12:30 picture, but for precision work, we need
12:32 the chat interface. In the center panel,
12:34 you can ask any question about your
12:35 sources. The key is asking precise
12:37 questions instead of vague ones. Instead
12:39 of asking, "What does this say about AI
12:40 alignment?" ask, "Compare the three
12:42 main technical approaches to AI
12:43 alignment and explain the key trade-off
12:45 each approach makes." That specific
12:46 question gets you a structured, useful
12:48 answer. You'll also notice little
12:49 numbers scattered through the text.
12:51 Those are citations. When you click one,
12:53 it highlights the exact passage in the
12:54 original document, letting you verify
12:56 the accuracy of the text instantly. But
12:58 if you need something more engaging than
13:00 just audio, there is the video overview.
13:02 This just got a major upgrade with
13:03 custom visual styles. In the studio
13:05 panel, click video overview. This
13:07 creates a narrated explainer video with
13:09 AI generated visuals based on your
13:11 sources. It's similar to audio overview
13:13 but with slideshow style visuals that
13:14 illustrate the concepts as they're
13:16 explained. You'll see two content
13:18 options. Explainer, which creates a
13:19 comprehensive overview connecting
13:21 concepts from your sources, or brief,
13:23 which gives you a quick bite-sized
13:25 summary of core ideas. For most use
13:27 cases, explainer is better because it
13:29 provides depth and proper context. Below
13:30 is an option to choose custom visual
13:32 styles powered by Nano Banana Pro. You
13:34 can choose auto-select to let Notebook
13:36 LM pick a style from their preset
13:38 library. Or you can choose custom and
13:39 describe your own visual aesthetic. Let
13:42 me try custom. I'll write, "Clean, modern
13:43 design with a blue and white color scheme,
13:45 minimalist graphics, and professional
13:47 typography." You can also guide what the
13:48 AI host should focus on in the
13:50 instruction field, similar to audio
13:52 overviews. Click generate. This takes a
13:53 few minutes to create the full video
13:55 with narration, visuals, and
13:56 transitions. All right, it's finished
13:58 processing. Let's play a quick clip to
13:59 see how it handled our custom design
14:01 request.
14:01 >> You know, this isn't just a
14:03 technical puzzle. It's a whole series of
14:05 really deep debates about the very
14:07 nature of these artificial minds. See,
14:09 to make an AI safe, you first have to
14:11 understand it. That seems obvious,
14:13 right? But that opens up this truly
14:16 fascinating question. We can see what an
14:18 AI does, but what's actually happening
14:20 on the inside? And here's the core of
14:22 the problem. Our most powerful AI models
14:24 are basically black boxes.
14:25 >> And look at that. It didn't just grab
14:27 random stock footage. It actually
14:29 followed my prompt for a clean blue and
14:30 white color scheme with minimalist
14:32 graphics. The narration is synced
14:33 perfectly with the visuals and the
14:35 structure follows the logical flow of
14:37 our source documents. This is perfect
14:38 for creating educational content,
14:40 presentation materials, or sharable
14:42 explanations of complex research. All
14:44 right, let's wrap up the studio panel by
14:45 looking at the remaining tools, which
14:48 are reports, flashcards, quiz, and mind
14:50 maps. These are all found in the studio
14:52 panel on the right, and each serves a
14:54 specific organizational purpose. Let's
14:55 start with reports. Click reports and
14:57 you'll see several options here. The
14:58 first one which we're going to look at
14:59 is the briefing doc. This creates a
15:01 multi-page executive summary of
15:03 your entire knowledge base featuring key
15:05 insights and quotes from your sources.
15:06 I'll click to generate one now. And
15:08 here's the result. This is a clean,
15:09 professionally structured document
15:11 summarizing the key findings from all of
15:13 the sources. I can export this to Google
15:15 Docs, edit it if needed, and use it as a
15:17 foundation for reports or presentations.
15:19 But here's the feature a lot of people
15:20 miss. You aren't limited to these
15:22 defaults. You can click create your own
15:24 to specify the exact structure, style,
15:26 and tone you want. Let's try that. I'll
15:28 write, "Create a technical white paper
15:30 analyzing the three main approaches to
15:32 AI alignment, written for researchers.
15:34 Include methodology comparison and
15:36 future research directions." Hit generate
15:37 and look at the result. Unlike the
15:39 generic briefing doc, this is highly
15:40 technical. It actually followed my
15:42 structure. It gave me the specific
15:43 methodology comparison and the future
15:45 research directions section I asked for. This
15:47 essentially did 90% of the drafting work
15:49 in seconds. Next, you have flashcards
15:51 and quiz sections. Flashcards generate
15:53 quick Q&A pairs for memorization,
15:55 while the quiz tool builds a full
15:57 interactive test. The value here is that
15:59 they pull directly from your sources. So
16:00 you aren't testing yourself on general
16:02 knowledge. You are testing yourself on
16:04 the specific data you just uploaded. And
16:06 finally, there is the mind map. If you
16:08 click this, Notebook LM generates an
16:10 interactive diagram showing how the key
16:11 concepts in your sources actually
16:13 connect to each other. You can click any
16:15 node to expand it into subtopics or
16:17 click it again to trigger a detailed
16:19 chat response about that specific idea.
16:20 This is massive for visual learners
16:22 because it helps you spot connections
16:24 between files that you would definitely
16:26 miss just by reading them linearly. And
16:27 that is the key takeaway here. It is a
16:29 mistake to limit yourself to just the
16:30 chat and audio overview. These
16:32 organizational tools are what actually
16:34 transform raw information into a
16:35 structured knowledge system. Now, to
16:36 bring this full circle, I want to
16:38 deliver on that promise I made at the
16:40 start of the video. We need to talk
16:42 about source strategy, specifically how
16:43 to mix different formats to create a
16:45 truly comprehensive research system. The
16:47 vast majority of users upload one type
16:50 of source. Maybe they add five PDFs or
16:52 maybe they add three YouTube videos, but
16:54 they don't think strategically about
16:55 combining formats. Here's what you
16:58 should do. Notebook LM accepts PDFs,
17:00 websites, YouTube videos, audio files,
17:02 Google Docs, and plain text. The power
17:04 comes from mixing these formats to cover
17:06 your topic from multiple angles. For
17:08 example, in this notebook, I can layer
17:09 YouTube lectures for accessible
17:11 explanations on top of company blog
17:13 posts for industry perspective and even
17:14 add podcast transcripts for
17:16 conversational insights. This creates a
17:19 360 degree view of the topic that you
17:20 just can't get from a single file type.
17:22 Let me add a YouTube video to
17:24 demonstrate. Click add source. Click
17:26 YouTube and paste a video URL. I'm
17:28 adding a lecture on AI alignment from a
17:30 recent conference. Notebook LM pulls the
17:32 transcript and adds it as a source. Now
17:33 I can ask questions that synthesize
17:35 across formats, like, "Compare the technical
17:37 approaches discussed in the research
17:38 papers with the practical concerns
17:40 raised in the YouTube lecture." Notebook
17:42 LM will analyze both the written
17:43 research and the video transcript and
17:45 create a synthesis you couldn't get by
17:47 analyzing each format separately. This
17:48 multi-format approach is especially
17:50 powerful because different formats offer
17:52 different value. Academic papers give
17:53 you rigor. Videos give you accessible
17:55 explanations. Blog posts give you
17:57 industry context. Podcasts give you
17:59 conversational insights. The typical
18:00 user stays within one format while
18:02 advanced users strategically mix every
18:04 format to build comprehensive knowledge
18:06 bases. So, at this point, you've seen
18:08 the complete workflow for using Notebook
18:09 LM the way research professionals
18:11 actually use it. We started with deep
18:13 research to automatically build a
18:14 comprehensive source base. We validated
18:16 those sources to ensure quality and
18:18 identified gaps. We configured notebook
18:20 settings for targeted responses. We used
18:22 source filtering for focused analysis.
18:24 We generated custom audio overviews,
18:25 professional infographics, and
18:27 presentation ready slide decks. We used
18:29 mind maps for active learning. And we
18:30 built a living research system using
18:32 multi-format source mixing. The
18:33 difference between someone who uses
18:35 Notebook LM as an amateur and someone
18:37 who uses it at a professional level
18:39 isn't just knowing these features exist.
18:40 It's following the complete workflow
18:42 from source discovery through
18:44 validation, configuration, content
18:46 generation, and organization. If you
18:47 found this video valuable, you can
18:48 click right here to check out another
18:50 video I posted. It's a master class on
18:53 using Gemini 3.0 Pro at an elite level.
18:55 You'll see that these strategies like
18:56 source validation and structured
18:58 prompting don't just work in notebooks.
18:59 They are the secret to getting the most
19:01 out of Google's flagship AI as well.
19:03 Thank you so much for watching and I'll