0:02 Google, as we all know, has been on an
0:04 incredible run. And to kickstart this
0:06 new year, they've actually introduced
0:08 some major upgrades to their agent
0:10 platform that a lot of people don't
0:12 actually know about, turning AI Studio
0:14 into something far more powerful. From
0:18 extended input support in the Gemini API
0:20 to enhanced AI vibe coding capabilities
0:23 to the introduction of Veo 3.1 directly
0:25 inside the studio and a lot more. So,
0:26 with that thought, let's dive straight
0:28 to it. But for those who do not know
0:31 what Google AI Studio is, this is a
0:33 remarkable tool by Google that is
0:36 completely free to access. It's Google's
0:37 prompt-to-production platform that lets you
0:40 vibe code full AI first apps using
0:43 natural language with AI features like
0:45 image generation, video understanding,
0:47 search grounding, and editing built
0:50 inside by default. On top of that, you
0:51 get access to state-of-the-art models
0:54 like Gemini 3 Pro completely for free.
0:56 And what's especially powerful is that
0:58 the AI studio isn't just for app
1:00 building. It can also work as an
1:02 agent builder, where you have your
1:05 automation workflows built within
1:07 the studio and tasks
1:09 automated directly from the build mode.
1:11 Now to start off, the latest upgrade is
1:13 one of the biggest highlights which is
1:16 the introduction of Veo 3.1 within the
1:18 studio. This is now available inside
1:20 both the Gemini API and Google AI
1:23 Studio, which gives anyone far more
1:24 creative control and production-ready
1:27 video quality directly within the actual
1:29 studio. With this, you have enhanced
1:32 ingredients-to-video support. The updated model
1:34 intelligently combines your inputs while
1:36 preserving character identity and
1:37 background details when you're working
1:39 within the studio. And this will enable
1:41 characters and environments to stay
1:44 consistent across your video generation.
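As a rough sketch of what driving this from the Gemini API could look like, the snippet below assembles a Veo-style video generation payload with reference images ("ingredients") and an aspect-ratio parameter. The endpoint path, model name `veo-3.1-generate-preview`, and field names like `instances` and `referenceImages` are assumptions based on the REST convention the Gemini API uses, not details confirmed in the video, and the actual call is only defined, never sent.

```python
import json
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_veo_request(prompt, aspect_ratio="16:9", reference_images=None):
    """Assemble a Veo-style generation payload (field names are assumptions)."""
    instance = {"prompt": prompt}
    if reference_images:
        # "Ingredients to video": base64-encoded reference images that the
        # model combines while keeping characters and backgrounds consistent.
        instance["referenceImages"] = [
            {"image": {"bytesBase64Encoded": img}} for img in reference_images
        ]
    return {
        "instances": [instance],
        "parameters": {"aspectRatio": aspect_ratio},  # "9:16" for vertical video
    }

def submit(payload, api_key, model="veo-3.1-generate-preview"):
    """Hypothetical long-running submit; needs a real API key, not run here."""
    url = f"{API_BASE}/models/{model}:predictLongRunning?key={api_key}"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_veo_request("a neon sign flickering in rain", aspect_ratio="9:16")
print(payload["parameters"]["aspectRatio"])  # → 9:16
```

Swapping `aspect_ratio` is all it takes to switch between landscape and portrait output under this request shape.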
1:46 You also have native vertical video
1:48 generation where you can also generate
1:51 social-ready 9:16 ratio videos directly
1:54 in portrait mode. This is built for
1:56 mobile first use cases and you can
1:58 produce faster results with better
2:00 framing since it generates full-frame
2:02 vertical videos instead of cropping from
2:04 landscape. And finally, you also have
2:08 higher resolution output with Veo 3.1. It
2:11 delivers cleaner, sharper 1080p videos,
2:13 and it can even generate full 4K videos,
2:14 which is going to give you
2:16 professional-grade results straight
2:18 inside your workflow. All these
2:20 capabilities are fully free for you to
2:22 actually access directly within the
2:24 studio. And you can even access it
2:27 through the Gemini API and Vertex AI for
2:29 enterprise use. Just take a look at this
2:31 demo app which was built directly within
2:33 the AI studio. This is a type-motion demo
2:36 that transforms text phrases into
2:38 cinematic motion typography using
2:40 two-step generative workflows. And you
2:42 can see the quality of content that it's
2:44 capable of generating.
3:02 Now, isn't that amazing? You can enter
3:05 in your content, choose a text style,
3:07 and you can provide a reference image,
3:09 and then the app is able to call the
3:12 Gemini 3 Pro image model and the Veo 3.1
3:15 model together to reimagine your prompt
3:18 so that it is fully styled, and it is
3:20 able to create this animated scene,
3:22 which you see. It's a great example of
3:25 powerful prompt-to-production that
3:27 lives inside your studio, and it
3:29 can create more versatile
3:31 applications for you. Before we get
3:33 started, I just want to mention that you
3:34 should definitely go ahead and subscribe
3:37 to the world of AI newsletter. I'm
3:39 constantly posting different newsletters
3:41 on a weekly basis. So, this is where you
3:44 can easily get up-to-date knowledge
3:46 about what is happening in the AI space.
3:48 So, definitely go ahead and subscribe as
3:50 this is completely for free. With the
3:52 latest API improvements, you now
3:54 have the ability to take something like
3:57 for example a Python script and generate
3:59 it directly from Google AI Studio. From
4:01 there, you can drag it and drop it into
4:04 a framework like Claude Code or Agent Zero,
4:06 which is what you're seeing on the
4:08 screen, and immediately turn it into a
4:10 working automation. This is where you
4:12 can create these AI agents to
4:13 practically do anything with the new
4:16 Gemini API. In this demo, Agent Zero
4:19 spins up the task, generates an image
4:22 using the API, and even notifies me when
4:24 the process is actually running with no
4:26 manual wiring, no waiting for native
4:28 integration. And what's powerful here is
4:30 the mindset shift. Instead of waiting
4:32 for tools to ship features, you can just
4:35 build it yourself using these modern AI
4:38 APIs with agent frameworks like Claude
4:40 Code. You have the Gemini CLI that can
4:41 help you with that, and even something
4:44 like Agent Zero. Next is where Google has
4:47 made data ingestion with the Gemini API
4:49 more production ready. And it is
4:51 definitely a game-changer because this is
4:53 going to enable anyone to pass files
4:56 directly from Google Cloud Storage or
4:59 any public or signed HTTPS URL. Meaning
5:02 no more re-uploading data just to use it
5:04 with Gemini. You can use this and have
5:06 it work across providers too, where you
5:10 can include signed URLs from AWS S3 as
5:12 well as Azure Blob Storage. On top
5:15 of that, the inline file size
5:17 limit has been
5:20 increased from 20 MB to 100 MB, which is
5:22 actually remarkable, and
5:23 it's going to make it easier to handle
5:25 larger images, audio, as well as
5:27 documents during prompting and prototyping.
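This URL-based ingestion can be sketched as a `generateContent` request that references a file by URI instead of uploading bytes. The `file_data`/`file_uri` part shape follows the Gemini REST API's content-part convention; pointing it at an arbitrary signed HTTPS or cloud-storage URL is the new capability described above, and the bucket URL plus the size-check helper here are purely illustrative.

```python
INLINE_LIMIT_BYTES = 100 * 1024 * 1024  # raised from 20 MB to 100 MB

def url_part(file_uri, mime_type):
    """A content part that points at a file by URL (GCS, S3, Azure, or any
    signed HTTPS URL) instead of re-uploading the bytes."""
    return {"file_data": {"file_uri": file_uri, "mime_type": mime_type}}

def build_request(prompt, file_uri, mime_type):
    # generateContent body: a single user turn mixing text and the remote file.
    return {
        "contents": [
            {"role": "user",
             "parts": [{"text": prompt}, url_part(file_uri, mime_type)]}
        ]
    }

def fits_inline(size_bytes):
    """Could these raw bytes instead be sent inline under the new limit?"""
    return size_bytes <= INLINE_LIMIT_BYTES

req = build_request(
    "Summarize this recording",
    "https://storage.googleapis.com/my-bucket/meeting.mp3",  # hypothetical URL
    "audio/mpeg",
)
print(len(req["contents"][0]["parts"]))  # → 2
```

Anything over the inline limit stays remote and is referenced by URI, which is exactly what removes the re-upload step.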
5:31 And if you do not know by now, you have
5:33 the ability to use Gemini 3 Flash
5:35 within the studio as well as the build
5:37 mode. So you have the ability to select
5:40 these amazing powerful models directly
5:43 within these two different areas. This
5:45 is definitely a subtle but a really
5:46 great quality of life feature that
5:49 Google AI Studio has shipped, which is
5:52 the upgraded dashboard usage tab. This
5:53 is where you can easily track API
5:56 request success rates as well as Gemini
5:58 embedding model usage. You can zoom into
6:01 specific days for detailed analysis and
6:02 explore everything through a cleaner
6:05 redesigned graph layout. It is an update
6:06 that's going to make it easier for you
6:08 to monitor your performance, debug
6:10 issues, and understand how
6:12 your Gemini APIs are actually being used
6:15 over time. You can easily access it by
6:17 heading over to the main dashboard. And
6:19 once you click here, you want to click
6:21 on usage and billing. And you can
6:23 actually take a look at the overview of
6:26 API usage for different projects, the
6:28 rate limits for it, and the billing,
6:30 which you can monitor over here.
6:31 Something interesting to highlight is
6:34 that Google's product lead Logan had
6:35 also dropped some interesting hints
6:37 today on X, where he called AI Studio
6:39 the best place to get started and
6:40 confirmed that there is going to be a
6:43 GitHub import feature, which is already
6:45 working internally, with
6:47 plans to ship it publicly once
6:49 it's polished. Someone had also asked
6:52 about Gemini 3 going generally
6:54 available. This is essentially the
6:56 upgraded version of the preview phase,
6:58 something more
7:00 enhanced than the current version
7:02 we're seeing. And he stated
7:04 that it is coming soon by saying that
7:06 CPUs are humming. And on the bigger
7:09 question of full app readiness,
7:12 Google AI Studio will soon
7:14 also include back-end support with
7:16 authentication, Stripe integration, and
7:19 deployment. This is essentially a
7:21 full-stack development tool that Google is
7:24 building, completely free for anyone
7:25 to access, which is just
7:28 incredible. Logan also confirmed that
7:30 many teams inside Google are already
7:32 testing some of these things, and the
7:35 experience is truly remarkable. For those who
7:36 haven't actually used the Google AI
7:39 studio it is a remarkable tool that you
7:41 truly need to get started with. What you
7:43 will see first is the main
7:45 dashboard of Google AI Studio, where
7:47 you have two options. You can use the
7:49 playground to access many of the other
7:51 Gemini features like the Gemini
7:54 agents. You can use the live feature or
7:56 you can use the native audio and flash
7:58 to directly interact with the studio as
8:00 well as the models using the image
8:03 models like nano banana and then video
8:06 with Veo 3.1, and even audio models.
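Every one of these models is reachable through the same Gemini API surface, so switching between them is mostly a matter of changing the model string. The sketch below builds (but does not send) a minimal `generateContent` request; the model id `gemini-2.5-flash-image` for the "nano banana" image model is an assumption, and `YOUR_KEY` is a placeholder.

```python
import json
import urllib.request

def generate_content_request(model, prompt, api_key):
    """Build (but do not send) a generateContent request for a Gemini model."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent?key={api_key}"
    )
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

# Swap the model string to target text, image, or video-capable models.
req = generate_content_request(
    "gemini-2.5-flash-image",  # assumed id for the "nano banana" image model
    "a watercolor fox",
    api_key="YOUR_KEY",
)
print(req.get_method())  # → POST
```

Passing the prepared request to `urllib.request.urlopen` with a real key would execute it; everything up to that point runs offline.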
8:09 But if you are to work on creating
8:10 different sorts of apps, you can use the
8:12 full-stack development vibe coding tool
8:15 which is the build agent. And this is
8:16 essentially where you can prompt in
8:18 anything to build any sort of app that
8:21 you describe. For example, based off of
8:22 this prompt, I'm going to be able to
8:25 build a finance app and just take a look
8:26 at the quality of what it's able to
8:28 output. You also have the ability to
8:30 attach files and you can even basically
8:32 transcribe what you're saying from voice
8:35 into textual prompts. And you can simply
8:37 go ahead and build whatever you had
8:39 requested based off of the prompt that
8:42 was sent to the build agent. And what
8:44 you can also do is actually visualize
8:45 the code being written which is going to
8:48 give you a preview of whatever it is
8:50 working on. Right now it is planning I
8:52 believe. And once it finalizes the
8:55 implementation you can then see the code
8:56 being visualized which you see right here.
9:02 And after a couple seconds you have a
9:04 beautiful finance app that it generated
9:05 based off of the prompt that you
9:08 provided which even has Gemini features
9:11 integrated. You can even have an AI
9:13 feature directly within your app which
9:15 is going to be able to give you insights
9:18 in this particular case. So this is the
9:19 type of quality that you can get from
9:21 the build mode which is just remarkable
9:23 guys. You can visualize it in different
9:26 devices. You can download the app. You
9:28 can even upload this straight to GitHub.
9:30 There's so much that you can do with the
9:32 build mode which is why I highly
9:33 recommend that you take a look at our
9:35 previous videos on how you can truly use
9:37 this even further. If you like this
9:39 video and would love to support the
9:41 channel, you can consider donating to my
9:43 channel through the super thanks option
9:45 below. Or you can consider joining our
9:47 private Discord where you can access
9:49 multiple subscriptions to different AI
9:52 tools for free on a monthly basis, plus
9:55 daily AI news and exclusive content,
9:57 plus a lot more. But that is basically
9:59 it guys for today's video on the Google
10:01 AI studio and the new agent mode that
10:03 has been upgraded. I'll leave all these
10:04 links in the description below so that
10:05 you can easily get started. But with
10:06 that thought guys, thank you guys so
10:07 much for watching. Subscribe to the
10:09 second channel if you haven't already.
10:11 Join the newsletter. Join our Discord.
10:12 Follow me on Twitter. And lastly, make
10:14 sure you guys subscribe, turn on
10:15 notification bell, like this video, and
10:16 please take a look at our previous
10:18 videos so that you can stay up to date
10:20 with the latest AI news. But with that
10:21 thought, guys, have an amazing day,
10:23 spread positivity, and I'll see you guys