YouTube Transcript: Deep learning project end to end | Potato Disease Classification - 7 : Model Deployment To GCP
Core Theme
This content details the process of deploying a machine learning model as a Google Cloud Function, enabling it to be accessed via HTTP requests from applications like React Native.
I did a project on Bangalore property price prediction, and in that project I used AWS for model deployment. This time we are going to use Google Cloud, specifically Google Cloud Functions (GCF), and we'll see how the whole thing works. Let's get started.
At the beginning of the project, in the intro video, I said that I would convert the regular model into TF Lite and then deploy the TF Lite model on Google Cloud. But I'm going to change that plan a little bit, because I think TF Lite is better deployed directly to a mobile application, and I will make that video in the future if I find the time. For now, we will deploy our regular model directly to Google Cloud. Our React Native app will call that function over HTTP, so that will be the change in the architecture.
Let's look at the installation requirements. Go to the GitHub page for this project; the video description has a link to it. It has installation instructions for Google Cloud, so follow that section. Go to the GCP account sign-up link. When you click on it, it will ask you to enter your Gmail ID. Now, if you already have a GCP account, you can skip some of these steps, but let's say you don't have one: you enter your email ID here, choose something like "personal project", and continue. Then it will ask for a phone number; you enter it, it sends a code, and you verify it.
Upon verification, you can enter your payment method. Don't worry when you enter your credit card details: you get some credit for free, 300 dollars or something like that, and you also don't have to worry about auto-recharge, because it will not charge you until you give permission. So just enter your credit card information; they're not going to charge anything, and the free credit will be enough for this project. When you click "Start my free trial", it will ask a couple of questions such as what you are here for; you can give anything. I said I'm interested in an AI engineer/developer role. Just a couple of basic questions and you are done.
Now you can create a project. So first you created a GCP account; now you're creating a project inside that account. There is a documentation link here that you can read through, but it's really very easy. You just go here, click "New project", name it "potato disease classification", and click "Create".
So now it's creating the project. In this project we're going to deploy our Google Cloud Function, so the project is kind of an area where our Cloud Function will live. At the top you can click and select "potato disease classification" as the project, and here is the project ID that you will be using later on.
Now create a GCP bucket. A GCP bucket is like Amazon S3, or like Google Drive: an area in the cloud where you store your objects. So here I say "Create a bucket" and name it, say, codebasics tf models. The name has to be globally unique, and I have already taken this one, so it's not working; I will use codebasics tf models 2 instead. You need to give your own unique name and use your own bucket name in your project. If you start using my bucket name, it's probably not going to work, so make sure you create your own bucket.
For the region, I will select US. There are other options such as London, Belgium, Mumbai, and so on; just select the appropriate region so that the physical data center is in your region and you don't have issues with speed. Then continue with pretty much all the default values.
Now you can create a folder. In this bucket I'm going to create a folder called models, and in it I will upload my model: click "Upload files" and select potatoes.h5. Okay, now I have my model available here.
The next step in the instructions is to install the Google Cloud SDK. Go there and click on the link. I'm on Windows; whatever OS you are on — Windows, macOS — follow the instructions for it. On Windows, I click the link and it downloads the installer file.
Then you go through the installation. I actually already have it installed, but I'm going to overwrite it, just to show you all the steps. It is still going; it's going to take some time, so be patient — it will finish at some point for sure. When it's almost done, you can click "Next" and check all the checkboxes so that it starts the Google Cloud SDK shell itself. When you click "Finish", it will start the Google Cloud SDK shell.
I already have this project selected, so you might get a different screen. I will say "Create a new configuration" and name it whatever, say codebasics2. I will log in with a new account, just to show you how it works: you give your credentials, and then it says you are authenticated. Now you need to pick your project. What was my project? Let's go back and check — the ID starting with "gifted-pillar" is my project ID, so I select that one, option number two in the list, as my project.
I think we have already done this, but just to make sure, run gcloud auth login. Okay, I'm logged in. So I'm authenticated and ready to deploy my code to Google Cloud — but hey, which code should we deploy? We need to write the code that will be deployed to Google Cloud. In my project structure I will create a new directory called gcp, and in it I will add a Python file called main.py; this file will contain the code that gets deployed to Google Cloud.
You also need a requirements.txt file. Why do we need this file? Because when you deploy this code to Google Cloud, you need to tell it which libraries should be installed first: these libraries are installed on your local computer, but they're not installed on Google Cloud yet. So you specify all of them, and when you deploy your code, it will first install the libraries listed in requirements.txt.
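For reference, a minimal requirements.txt for this function might look like the sketch below. The exact package list is an assumption based on what this main.py imports; adjust it and pin versions as needed:

    google-cloud-storage
    tensorflow
    numpy
    pillow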
Now I'm going to go into my editor, and let's start writing some code.
We'll import some essential libraries. I'm importing google.cloud — that particular module is for accessing the Google Cloud bucket object, you know, the model that we have uploaded. My bucket name is basically codebasics tf models... okay, what was my bucket again? I forgot — right, codebasics tf models 2. Then my class names: they are the same as what I had in my notebook. And I'm going to create a variable for the model; I will use this variable to store my model.
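Putting that together, the top of main.py might look like the following sketch; the bucket name is a placeholder (use your own globally unique name), and the class names are the ones from earlier in the series:

    # main.py -- module-level setup (a sketch; the bucket name is a placeholder)
    from google.cloud import storage
    import tensorflow as tf
    import numpy as np
    from PIL import Image

    BUCKET_NAME = "your-unique-bucket-name"  # your own "codebasics tf models 2"-style name
    CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]

    model = None  # filled in on the first invocation, then reused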
Now, first, let's download the model — we are calling it a blob, a binary large object. This is a generic function we are writing, where you specify the bucket name, your source blob name, and your destination file name. The source blob name means the blob in your bucket, but the destination file name is different: this function will be running on a server in Google Cloud, and that server will be downloading the model from the bucket. So when it downloads the model locally on that cloud server, the destination file name is the path where it should store it. We'll see it in a minute; don't worry too much about this, it's not that complicated. Now, the storage client is basically storage.Client(); we are doing this to download our saved object, the potatoes.h5 file, from the cloud. Then storage_client.get_bucket — which bucket? Whatever you pass as the function parameter — and there goes your bucket. Once you have the bucket, you do bucket.blob(source_blob_name), and that is your blob; then you download this blob with download_to_filename, so it lands at that particular destination file path.
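As a sketch, the helper described above looks like this; storage.Client, get_bucket, blob, and download_to_filename are the standard google-cloud-storage client calls:

    def download_blob(bucket_name, source_blob_name, destination_file_name):
        """Download a blob from a GCS bucket to a local file on the function's server."""
        storage_client = storage.Client()
        bucket = storage_client.get_bucket(bucket_name)
        blob = bucket.blob(source_blob_name)
        blob.download_to_filename(destination_file_name)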
Now I'll write a predict function, like the one we wrote in our API folder, and predict will get a request as input. The first thing you do, of course, is download the model. The bucket name is the one above, and the source blob name is models/potatoes.h5 — why this path? Well, we already saw it, right? There is a models directory in the bucket, and inside it a file called potatoes.h5; that is in the bucket, on the cloud, somewhere on some machine. I want to download it locally into the /tmp directory under this file name. Then you can just say tf.keras load model, so now you are loading this model: once the download call has executed, the model file is available there, and you load it. Now, this cloud function will be executed multiple times, and you need to execute the download only on the first call. Hence I will say: if model is not None, then do this — if the model is downloaded already, why would you download it again? And this particular variable is global; that's how Python works, basically.
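As a sketch, the start of predict looks like the following. Note that it is shown with the two corrections the video only arrives at later during debugging: the guard must be "if model is None", and the loader is tf.keras.models.load_model:

    def predict(request):
        global model  # the module-level variable above, reused across warm invocations
        if model is None:  # download and load only on the first (cold) call
            download_blob(
                BUCKET_NAME,
                "models/potatoes.h5",  # blob path inside the bucket
                "/tmp/potatoes.h5",    # local path on the Cloud Functions server
            )
            model = tf.keras.models.load_model("/tmp/potatoes.h5")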
Now, the request will have a parameter called files — don't ask me why, that's just how it is — and it will have a key called "file". You remember this from our Postman request: from Postman, when you upload a file, or from the mobile app, it will come in this image variable. That image could be a big file, you know, 800 by 600; you need to resize it to 256 by 256, and you need to convert it to a NumPy array as well. So do Image.open on that image. Now I have the image; I need to convert it to RGB, because remember our images are in RGB format — they have three channels — and I need to resize it to the standard size the model was trained on, which is 256 by 256. When you wrap this whole thing in np.array, what you get back is a NumPy array representing that image, and that array needs to be divided by 255, because the values are in the 0-to-255 range and you want to convert them to the 0-to-1 range. Now I have the image, but when I do model.predict, it expects a batch of images. It expects an image array, and to convert the single image into a batch, you want to do something like expand dims.
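Continuing the sketch inside predict, the preprocessing described above might look like this (indented as the body of predict, after the model is loaded):

        image = request.files["file"]  # uploaded under the form-data key "file"
        image = np.array(
            Image.open(image).convert("RGB").resize((256, 256))  # model was trained on 256x256 RGB
        )
        image = image / 255.0                 # scale pixel values from 0-255 to 0-1
        img_array = tf.expand_dims(image, 0)  # model.predict expects a batch of images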
And then you get your predictions back; I will just print the prediction, purely for debugging purposes. The remaining code is the same as what we already looked at: predictions will contain a probability for each of the three classes — early blight, late blight, and healthy. Let's say the values come out to be something like 0.5 and 0.9; then the class with 0.9 — say that's healthy — is the predicted class. When you do np.argmax on this kind of array, it returns the index of the maximum element: the maximum element is 0.9 and its index is 2. So we just pass the prediction there. And by the way, the predictions are for multiple images — it's a batch, and my batch has only one image — hence I always take the zeroth element. I hope this is clear. Once you have the index, you can use it with the class names: if the index is two, the class name gives you healthy. So I do that here, and my confidence is computed the way we have looked at before, so I'm not going to go into too much detail. And then you return this back.
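Completing the sketch of predict with the prediction, argmax, and response steps:

        predictions = model.predict(img_array)
        print("Predictions:", predictions)  # debug output, visible in the Cloud Functions logs

        predicted_class = CLASS_NAMES[np.argmax(predictions[0])]    # index of the max probability
        confidence = round(100 * float(np.max(predictions[0])), 2)  # float() keeps the JSON response serializable

        return {"class": predicted_class, "confidence": confidence}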
That's it — my cloud function is ready, and this predict function is the one I will be deploying to the cloud. How do I deploy it? Well, there is a command for it. But remember, you need to use the Google Cloud console (the SDK shell) — don't use Git Bash. So in the Google Cloud console, I'll go to this directory, into the gcp directory, and here I will say: gcloud functions deploy predict. What is predict? It's the function name — see this function name? That's this particular thing here. I will deploy that with 512 megabytes of memory, okay, fine. What is my project ID? It's the one here. And enter... all right, for the project ID I need to pass --project.
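Put together, the deploy command might look like the sketch below. The --runtime flag is an assumption (pick the Python runtime matching your code), --allow-unauthenticated corresponds to the "anyone can call it" choice made interactively in the video, and YOUR_PROJECT_ID is a placeholder for your own project ID:

    gcloud functions deploy predict --runtime python38 --trigger-http --memory 512MB --project YOUR_PROJECT_ID --allow-unauthenticated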
It will take some time. What is this — "would you like to enable the API and retry"? Yes. The Google Cloud API needs to be enabled, and if it is not, you will get this prompt; just say yes. We want anyone in the world to be able to call our function. Usually, if you're working in a corporate environment, you will have some kind of authentication, but since this is a study project, we are saying anyone can call the function. Now here we got an error because the Cloud Build API wasn't enabled. Don't worry: you can just copy this particular link, open it, click the "Enable" button, and then retry the command. So I'm going there and clicking "Enable"; it takes a few seconds. The Cloud Build API will be enabled, it will bootstrap your environment, and then you can run the same command again. Perfect, it got enabled.
Now I will go back and run the same command again. It takes some time, you know — up to two or three minutes to deploy — because it will install those modules, prepare a server, and start a process. There is some setup involved, and that's why it takes time. Wonderful, it is deployed now, and here is the URL that we can use to call the function.
You can get an error — it's quite possible — and in case of an error, I would suggest this: go to your home page, and under Error Reporting you will find the errors you faced during deployment. In our case we did not see any error, so let me just refresh this real quick. Here you will see Cloud Functions, and there you will see our function running: it's green, the predict function is up and running. You can click on it and see the activity, the logs, and so on.
We'll use Postman now to call that function. I'm just going to create a new tab. We'll be making a POST request, and the request will use this URL — the one we just copied. You can also get the URL by going into the function details: open the function detail page, then the Trigger tab — yeah, see, same URL; you can click on it and it will copy. In the body, we know it's form-data with the key "file"; we have looked at this before. For the file, I'm going to select some images downloaded from the internet. These files have different X and Y dimensions — different sizes, basically. For example, this one is 800 by 539; I wanted to test my function on an entirely different image.
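If you prefer the command line to Postman, an equivalent request might look like this sketch; the URL follows the generic Cloud Functions pattern, and the image path is a placeholder:

    curl -X POST -F "file=@/path/to/potato_leaf.jpg" https://REGION-YOUR_PROJECT_ID.cloudfunctions.net/predict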
Okay, it's saying it could not handle the request. What happened here? Let's check it out. The function is deployed, okay, so let's check the logs. It looks like it crashed, and here are the crash details: at "predictions = model.predict(img_array)" — 'NoneType' object has no attribute 'predict'. Okay, so this model variable is None; looks like that. The model came out to be None. Oh — because I think I made a mistake: I should have written "if model is None" here. If the model is None, then you download it. Okay, cool.
So I made this code change, and I will deploy it again — just rerun the same command, that's it. You need to wait two more minutes; I know it's kind of annoying, but you've got to wait. Perfect, it is deployed again. I will go to Postman and make a call, hoping it will work this time; if it doesn't, you can go to the logs again and figure out what the error is. The first call usually takes more time, because it's loading the model from the Google bucket before doing the prediction — there is a concept of cold and hot invocations.
Okay, again we faced an error, so let's see what's going on; I'm just going to refresh the logs here. All right, why am I getting the same error again? There's something wrong in the code. Let's see: global model, the model is None, then download_blob with models/potatoes.h5 and the /tmp path... And I want to make sure the bucket is codebasics tf models 2, with models/potatoes.h5 inside it. Here I'll add a couple of print statements — "I will download the model now" and "model downloaded" — because the model is coming out to be None; I think that's what the error is. So in predict the model is None... Oh, I made a big mistake, actually: it should be tf.keras.models.load_model. I think that was the issue.
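For clarity, here are the two fixes from this debugging session in one place:

    # Fix 1: the guard was inverted -- load the model when it is NOT yet set
    if model is None:
        download_blob(BUCKET_NAME, "models/potatoes.h5", "/tmp/potatoes.h5")
        # Fix 2: the loader lives under tf.keras.models, not tf.keras
        model = tf.keras.models.load_model("/tmp/potatoes.h5")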
Okay, rerun the same command again. After waiting for some time, it is up and active again. Let's try one more time. Wonderful — it's working fine now. You can see I'm able to call the Google Cloud Function, and the response came back fine. I will try some other files — this one is for late blight — and let's see what happens with this one. Okay, that worked as well, so overall my function is ready. In the next video we will be using the mobile app to call this function. I have all the instructions on my GitHub page, and all the code — everything — is available, so try it out. If you like this series so far, please give it a thumbs up and share it with your friends. Thank you.