0:00 Any data science project starts with the
0:01 data collection process.
0:03 AtliQ Agriculture has three options
0:06 for collecting data. First, we can use
0:08 ready-made data: we can either buy it
0:09 from a third-party vendor or get it from
0:12 Kaggle, etc. The second option is to have a
0:15 team of data annotators whose job is to
0:19 collect these images from farmers and
0:22 annotate each image as either a
0:25 healthy potato leaf or one with early
0:28 or late blight disease. This team of
0:30 annotators can work with farmers: they can
0:34 go to the farmers' fields and either ask the
0:38 farmers to take the pictures or take the
0:40 pictures themselves,
0:42 and then classify each image, either with the
0:44 farmer's help or by some means, you know, by
0:47 domain knowledge,
0:48 as a
0:51 diseased potato plant versus a
0:53 healthy potato plant. So they can
0:56 manually collect the data. This option is
0:58 expensive; it requires a budget, so you have
1:01 to work with your stakeholders to get
1:03 the budget approved, and it might
1:06 be time-consuming as well.
1:08 The third option is that data scientists can
1:10 write web-scraping scripts to go through
1:12 different websites that have potato images,
1:15 collect those images, and then use
1:17 tools like Doccano; there are many
1:20 tools available that can help
1:22 you annotate the data. So either you
1:24 annotate the images yourself, or
1:26 you collect already-annotated images with
1:30 those web-scraping scripts.
1:31 In this project we are going to use
1:33 ready-made data from Kaggle.
1:35 We will be using this Kaggle dataset
1:38 for our model training; you can click on
1:40 the download button. It's
1:42 around 329 megabytes of data, whatever,
1:46 and it has
1:49 not only the images for potato disease
1:52 classification but
1:53 some tomato and pepper disease
1:56 classification as well. We are going to
1:58 ignore all of that and just focus on
2:00 these three directories. I had already
2:02 downloaded this zip file previously; when
2:05 I right-click and Extract All, I get this
2:08 folder. And this folder
2:10 originally had, you know, the tomato and pepper
2:13 directories as well, but I have
2:15 deleted those directories
2:17 manually, and I ask you to do the same
2:20 thing. Go here,
2:22 delete all the directories
2:24 except these three,
2:26 then copy-paste this directory
2:29 into your project directory. Now, for my
2:31 project directory I have a code folder on my
2:34 C drive, and there I am going to create a new
2:36 folder called potato
2:39 disease.
2:41 I want all of you to practice this
2:44 code along with me. If you just watch the
2:46 video, it's a waste of your time; only if you
2:49 practice as you watch this video
2:52 is it useful. You know, this is the
2:54 best advice that someone can give you,
2:57 okay?
2:58 I have this
3:00 folder ready for my project,
3:02 and in it I'm going to create a new
3:05 folder called training, okay?
3:07 And
3:08 then I'm going to launch Git Bash. I
3:12 have
3:13 Git Bash,
3:14 which allows me to run all the Unix
3:16 commands; you can use the Windows command
3:18 prompt as well.
3:20 And I will run python -m notebook,
3:23 which is going to launch, you know, my
3:25 Jupyter notebook here,
3:27 and in it
3:29 I will locate my potato-disease folder,
3:31 go to training,
3:33 create a new Python 3 notebook, and this will
3:36 be
3:37 my
3:38 model notebook, okay? You can call it
3:41 training or whatever; just give some name to
3:44 this particular
3:46 notebook,
3:47 and then
3:48 we are going to
3:50 import some essential libraries. The
3:52 purpose of this video, actually, is
3:55 to load the dataset into a
4:00 tf.data.Dataset input pipeline, then
4:03 do some data cleaning and
4:05 make our dataset ready for model
4:09 training. So that's the purpose of this
4:11 video.
4:13 So here,
4:15 let me
4:16 import some essential
4:18 modules,
4:22 and then
4:23 the first thing I'm going to do is...
4:26 okay, so we had
4:29 this.
4:33 Okay, so in my Downloads
4:35 folder
4:36 somewhere I had this PlantVillage
4:39 directory, right? So for the PlantVillage
4:40 directory I'm going to
4:43 do Ctrl+C
4:44 and then
4:46 Ctrl+V here. So I copy all those
4:48 images
4:49 into the same folder where I'm running
4:51 this
4:52 notebook, my .ipynb notebook. You
4:55 see, now I have this directory, and if you
4:57 look at all this,
5:00 this is early blight; there are a
5:02 thousand images here, and if you look at
5:04 all these images, you see there
5:06 are these black spots.
5:08 This is showing that this
5:10 potato plant has some kind of disease. If
5:13 you look at the healthy
5:15 plants,
5:16 the healthy leaves look healthy, you know:
5:17 there are no black spots and they look
5:21 pretty good.
5:23 The other one, late blight,
5:27 is a little more
5:29 deteriorated. See, if you look at all these
5:31 leaves, they look pretty horrible. So we
5:34 have all this data here in our directory,
5:38 and
5:40 now I'm going to use TensorFlow's
5:43 dataset API
5:44 to
5:46 load these images into a
5:48 tf.data.Dataset.
5:50 Now, if you don't know about tf.data.Dataset,
5:52 you need to pause this video right now,
5:54 go to YouTube, search for "tensorflow data
5:56 input pipeline", and you will see my video.
5:59 You need to watch this video; it
6:01 will clarify your concepts. Basically,
6:03 what's the purpose of tf.data.Dataset?
6:06 Let's say you have all these images on
6:07 your hard disk, okay, and you want to load
6:11 these images in batches,
6:13 because there could be so many images,
6:14 right? So if you
6:17 read these images in batches into
6:20 this tf.data.Dataset structure, then you can
6:22 do .filter, .map; you can do
6:25 amazing things. So please watch that
6:28 video. I will now assume that your
6:30 concepts around
6:32 tf.data.Dataset are clear, and we can now
6:37 load the data using
6:41 this particular API:
6:44 tf.keras.preprocessing.image_dataset_from_directory.
6:46 Okay, now,
6:49 what does this do? You can search for
6:52 "tensorflow
6:54 image_dataset_from_directory" and it will
6:56 show you the API documentation. So
6:59 you specify a directory first.
7:02 Let's say you have a
7:04 main directory, you have your class
7:06 subdirectories, and these are all the images; this one
7:09 call will load all the images into
7:12 your tensors, basically into your dataset.
7:15 Okay, so the first argument is
7:19 the directory. What is our
7:20 directory? Okay, so let me write it here:
7:24 our directory name
7:26 is
7:28 PlantVillage, correct?
7:32 See, PlantVillage, that's our data
7:34 directory.
7:36 Then
7:39 I will say shuffle=True, so
7:41 that it will just randomly shuffle the
7:44 images as it
7:45 loads them,
7:47 and then I will say image_size.
7:50 Okay, what is my image size?
7:52 So let me go here
7:54 and open these directories, you know; like,
7:57 if you look at the image size, you see
7:59 256 by 256; all of these images are 256
8:05 by 256.
8:06 You can verify that.
8:08 So
8:10 I will say 256 by 256, but I will
8:14 create a couple of
8:15 constants,
8:17 because I need to refer to these
8:19 constants later. So I will say, okay, 256
8:22 by 256 is my image size.
8:25 My batch size: you know, 32 is kind of
8:28 like a standard
8:31 batch size. I will again store that in
8:33 a constant and initialize it here,
8:36 and that's
8:38 pretty much it. I will just
8:41 store the result in a dataset variable,
8:45 okay.
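Putting the pieces above together, here is a minimal, self-contained sketch of the loading step. Since you may not have the Kaggle download in place, it first writes a few blank PNGs into a PlantVillage-style folder tree (the class-folder names here are illustrative stand-ins) so the call has something to read:

```python
import os
import tempfile

import tensorflow as tf

IMAGE_SIZE = 256
BATCH_SIZE = 32

# The video points this call at the real "PlantVillage" folder next to the
# notebook. Here we build a tiny stand-in tree so the snippet runs anywhere.
data_dir = os.path.join(tempfile.mkdtemp(), "PlantVillage")
for class_name in ["Potato___Early_blight", "Potato___Late_blight", "Potato___healthy"]:
    os.makedirs(os.path.join(data_dir, class_name))
    blank = tf.zeros((IMAGE_SIZE, IMAGE_SIZE, 3), dtype=tf.uint8)
    for i in range(4):
        tf.io.write_file(os.path.join(data_dir, class_name, f"{i}.png"),
                         tf.io.encode_png(blank))

# Each subdirectory name becomes a class label; shuffle=True randomizes the
# file order. Newer TensorFlow also exposes this helper as
# tf.keras.utils.image_dataset_from_directory.
dataset = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    shuffle=True,
    image_size=(IMAGE_SIZE, IMAGE_SIZE),
    batch_size=BATCH_SIZE,
)
class_names = dataset.class_names
print(class_names)
```

On the real PlantVillage folder the same call is what reports the 2,152 files belonging to 3 classes.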
8:52 Okay, I did not run this cell. Okay,
8:55 so it loaded 2,152 files
8:58 belonging to three classes. Well, which
9:00 three classes? You can just do
9:03 dataset.class_names, you know. I will
9:07 just store that in a variable so that
9:10 I can
9:11 refer to it later,
9:13 and these are the class names; basically,
9:15 your folder names are your class names.
9:18 See, these are the three folder names,
9:20 and if you look at this, the first one has a
9:22 thousand images,
9:23 the second one has 152,
9:27 the third has a thousand, so
9:29 2,152 in total.
9:33 Now look: if I do
9:35 len(dataset)...
9:40 Do you have any clue why it is showing
9:42 68?
9:43 Pause the video and think
9:45 about it.
9:46 Because every element in the dataset is
9:49 actually a batch of 32 images. So if you
9:52 do 68 times 32,
9:55 you see,
9:57 since the last batch is not full, it comes to
9:58 a little more than 2,152
10:00 images, but you get the idea of
10:03 why this is 68,
10:05 okay?
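The arithmetic behind that 68 can be checked directly:

```python
import math

BATCH_SIZE = 32
n_images = 2152  # total files reported by image_dataset_from_directory

# len(dataset) counts batches, not images: 2152 images in batches of 32.
n_batches = math.ceil(n_images / BATCH_SIZE)
print(n_batches)                    # 68
print(67 * BATCH_SIZE)              # 2144 images fill 67 batches...
print(n_images - 67 * BATCH_SIZE)   # ...so the 68th batch holds only 8
```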
10:08 Let's just
10:09 explore this dataset. I will say: for
10:12 image_batch,
10:17 label_batch in
10:19 dataset.take(1).
10:21 You know, when you do this,
10:23 it gives you
10:25 one batch. One batch is how many images?
10:27 32 images, okay?
10:29 So
10:31 I will print
10:33 just the
10:35 shape of this thing.
10:37 I will say image_batch.shape,
10:39 and for the
10:40 labels,
10:43 label_batch.
10:44 I will just do .numpy(), because
10:48 every element that you get is a tensor,
10:50 so you need to convert it to numpy.
10:52 Again, if you don't know this concept,
10:54 refer to the video that I talked about
10:57 earlier.
10:58 And you find that there are 32 images;
11:00 each image is 256 by 256. Do you know
11:03 what this 3 is?
11:05 You guys are smart:
11:06 it's RGB. It's channels; basically, you
11:09 have RGB channels, so it's
11:12 three channels, and I'm going
11:14 to store that in a constant as well here,
11:17 so that,
11:18 you know, I can refer to it a little
11:22 later.
11:24 And
11:24 the label batch, as you
11:27 already realize, has zeros, ones, and twos. So this is
11:32 class zero, this is one, this is two.
11:34 So there are three classes,
11:36 and if you
11:38 want to print, let's say,
11:39 each individual image... okay,
11:41 forget about this; I will just print the
11:44 first image. This batch has 32 images; I
11:47 will print the first image, and for the first
11:50 image you see it's a tensor.
11:52 If you want to convert a tensor to numpy,
11:54 you do this, and you find all these
11:57 numbers, a 3D array;
11:59 every number is between 0 and 255. The
12:02 color is represented with 0 to 255.
12:05 So that's what this is, okay?
12:08 And again,
12:09 if you do shape on this, you'll find 256
12:12 by 256 by 3 for the first image.
12:15 Got it? All right.
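A sketch of the batch exploration just described, using random stand-in images so it runs without the dataset on disk; the shapes mirror what dataset.take(1) gives you:

```python
import numpy as np
import tensorflow as tf

IMAGE_SIZE = 256
BATCH_SIZE = 32
CHANNELS = 3  # RGB

# Random stand-in for the loaded images, batched the same way
# image_dataset_from_directory batches them.
images = np.random.randint(0, 256, (64, IMAGE_SIZE, IMAGE_SIZE, CHANNELS)).astype("float32")
labels = np.random.randint(0, 3, (64,))
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(BATCH_SIZE)

# .take(1) yields exactly one batch: 32 images plus their 32 labels.
for image_batch, label_batch in dataset.take(1):
    print(image_batch.shape)    # (32, 256, 256, 3)
    print(label_batch.numpy())  # 32 labels in {0, 1, 2}
    first_image = image_batch[0].numpy()  # tensor -> numpy, values 0-255
    print(first_image.shape)    # (256, 256, 3)
```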
12:18 Now let's try to visualize these images.
12:22 Let's say I want to visualize this
12:24 image. I can use
12:27 plt.imshow;
12:30 this is matplotlib, okay,
12:33 matplotlib's pyplot, and when you do imshow
12:36 it expects a 3D array. So what is my 3D
12:38 array? Well,
12:39 my 3D array
12:41 is this; I'm plotting, by the way, the
12:44 first image, okay?
12:46 So .numpy()...
12:52 There is some problem, so what I need to
12:54 do is...
12:56 it is float, so
13:04 I converted it to int, and
13:07 now you should see it working, okay.
13:10 I don't care about all these axis numbers, so
13:12 I will just
13:17 hide them.
13:18 And by the way, every time it is
13:20 shuffling; that's why every time
13:22 you're seeing a different image, because it
13:23 has shuffle randomness to it.
13:26 The axis is off. Now I want to display
13:29 the label, like, what image is that?
13:32 So how do I display that label? Well,
13:35 you can do
13:38 plt.title,
13:39 okay,
13:40 and what is my title? Well,
13:42 my title is the label from label_batch, okay?
13:46 This is my title, but this will give you a
13:48 number: zero, one, or two. How can you get the
13:51 actual class name? Well, we have the class
13:54 names, so you supply the label as an index. I
13:57 hope you are getting the point.
13:59 See: Potato early blight.
14:02 Okay,
14:03 I want to
14:04 display a couple of these images, so I
14:06 will just
14:08 run
14:09 a for loop. I'll say, out of the
14:11 first batch of
14:13 32, I want to display, let's say, 12
14:16 images,
14:17 and instead of the index 0 I will say i.
14:22 Got it,
14:24 okay?
14:24 I hope that is clear, and
14:28 if you want to show multiple images: see, if you
14:31 run this, it's
14:34 just showing one. Why? Because you
14:36 need to make a subplot. So
14:39 subplot(3, 4, ...) is almost like a
14:42 matrix,
14:43 and
14:45 if you do this...
14:51 okay, it shows all the images, but
14:54 the dimensions are kind of messed up, so I
14:56 will just increase the figure size
14:59 to 10 by 10, and
15:01 look, wonderful: it shows me all the
15:03 images beautifully.
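The 3-by-4 grid described above can be sketched like this; the batch and class names here are random stand-ins so the snippet runs on its own:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; in the notebook you can skip this
import matplotlib.pyplot as plt
import numpy as np

# Stand-in for one (image_batch, label_batch) pair and the class names.
class_names = ["Potato___Early_blight", "Potato___Late_blight", "Potato___healthy"]
image_batch = np.random.randint(0, 256, (32, 256, 256, 3))
label_batch = np.random.randint(0, 3, (32,))

plt.figure(figsize=(10, 10))          # bigger canvas so 12 tiles fit nicely
for i in range(12):
    plt.subplot(3, 4, i + 1)          # 3x4 grid of subplots
    plt.imshow(image_batch[i].astype("uint8"))  # imshow wants ints (or 0-1 floats)
    plt.title(class_names[label_batch[i]])      # folder name as the title
    plt.axis("off")                   # hide the tick numbers
plt.savefig("sample_batch.png")
```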
15:05 This is a healthy leaf, this is early
15:08 blight, late blight, and so on.
15:10 Now we are going to split our dataset
15:13 with a train-test split, okay? So the
15:16 dataset length is 68. The actual image count is,
15:18 by the way, 68 times 32, because each
15:21 element is a batch of 32, okay?
15:23 Now,
15:25 what we will do is keep eighty
15:27 percent of the data as training data; then we
15:30 get the remaining twenty percent, right? The
15:32 remaining twenty percent we will
15:35 split in two: one ten-percent split we will
15:38 use for validation, and the remaining ten
15:41 percent will be the test set. So
15:44 this validation set will be used during
15:47 the training process:
15:49 when you run each epoch, after each
15:51 epoch you do validation on this ten
15:53 percent, okay? So, let's say...
15:57 you know, let me define the epochs. I am
16:00 going to
16:01 run
16:02 50 epochs;
16:04 this is trial and error, okay? It could
16:06 be 20 or 30. So we'll
16:09 run, let's say, 50 epochs, and
16:12 at the end of every epoch we use this
16:15 validation dataset to do the validation.
16:17 Once we are done with the 50 epochs, once
16:20 we have the final model,
16:22 then we use this ten-percent dataset, which
16:25 is called the test dataset, to measure the
16:28 accuracy of our model.
16:30 Before we deploy our model into the wild,
16:41 we'll use this test dataset to test
16:45 the performance
16:47 of our
16:48 model.
16:50 How do you get this split? You know, in
16:52 sklearn we have the train_test_split method,
16:56 train_test_split, if you use statistical
16:59 machine learning in sklearn. We have
17:01 that there. We don't have that in TensorFlow, so
17:03 we are going to use dataset.take. When
17:06 you do dataset.take,
17:09 okay,
17:10 let's say 10, it will take the first 10
17:12 elements.
17:14 What is our train size? Okay, so the train
17:17 size is 0.8, because it is 80 percent, okay?
17:22 And
17:23 what is the length of our dataset? 68,
17:27 okay?
17:28 I'm going to say: okay, what is 80 percent of 68?
17:32 Well, 54.
17:33 So I can now
17:35 take the first 54 samples,
17:38 the first 54 batches, actually; each batch is
17:40 32, so it's much simpler, and call it
17:43 the train dataset.
17:49 Okay, so
17:52 that's my train dataset, and if you do
17:55 len,
17:56 I hope you're practicing along with me,
17:58 you find 54.
18:01 And if you do dataset.skip(54), it
18:06 means you are skipping the first 54 and you
18:08 are getting the remaining 14. You know, this
18:11 is like using the slicing operator
18:14 on a Python list: skip is like
18:17 [54:], from 54 onwards, and take is
18:20 like
18:21 [:54],
18:23 the first 54,
18:25 okay? So I hope,
18:27 if you know Python a little bit, this
18:29 should be clear.
18:31 And
18:32 this one: okay, so this will be
18:35 my test dataset. Actually, this is not the
18:37 test dataset; this will give you the
18:38 remaining 20 percent,
18:40 which you need to again split into
18:44 validation and test.
18:46 Correct?
18:47 So
18:48 temporarily I will
18:50 save it as a test dataset, but this is
18:52 not the actual test dataset.
18:54 I have 14 batches, and out of that,
18:57 you know, my validation size is what? Ten
18:59 percent,
19:00 okay?
19:01 And what I'm doing is taking
19:03 ten percent of my actual dataset,
19:07 so I need six samples, basically, from this
19:10 temporary test dataset, and when I do that
19:13 I get
19:15 my
19:18 validation dataset. And if you check, the
19:20 validation dataset is six samples,
19:24 and then you do skip,
19:28 and that will be
19:30 your
19:32 actual test dataset. So we
19:34 just
19:35 split our dataset into validation, test,
19:39 and
19:40 train datasets. Now, the code I wrote was
19:43 using all hard-coded numbers, and,
19:45 you know,
19:46 it's just a prototype. So we
19:48 want to wrap all of this into a nice-
19:51 looking Python function; let's
19:53 define that function.
19:54 The goal of this
19:57 function is
19:59 to take
20:00 the
20:01 TensorFlow dataset,
20:03 okay;
20:04 it should also take your split
20:07 ratios.
20:09 I'm just saying that if you don't supply
20:11 anything, by default it will be 80 percent train,
20:14 10 percent validation,
20:16 10 percent test,
20:19 and I'm also going to add a
20:21 shuffle argument,
20:22 I'll explain why,
20:24 and the shuffle size is 10,000. If you don't
20:26 know about shuffle size, again, watch my
20:29 other video that I referred to; it's very
20:31 important that
20:32 you watch that, okay?
20:36 Now,
20:38 what I will return in the end is
20:42 this;
20:43 so whatever code we wrote before,
20:44 we are just wrapping it in a nice-
20:46 looking Python function, that's it, okay?
20:49 So: what is my dataset
20:53 size, first of all? My dataset
20:55 size is the length of the dataset;
20:58 then
21:00 my train size is
21:05 train_split times that, like 80 percent of it, and I want
21:07 to convert it to an integer, because, see, I
21:09 don't want to get
21:11 float numbers.
21:14 That's my train size, and my validation size
21:19 is this,
21:22 okay?
21:25 All right, now
21:27 my train dataset is basically
21:30 whatever we did previously, which is,
21:33 you know, ds.take(train_size),
21:36 and then when you do ds.skip(train_size),
21:39 you get the remaining 20 percent of the
21:43 samples.
21:44 From that,
21:46 you again take validation_size batches,
21:50 and that's where
21:52 you get your validation dataset,
21:55 and if you do the same thing
21:57 and just do skip here,
22:00 you get your
22:04 test dataset.
22:06 Okay, so I hope that is clear.
22:08 Now,
22:10 we have the shuffle argument, so if
22:14 I want to,
22:16 I can shuffle the dataset,
22:19 so that
22:20 the shuffling happens
22:23 before we split into train and test.
22:28 And seed is just for reproducibility, you
22:30 know: if you use the same seed, every time it
22:32 will give you the same result. It is just a
22:35 seed number; it can be anything, it can be
22:37 5 or 7, anything. Okay, my function is ready.
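The finished splitting function looks roughly like this. One caveat: the reshuffle_each_iteration=False argument is my addition, not from the video; without it, tf.data reshuffles on every pass, so the take/skip partitions can overlap across epochs:

```python
import tensorflow as tf

def get_dataset_partitions_tf(ds, train_split=0.8, val_split=0.1,
                              test_split=0.1, shuffle=True, shuffle_size=10000):
    """Split a batched tf.data.Dataset into train/val/test with take/skip."""
    assert train_split + val_split + test_split == 1
    ds_size = len(ds)
    if shuffle:
        # Fixed seed for reproducibility; reshuffle_each_iteration=False keeps
        # one shuffle order so the three partitions stay disjoint (my addition).
        ds = ds.shuffle(shuffle_size, seed=12, reshuffle_each_iteration=False)
    train_size = int(train_split * ds_size)  # int() drops the float part
    val_size = int(val_split * ds_size)
    train_ds = ds.take(train_size)                # first 80%
    val_ds = ds.skip(train_size).take(val_size)   # next 10%
    test_ds = ds.skip(train_size).skip(val_size)  # remaining 10%
    return train_ds, val_ds, test_ds

# With 68 batches: int(0.8 * 68) = 54 train, int(0.1 * 68) = 6 val, 8 test.
ds = tf.data.Dataset.range(68)  # stand-in for the 68-batch image dataset
train_ds, val_ds, test_ds = get_dataset_partitions_tf(ds)
print(len(train_ds), len(val_ds), len(test_ds))  # 54 6 8
```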
22:40 And now
22:41 I can call my function
22:45 on my dataset. Okay, what is the name of
22:46 my dataset? Here is my dataset:
22:49 you see,
22:50 dataset.
22:52 So we read all the images into this
22:54 dataset, and now we are doing the
22:59 train-test split. Sorry.
23:01 Okay, see, this ran, like,
23:05 it ran so fast,
23:07 and I will just confirm the size of my
23:10 validation
23:12 set, my test set, and so on, and they are
23:17 coming out to be
23:18 what we expect them to be, actually.
23:21 Now, once again, if you have seen my video
23:24 on the TensorFlow data input pipeline, you
23:26 would have understood the concepts
23:28 behind caching
23:29 and prefetching, etc. So that's what we
23:32 are going to do here: for
23:34 the training dataset that we have,
23:36 we will first do caching. This will,
23:40 you know,
23:43 read the image from the disk, and then,
23:45 for the next iteration, when you need the
23:47 same image,
23:48 it will keep that image in memory.
23:52 This improves the performance of your
23:54 pipeline. Again, watch that video, because
23:57 you will get a good understanding of this.
23:59 Shuffle: okay, how shuffle(1000) works,
24:02 again, you need to watch that video. So
24:04 shuffle(1000) will
24:06 again shuffle the images; I think,
24:09 since our dataset is smaller,
24:14 yeah, it can be less than a thousand as
24:15 well,
24:16 but anyway. And then prefetch: you know,
24:19 prefetch helps if you're using a GPU and a CPU.
24:22 If the GPU is busy training,
24:24 prefetch will load the next
24:27 batch from your disk,
24:29 and that will
24:31 improve the performance.
24:33 Actually, if you look at my
24:35 deep learning playlist, I have a
24:37 prefetch and cache
24:39 video here. So,
24:40 you know, this video talks about
24:42 prefetch and cache, and I can quickly
24:45 show you.
24:47 So usually,
24:49 when you are loading
24:51 batches, you know, let's say 32 images at
24:53 a time, and I have a GPU, a Titan RTX: when
24:57 it is training,
25:01 you are not
25:03 using the CPU; while the GPU is training,
25:05 the CPU is sitting idle. Then, when
25:08 it's done, the CPU again
25:10 reads a batch while the GPU sits idle. So,
25:13 let's say, for this example it takes
25:15 around 12 seconds. But if you use
25:17 prefetch and caching, what's going to
25:20 happen is,
25:21 see,
25:23 when you use prefetch,
25:25 while the
25:26 GPU is training on batch one,
25:29 the CPU will be loading the next batch,
25:32 you see;
25:33 so that's your prefetch, basically.
25:36 And your cache is something where, okay, so
25:39 that was prefetch, and cache is basically:
25:43 if you have read an image, so see here,
25:46 I think,
25:49 usually, see,
25:51 you read an image, so this is that
25:53 blue block,
25:54 and during the second epoch you are
25:56 reading the same images again, okay?
25:59 But if you use
26:01 cache,
26:02 here you don't see this blue
26:05 block, so you save the time of reading
26:08 those images. So go to... I will link all
26:10 these videos, by the way, but if you search for
26:12 "codebasics deep learning tutorials", you
26:15 know, these are the two videos I am
26:16 referring to. So, back to the
26:19 tutorial
26:20 once again. So that's what I'm doing, and
26:23 here
26:24 I'm letting TensorFlow determine how
26:27 many batches to prefetch while the GPU
26:30 is training,
26:32 and then
26:34 you can load this here,
26:37 okay?
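The caching pipeline described above, sketched on a stand-in dataset:

```python
import tensorflow as tf

# Stand-in for train_ds; any tf.data.Dataset works the same way.
train_ds = tf.data.Dataset.range(54)

# cache() keeps elements in memory after the first epoch, shuffle(1000)
# re-shuffles every epoch, and prefetch overlaps CPU loading with GPU
# training; AUTOTUNE lets TensorFlow pick how many batches to prefetch.
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=tf.data.AUTOTUNE)

# The pipeline still yields the same 54 elements, just faster in training.
print(sorted(int(x) for x in train_ds))
```

The validation and test datasets get the same cache/shuffle/prefetch chain.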
26:38 Now,
26:39 my validation and test datasets again
26:42 will use the same pattern,
26:45 and
26:46 now
26:47 these datasets are kind of
26:50 optimized for training performance, so my
26:53 training will run fast.
26:56 Now we need to do
26:57 some preprocessing.
26:59 You all know, if you have worked on any
27:01 image processing,
27:03 that the first thing we do is scale:
27:05 the numpy array that we saw
27:07 previously was between 0 and 255, you know,
27:10 it's an RGB scale.
27:12 You want to divide that by 255 so that
27:16 you get a number between 0 and
27:19 1, and the way you do that is with
27:22 tf.keras.Sequential,
27:27 okay?
27:28 And here
27:29 I'm supplying my preprocessing
27:32 pipeline, okay? So
27:35 the way you do rescaling is by using
27:37 this API. Now, don't worry about
27:38 "experimental"; by the way, this is stable.
27:40 I had a conversation with the
27:42 TensorFlow folks on this, actually, and
27:44 they said it is stable.
27:46 So don't worry: 1.0/255, this
27:50 will just rescale the image to the 0-1 range, and we
27:54 will supply this layer when we actually
27:56 build our model,
27:58 okay?
28:00 We need to do one more thing,
28:03 which is resizing. We will resize every
28:05 image to 256 by 256; this will resize
28:09 the image. Now, you will immediately ask
28:11 me: our images are already 256 by 256,
28:14 why do we need to resize them?
28:16 Well,
28:18 this layer that we are creating, okay,
28:21 see, let me create this layer: this
28:23 resize-and-rescale layer will eventually
28:25 go into
28:26 our ultimate model,
28:29 and when we have a trained model and
28:31 it starts predicting,
28:33 if during prediction you supply
28:35 an image which is not 256 by 256, you
28:38 know, some different dimension,
28:40 this layer will take care of
28:42 resizing it. So that's essentially
28:45 the idea here.
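A sketch of the resize-and-rescale layer. The video uses the tf.keras.layers.experimental.preprocessing path; this sketch assumes a newer TensorFlow where the same layers live directly under tf.keras.layers:

```python
import tensorflow as tf

IMAGE_SIZE = 256

resize_and_rescale = tf.keras.Sequential([
    tf.keras.layers.Resizing(IMAGE_SIZE, IMAGE_SIZE),  # any input size -> 256x256
    tf.keras.layers.Rescaling(1.0 / 255),              # 0-255 -> 0-1
])

# A 300x300 input comes out as 256x256 with values in [0, 1].
batch = tf.random.uniform((1, 300, 300, 3), maxval=255.0)
out = resize_and_rescale(batch)
print(out.shape)  # (1, 256, 256, 3)
```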
28:47 Once we have created this layer, one more
28:50 thing we are going to do in terms of
28:52 preprocessing is use data augmentation
28:55 to,
28:57 you know, make our model
28:59 robust.
29:01 Let's say
29:02 you train a model using some images, and
29:04 then, when you
29:06 try predicting,
29:08 at that time you
29:09 supply an image which is rotated, or
29:11 which is different in
29:14 contrast.
29:15 Then your model will not perform well,
29:17 so for that we use the concept of data
29:19 augmentation.
29:20 On YouTube, if you search for "tensorflow
29:22 data augmentation", you will find
29:24 my video; you must watch that video.
29:28 What we do in it is: let's say you have
29:31 this kind of original image in your
29:32 training dataset; you create four new
29:36 training samples out of
29:38 it. You apply different transformations,
29:41 let's say a horizontal flip, or contrast: you
29:43 see, the contrast is
29:45 increased in this image. So you're taking the
29:47 same image, and by applying some filter, some
29:49 contrast change,
29:51 some transformation, you are generating
29:53 new training samples.
29:55 See, here I rotated the images, you see,
29:58 and I will now use all five images for
30:01 my training. So I have one image, I create
30:04 four extra images, and I use all five images
30:06 for my training, so that my model is
30:09 robust: tomorrow, when I start
30:11 predicting in the wild, if someone gives me a
30:14 rotated image, my model knows how to
30:16 predict on it. Okay, so that's
30:19 the idea behind data augmentation, and if
30:22 you have seen that video, TensorFlow
30:24 provides
30:25 beautiful APIs. Again, you are doing the same
30:27 thing,
30:28 where
30:29 you are creating a couple of layers:
30:32 I'm going to apply a random flip and
30:35 some rotation. You know, if you watch that
30:38 video, or the other video, you will get a
30:40 clear understanding. So that's my
30:43 data
30:44 augmentation layer, which I am going to
30:47 store here. And by the way, resize-and-rescale and
30:50 all these
30:51 layers I'm going to use ultimately
30:54 in my
30:55 actual model. So that's all I had for
31:00 going to
31:01 build a model and train it. In this video
31:04 just to summarize we loaded our data
31:07 into tensorflow data set
31:09 we did some visualization then we did
31:12 train test split and then we did some
31:16 preprocessing. We have not completed
31:17 pre-processing we just created layers
31:21 for pre-processing. By the way and we
31:23 will use these layers into our actual
31:26 model.
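To recap the augmentation layer in code — a minimal sketch, again assuming the non-experimental layer names from newer TensorFlow (the video uses the layers.experimental.preprocessing path):

```python
import tensorflow as tf

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.2),  # rotate up to +/-20% of a full turn
])

# Augmentation only kicks in with training=True; at inference these
# layers pass images through unchanged.
batch = tf.random.uniform((4, 256, 256, 3))
augmented = data_augmentation(batch, training=True)
print(augmented.shape)  # (4, 256, 256, 3)
```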
31:27 I hope you're liking it, and I hope you are
31:29 excited to see the next video,
31:32 where we'll actually be training the
31:33 model. It's going to be a lot of fun.
31:35 If you're liking this series, please
31:37 share it with your friends and give it a thumbs
31:41 up;
31:41 it helps me with the YouTube ranking, and
31:43 this project can reach more people who
31:46 are trying to learn. And the thing about
31:48 YouTube is, you know, the learning is free,
31:50 so if you are getting free learning,
31:52 at least you can give it a thumbs up, you
31:53 know. I mean, give it a thumbs down if you
31:56 don't like it, I don't mind. But if you
31:58 give a thumbs down, please leave a comment
32:00 so that I can improve. Thank you for
32:01 watching.