0:02 hey this is anthony tavelos your cloud
0:05 instructor at exam pro
0:07 bringing you a complete study course for
0:10 the google cloud associate cloud
0:13 engineer made available to you here on
0:16 free code camp and so this course is
0:19 designed to help you pass and achieve
0:21 a google-issued certification the way
0:23 we're going to do that
0:26 is to go through lots of lecture content
0:29 follow alongs and using my cheat sheets
0:32 on the day of the exam so you pass
0:34 and you can take that certification and
0:37 put it on your resume or linkedin
0:39 so you can get that cloud job or
0:42 promotion that you've been looking for
0:45 and so a bit about me is that i have 18
0:47 years industry experience
0:50 seven of it specializing in cloud and
0:53 four years of that as a cloud trainer i
0:56 have previously been a cloud and devops
0:58 engineer and i've also published
1:01 multiple cloud courses and i'm a huge
1:05 fan of the cartoon looney tunes as well
1:07 as a coffee connoisseur and so i wanted
1:10 to take a moment to thank viewers like you
1:12 because you make these free courses
1:15 possible and so if you're looking for
1:17 more ways of supporting more free
1:19 courses just like this one
1:22 the best way is to buy the extra study
1:25 material at
1:27 exampro.co in particular for this
1:31 certification you can find it at
1:34 gcp-ace there you can get study notes
1:37 flash cards quizlets
1:39 downloadable lectures which are the
1:42 slides to all the lecture videos
1:45 downloadable cheat sheets which by the
1:48 way are free if you just go sign up
1:50 practice exams
1:52 and you can also ask questions and get
1:55 learning support and if you want to keep
1:57 up to date with new courses i'm working on
2:01 the best way is to follow me on twitter
2:02 at antony's cloud
2:04 and i'd love to hear from you if you
2:06 passed your exam
2:08 and also i'd love to hear on what you'd
2:11 like to see next
2:16 welcome back
2:18 in this lesson i wanted to quickly go
2:22 over how to access the course resources
2:24 now the resources in this course are
2:27 designed to accompany the lessons and
2:30 help you understand not just the theory
2:32 but to help with the demo lessons that
2:35 really drive home the component of
2:36 hands-on learning
2:38 these will include study notes lesson
2:42 files scripts as well as resources that
2:44 are used in the demo lessons
2:46 these files can be found in a github
2:49 repository that i will be including
2:52 below that are always kept up-to-date
2:55 and it is through these files that you
2:56 will be able to follow along and
2:59 complete the demos on your own to really
3:01 cement the knowledge learned
3:04 it's a fairly simple process but varies
3:07 through the different operating systems
3:09 i'll be going through this demo to show
3:12 you how to obtain access through the
3:14 three major operating systems being
3:18 windows mac os and ubuntu linux
3:21 so i'm first going to begin with windows
3:22 and the first step would be to open up
3:24 the web browser
3:26 and browse to this url which i will include below
3:33 and this is the course github repository
3:36 which will house all the course files
3:38 that i have mentioned before
3:40 keeping the course up to date will mean
3:42 that files may need to be changed and so
3:45 as i update them they will always be reflected
3:49 and uploaded here in the repo
3:51 so getting back to it there are two ways
3:54 to access this repository so the easiest
3:57 way to obtain a copy of these files will
3:59 be to click on the clone or download
4:03 button and click on download zip
4:05 once the file has been downloaded you
4:08 can then open it up by clicking on it here
4:12 and here are the files here in downloads
4:14 and this will give you a snapshot of all
4:18 the files and folders as you see them
4:20 from this repository
4:21 now although this may seem like the
4:24 simple way to go this is not the
4:27 recommended method to download as if any
4:30 files have changed you will not be up to
4:32 date with the latest files and will only
4:34 be current from the date at which you've
4:36 downloaded them now the way that is
4:39 recommended is using a source control
4:42 system called git and so the easiest way
4:47 to install it would be to go to this url
4:56 https://git-scm.com
4:57 and this will bring you to the git
4:59 website where you can download the
5:02 necessary software for windows or any
5:04 other supported operating system
5:07 and so i'm going to download it here
5:09 and this should download the latest
5:11 version of git
5:13 for windows and it took a few seconds
5:15 there but it is done
5:17 and no need to worry about whether or
5:19 not you've got the proper version
5:21 usually when you click that download
5:23 button it will download the latest
5:26 version for your operating system
5:28 so i'm going to go over here and open
5:33 you'll get a prompt where you would just
5:35 say yes
5:37 and we're going to go ahead and accept
5:39 all the defaults here this is where it's
5:43 going to install it let's hit next
5:44 these are all the components that
5:46 they're going to be installed let's
5:49 click on next
5:51 and again we're going to go through everything
5:52 everything
5:55 with all the defaults
5:58 and once we've reached
6:00 installing all the defaults it's gonna
6:03 take a couple minutes to install
6:05 and again it took a minute or so
6:07 we're going to just click on next
6:09 and it's going to ask if you want to
6:12 view the release notes and we don't
6:14 really need those so
6:16 we can click on ok
6:19 and simply close that
6:21 and we're just going to go over and see
6:24 if git is installed
6:31 and i'm going to just zoom in here so we
6:33 can see a little better
6:35 and there we go and we are just going to
6:36 type in git
6:38 and as you can see it's been installed
6:41 and so now that we've installed git we
6:43 want to be able to pull down all the
6:45 folders and the files within them from
6:48 the repository to our local system and
6:50 so i'm just going to clear the screen here
6:53 and we're going to do a cd to make sure
6:55 that i'm in my home directory
6:57 and then we're going to make a directory
6:59 called repos
7:01 and in order to do that we're going to
7:04 type mkdir repos and then we're going to move into
7:08 that directory
7:10 with cd repos and so again here we want to
7:14 clone those files that are in the
7:17 repository to our local system
7:19 so in order to do that we're going to
7:22 use the command git clone
7:25 and then we're going to need our
7:28 location of the git repository so let's
7:30 go back to the browser
7:32 and we're going to go over here to clone
7:33 or download
7:37 and here you will see clone with https
7:40 so make sure that this says https and
7:42 you can simply click on this button
7:45 which will copy this to the clipboard
7:47 and then we'll move back to our command prompt
7:50 and paste that in
7:53 and once that's pasted just hit enter
7:56 and it will clone your repository into
7:58 the repos directory and so just to
8:00 verify that we've cloned all the
8:02 necessary files
8:04 we're going to cd into the master directory
8:13 and we're going to do a dir
8:16 and there you have it all of the files
8:19 are cloned exactly as they are here in the repository
8:23 now just as a note in order to keep
8:26 these files up to date we need to run a
8:29 different command which would be a git pull
8:33 and this can be run at any time in order
8:35 to pull down any files or folders that
8:38 have been updated since you did the
8:40 first pull which in this case
8:43 would be cloning of the repository
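and just to recap the whole clone-then-pull workflow from this walkthrough in one place, here is a short shell sketch; note that i'm using a local throwaway repository here to stand in for the real course repo (whose url is linked below), so the paths and folder names are placeholders, not the actual ones:

```shell
# Sketch of the lesson's workflow: clone once, then pull for updates.
# A local throwaway repo stands in for the course repo (placeholder names).
mkdir -p repos && cd repos

# Stand-in "remote": a tiny local repository with a single empty commit.
git init --quiet ../upstream
git -C ../upstream -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -m "initial commit" --quiet

# First time only: clone makes a full local copy of the repository.
git clone ../upstream course-files

# Any time after: pull brings down whatever has changed upstream.
cd course-files
git pull
```

when nothing has changed upstream, git pull simply reports that you are already up to date, which is the same message shown in the demo.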
8:45 again this will provide you with the
8:48 latest and most up-to-date files at any
8:51 given moment in time and in this case
8:53 since nothing has changed i have been
8:56 prompted with a message stating that i'm
8:59 up to date if nothing is changed you
9:01 will always be prompted with this message
9:03 if there were any changes
9:05 it will pull them down to your
9:08 synced local copy and the process for
9:11 windows is completed and is similar in
9:14 mac os and i'll move over to my mac os
9:16 virtual machine and log in
9:19 and once you've logged in just going to
9:20 go over here to the terminal and i'm
9:22 just going to cd
9:24 to make sure i'm in my home directory
9:26 then i'm going to do exactly what we did
9:28 in windows so i'm going to run the
9:30 command mkdir repos
9:35 and create the repos directory and i'm
9:39 going to move in to the repos directory
9:42 and then i'm going to run git
9:44 now for those of you who do not have git
9:46 installed you will be prompted with this
9:48 message to install it and you can go
9:50 ahead and just install you'll be
9:52 prompted with this license agreement you
9:55 can just hit agree
9:57 and depending on your internet
9:59 connection this will take a few minutes
10:02 to download and install so as this is
10:04 going to take a few minutes i'm going to
10:06 pause the video here and come back when
10:08 it's finished installing
10:10 okay and the software was successfully installed
10:14 so just to do a double check i'm going
10:15 to run git
10:17 and as you can see it's been installed
10:19 so now that we have git installed we
10:22 want to clone all the directories and
10:24 the files from the github repository to
10:27 our local repos folder so i'm going to
10:29 open up my browser and i'm going to
10:32 paste my github repository url right here and
10:38 you'll see the clone button over here so
10:40 we're going to click on this button
10:43 and here we can download zip but like i
10:45 said we're not going to be doing that
10:47 we're going to go over here
10:50 and copy this url for the github
10:54 repository again make sure it says https
10:56 and we're going to copy this to our clipboard
10:59 and we're going to go back to our terminal
11:03 and we are going to
11:06 run the command git space clone
11:15 and as you can see here i've cloned the
11:17 repository and all the files and folders
11:20 within it and so as is my best practice
11:22 i always like to verify that the files
11:25 have been properly cloned and so i'm
11:27 going to run the command ls
11:30 just to make sure and go into the master directory
11:34 and do a double check and as you can see
11:36 the clone was successful as all the
11:38 files and folders are here and again to
11:41 download any updates to any files or
11:43 directories we can simply run the
11:47 command git pull and because we've
11:48 already cloned it it's already up to
11:51 date and so the process is going to be
11:53 extremely similar on linux so i'm going
11:56 to simply move over to my linux machine
11:58 and log in
12:00 i'm going to open up a terminal
12:01 and i'm going to make my terminal a
12:04 little bit bigger for better viewing and
12:06 so like the other operating systems i
12:08 want to clone all the files and
12:12 directories from the github repository
12:14 to my machine and so i'm going to cd
12:16 here to make sure i'm in my home
12:18 directory and like we did before we want
12:21 to create a directory called repos so
12:25 i'm going to run the command mkdir space
12:28 repos and we're going to
12:30 create the repos directory we're now
12:33 going to move into the repos directory
12:35 and here we're going to run the git command
12:38 and because git is not installed on my
12:40 machine i've been prompted with the
12:42 command in order to install it so i'm
12:45 going to run that now so the command is
12:46 sudo apt install git
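so written out together, the install-and-verify step on ubuntu looks like this; this is a sketch for debian and ubuntu style systems, and sudo will prompt for your account password:

```shell
# Install git from the distribution's package repositories
# (Debian/Ubuntu; sudo prompts for your password).
sudo apt install git

# Verify the install by printing the version.
git --version
```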
12:57 and i'm going to enter in my password
13:06 and just to verify i'm going to run the
13:08 command git and i can see here it's been
13:10 installed so now i'm going to go over
13:12 here to my browser and i'm going to
13:15 paste in the url to my repository and
13:16 over here we'll have the same clone
13:19 button and when i click on it i can get
13:21 the url for the github repository in
13:24 order to clone it again make sure before
13:27 you clone that this says https
13:30 if it doesn't say https you'll have the
13:32 option of clicking on a button that will
13:35 allow you to do so once it says https
13:37 then you can simply copy this url to
13:40 your clipboard by clicking on the button
13:42 and then move over back to the terminal
13:44 and we are going to clone this
13:49 repository by typing in the git clone command
13:53 along with the url of the repository
13:56 and when we hit enter it'll clone it
13:58 right down to our directory so i'm just
14:00 going to move into the master directory
14:06 and again they're all here so again if
14:09 you're looking to update your repository
14:12 with any new updated changes you can
14:13 simply run
14:16 the git pull command
14:18 to update those files
14:20 and so that's the linux setup so you
14:23 have a local copy of the lesson files
14:25 now there's just one more thing that i
14:27 highly recommend you do and to
14:29 demonstrate it i'm going to move back
14:31 over to my windows virtual machine now
14:33 i'm going to open up the web browser again
14:37 open up a new tab
14:46 https://code.visualstudio.com
14:53 and i'll make sure that the url is in
14:55 the text below there is a version of
14:58 this code editor available for windows
15:01 mac os and linux you can simply click on
15:03 this drop down and you'll find the link
15:05 to download it for your operating system
15:07 but in most cases it should
15:10 automatically show the correct version
15:17 and it should start downloading
15:19 automatically and you should be able to open it right up
15:29 now the reason behind me asking you to
15:32 install this utility is for editing code
15:34 of different sorts
15:36 whether you're adjusting yaml or python
15:39 documents for deployment manager
15:41 or even managing scripts
15:43 a code editor will give you the ease of
15:47 use when it comes to managing editing
15:49 and even syntactical highlighting of
15:51 code as shown here below
15:53 it will highlight the code to make it
15:55 easier to understand now if you have
15:57 your own editor that you would prefer to
16:00 use go ahead and use that but for those
16:02 that don't my recommendation will be to
16:05 use visual studio code so to install
16:07 visual studio code we're just going to
16:09 accept this license agreement
16:11 and then we're going to click on next
16:13 and we're just going to follow all the defaults
16:22 to install it
16:24 it's going to take a minute or two
16:26 and for those running windows you want
16:28 to make sure that this box is checked
16:31 off so that you can launch it right away
16:36 another recommendation would be to go
16:39 over here to the task bar so you can pin
16:42 it in place so that it's easier to find
16:45 and so now you have access to all the
16:47 resources that are needed for this course
16:49 but with that that's everything that i
16:51 wanted to cover for this lesson so you
16:53 can now mark this lesson as complete and
16:55 let's move on to the next one
17:02 welcome back and in this lesson i wanted
17:05 to discuss the various certifications
17:07 available for google cloud as this
17:10 number keeps on growing and i am looking
17:12 to keep this lesson as up to date as
17:14 possible so with that being said let's
17:16 dive in
17:18 now google cloud has released a slew of
17:21 certifications in many different areas
17:23 of expertise as well as different
17:25 experience levels
17:27 now there are two levels of difficulty
17:29 when it comes to the google cloud
17:31 certifications starting off with the
17:33 associate level we see that there is
17:36 only the one certification which is the
17:38 cloud engineer the associate level
17:41 certification is focused on the
17:44 fundamental skills of deploying
17:46 monitoring and maintaining projects on
17:49 google cloud this is a great starting
17:52 point for those completely new to cloud
17:54 and google recommends the associate
17:57 cloud engineer as the starting point to
18:00 undergoing your certification journey
18:02 this was google cloud's very first
18:05 certification and to me was the entry
18:07 point of wanting to learn more as an
18:10 engineer in cloud in my personal opinion
18:13 no matter your role this certification
18:15 will cover the general knowledge that is
18:18 needed to know about starting on google
18:21 cloud and the services within it which
18:24 is why i labeled it here as the
18:26 foundational level course i also
18:28 consider this the stepping stone
18:30 into any other professional level
18:33 certifications which also happens to be
18:36 a recommended path by google with a
18:39 great course and some dedication i truly
18:41 believe that anyone with even a basic
18:44 skill level in i.t should be able to
18:46 achieve this associate level
18:48 certification now it is recommended from
18:51 google themselves that prior to taking
18:54 this exam that you should have over six
18:56 months experience building on google
18:59 cloud for those of you with more of an
19:01 advanced background in google cloud or
19:04 even other public clouds this
19:06 certification should be an easy pass as
19:08 it covers the basics that you should be
19:10 familiar with adding a google twist to
19:13 it at the time of this lesson this exam
19:17 is two hours long and the cost is 125 us
19:20 dollars the exam is a total of 50
19:22 questions which consists of both
19:25 multiple choice and multiple answer
19:27 questions each of the questions contain
19:30 three to four line questions with single
19:33 line answers that by the time you finish
19:35 this course you should have the
19:37 confidence to identify the incorrect answers
19:40 and be able to select the right answers
19:42 without a hitch moving into the
19:44 professional level certifications there
19:47 are seven certifications that cover a
19:50 variety of areas of specialty depending
19:52 on your role you might want to take one
19:55 or maybe several of these certifications
19:57 to help you gain more knowledge in
20:00 google cloud or if you love educating
20:02 yourself and you're really loving your
20:05 journey in gcp you will probably want to
20:07 consider pursuing them all in my
20:10 personal opinion the best entry point
20:12 into the professional level would be the
20:16 cloud architect it is a natural step up
20:18 from the associate cloud engineer and it
20:21 builds on top of what is learned through
20:24 that certification with a more detailed
20:27 and more thorough understanding of cloud
20:29 architecture that is needed for any
20:31 other certification there is some
20:34 overlap from the cloud engineer which is
20:36 why in my opinion doing this
20:39 certification right after makes sense it
20:41 also brings with it the ability to
20:45 design develop and manage secure
20:47 scalable and highly available dynamic
20:50 solutions it is a much harder exam and
20:53 goes into great depth on services
20:55 available the professional cloud
20:58 architect is a great primer for any
21:00 other professional level certification
21:03 and can be really helpful to solidify
21:06 the learning that is needed in any other
21:08 technical role i find it the most common
21:11 path that many take who look to learn
21:14 google cloud which is why i personally
21:16 recommend it to them and at the time of
21:18 this lesson it also holds the highest
21:20 return on investment
21:23 due to the highest average wage over any
21:26 other current cloud certification in the
21:28 market google recommends over three
21:31 years of industry experience including
21:34 one year on google cloud before
21:36 attempting these exams with regards to
21:39 the exams in the professional tier they
21:41 are much harder than the associate level
21:43 and at the time of this course they are two
21:47 hours long and the cost is 200 us
21:49 dollars these exams are a total of 50
21:52 questions which consists of both
21:54 multiple choice and multiple answer
21:56 questions it's the same amount of
21:59 questions with the same amount of time
22:02 but it does feel much harder each of the
22:04 questions contain four to five line
22:07 questions with one to three line answers
22:10 it's definitely not a walk in the park
22:12 and will take some good concentration
22:14 and detailed knowledge on google cloud
22:17 to solidify a pass after completing the
22:20 cloud architect certification depending
22:22 on your role my suggestion would be to
22:24 pursue the areas that interest you the
22:27 most to make your journey more enjoyable
22:30 for me at the time i took the security
22:32 engineer exam as i am a big fan of
22:34 security and i knew that i would really
22:37 enjoy the learning and make it more fun
22:38 for me this is also a great
22:41 certification for those who are looking
22:44 to advance their cloud security knowledge
22:46 on top of any other security
22:49 certifications such as the security plus
22:51 or cissp
22:53 now others may be huge fans of
22:55 networking or hold other networking
22:58 certifications such as the ccna and so
23:00 obtaining the network engineer
23:03 certification might be more up your
23:04 alley and give you a better
23:07 understanding in cloud networking now if
23:09 you're in the data space you might want
23:12 to move into the data engineer exam as
23:14 well as taking on the machine learning
23:17 engineer exam to really get some deeper
23:20 knowledge in the areas of big data
23:22 machine learning and artificial
23:24 intelligence on google cloud now i know
23:27 that there are many that love devops me
23:29 being one of them and really want to dig
23:32 deeper and understand sre and so they
23:35 end up tackling the cloud developer and
23:38 cloud devops engineer certifications so
23:40 the bottom line is whatever brings you
23:43 joy in the area of your choosing start
23:46 with that and move on to do the rest all
23:48 the professional certifications are
23:50 valuable but do remember that they are
23:53 hard and need preparation for study last
23:56 but not least is the collaboration
23:58 engineer certification and this
24:01 certification focuses on google's core
24:04 cloud-based collaboration tools that are
24:06 available in g suite or what is now
24:10 known as google workspaces such as gmail
24:13 drive hangouts docs and sheets now the
24:15 professional level collaboration
24:18 engineers certification dives into more
24:20 advanced areas of g suite such as mail
24:23 routing identity management and
24:25 automation of it all using tools
24:27 scripting and apis
24:30 this certification is great for those
24:32 looking to build their skill set as an
24:35 administrator of these tools but gives
24:37 very little knowledge of google cloud
24:40 itself so before i move on there is one
24:42 more certification that i wanted to
24:44 cover that doesn't fall under the
24:47 associate or professional certification
24:49 levels and this is the google cloud
24:51 certified fellow program
24:53 now this is by far
24:55 one of the hardest certifications to
24:58 obtain as there are very few certified
25:00 fellows at the time of recording this
25:02 lesson it is even harder than the
25:04 professional level certifications and
25:06 this is due to the sheer level of
25:09 competency with hybrid multi-cloud
25:12 architectures using google cloud anthos
25:14 google's recommended experience
25:17 is over 10 years with a year of
25:19 designing enterprise solutions with
25:22 anthos then a four-step process begins
25:25 first step is to receive a certified
25:27 fellow invitation from google and once
25:30 you've received that invitation then you
25:32 need to submit an application with some
25:34 work samples that you've done
25:37 showing google your competency in hybrid
25:40 multi-cloud once that is done the third
25:43 step is a series of technical hands-on
25:46 labs that must be completed and is a
25:47 qualifying assessment that must be
25:50 passed in order to continue and after
25:52 all that the last step is a panel
25:55 interview done with google experts in
25:57 order to assess your competency of
25:59 designing hybrid and multi-cloud
26:02 solutions with anthos so as you can see
26:05 here this is a very difficult and highly
26:07 involved certification process
26:10 to achieve the title of certified fellow
26:12 this is definitely not for the faint of
26:15 heart but can distinguish you as a
26:18 technical leader in anthos and a hybrid
26:21 multi-cloud expert in your industry now
26:23 i get asked many times whether or not
26:25 certifications hold any value are they
26:28 easy to get are they worth more than the
26:30 paperwork that they're printed on and
26:32 does it show that people really know how
26:35 to use google cloud and my answer is
26:37 always yes as the certifications hold
26:40 benefits beyond just the certification
26:43 itself and here's why targeting yourself
26:45 for a certification gives you a
26:47 milestone for learning something new
26:49 with this new milestone it allows you to
26:52 put together a study plan in order to
26:54 achieve the necessary knowledge needed
26:57 to not only pass the exam but the skills
26:59 needed to progress in your everyday
27:02 technical role this new knowledge helps
27:05 keep your skills up to date therefore
27:07 making you current instead of becoming a
27:10 relic now having these up-to-date skills
27:13 will also help advance your career
27:15 throughout my career in cloud i have
27:17 always managed to get my foot in the
27:20 door with various interviews due to my
27:22 certifications it gave me the
27:24 opportunity to shine in front of the
27:27 interviewer while being able to
27:29 confidently display my skills in cloud
27:31 it also allowed me to land the jobs that
27:34 i sought after as well as carve out the
27:37 career path that i truly wanted on top
27:39 of landing the jobs that i wanted i was
27:42 able to achieve a higher salary due to
27:45 the certifications i had i have doubled
27:47 and tripled my salary since i first
27:49 started in cloud all due to my
27:52 certifications and i've known others
27:54 that have obtained up to five times
27:56 their salary because of their
27:58 certifications now this was not just
28:00 from achieving the certification to put
28:03 on my resume and up on social media but
28:05 from the knowledge gained through the
28:08 process and of course i personally feel
28:11 that having your skills constantly up to date
28:14 advancing your career and getting the
28:17 salary that you want keeps you motivated
28:20 to not only get more certifications but
28:23 continue the learning process i am and
28:26 always have been a huge proponent of
28:29 lifelong learning and as i always say
28:31 when you continue learning you continue
28:34 to grow so in short google cloud
28:37 certifications are a great way to grow
28:39 and so that about covers everything that
28:41 i wanted to discuss in this lesson so
28:43 you can now mark this lesson as complete
28:45 and i'll see you in the next one
28:51 welcome back
28:53 and in this lesson i'm going to be
28:54 talking about the fictitious
28:57 organization called bow tie inc that i
29:00 will be using throughout the course
29:01 now while going through the
29:04 architectures and demos in this course
29:07 together i wanted to tie them to a real
29:09 world situation
29:11 so that the theory and practical
29:14 examples are easy to understand
29:16 tying it to a scenario is an easy way to
29:19 do this as well it makes things a lot
29:21 more fun
29:23 so the scenario again
29:26 that i will be using is based on bow tie inc
29:29 so before we get started with the course
29:31 i'd like to quickly run through the scenario
29:34 and don't worry it's going to be very
29:38 high level and i will keep it brief
29:40 so bow tie inc is a bow tie
29:42 manufacturing company that designs and
29:45 manufactures bow ties within their own factories
29:49 they also hold a few retail locations
29:52 where they sell their bow ties as well
29:55 as wholesale to other tie and men's
29:57 fashion boutiques and department stores
29:58 across the globe
30:00 being in the fashion business
30:04 they mainly deal with commerce security
30:06 and big data sets
30:09 bow tie inc is a global company and they
30:12 are headquartered in montreal canada
30:16 they employ about 300 people globally
30:18 with a hundred of them being in sales
30:20 alone to support both the brick and
30:24 mortar stores and wholesale branches
30:25 there are many different departments to
30:28 the company that make it work such as
30:30 in-store staff i.t
30:33 marketing for both in-store and online sales
30:38 manufacturing finance and more
30:40 the types of employees that work in bow
30:43 tie inc vary greatly due to the various
30:46 departments and consists of many people
30:48 such as sales for both in-store and
30:52 wholesale managers that run the stores
30:54 and sewers that work in the
30:55 manufacturing plant
30:58 and many more that work in these various departments
31:01 the business has both offices
31:03 and brick and mortar stores
31:05 in montreal
31:08 london and los angeles
31:10 now due to the thrifty mindset of
31:12 management concentrating all their
31:15 efforts on commerce and almost none in
31:17 technical infrastructure has caused
31:20 years of technical debt
31:23 and is now a complete disaster
31:25 within the brick and mortar location
31:28 there are two racks with a few
31:30 servers and some networking equipment
31:34 the global inventory of bow ties are
31:37 updated upon sales in both stores
31:40 and wholesale as well as new stock that
31:43 has been manufactured from the factory
31:45 there are point-of-sale systems in each
31:47 store or office location
31:50 these systems are all connected to each
31:54 other over a vpn connection in order to
31:57 keep updates of the inventory fresh
31:59 all office and store infrastructure are
32:01 connected to each other and the montreal headquarters
32:05 and the point of sale systems and kiosk
32:08 systems are backed up to tape in the
32:10 montreal headquarters as well and like i
32:12 said before
32:15 management is extremely thrifty but they
32:17 have finally come to the realization
32:19 that they need to start spending money
32:22 on the technical infrastructure in order
32:25 to scale so diving into a quick overview
32:28 of exactly what the architecture looks
32:30 like the head office is located in
32:32 montreal canada
32:35 it has its main database for the crm and
32:37 point-of-sale systems
32:40 as well as holding the responsibility of
32:42 housing the equipment for the tape
32:45 backups the tapes are then taken off
32:48 site within montreal by a third-party
32:49 company for storage
32:52 the company has two major offices one in
32:55 london covering the eu
32:57 and the other in the west coast us
33:00 in los angeles these major offices
33:03 are also retail locations that consume
33:06 i.t services from the headquarters in
33:08 montreal again being in the fashion
33:11 business bowtie inc employs a large
33:13 number of sales people and the managers
33:15 that support them
33:16 these employees operate the
33:18 point-of-sale systems and are
33:21 constantly looking to have the website
33:24 sales and the inventory updated at all
33:27 times each salesperson has access to
33:31 email and files for updated forecasts on
33:34 various new bowtie designs
33:36 most sales people communicate over a
33:40 voice over ip phone and chat programs
33:42 through their mobile phone the managers
33:45 also manually look at inventory on
33:46 what's been sold
33:48 versus what's in stock
33:51 to predict the sales for stores in
33:52 upcoming weeks
33:54 this will give manufacturing a head
33:57 start to making more bow ties for future sales
34:00 now whatever implementations we
34:02 discuss throughout this course
34:04 will need to support the day-to-day
34:07 operations of the sales people and the managers
34:10 and because of the different time zones
34:11 in play
34:13 the back-end infrastructure needs to be available
34:18 24 hours a day seven days a week
34:21 any downtime will impact updated
34:25 inventory for both online sales as well
34:28 as store sales at any given time
34:29 now let's talk about the current
34:32 problems that the business is facing
34:36 most locations hold on premise hardware
34:38 that is out of date and also out of warranty
34:41 the business looked at extending this
34:45 warranty but it became very costly as well
34:47 management is on the fence about whether
34:50 to buy new on-premise hardware or just
34:53 move to the cloud they were told that
34:55 google cloud is the way to go when it
34:57 comes to the retail space and so are
34:59 open to suggestions
35:02 yet still very wary now when it comes
35:05 to performance there seems to be a major
35:08 lag from the vpn connecting from store
35:09 to store
35:12 as well as the head office that's
35:15 responsible for proper inventory
35:17 thus slowing down the point of sale
35:20 systems and to top it all off
35:23 backups take an exorbitant amount of time
35:26 and consume a lot of bandwidth with the
35:29 current vpn connection now bowtie inc
35:31 has always struggled with the lack of
35:35 highly available systems and scalability
35:37 due to cost of new hardware this is
35:40 causing extreme stress for online
35:42 e-commerce whenever a new marketing
35:44 campaign is launched
35:46 as the systems are unable to keep up
35:48 with the demand
35:50 looking at the forecast for the next two
35:52 quarters the business is looking to open
35:55 up more stores in the eu
35:57 as well as in the us
35:59 and with the current database in place
36:01 providing very little in the way of high
36:04 availability or scalability
36:06 there is a major threat of the main
36:08 database going down
36:09 now when it comes to assessing the
36:12 backups the tape backups have become
36:15 very slow especially backing up from
36:18 london and the off-site storage costs
36:21 continuously go up every year
36:23 the backups are consuming a lot of
36:25 bandwidth and are starting to become the
36:26 major pain point
36:29 for connection issues between locations
36:32 on top of all these issues the small it
36:35 staff that is employed have outdated i.t
36:38 skills and so there is a lot of manual
36:40 intervention that needs to be done
36:43 to top it all off there is all the running around
36:45 that is necessary to keep the outdated
36:47 infrastructure alive
36:50 management is also now pushing to open
36:53 new stores to supply bow ties globally
36:55 given the ever-growing demand
36:57 as well as being able to supply the
37:00 demand of bow ties online through their
37:02 e-commerce store
37:04 now these are some realistic yet common
37:07 scenarios that come up in reality for a
37:09 lot of businesses that are not using
37:11 cloud computing
37:14 and throughout the course we will dive
37:16 into how google cloud can help ease the pain
37:20 of these current ongoing issues
37:22 now at a high level what the
37:24 business wants to achieve and what the
37:26 favorable results are
37:29 they are all interrelated issues so
37:31 bowtie inc requires a reliable and
37:33 stable connection between all the
37:36 locations of the stores and offices
37:38 so that sales
37:41 inventory and point-of-sale systems are
37:44 quick and up-to-date at all times
37:46 this will also allow all staff in these
37:50 locations to work a lot more efficiently
37:52 with a stable and reliable connection in place
37:56 backups should be able to run smoothly
37:59 and also eliminate the cost of off-site backup
38:02 not to mention the manpower and
38:04 infrastructure involved to get the job done
38:08 while scaling up offices and stores due
38:10 to increase in demand
38:11 the business should be able to deploy
38:14 stores in new regions using pay as you
38:16 go billing while also meeting the
38:19 requirements and regulations when it
38:23 comes to gdpr and pci
38:25 this would also give the business
38:28 flexibility of having a disaster
38:30 recovery strategy in place
38:32 in case there was a failure of the main
38:35 database in montreal now as mentioned
38:38 before the business is extremely thrifty
38:40 especially when it comes to spend on it infrastructure
38:44 and so the goal is to have the costs as
38:46 low as possible yet having the
38:50 flexibility of scaling up when needed
38:52 especially when new marketing campaigns
38:55 are launched during high demand sales periods
38:58 this would also give bowtie inc the
39:01 flexibility of analyzing sales ahead of time
39:06 using real-time analytics and catering
39:08 to exactly what the customer is demanding
39:12 thus making inventory a lot more
39:14 accurate and reducing costs in
39:16 manufacturing items that end up going on
39:19 sale and costing the company money in
39:20 the end
39:22 finally when it comes to people
39:25 supporting infrastructure automation is key
39:28 removing manual steps in a lot of the
39:30 processes can reduce the amount of
39:33 manpower needed to keep the
39:34 infrastructure alive
39:37 and especially will reduce downtime when
39:39 disaster arises
39:41 putting automation in place will also
39:44 reduce the amount of tedious tasks that
39:47 all departments have on their plate
39:49 so that they can focus on more important
39:50 business needs
39:53 now that's the scenario at a high level
39:56 i wanted to really emphasize that this
39:58 is a typical type of scenario that you
40:00 will face as a cloud engineer and a
40:02 cloud architect
40:04 the key to this scenario is the fact
40:06 that there are areas that are lacking in detail
40:10 and areas that are fully comprehensible
40:13 and this will trigger knowing when and
40:16 where to ask relevant questions
40:19 especially in your day-to-day role as an engineer
40:22 it will allow you to fill the gaps so
40:24 that you're able to figure out what
40:26 services you will need and what type of
40:28 architecture to use
40:31 this is also extremely helpful when it
40:32 comes to the exam
40:35 as in the exam you will be faced with
40:37 questions that pertain to real life
40:39 scenarios that will test you in a
40:42 similar manner knowing what services and
40:44 architecture to use based on the
40:46 information given
40:48 will always give you the keys to the
40:51 door with the right answer and lastly
40:53 when it comes to the demos
40:55 this scenario used throughout the course
40:58 will help put things in perspective
41:00 as we will come to resolve a lot of
41:02 these common issues
41:04 real world scenarios can give you a
41:07 better perspective on learning as it is
41:09 tied to something that makes it easy to comprehend
41:13 and again bow tie inc is the scenario
41:15 that i will be using throughout the
41:18 course to help you grasp these concepts
41:20 so that's all i have to cover this
41:23 scenario so you can now mark this lesson
41:25 as complete and let's move on to the
41:26 next one [Music]
41:29 [Music]
41:32 hey this is anthony tavelos and what i
41:35 wanted to show you here is where you can
41:38 access the practice exam on the exam pro platform
41:41 so once you've signed up for your
41:44 account you can head on over to the course
41:47 and you can scroll down to the bottom of
41:50 the curriculum list and you will see the
41:52 practice exams here at the bottom
41:54 now just as a quick note
41:56 you should generally not attempt the
42:00 practice exam unless you have completed
42:02 all the lecture content including the
42:05 follow alongs as once you start to see
42:08 those questions you will get an urge to
42:10 start remembering these questions
42:13 and so i always recommend to use the
42:17 practice exam as a serious attempt
42:19 and not just a way to get to the final
42:22 exam at a faster pace taking your time
42:23 with the course
42:26 will allow you to really prevail through
42:29 these practice exams and allow for a
42:32 way better pass rate on the final exam
42:35 looking here we can see two practice
42:38 exams with 50 questions each and so i
42:41 wanted to take a moment here and dive
42:43 into the practice exam
42:44 and show you what some of these
42:47 questions will look like and so clicking
42:49 into one of these exams we can get right
42:52 into it and so as you can see i've
42:55 already started on practice exam one and
42:57 so i'm going to click into that right
43:00 now and as you can see the exam is
43:02 always timed and in this case will be
43:06 120 minutes for this specific exam there
43:09 are 50 questions for this practice exam
43:11 and you will see the breakdown in the
43:12 very beginning
43:14 of the types of questions you will be
43:17 asked now for the google cloud exams at
43:20 the associate level they are usually
43:22 structured in a common format
43:25 they generally start with one or two
43:27 sentences which will typically
43:30 represent a scenario followed by the
43:33 question itself this question tends to
43:36 be brief and to the point immediately
43:39 following that you will be presented
43:41 with a number of answers
43:44 usually four or five in nature and can
43:47 sometimes be very very technical as they
43:50 are designed for engineers like asking
43:53 about which gcloud commands to use to
43:57 execute in a given scenario as well as
43:59 theoretical questions that can deal with
44:02 let's say best practices or questions
44:05 about the specific services themselves
44:08 now these answers will come in two
44:11 different styles either multi-choice or
44:14 multi-select the multi-choice is usually
44:17 about identifying the correct answer
44:20 from a group of incorrect or less
44:23 correct answers whereas the multi-select
44:26 will be about choosing multiple correct
44:30 solutions to identify the answer as well
44:32 for this associate exam the overall
44:36 structure is pretty simple in nature and
44:39 typically will be either right or wrong
44:41 now sometimes these questions can get
44:44 tricky where there are multiple possible answers
44:47 and you will have to select the most
44:49 suitable ones
44:51 now although most of these types of questions
44:55 usually show up in the professional exam
44:58 they can sometimes poke their heads into
45:01 the associate and so a great tactic that
45:04 i always like to use is to immediately
45:07 identify what matters in the question itself
45:10 and then to start ruling out any of the
45:13 answers that are wrong and this will
45:15 allow you to answer the question a lot
45:18 more quickly and efficiently as it will
45:21 bring the more correct answer to the
45:24 surface as well as making the answer a
45:26 lot more obvious
45:29 and making the entire question less
45:31 complex so for instance with this
45:34 question here you are immediately asked
45:37 about google's recommended practices
45:40 when it comes to using cloud storage as
45:43 backup for disaster recovery and this
45:46 would be for a specific storage type and
45:48 so quickly looking at the answers you
45:51 can see that standard storage and
45:53 nearline storage will not be part of the
45:56 answer and so that will leave coldline
45:59 storage or archive storage as the two
46:00 possible choices
46:03 for the answer of this question and so
46:06 these are the typical techniques that i
46:07 always like to use
46:10 for these exams and so provided that
46:12 you've gone through all the course
46:14 content you will be able to answer these
46:17 technical questions with ease and
46:20 following the techniques i've just given
46:22 and applying them to each question
46:25 can really help you in not only this
46:28 practice exam but for the final exam
46:31 landing you a passing grade getting you certified
46:33 [Music]
46:38 [Music]
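As an aside, the elimination tactic from the storage-class question above maps onto the documented minimum storage durations of the four Cloud Storage classes. Here is a rough Python sketch of that reasoning — the helper function and its name are my own illustration, not Google tooling, though the durations match the documented tiers:

```python
# Illustrative sketch only: class names and minimum storage durations
# (in days) match Google Cloud Storage's documented tiers, but the
# helper itself is invented for this walkthrough.
GCS_CLASSES = {
    "standard": 0,    # frequently accessed data, no minimum duration
    "nearline": 30,   # data accessed about once a month
    "coldline": 90,   # data accessed about once a quarter
    "archive": 365,   # data accessed less than once a year (DR backups)
}

def pick_storage_class(days_between_accesses: int) -> str:
    """Pick the 'coldest' (cheapest at rest) class whose minimum
    storage duration still fits the expected access interval."""
    best = "standard"
    for name, min_days in GCS_CLASSES.items():
        if min_days <= days_between_accesses and min_days >= GCS_CLASSES[best]:
            best = name
    return best
```

So a disaster-recovery backup touched perhaps once a year lands on archive, while data read every week stays on standard — the same reasoning that rules out the first two answers in the question.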
46:40 welcome back and in this section i
46:43 wanted to really hone in on the basics
46:46 of cloud computing the characteristics
46:48 that make it what it is
46:50 the different types of computing
46:52 and how they differ from each other as
46:55 well as the types of service models now
46:58 in this lesson i wanted to dive into the
47:01 definition of cloud computing and the
47:04 essential characteristics that define it
47:06 now for some advanced folk watching this
47:08 this may be a review
47:09 and for others
47:12 this may give you a better understanding
47:15 of what cloud is now cloud is a term
47:17 that is thrown around a lot these days
47:20 yet holds a different definition or
47:22 understanding to each and every individual
47:27 you could probably ask 10 people on
47:29 their definition of cloud and chances are
47:34 everyone would have their own take on it
47:37 many see cloud as this abstract thing in
47:38 the sky
47:41 where files and emails are stored but
47:43 it's so much more than that
47:46 now the true definition of it can be put
47:48 in very simple terms
47:51 and can be applied to any public cloud
47:53 being google cloud aws
47:56 and azure
47:58 moving on to the definition
48:01 cloud computing is the delivery of a
48:04 shared pool of on-demand computing
48:07 services over the public internet
48:09 that can be rapidly provisioned
48:10 and released
48:13 with minimal management effort or
48:15 service provider interaction
48:17 these computing services
48:20 consist of things like servers storage
48:24 networking and databases they can be
48:26 quickly provisioned and accessed from
48:28 your local computer
48:30 over an internet connection now coupled
48:33 with this definition are five essential
48:35 characteristics that define the cloud
48:38 model that i would like to go over with
48:40 you and i believe that it holds
48:43 massive benefits for understanding when
48:46 speaking about cloud this information can
48:49 be found in the white paper published by
48:52 the national institute of standards and
48:54 technology i will include a link to this
48:57 publication in the lesson notes for your
48:59 review now these essential
49:02 characteristics are as follows the first one
49:06 is on-demand self-service
49:08 and this can be defined as being able to
49:11 provision resources automatically
49:14 without requiring human interaction on
49:15 the provider's end
49:17 so in the end you will never need to
49:20 call up or interact with the service
49:22 provider in order to get resources
49:24 provisioned for you
49:26 as well you have the flexibility of
49:29 being able to provision and de-provision
49:31 these resources
49:33 whenever you need them and at any given
49:35 time of the day
49:37 the second characteristic is broad
49:39 network access
49:41 now this simply means that cloud
49:43 computing resources are available over
49:46 the network and can be accessed by many
49:49 different customer platforms such as
49:50 mobile phones
49:53 tablets or computers
49:54 in other words
49:56 cloud services are available over a
49:59 network moving into the third
50:02 is resource pooling
50:04 so the provider's computing resources
50:06 are pooled together to support a
50:10 multi-tenant model that allows multiple
50:13 customers to share the same applications
50:16 or the same physical infrastructure
50:18 while retaining privacy and security
50:20 over their information
50:22 this includes things like processing power
50:26 memory storage and networking
50:28 it's similar to people living in an
50:31 apartment building sharing the same
50:33 building infrastructure like power and
50:35 water yet they still have their own
50:38 apartments and privacy within that infrastructure
50:42 this also creates a sense of location
50:44 independence in that the customer
50:48 generally has no control or knowledge
50:51 over the exact location of the provided resources
50:55 but they may be able to specify location
50:58 at a higher level of abstraction so in
51:00 the end the customer does not really
51:02 have the option of choosing exactly
51:05 which server server rack or data center
51:07 for that matter
51:09 the provided resources are
51:10 coming from
51:12 they will only be able to have the
51:16 option to choose things like regions or
51:19 zones within that region
51:21 the fourth essential characteristic is
51:24 rapid elasticity
51:25 this to me
51:27 is the key factor of what makes cloud
51:31 computing so great and so agile
51:33 capabilities can be elastically
51:35 provisioned and released
51:38 in some cases automatically to scale
51:42 rapidly outwards and inwards in response
51:43 to demand
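To make elasticity concrete, here is a minimal Python sketch of the scale-out and scale-in decision an autoscaler makes — a toy illustration, not any real autoscaler API, and the capacity numbers are hypothetical:

```python
import math

def desired_instances(requests_per_second: float,
                      capacity_per_instance: float,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Scale outwards when demand rises and inwards when it falls:
    provision just enough instances to serve the current load,
    clamped to a configured floor and ceiling."""
    needed = math.ceil(requests_per_second / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))
```

A marketing-campaign spike in traffic grows the fleet automatically, and the fleet shrinks back once the spike passes — the elasticity that fixed on-premises hardware cannot offer.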
51:46 to the consumer the capabilities
51:48 available for provisioning often appear
51:50 to be unlimited
51:53 and can be provisioned in any quantity
51:56 at any time and touching on the fifth
51:59 and last characteristic cloud systems
52:02 automatically control and optimize
52:05 resource usage by leveraging a metering capability
52:09 resource usage can be monitored
52:12 controlled and reported providing
52:15 transparency for both the provider and
52:17 consumer of the service
52:19 now what this means is that cloud
52:22 computing resource usage is metered and
52:25 you can pay accordingly for what you've used
52:27 resource utilization
52:30 can be optimized by leveraging
52:33 pay-per-use capabilities
52:35 and this means that cloud resource usage
52:37 whether they are instances that are running
52:42 cloud storage or bandwidth it all gets
52:44 monitored measured and reported by the
52:47 cloud service provider the cost model is
52:48 based on
52:51 pay for what you use and so the payment
52:54 is based on the actual consumption
52:56 by the customer
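Measured service boils down to simple arithmetic: consumption multiplied by a unit rate for each metered resource. A toy Python sketch — the rates here are invented, not Google's actual pricing:

```python
def metered_bill(instance_hours: float, rate_per_hour: float,
                 gb_stored: float, rate_per_gb_month: float) -> float:
    """Pay-per-use: each resource is metered separately and billed
    only for actual consumption. All rates are made up."""
    return round(instance_hours * rate_per_hour
                 + gb_stored * rate_per_gb_month, 2)
```

Run an instance for 100 hours at a hypothetical $0.05 an hour with 50 GB stored at $0.02 per GB-month and the bill is $6.00 — stop the instance and the compute portion of the meter stops with it.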
52:58 so knowing these key characteristics of
53:00 cloud computing along with their benefits
53:04 i personally find can really give you a
53:06 leg up on the exam
53:08 as well as speaking to others in your
53:10 day-to-day role
53:12 as more and more companies start moving
53:15 to cloud i hope this lesson has
53:17 explained to you what cloud
53:20 computing is and the benefits it provides
53:22 so that's all i have for this lesson
53:24 so you can now mark this lesson as
53:27 complete and let's move on to the next one
53:32 welcome back
53:34 in this lesson i wanted to go over the
53:37 four common cloud deployment models and
53:40 distinguish the differences between
53:41 public cloud multi-cloud
53:44 private cloud and hybrid cloud
53:46 deployment models
53:48 this is a common subject that comes up a
53:50 fair amount in the exam
53:52 as well as a common theme in any
53:55 organization moving to cloud knowing the
53:57 distinctions between them can be
54:00 critical to the types of architecture
54:03 and services that you would use for the
54:06 specific scenario you are given
54:08 as well as being able to speak to the
54:11 different types of deployment models as
54:13 an engineer in the field getting back to
54:15 the deployment models let's start with
54:18 the public cloud model which we touched
54:20 on a bit in our last lesson
54:23 now the public cloud is defined as
54:25 computing services offered by
54:27 third-party providers
54:29 over the public internet
54:32 making them available to anyone who
54:34 wants to use or purchase them so this
54:37 means that google cloud will fall under
54:41 this category as a public cloud
54:43 there are also other vendors that fall
54:47 under this category such as aws and azure
54:51 so again public cloud is a cloud that is
54:54 offered over the public internet
54:57 now public clouds can also be connected
55:00 and used together within a single
55:03 environment for various use cases
55:06 this cloud deployment model is called multi-cloud
55:10 now a multi-cloud implementation can be
55:13 extremely effective if architected in
55:15 the right way
55:17 one implementation that is an effective
55:20 use of multi-cloud is when it is used
55:23 for disaster recovery this is where your
55:26 architecture would be replicated across
55:29 the different public clouds in case one
55:31 were to go down
55:33 another could pick up the slack what
55:35 drives many cases of a multi-cloud deployment
55:39 is to prevent vendor lock-in where you
55:41 are locked into a particular cloud
55:44 provider's infrastructure and unable to
55:47 move due to the vendor-specific feature
55:49 set the main downfall to this type of
55:52 architecture is that the infrastructure
55:54 of the public cloud that you're using
55:56 cannot be fully utilized
55:59 as each cloud vendor has their own
56:02 proprietary resources that will only
56:05 work in their specific infrastructure in
56:06 other words
56:08 in order to replicate the environment it
56:12 needs to be the same within each cloud
56:14 this removes each cloud's unique
56:17 features which is what makes them so
56:20 special and the resources so compelling
56:23 so sometimes finding the right strategy
56:27 can be tricky depending on the scenario
56:29 now the next deployment model i wanted
56:32 to touch on is private cloud private
56:34 cloud refers to your architecture that
56:37 exists on premises
56:39 and is restricted to the business itself
56:42 with no public access
56:44 yet it still carries the same five
56:47 characteristics that we discussed with
56:50 regards to what defines cloud each of
56:52 the major cloud providers shown here
56:54 all have their own flavor of private
56:57 cloud that can be implemented on site
57:00 google cloud has anthos aws
57:03 has aws outposts
57:06 and azure's is azure stack
57:09 they share the same characteristics
57:12 and leverage similar technologies that
57:14 can be found in the vendor's public
57:18 cloud yet can be installed on your own
57:20 on-premise infrastructure please be aware
57:24 many organizations may have a vmware
57:27 implementation which holds cloud-like
57:30 features yet this is not considered a
57:31 private cloud
57:34 true private cloud will always meet the
57:37 characteristics that make up cloud now
57:40 it is possible to use private cloud with
57:41 public cloud
57:44 and this implementation is called hybrid cloud
57:48 so hybrid cloud is when you are using
57:51 public cloud in conjunction with private
57:54 cloud as a single system a common
57:57 use case for this architecture is compliance
58:00 where one cloud could help organizations
58:03 achieve specific governance
58:05 risk management and compliance
58:08 regulations while the other cloud could
58:10 take over the rest
58:12 now i'd really like to make an important
58:13 distinction here
58:16 if your on-premise infrastructure is
58:19 connected to public cloud this is not
58:22 considered hybrid cloud this is what's
58:25 known as hybrid environment or a hybrid
58:28 network as the on-premises
58:31 infrastructure holds no private cloud
58:34 characteristics true hybrid cloud allows
58:37 you to use the exact same interface and
58:40 tooling as what's available in the
58:42 public cloud so being aware of this can
58:45 avoid a lot of confusion down the road
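The distinctions above can be boiled down to a small decision procedure. This Python helper is purely illustrative — the parameter names are invented — but it encodes the lesson's definitions, including the hybrid-environment gotcha:

```python
def deployment_model(num_public_clouds: int,
                     has_on_prem: bool,
                     on_prem_is_true_private_cloud: bool = False) -> str:
    """Classify a setup per this lesson's definitions.
    Attribute names are made up for illustration."""
    if has_on_prem:
        if not on_prem_is_true_private_cloud:
            # on-premises gear wired to a public cloud is only a
            # hybrid *environment*, not hybrid cloud
            return "hybrid environment" if num_public_clouds else "on-premises"
        if num_public_clouds:
            return "hybrid cloud"
        return "private cloud"
    if num_public_clouds >= 2:
        return "multi-cloud"
    return "public cloud" if num_public_clouds == 1 else "none"
```

Note how an on-premises data center connected to one public cloud classifies as a hybrid environment, and only a true private cloud (meeting the five characteristics) connected to a public cloud counts as hybrid cloud.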
58:47 so to sum up everything that we
58:50 discussed when it comes to public cloud
58:53 this is one cloud provided by one
58:56 vendor that is available over the public internet
59:01 multi-cloud is two or more public clouds
59:03 that are connected together to be used
59:07 as a single system a private cloud
59:10 is considered an on-premises cloud that
59:12 follows the five characteristics of cloud
59:16 and is restricted to the one
59:19 organization with no accessibility to
59:22 the public and finally hybrid cloud
59:25 is private cloud connected to a public cloud
59:29 and being used as a single environment
59:31 again as a note
59:33 on-premises architecture connected to
59:36 public cloud is considered a hybrid
59:40 environment and not hybrid cloud
59:42 the distinction between the two
59:44 is important and should be
59:47 observed carefully as gotchas may come
59:49 up in both the exam
59:52 and in your role as an engineer so these
59:53 are all the different cloud deployment
59:56 models which will help you distinguish
59:59 on what type of architecture you will be using in any scenario that you are given
60:02 using in any scenario that you are given and so this is all i wanted to cover
60:04 and so this is all i wanted to cover when it comes to cloud deployment models
60:06 when it comes to cloud deployment models so you can now mark this lesson as
60:08 so you can now mark this lesson as complete
60:09 complete and let's move on to the next one
60:16 welcome back so to finish up the nist definition of
60:19 so to finish up the nist definition of cloud computing i wanted to touch on
60:22 cloud computing i wanted to touch on cloud service models which is commonly
60:25 cloud service models which is commonly referred to as zas
60:27 referred to as zas now this model is usually called zas or
60:31 now this model is usually called zas or xaas
60:32 xaas standing for anything as a service it
60:35 standing for anything as a service it includes all the services in a cloud
60:38 includes all the services in a cloud that customers can consume and x can be
60:41 that customers can consume and x can be changed to associate with the specific
60:44 changed to associate with the specific service
60:45 service so in order to describe the cloud
60:47 so in order to describe the cloud service models i needed to touch on some
60:50 service models i needed to touch on some concepts that you may or may not be
60:52 concepts that you may or may not be familiar with this will make
60:54 familiar with this will make understanding the service models a
60:56 understanding the service models a little bit easier as i go through the
60:58 little bit easier as i go through the course and describe the services
61:01 course and describe the services available and how they relate to the
61:03 available and how they relate to the model this lesson will make so much
61:05 model this lesson will make so much sense by the end
61:07 sense by the end it'll make the services in cloud easier
61:09 it'll make the services in cloud easier to both describe and define
61:12 to both describe and define now when it comes to deploying an
61:14 now when it comes to deploying an application they are deployed in an
61:16 application they are deployed in an infrastructure stack
61:18 infrastructure stack like the one you see here
61:20 like the one you see here now a stack is a collection of needed
61:23 now a stack is a collection of needed infrastructure that the application
61:26 infrastructure that the application needs to run on it is layered and each
61:29 needs to run on it is layered and each layer builds on top of the one previous
61:31 layer builds on top of the one previous to it to create what it is that you see
61:34 to it to create what it is that you see here now as you can see at the top this
61:37 here now as you can see at the top this is a traditional on-premises
61:40 is a traditional on-premises infrastructure stack that was typically
61:42 infrastructure stack that was typically used pre-cloud now in this traditional
61:45 used pre-cloud now in this traditional model
61:46 model all the components are managed by the
61:48 all the components are managed by the customer the purchasing of the data
61:50 customer the purchasing of the data center and all the network and storage
61:53 center and all the network and storage involved the physical servers the
61:56 involved the physical servers the virtualization the licensing for the
61:59 virtualization the licensing for the operating systems the staff that's
62:01 operating systems the staff that's needed to put it all together
62:03 needed to put it all together including racking stacking cabling
62:06 including racking stacking cabling physical security was also something
62:08 physical security was also something that needed to be taken into
62:10 that needed to be taken into consideration in other words for the
62:12 consideration in other words for the organization to put this together by
62:15 organization to put this together by themselves they were looking at huge
62:18 themselves they were looking at huge costs
62:18 costs now the advantages to this is that it
62:21 now the advantages to this is that it allowed for major flexibility as the
62:24 allowed for major flexibility, as the organization is able to tune this any way they want to satisfy application compliance standards, basically anything they wanted.

62:34 Now, when talking about the cloud service model concepts, parts are always managed by you and parts are managed by the vendor. Another concept I wanted to touch on is the unit of consumption, which is how the vendor prices what they are serving to their customer.

62:50 Just before cloud became big in the market, there was a model where the data center was hosted for you. A vendor would come along and take care of everything with regards to the data center: the racks, the power to the racks, the air conditioning, the networking cables out of the building, and even the physical security. The unit of consumption here was the rack space within the data center, so the vendor would charge you for the rack space, and in turn they would take care of all the necessities within the data center. This is less flexible than the traditional on-premises model, but the data center is abstracted for you.

63:34 Throughout this lesson, I wanted to introduce a concept that might make things easier to grasp: pizza as a service. The traditional on-premises model is where you buy everything and make the pizza at home. As we go on in the lesson, less flexibility will be available because more layers will be abstracted.
63:57 because more layers will be abstracted so the next service model that i wanted
63:59 so the next service model that i wanted to introduce is infrastructure as a
64:02 to introduce is infrastructure as a service
64:03 service or i as for short this is where all the
64:06 or i as for short this is where all the layers from the data center up to
64:08 layers from the data center up to virtualization is taken care of by the
64:11 virtualization is taken care of by the vendor this is the most basic model
64:14 vendor this is the most basic model which is essentially your virtual
64:16 which is essentially your virtual machines in a cloud data center
64:18 machines in a cloud data center you set up configure
64:21 you set up configure and manage instances that run in the
64:23 and manage instances that run in the data center infrastructure and you put
64:26 data center infrastructure and you put whatever you want on them on google
64:28 whatever you want on them on google cloud google compute engine would
64:30 cloud google compute engine would satisfy this model and so the unit of
64:33 satisfy this model and so the unit of consumption here would be the operating
64:35 consumption here would be the operating system as you would manage all the
64:38 system as you would manage all the operating system updates and everything
64:40 operating system updates and everything that you decide to put on that instance
64:43 that you decide to put on that instance but as you can see here you are still
64:45 but as you can see here you are still responsible for the container the run
64:48 responsible for the container the run time the data and the application layers
64:51 time the data and the application layers now bringing up the pizza as a service
64:53 now bringing up the pizza as a service model is would be you picking up the
64:57 model is would be you picking up the pizza and you cooking it at home moving
64:59 pizza and you cooking it at home moving on to platform as a service or paz for
65:03 on to platform as a service or paz for short this is a model that is geared
65:05 short this is a model that is geared more towards developers and with pass
65:08 more towards developers and with pass the cloud provider provides a computing
65:11 the cloud provider provides a computing platform typically
65:13 platform typically including the operating system the
65:15 including the operating system the programming language execution
65:17 programming language execution environment the database and the web
65:20 environment the database and the web server now typically with pass you never
65:23 server now typically with pass you never have to worry about the operating system
65:25 have to worry about the operating system updates or managing the runtime and
65:28 updates or managing the runtime and middleware and so the unit of
65:29 middleware and so the unit of consumption here would be the runtime
65:32 consumption here would be the runtime now the runtime layer would be the layer
65:34 now the runtime layer would be the layer you would consume as you would be
65:36 you would consume as you would be running your code in the supplied
65:38 running your code in the supplied runtime environment that the cloud
65:41 runtime environment that the cloud vendor provides for you the provider
65:43 vendor provides for you the provider manages the hardware and software
65:46 manages the hardware and software infrastructure and you just use the
65:48 infrastructure and you just use the service this is usually the layer on top
65:51 service this is usually the layer on top of is and so all the layers between the
65:54 of is and so all the layers between the data center and runtime is taken care of
65:57 data center and runtime is taken care of by the vendor a great example of this
66:00 by the vendor a great example of this for google cloud is google app engine
66:03 for google cloud is google app engine which we will be diving into a little
66:05 which we will be diving into a little bit later getting back to the pizza as a
66:08 bit later getting back to the pizza as a service model
66:09 service model pass would fall under the pizza being
66:11 pass would fall under the pizza being delivered right to your door
66:13 delivered right to your door now with the past model explained i want
66:16 now with the past model explained i want to move into the last model which is sas
66:19 to move into the last model which is sas which stands for software as a service
66:22 which stands for software as a service now with sas all the layers are taken
66:24 now with sas all the layers are taken care of by the vendor so users are
66:27 care of by the vendor so users are provided access to application software
66:30 provided access to application software and cloud providers manage the
66:32 and cloud providers manage the infrastructure and platforms that run
66:34 infrastructure and platforms that run the applications g suite and microsoft's
66:37 the applications g suite and microsoft's office 365 are great examples of this
66:41 office 365 are great examples of this model now sas doesn't offer much
66:43 model now sas doesn't offer much flexibility but the trade-off is that
66:46 flexibility but the trade-off is that the vendor actually takes care of all
66:48 the vendor actually takes care of all these layers so again the unit of
66:50 these layers so again the unit of consumption here is the application
66:53 consumption here is the application itself and of course getting to the
66:55 itself and of course getting to the pizza as a service model sas
66:58 pizza as a service model sas is pretty much dining in the restaurant
67:01 is pretty much dining in the restaurant enjoying your pizza now to summarize
67:03 enjoying your pizza now to summarize when you have a data center on site you
67:06 when you have a data center on site you manage everything
67:08 manage everything when it's infrastructure as a service
67:10 when it's infrastructure as a service part of that stack is abstracted by the
67:13 part of that stack is abstracted by the cloud vendor with platform as a service
67:16 cloud vendor with platform as a service you're responsible for the application
67:19 you're responsible for the application and data
67:20 and data everything else is abstracted by the
67:22 everything else is abstracted by the vendor with software as a service again
67:25 vendor with software as a service again using the pizza as a service analogy on
67:28 using the pizza as a service analogy on premise you buy everything and you make
67:30 premise you buy everything and you make the pizza at home infrastructure as a
67:33 the pizza at home infrastructure as a service
67:34 service you pick up the pizza and you cook it at
67:36 you pick up the pizza and you cook it at home when it comes to platform as a
67:38 home when it comes to platform as a service the pizza is delivered
67:41 service the pizza is delivered and of course software as a service is
67:43 and of course software as a service is dining in the restaurant now there will
67:46 dining in the restaurant now there will be some other service models coming up
67:48 be some other service models coming up in this course such as function as a
67:50 in this course such as function as a service and containers as a service and
67:53 service and containers as a service and don't worry i'll be getting into those
67:55 don't worry i'll be getting into those later but i just wanted to give you a
67:57 later but i just wanted to give you a heads up so now for some of you this may
68:00 heads up so now for some of you this may have been a lot of information to take
68:02 have been a lot of information to take in but trust me
68:04 in but trust me knowing these models will give you a
68:06 knowing these models will give you a better understanding of the services
68:09 better understanding of the services provided in google cloud as well as any
68:12 provided in google cloud as well as any other cloud vendor so that's all i
68:14 other cloud vendor so that's all i wanted to cover in this lesson so you
68:16 wanted to cover in this lesson so you can now mark this lesson as complete and
68:18 can now mark this lesson as complete and let's move on to the next one
68:27 Welcome back. In this lesson I wanted to discuss Google Cloud's global infrastructure: how data centers are connected, how traffic flows when a request is made, along with the overall structure of how Google Cloud geographic locations are divided for better availability, durability, and latency.

68:47 Google operates a highly provisioned, low-latency network where your traffic stays on Google's private backbone for most of its journey, ensuring high performance and a user experience that is always above the norm. Google Cloud has been designed to serve users all around the world, with infrastructure built on redundant cloud regions connected by high-bandwidth fiber cables, as well as subsea cables connecting different continents. Currently, Google has invested in 13 subsea cables connecting these continents at points of presence, as you see here in this diagram. Hundreds of thousands of miles of fiber cable have also been laid to connect points of presence for direct connectivity, privacy, and reduced latency.

69:42 Just to give you an idea of what a subsea cable run might look like, I have included a diagram showing how dedicated Google is to their customers, as there is so much that goes into running these cables that connect continents. As you can see here, this is the North Virginia region being connected to the Belgium region, from the US over to Europe. A cable is run from the North Virginia data center, which also has a point of presence in place, going through a landing station before going deep into the sea. On the other side, the landing station on the French west coast picks up the other end of the cable and brings it over to the data center in the Belgium region. This is a typical subsea cable run for Google, so continents are connected for maximum global connectivity.
70:37 Now, at the time of recording this video, Google Cloud's footprint spans 24 regions, 73 zones, and over 144 points of presence across more than 200 countries and territories worldwide. As you can see here, the white dots on the map are regions that are currently being built to expand the network for wider connectivity.
71:06 network for wider connectivity now to show you how a request is routed through
71:08 show you how a request is routed through google's network i thought i would
71:10 google's network i thought i would demonstrate this by using tony bowtie
71:14 demonstrate this by using tony bowtie now tony makes a request to his database
71:17 now tony makes a request to his database in google cloud and google responds to
71:20 in google cloud and google responds to tony's request
71:21 tony's request from a pop or edge network location that
71:25 from a pop or edge network location that will provide the lowest latency this
71:27 will provide the lowest latency this point of presence is where isps can
71:30 point of presence is where isps can connect to google's network google's
71:33 connect to google's network google's edge network receives tony's request and
71:36 edge network receives tony's request and passes it to the nearest google data
71:38 passes it to the nearest google data center over its private fiber network
71:41 center over its private fiber network the data center generates a response
71:44 the data center generates a response that's optimized to provide the best
71:46 that's optimized to provide the best experience for tony at that given moment
71:49 experience for tony at that given moment in time the app or browser that tony is
71:51 in time the app or browser that tony is using retrieves the requested content
71:54 using retrieves the requested content with a response back from various google
71:57 with a response back from various google locations including the google data
72:00 locations including the google data centers edge pops and edge nodes
72:03 centers edge pops and edge nodes whichever is providing the lowest
72:05 whichever is providing the lowest latency this data path happens in a
72:08 latency this data path happens in a matter of seconds and due to google's
72:11 matter of seconds and due to google's global infrastructure it travels
72:13 global infrastructure it travels securely and with the least amount of
72:16 securely and with the least amount of latency possible
72:17 latency possible no matter the geographic location that
72:20 no matter the geographic location that the request is coming from
72:22 the request is coming from now i wanted to take a moment to break
72:24 now i wanted to take a moment to break down how the geographic areas are broken
72:28 down how the geographic areas are broken out and organized in google cloud
72:31 out and organized in google cloud we start off with the geographic
72:33 we start off with the geographic location such as the united states of
72:36 location such as the united states of america and it's broken down into
72:38 america and it's broken down into multi-region into regions and finally
72:41 multi-region into regions and finally zones and so to start off with i wanted
72:44 zones and so to start off with i wanted to talk about zones now a zone is a
72:47 to talk about zones now a zone is a deployment area for google cloud
72:49 deployment area for google cloud resources within a region a zone is the
72:52 resources within a region a zone is the smallest entity in google's global
72:55 smallest entity in google's global network you can think of it as a single
72:57 network you can think of it as a single failure domain within a region now as a
73:00 failure domain within a region now as a best practice resources should always be
73:03 best practice resources should always be deployed
73:04 deployed in zones that are closest to your users
73:07 in zones that are closest to your users for optimal latency
73:09 for optimal latency now next up we have a region
73:12 now next up we have a region and regions are independent geographic
73:15 and regions are independent geographic areas that are subdivided into zones so
73:18 areas that are subdivided into zones so you can think of a region as a
73:20 you can think of a region as a collection of zones and having a region
73:23 collection of zones and having a region with multiple zones is designed for
73:26 with multiple zones is designed for fault tolerance and high availability
73:29 fault tolerance and high availability the intercommunication between zones
73:32 the intercommunication between zones within a region is under five
73:34 within a region is under five milliseconds so rest assured that your
73:37 milliseconds so rest assured that your data is always traveling at optimal
73:40 data is always traveling at optimal speeds
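One practical detail worth knowing (not stated in the lesson itself, but easy to verify in the console): the zone-within-region relationship shows up right in the resource names, because a zone's ID is simply its region's name plus a letter suffix. For example, `us-east1-b` is a zone in region `us-east1`. A one-line helper illustrates the pattern:

```python
def region_of(zone):
    """A zone ID is its region name plus a letter suffix,
    e.g. zone 'us-east1-b' belongs to region 'us-east1'."""
    return zone.rsplit("-", 1)[0]
```

This naming convention makes it easy to tell at a glance which region any zonal resource lives in.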
73:41 Now, moving on to multi-regions: multi-regions are large geographic areas that contain two or more regions, and this allows Google services to maximize redundancy and distribution within and across regions. This is for redundancy, or high availability: having your data spread across multiple regions reassures you that your data is constantly available.

74:12 And so that covers all the concepts I wanted to go over when it comes to geography and regions within Google Cloud. Note that the geography and region concepts are fundamental not only for the exam but for your day-to-day role in Google Cloud.
74:31 So just as a recap: a zone is a deployment area for Google Cloud resources within a region, and a zone is the smallest entity of Google's global infrastructure. A region is an independent geographic area that is subdivided into zones. And finally, when it comes to multi-regions, multi-regions are large geographic areas that contain two or more regions. Again, these are all fundamental concepts that you should know for the exam and for your day-to-day role in Google Cloud. And so that's all I had for this lesson, so you can now mark this lesson as complete, and let's move on to the next one.
75:17 [Music]
75:21 Welcome back. This lesson is going to be an overview of all the compute service options that are available in Google Cloud, how they differ from each other, and where they fall under the cloud service models. Again, this lesson is just an overview of the compute options, as we will be diving deeper into each compute option later on in this course.

75:42 Google Cloud gives you so many options when it comes to compute services: ones that offer complete control and flexibility, others that offer flexible container technology, a managed application platform, and serverless environments. When we take all of these compute options and look at them from a service model perspective, you can see that there's so much flexibility, starting here on the left with Infrastructure as a Service, giving you the most flexibility, and moving all the way over to the right, where we have Function as a Service, offering less flexibility but with the upside of having less to manage. We'll be going through these compute options.
76:27 we'll be going through these compute options starting on the left here with
76:30 options starting on the left here with infrastructure as a service we have
76:32 infrastructure as a service we have compute engine now compute engine is
76:35 compute engine now compute engine is google's staple infrastructure the
76:37 google's staple infrastructure the service product that offers virtual
76:39 service product that offers virtual machines or vms called instances these
76:43 machines or vms called instances these instances can be deployed in any region
76:46 instances can be deployed in any region or zone that you choose you also have
76:48 or zone that you choose you also have the option of deciding what operating
76:50 the option of deciding what operating system you want on it as well as the
76:53 system you want on it as well as the software so you have the option of
76:55 software so you have the option of installing different types of flavors of
76:57 installing different types of flavors of linux or windows and the software to go
77:00 linux or windows and the software to go with it google also gives you the
77:02 with it google also gives you the options of creating these instances
77:04 options of creating these instances using public or private images
77:07 using public or private images so if you or your company have a private
77:09 so if you or your company have a private image that you'd like to use you can use
77:12 image that you'd like to use you can use this to create your instances google
77:14 this to create your instances google also gives you the option to use public
77:17 also gives you the option to use public images to create instances and are
77:19 images to create instances and are available when you launch compute engine
77:21 available when you launch compute engine as well there are also pre-configured
77:24 as well there are also pre-configured images and software packages available
77:27 images and software packages available in the google cloud marketplace and we
77:29 in the google cloud marketplace and we will be diving a little bit deeper into
77:31 will be diving a little bit deeper into the google cloud marketplace in another
77:34 the google cloud marketplace in another lesson just know that there are slew of
77:36 lesson just know that there are slew of images out there that's available to
77:39 images out there that's available to create instances giving you the ease to
77:42 create instances giving you the ease to deploy now when it comes to compute
77:44 deploy now when it comes to compute engine and you're managing multiple
77:46 engine and you're managing multiple instances these are done using instance
77:49 instances these are done using instance groups
77:50 groups and when you're looking at adding or
77:52 and when you're looking at adding or removing capacity for those compute
77:54 removing capacity for those compute engine instances automatically you would
77:57 engine instances automatically you would use auto scaling in conjunction with
77:59 use auto scaling in conjunction with those instance groups compute engine
78:01 those instance groups compute engine also gives you the option of attaching
78:04 also gives you the option of attaching and detaching disks as you need them as
78:07 and detaching disks as you need them as well google cloud storage can be used in
78:10 well google cloud storage can be used in conjunction with compute engine as
78:12 conjunction with compute engine as another storage option and when
78:14 another storage option and when connecting directly to compute engine
78:17 connecting directly to compute engine google gives you the option of using ssh
78:20 google gives you the option of using ssh to securely connect to it so moving on
78:22 to securely connect to it so moving on to the next compute service option
78:24 to the next compute service option we have google kubernetes engine also
78:27 we have google kubernetes engine also known as gke
78:29 known as gke now gke
78:31 now gke is google's flagship container
78:33 is google's flagship container orchestration system
78:35 orchestration system for automating
78:36 for automating deploying
78:37 deploying scaling and managing containers
78:40 scaling and managing containers gke is also built on the same open
78:44 gke is also built on the same open source kubernetes project that was
78:46 source kubernetes project that was introduced by google to the public back
78:49 introduced by google to the public back in 2014
78:51 in 2014 now before google made kubernetes a
78:53 now before google made kubernetes a managed service there was many that
78:55 managed service there was many that decided to build kubernetes on premise
78:58 decided to build kubernetes on premise in their data centers and because it is
79:01 in their data centers and because it is built on the same platform
79:03 built on the same platform gke
79:04 gke offers the flexibility of integrating
79:07 offers the flexibility of integrating with these on-premise kubernetes
79:09 with these on-premise kubernetes deployments now under the hood gke uses
79:12 deployments now under the hood gke uses compute engine instances as nodes in a
79:16 compute engine instances as nodes in a cluster and as a quick note a cluster is
79:19 cluster and as a quick note a cluster is a group of nodes or compute engine
79:21 a group of nodes or compute engine instances and again we'll be going over
79:24 instances and again we'll be going over all this in much greater detail in a
79:27 all this in much greater detail in a different lesson so if you haven't
79:29 different lesson so if you haven't already figured it out google kubernetes
79:32 already figured it out google kubernetes engine is considered container as a
79:34 engine is considered container as a service now the next compute service
79:36 service now the next compute service option that i wanted to go over
79:39 option that i wanted to go over that falls under platform as a service
79:41 that falls under platform as a service is app engine
79:43 is app engine now app engine is a fully managed
79:46 now app engine is a fully managed serverless platform for developing and
79:48 serverless platform for developing and hosting web applications at scale now
79:52 hosting web applications at scale now with app engine google handles most of
79:54 with app engine google handles most of the management of the resources for you
79:57 the management of the resources for you for example if your application requires
79:59 for example if your application requires more computing resources because traffic
80:02 more computing resources because traffic to your website increases google
80:04 to your website increases google automatically scales the system to
80:06 automatically scales the system to provide these resources if the system
80:09 provide these resources if the system software needs a security update as well
80:12 software needs a security update as well that's handled for you too and so all
80:14 that's handled for you too and so all you need to really take care of is your
80:17 you need to really take care of is your application
80:18 application and you can build your application in
80:20 and you can build your application in your favorite language go java.net and
80:24 your favorite language go java.net and many others
80:25 many others and you can use both pre-configured
80:27 and you can use both pre-configured runtimes or use custom runtimes to allow
80:31 runtimes or use custom runtimes to allow you to write the code in any language
80:34 you to write the code in any language app engine also allows you to connect
80:36 app engine also allows you to connect with google cloud storage products and
80:38 with google cloud storage products and databases seamlessly app engine also
80:41 databases seamlessly app engine also offers the flexibility of connecting
80:44 offers the flexibility of connecting with third-party databases as well as
80:47 with third-party databases as well as other cloud providers and third-party
80:49 other cloud providers and third-party vendors app engine also integrates with
80:52 vendors app engine also integrates with a well-known security product in google
80:54 a well-known security product in google cloud called web security scanner as to
80:57 cloud called web security scanner as to identify security vulnerabilities and so
81:00 identify security vulnerabilities and so that covers app engine in a nutshell
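To give a sense of how little you manage with App Engine, a minimal deployment is just a configuration file plus one command. This is a sketch for the standard environment; the `python39` runtime shown is one of several available choices:

```yaml
# app.yaml: a minimal App Engine (standard environment) configuration.
# The runtime is the only required setting here; scaling, instance
# class, and request handlers all have defaults.
runtime: python39
```

From the application directory, `gcloud app deploy` then ships the app; the OS, patching, and scaling are all handled for you.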
81:02 Moving on to the next compute service
81:04 option,
81:05 we have Cloud Functions, and Cloud
81:08 Functions fall under Function as a
81:10 Service. This is a serverless execution
81:13 environment
81:14 for building and connecting cloud
81:16 services. With Cloud Functions you write
81:19 simple, single-purpose functions that are
81:22 attached to events
81:24 that are produced from your
81:26 infrastructure and services in Google
81:28 Cloud. Your function is triggered when an
81:31 event being watched is fired. Your code
81:34 then executes in a fully managed
81:36 environment. There is no need to
81:38 provision any infrastructure or worry
81:41 about managing any servers, and Cloud
81:43 Functions can be written using the
81:45 JavaScript, Python 3,
81:48 Go, or Java runtimes, so you can take your
81:51 function and run it in any of these
81:54 standard environments, which makes it
81:56 extremely portable. Now, Cloud Functions
81:59 are a good choice for use cases that
82:01 include the following:
82:03 data processing or ETL operations, such
82:06 as video transcoding and IoT streaming
82:09 data; webhooks that respond to HTTP
82:12 triggers;
82:13 lightweight APIs that compose loosely
82:16 coupled logic into applications; as well
82:19 as mobile back-end functions.
82:21 Again, Cloud Functions are considered
82:23 Function as a Service, and so that covers
82:26 Cloud Functions.
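To make the "simple, single-purpose function" idea concrete, here is a minimal sketch of an HTTP-triggered function in Python. The function name and greeting are placeholders; on Cloud Functions the platform invokes your function with a Flask request object, and there are no servers for you to provision or manage.

```python
# Minimal sketch of an HTTP-triggered function (placeholder name and
# response). Cloud Functions passes a Flask request object in for you.
def hello_http(request):
    """Respond to an HTTP request with a simple greeting."""
    name = "World"
    # request.args behaves like a dict of query-string parameters
    if request is not None and getattr(request, "args", None):
        name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

You deploy just this function; the trigger (an HTTP endpoint here, or a Pub/Sub or Cloud Storage event) is configured at deploy time.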
82:28 Now, moving to the far right of the
82:29 screen, on the other side of the arrow, we
82:32 have our last compute service option,
82:34 which is Cloud Run. Now, Cloud Run is a
82:38 fully managed compute platform for
82:40 deploying and scaling containerized
82:42 applications quickly and securely.
82:46 Cloud Run was built on an open standard
82:48 called Knative, and this enables the
82:50 portability of any applications that
82:53 were built on it. Cloud Run also abstracts
82:56 away all the infrastructure management
82:59 by automatically scaling up and down
83:01 almost instantaneously, depending on the
83:04 traffic. Now, Cloud Run was Google's
83:07 response to abstracting away all the
83:09 infrastructure that was designed to run
83:12 containers, and so this is known as
83:14 serverless for containers. Cloud Run has
83:17 massive flexibility, as you can write
83:20 in any language,
83:22 use any library, and run any binary. This
83:25 compute service is considered serverless
83:28 containers as a service. Now, at the time of
83:30 recording this video, I have not heard of
83:32 Cloud Run being in the exam, but
83:35 since it is a compute service option, I
83:38 felt the need for Cloud Run to have an
83:40 honorable mention. And so these are all
83:43 the compute service options that are
83:45 available on Google Cloud, and we will be
83:48 diving deeper into each one of these
83:51 later on in this course.
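Since Cloud Run just runs a container that serves HTTP, the one convention worth remembering is that the platform tells your container which port to listen on through the PORT environment variable (8080 by default). Here is a minimal sketch in Python; the handler contents are purely illustrative:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port() -> int:
    # Cloud Run injects the port to bind via $PORT; default to 8080 locally.
    return int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    # Illustrative handler: any GET returns a plain-text greeting.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a container!")

def make_server() -> HTTPServer:
    # Bind all interfaces so the container is reachable from outside.
    return HTTPServer(("0.0.0.0", get_port()), Handler)
```

Packaged into a container image, written in any language you like, this is all Cloud Run needs in order to scale the service up and down with traffic.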
83:53 Again, this is just an overview of all
83:55 the compute service options that are
83:57 available on the Google Cloud platform,
84:00 and so that's all I wanted to cover for
84:02 this lesson.
84:03 So you can now mark this lesson as
84:05 complete, and let's move on to the next
84:07 one.
84:08 [Music]
84:12 Welcome back.
84:13 Now, in the last lesson I covered all the
84:16 different options for compute services.
84:19 In this lesson we're going to cover the
84:21 options that are available that couple
84:23 well with these compute services, by
84:26 diving deeper into the different storage
84:28 types and the different databases
84:31 available on Google Cloud. Again, this is
84:34 strictly an overview, as I will be diving
84:37 deeper into these services later on in
84:40 the course.
84:41 Now, when it comes to storage options,
84:43 there are three services that are
84:45 readily available to you in Google Cloud.
84:48 Each of them has its own specific use
84:50 case that I will be diving into in just
84:53 a second.
84:54 The first one I wanted to go over is
84:56 Cloud Storage.
84:57 Now, Cloud Storage is Google's
85:00 consistent, scalable,
85:02 large-capacity, and highly durable object
85:06 storage.
85:07 So when I refer to object storage, this
85:09 is not the type of storage that you
85:11 would attach to your instance and store
85:14 your operating system on. I'm talking
85:16 about managing data as objects, such as
85:20 documents or pictures, and it shouldn't be
85:22 confused with block storage, which
85:24 manages data at a more granular level,
85:27 such as for an operating system. Not to worry
85:30 if you don't fully grasp the concept of
85:32 object storage; I will be going into
85:34 further detail on that
85:36 later on in the Cloud Storage lesson.
85:38 Cloud Storage has eleven nines of durability, and
85:42 what I mean by durability is basically
85:45 protection against loss of files. So just to give you a
85:47 better picture of Cloud Storage
85:49 durability: if you store 1 million files,
85:53 statistically Google would lose one file
85:56 every 659,000
85:59 years, and you are over 400
86:02 times more likely to get hit by a meteor
86:05 than to actually lose a file. So as you
86:08 can see, Cloud Storage is a very good
86:10 place to be storing your files. Another
86:13 great feature of Cloud Storage is the
86:15 unlimited storage that it offers, with no
86:18 minimum object size, so feel free to
86:21 continuously put files in Cloud Storage.
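As a rough illustration of what a durability figure means (a simplified independent-loss model; Google's advertised numbers come from their own internal modeling), you can translate eleven nines into the expected number of objects lost per year:

```python
def expected_losses_per_year(num_objects: int, annual_durability: float) -> float:
    """Expected objects lost in a year, assuming each object independently
    survives the year with probability annual_durability."""
    return num_objects * (1.0 - annual_durability)

# One million objects at eleven nines (99.999999999%) of durability:
losses = expected_losses_per_year(1_000_000, 0.99999999999)
# roughly 0.00001 expected losses per year, i.e. effectively never
```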
86:24 Now, when it comes to use cases, Cloud
86:26 Storage is fantastic for content
86:29 delivery, data lakes, and backups. And to
86:32 make Cloud Storage even more flexible, it
86:35 is available in different storage
86:37 classes and availability options, which I will be
86:40 going over in just a second. Now, when it
86:42 comes to these different storage classes,
86:45 there are four different classes that
86:47 you can choose from. The first one is the
86:49 Standard storage class, and this storage
86:52 class offers the maximum availability
86:54 for your data, with absolutely no
86:57 limitations. This is great for storage
87:00 that you access all the time. The next
87:02 storage class is Nearline, and this is
87:05 low-cost archival storage. So this
87:08 storage class is cheaper than Standard
87:11 and is designed for storage that only
87:13 needs to be accessed less than once a
87:15 month. And if you're looking for an even
87:17 more cost-effective solution, Cloud
87:20 Storage has the Coldline storage class,
87:22 which is an even lower-cost archival
87:25 storage solution. This storage class is
87:27 designed for storage that only needs to
87:30 be accessed less than once every quarter.
87:33 And just when you thought that the
87:34 prices couldn't get lower than Coldline,
87:37 Cloud Storage offers another
87:39 storage class called Archive, and this is
87:42 the lowest-cost archival storage, which
87:45 offers storage at a fraction of a penny
87:48 per gigabyte but is designed for
87:50 archival or backup use that is accessed
87:53 less than once a year.
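The four classes map directly onto access frequency, which you could summarize in a small helper. This function and its thresholds are purely a study aid of my own, not part of any Google API:

```python
def pick_storage_class(accesses_per_year: float) -> str:
    """Illustrative rule of thumb mapping access frequency to a class."""
    if accesses_per_year >= 12:   # monthly or more often
        return "STANDARD"
    if accesses_per_year >= 4:    # less than monthly, at least quarterly
        return "NEARLINE"
    if accesses_per_year >= 1:    # less than quarterly, at least yearly
        return "COLDLINE"
    return "ARCHIVE"              # less than once a year
```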
87:55 Now, when it comes to Cloud Storage availability, there are
87:58 three options that are available:
88:01 region, dual-region, and
88:03 multi-region. Region is designed to store
88:06 your data in one single region.
88:09 Dual-region is exactly how it sounds, which is
88:11 a pair of regions. Now, in multi-region,
88:14 Cloud Storage stores your data over a
88:16 large geographic area consisting of many
88:20 different regions across that same
88:22 selected geographic area. And so that
88:25 about covers Cloud Storage as a storage
88:28 option. The next storage option that I
88:29 wanted to talk about is Filestore.
88:32 Now, Filestore is a fully managed NFS
88:36 file server from Google Cloud that is
88:38 NFS version 3 compliant. You can store
88:41 data from running applications, with
88:44 multiple VM instances and Kubernetes
88:47 clusters
88:48 accessing the data at the same time.
88:52 Filestore is a great option for when you're
88:53 thinking about accessing data from, let's
88:56 say, an instance group and you need
88:58 multiple instances to access the same
89:01 data. And moving on to the last storage
89:04 option, we have persistent disks.
89:07 Now, with persistent disks, this is
89:09 durable block storage for instances. Now,
89:12 as I explained before, block storage is
89:15 different than object storage.
89:17 If you remember, previously I explained
89:20 that object storage is designed to store
89:23 objects, such as documents, photos, or videos,
89:27 whereas block storage is raw storage
89:29 capacity that is used in drives that are
89:33 connected to an operating system. In this
89:35 case, persistent disks are doing just
89:38 that. Persistent disks come in two
89:40 options.
89:42 The first one is the standard option,
89:44 which gives you regular standard storage
89:47 at a reasonable price, and the other
89:49 option is solid state, or SSD,
89:53 which gives you lower latency,
89:55 higher IOPS, and is just all-around
89:57 faster than your standard persistent
89:59 disk. Both of these options are available
90:02 in zonal and regional variants, depending
90:05 on what you need for your specific
90:08 workload.
90:10 So now that I've covered all three storage options, I wanted to touch
90:12 on the database options that are
90:15 available on Google Cloud. These database
90:17 options come in both SQL and NoSQL
90:21 flavors, depending on your use case. Now,
90:24 getting into the options themselves, I
90:26 wanted to start off going into a little
90:28 bit of detail on the SQL, or relational,
90:31 options. So the first option is Cloud SQL,
90:35 and Cloud SQL is a fully managed
90:37 database service that is offered in
90:39 Postgres, MySQL, and SQL Server flavors.
90:43 Cloud SQL also has the option of being
90:46 highly available across zones. Now, moving
90:49 on to Cloud Spanner, this is a scalable
90:51 relational database service that's
90:54 highly available not only across zones
90:56 but across regions, and, if need be,
90:59 available globally. Cloud Spanner is
91:02 designed to support transactions, strong
91:04 consistency, and synchronous replication.
91:07 Moving on to the NoSQL options, there are
91:10 four available services that Google
91:12 Cloud offers. The first one
91:15 is Bigtable,
91:17 and Bigtable is a fully managed, scalable
91:20 NoSQL database that has high throughput
91:24 and low latency. Bigtable also comes with
91:27 the flexibility of doing cluster
91:29 resizing without any downtime. The next
91:32 NoSQL option available is Datastore, and
91:35 this is Google Cloud's fast, fully
91:38 managed,
91:39 serverless
91:40 NoSQL document database. Datastore is
91:43 designed for mobile, web,
91:46 and Internet of Things applications.
91:48 Datastore has the capability of doing
91:51 multi-region replication
91:53 as well as ACID transactions. For those
91:56 of you who don't know, I will be covering
91:58 ACID transactions in a later lesson. Next
92:01 up for NoSQL options is Firestore, and
92:05 this is a NoSQL real-time database
92:08 that is optimized for offline use. If
92:11 you're looking to store data in a
92:13 database in real time, Firestore is your
92:16 option, and like Bigtable, you can resize
92:19 the cluster in Firestore without any
92:21 downtime. And the last NoSQL option is
92:24 Memorystore, and this is Google Cloud's
92:27 highly available
92:29 in-memory service for Redis and
92:31 Memcached. This is a fully managed
92:34 service, and so Google Cloud takes care
92:36 of everything for you.
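As a study aid, the database options above can be condensed into one illustrative decision helper. The function and its rules are mine, not a Google API:

```python
def suggest_database(model: str, scale: str = "regional", in_memory: bool = False) -> str:
    """Illustrative mapping from workload traits to the services above."""
    if in_memory:
        return "Memorystore"      # managed Redis / Memcached
    if model == "relational":
        # Spanner when you need multi-region or global scale
        return "Cloud Spanner" if scale == "global" else "Cloud SQL"
    if model == "document":
        return "Firestore"        # or Datastore for mobile/web/IoT back ends
    if model == "wide-column":
        return "Bigtable"         # high throughput, low latency
    raise ValueError(f"no suggestion for model {model!r}")
```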
92:38 Now, I know this
92:41 has been a short lesson on storage and
92:43 database options, but a necessary
92:46 overview nonetheless of what's to come.
92:47 And so that's about all I wanted to
92:50 cover in this lesson, so you can now mark
92:52 this lesson as complete, and let's move
92:54 on to the next one.
92:58 [Music]
92:58 Welcome back. Now, while there are some
93:00 services in GCP that take care of
93:02 networking for you,
93:04 there are still others, like Compute
93:06 Engine, that give you a bit more
93:08 flexibility in the type of networking
93:11 you'd like to establish.
93:13 This lesson will go over these
93:15 networking services at a high level and
93:18 provide you with strictly an overview, to
93:21 give you an idea of what's available for
93:24 any particular type of scenario when it
93:27 comes to connecting and scaling your
93:29 network traffic. I will be going into
93:32 further detail
93:34 on these networking services in later
93:36 lessons. Now, I wanted to start off with
93:39 some core networking features for your
93:41 resources and how to govern specific
93:43 traffic
93:44 traveling to and from your network. This
93:47 is where networks, firewalls, and routes
93:50 come into play. So first I wanted to
93:52 start off with Virtual Private Cloud,
93:55 also known as VPC. Now, VPC
93:59 manages networking functionality for
94:02 your Google Cloud resources.
94:04 This is a virtualized network within
94:06 Google Cloud, so you can picture it as
94:09 your virtualized data center. VPC is a
94:12 core networking service
94:15 and is also a global resource that spans
94:18 throughout all the different regions
94:21 available in Google Cloud. Each project
94:24 contains a default network as well;
94:27 additional networks can be created in
94:29 your project, but networks cannot be
94:32 shared between projects.
94:34 I'll be going into further depth on
94:36 VPC in a later lesson. So now that we've
94:39 covered VPC, I wanted to get into
94:42 firewall rules and routes. Now, firewall
94:45 rules segment your networks with a
94:48 global distributed firewall to restrict
94:51 access to resources. So this governs
94:54 traffic coming into instances on a
94:56 network. Each default network has a
94:59 default set of firewall rules that have
95:01 already been established, but don't fret:
95:04 you can create your own rules and set
95:06 them accordingly, depending on your
95:08 workload. Now, when it comes to routes,
95:11 these specify how traffic should be
95:14 routed within your VPC. To get a little
95:17 bit more granular, routes specify how
95:20 packets leaving an instance should be
95:22 directed, so it's a basic way of defining
95:25 which way your traffic is going to
95:27 travel.
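To make the firewall idea concrete, here is a purely illustrative sketch of priority-based rule matching. The rule fields and names are invented for teaching; this is not GCP's actual implementation or API, though the lower-number-wins priority ordering mirrors how GCP firewall rules are evaluated:

```python
# Illustrative sketch of firewall-style rule matching (invented fields).
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    allow: bool          # allow or deny matching traffic
    protocol: str        # e.g. "tcp"
    port: int
    priority: int        # lower number = evaluated first, as in GCP

def evaluate(rules, protocol, port, default_allow=False):
    """Return True if the highest-priority matching rule allows the traffic."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.protocol == protocol and rule.port == port:
            return rule.allow
    return default_allow  # no rule matched: fall back to the default

rules = [
    Rule("allow-ssh", True, "tcp", 22, 1000),
    Rule("deny-web", False, "tcp", 80, 900),
]
```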
95:30 Moving on to the next concept, I
95:32 wanted to cover a little bit about load
95:34 balancing and how it distributes
95:37 workloads across multiple instances.
95:39 Now, we have two different types of load
95:42 balancing, and both these types of load
95:44 balancing can be broken down to an even
95:47 more granular level. Now, when it comes to
95:51 HTTP or HTTPS load balancing, this is the
95:53 type of load balancing that covers
95:55 worldwide auto scaling and load
95:58 balancing over multiple regions, or even
96:01 a single region, on a single global IP address.
96:05 HTTPS load balancing distributes traffic
96:08 across various regions and makes sure
96:09 that the traffic is routed to the
96:12 closest region; or, in case there are
96:14 failures among instances, or
96:17 instances are being bombarded with traffic,
96:21 HTTP and HTTPS load balancing can route
96:24 the traffic to a healthy instance in the
96:26 next closest region. Another great
96:28 feature of this load balancing is that
96:30 it can distribute traffic based on
96:33 content type. Now, when it comes to
96:35 network load balancing, this is a
96:38 regional load balancer and supports any
96:39 and all ports.
96:42 It distributes traffic among server
96:44 instances in the same region,
96:47 based on incoming IP protocol data, such as address, port, and protocol.
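As a teaching sketch of the distribution idea, here is a simple round-robin balancer over a pool of backends. Real load balancing on Google Cloud happens in Google's infrastructure, not in your application code; the class and addresses below are illustrative:

```python
# Illustrative round-robin distribution over a pool of backend addresses.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._pool = cycle(backends)  # endless rotation over the pool

    def pick(self) -> str:
        """Return the next backend in rotation."""
        return next(self._pool)

lb = RoundRobinBalancer(["10.0.0.2", "10.0.0.3", "10.0.0.4"])
```

A real balancer would also skip backends that fail health checks, which is how traffic gets routed to a healthy instance as described above.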
96:50 Now, when it comes to networking,
96:52 DNS plays a big part, and because DNS
96:55 plays a big part in networking, Google
96:58 has made this service 100% available,
97:03 on top of giving any DNS queries the
97:06 absolute lowest latency. With Google
97:08 Cloud DNS you can publish and maintain
97:11 DNS records by using the same
97:14 infrastructure that Google uses, and you
97:16 can work with your managed zones and DNS
97:19 records, such as MX records, TXT records,
97:23 CNAME records, and A records. And you can
97:26 do this all through the CLI,
97:28 the API,
97:29 or the SDK. Now, some of the advanced
97:32 connectivity options that are available
97:35 in Google Cloud are Cloud VPN and Direct
97:38 Interconnect. Now, Cloud VPN connects your
97:41 existing network, whether it be
97:43 on-premises or in another location,
97:46 to your VPC network through an IPsec
97:49 connection.
97:50 The traffic is encrypted and travels
97:53 between the two networks over the public
97:55 internet. Now, when it comes to Direct
97:58 Interconnect, this connectivity option
98:00 allows you to connect your existing
98:02 network to your VPC network using a
98:06 highly available,
98:07 low-latency connection. This connectivity
98:10 option does not traverse the public
98:13 internet and instead connects to Google's
98:16 backbone, and this is what gives it the
98:18 highly available, low-latency connection.
98:21 A couple of other advanced connectivity
98:23 options are direct and carrier peering.
98:26 These connections allow your traffic to
98:29 flow through Google's edge network
98:31 locations, and peering can be done
98:33 directly or through a
98:36 third-party carrier. And so, although this
98:38 is a very short lesson, I will be going
98:41 into greater depth on all these concepts
98:44 in later lessons in the course. So that's
98:47 all I had to cover for this lesson, so
98:49 you can now mark this lesson as complete,
98:52 and let's move on to the next one.
99:00 Welcome back. In this lesson we're going to learn how resources and entities are organized within Google Cloud, and how permissions are inherited through this approach. Knowing this structure is a fundamental concept that you should know while working in GCP in any capacity.
99:17 Before defining what the resource hierarchy is, I'd like to take a little bit of time to define what a resource is. In the context of Google Cloud, a resource can refer to the service-level resources that are used to process your workloads, such as Compute Engine VM instances, Cloud Storage buckets, and even Cloud SQL databases, as well as the account-level resources that sit above the services, such as the organization itself, the folders, and of course the projects, which we will be getting into a little bit deeper in just a minute.
100:01 The resource hierarchy is Google's way to configure and grant access to the various cloud resources for your company within Google Cloud, both at the service level and at the account level. The resource hierarchy lets you define the granular permissions you need, so that configuring permissions for everyone in the organization actually makes sense.
100:29 So now that we've covered what a resource is, I want to start digging into the resource hierarchy and the structure itself. Google Cloud resources are organized hierarchically using a parent-child relationship. This hierarchy is designed to map an organization's operational structure onto Google Cloud, and to manage access control and permissions for groups of related resources.
100:58 Overall, the resource hierarchy gives organizations better management of permissions and access control. The accessibility of these resources, and their policies, is controlled by Identity and Access Management, also known as IAM, a big component of GCP which we will be digging into a little bit later on in this course.
101:21 When an IAM policy is set on a parent, the child inherits that policy. Access control policies and configuration settings on a parent resource are always inherited by the child. Also, please note that each child object can have exactly one parent, and that these policies are, again, controlled by IAM.
101:46 Now, to understand a little bit more about how the GCP resource hierarchy works, I wanted to dig into the layers that support this hierarchy.
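The inheritance rule just described can be sketched in a few lines of Python. This is an illustrative model only, not the real IAM API: the `Node` class and its method names are made up for the example.

```python
# Illustrative model of GCP-style policy inheritance (not the real IAM API).
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # each child has exactly one parent
        self.bindings = set()         # (member, role) pairs set directly here

    def grant(self, member, role):
        self.bindings.add((member, role))

    def effective_policy(self):
        """A node's effective policy is the union of its own bindings
        and every binding inherited from its ancestors."""
        inherited = self.parent.effective_policy() if self.parent else set()
        return inherited | self.bindings

org = Node("organization")
folder = Node("department-b", parent=org)
project = Node("project-x", parent=folder)

org.grant("admin@bowtieinc.co", "roles/owner")
folder.grant("tony@bowtieinc.co", "roles/editor")

# The project inherits bindings from both the folder and the organization.
print(project.effective_policy())
```

The key point the sketch captures is that a policy set anywhere above a resource always flows down to it, which is why grants should be reviewed before being applied high up the tree.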
101:58 support this hierarchy so this is a diagram of exactly what the
102:01 so this is a diagram of exactly what the resource hierarchy looks like in all of
102:04 resource hierarchy looks like in all of its awesomeness
102:06 its awesomeness including the billing account along with
102:08 including the billing account along with the payments profile but we're not going
102:10 the payments profile but we're not going to get into that right now i'll actually
102:12 to get into that right now i'll actually be covering that in a later lesson so
102:15 be covering that in a later lesson so more on that later
102:17 more on that later so building the structure from the top
102:19 so building the structure from the top down we start off with the domain or
102:22 down we start off with the domain or cloud level and as you can see here the
102:24 cloud level and as you can see here the domain of bowtieinc.co
102:27 domain of bowtieinc.co is at the top
102:28 is at the top this is the primary identity of your
102:31 this is the primary identity of your organization at the domain level this is
102:34 organization at the domain level this is where you manage your users in your
102:36 where you manage your users in your organizations
102:38 organizations so users policies and these are linked
102:41 so users policies and these are linked to g suite or cloud identity accounts
102:45 Underneath the domain level we have the organization level, and this is integrated very closely with the domain. The organization level represents an organization and is the root node of the GCP resource hierarchy, and it is associated with exactly one domain; here we have the domain set as Bowtie Inc. All entities or resources belong to, and are grouped under, the organization. All control policies applied to the organization are inherited by all other entities and resources underneath it, so any folders, projects, or resources will get the policies that are applied at the organization layer.
103:30 Now, I know that we haven't dug into roles as of yet, but the one thing that I did want to point out is that when an organization is created, an Organization Admin role is created as well, and this allows full access to edit any or all resources.
103:49 Moving on to the folders layer, this is an additional grouping mechanism and an isolation boundary between projects. In essence, it's a grouping of other folders, projects, and resources, so if you have different departments and teams within a company, this is a great way to organize them. Now, a couple of caveats when it comes to folders: first, you must have an organization node, and second, while a folder can contain multiple folders or resources, a folder or resource can have exactly one parent.
104:28 Moving into the projects layer, this is a core organizational component of Google Cloud, as projects are required in order to use service-level resources. Projects are the base-level organizing entity in GCP, and they parent all service-level resources. Just as a note, any given resource can exist in only one project, not in multiple projects at the same time.
104:56 Moving on to the last layer, we have the resources layer, and this is any service-level resource created in Google Cloud: everything from Compute Engine instances to Cloud Storage buckets to Cloud SQL databases, APIs, and users. All the service-level resources that we create in Google Cloud fall under this layer.
105:21 this layer now giving the hierarchy a little bit more context i want to touch
105:24 little bit more context i want to touch on labels for just a second
105:26 on labels for just a second labels help categorize resources by
105:29 labels help categorize resources by using a key value pair and you can
105:31 using a key value pair and you can attach them to any resource
105:34 attach them to any resource and so what labels help you do is to
105:36 and so what labels help you do is to break down and organize costs when it
105:39 break down and organize costs when it comes to billing now to give you some
105:41 Now, to give you some more structure with regards to the hierarchy: everything underneath the domain level is considered a resource. To break it down even further, everything from the organization layer to the projects layer is considered an account-level resource, and everything in the resources layer is considered a service-level resource. So this is how the Google Cloud resource hierarchy is split up and organized.
106:11 Before I finish off this lesson, I wanted to give you a quick run-through of how policies can be applied at a hierarchical level, so I thought I'd bring in Tony Bowtie for a quick demo.
106:24 Just to give you an example: Tony Bowtie is part of department B, and Tony's manager, Lark, decides to set a policy on department B's folder. This policy grants the Project Owner role to tony@bowtieinc.co, so Tony will have the Project Owner role for both project X and project Y. At the same time, Lark assigns laura@bowtieinc.co the Cloud Storage Admin role on project X, and thus she will only be able to manage Cloud Storage buckets in that project.
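The demo scenario above can be traced with a small, purely illustrative sketch (not the real IAM API): the folder-level grant flows down to both projects, while the project-level grant stays on project X.

```python
# Illustrative sketch of folder-level vs project-level grants (not real IAM).
# parent maps each node to the node it inherits policies from.
parent = {
    "dept-b": "org",
    "project-x": "dept-b",
    "project-y": "dept-b",
}

# Bindings set directly on each node, mirroring the demo scenario.
bindings = {
    "dept-b": {("tony@bowtieinc.co", "roles/owner")},
    "project-x": {("laura@bowtieinc.co", "roles/storage.admin")},
}

def effective(node):
    """Union of a node's own bindings and everything inherited from ancestors."""
    acquired = set(bindings.get(node, set()))
    while node in parent:
        node = parent[node]
        acquired |= bindings.get(node, set())
    return acquired

# Tony inherits Project Owner on both projects via the folder grant;
# Laura's project-level grant applies to project X only.
print(effective("project-x"))
print(effective("project-y"))
```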
107:03 This hierarchy and permission inheritance comes up quite a bit, not only in the exam, but also as something that should be carefully examined when applying permissions anywhere within the hierarchy in your day-to-day role as an engineer. Applying permissions or policies to resources that already have existing policies may not get you the results you're looking for, and such mistakes can easily be overlooked.
107:30 I hope these diagrams have given you some good context with regards to the resource hierarchy, its structure, and the permissions applied down the chain. That's all I have for this lesson on resource hierarchy, so you can now mark this lesson as complete, and let's move on to the next one.
107:51 [Music]
107:55 Welcome back. In this lesson I will be covering a few different topics that come up when creating a new Google Cloud account. I will be going over the Free Tier and the Always Free options, the differences between them, and a demo showing how you can create your own Free Tier account. I'll also be going over what you will need in order to complete the demo.
108:18 For the remainder of this course, all the demos will run under the Free Tier. When I built this course, I built it with budget in mind, looking for ways to keep the price to a minimum while still keeping the demos extremely useful, and the Free Tier falls within these guidelines and will help you learn without the high ticket price.
108:44 Getting into a quick overview of the differences between the Free Tier and the Always Free option, I have broken them down here by their most significant differences. In the Free Tier, Google Cloud offers you a 12-month free trial with a 300 US dollar credit. This type of account ends when the credit is used up or after the 12 months, whichever happens first.
109:09 months whichever happens first and so for those of you who are looking at
109:10 for those of you who are looking at taking advantage of this on a business
109:13 taking advantage of this on a business level unfortunately the free tier only
109:16 level unfortunately the free tier only applies to a personal account and cannot
109:19 applies to a personal account and cannot be attached to a business account now
109:22 be attached to a business account now moving over to the always free option
109:24 moving over to the always free option the always free option isn't a special
109:26 the always free option isn't a special program but it's a regular part of your
109:29 program but it's a regular part of your google cloud account it provides you
109:31 google cloud account it provides you limited access to many of the google
109:34 limited access to many of the google cloud resources free of charge and once
109:37 cloud resources free of charge and once these limits have been hit
109:38 these limits have been hit then you are charged at the regular per
109:41 then you are charged at the regular per second billing rate
109:42 second billing rate and i will show you a little bit later
109:44 and i will show you a little bit later how to monitor these credits so that you
109:46 how to monitor these credits so that you don't go over using this in conjunction
109:49 don't go over using this in conjunction with the free tier account is not
109:51 with the free tier account is not possible you have to have an upgraded
109:53 possible you have to have an upgraded billing account which can also include a
109:56 billing account which can also include a business account now there are a bunch
109:58 business account now there are a bunch more stipulations in this program and i
110:01 more stipulations in this program and i will include a link to both of them in
110:03 will include a link to both of them in the lesson text below for later viewing
110:06 the lesson text below for later viewing at your convenience
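The trial-expiry rule can be summarized in a tiny sketch. This is purely illustrative arithmetic based on the numbers in this lesson, not a Google API.

```python
# Purely illustrative: encodes the free-trial rule described above.
def free_trial_active(credit_used_usd, months_elapsed,
                      credit_limit_usd=300, trial_months=12):
    """The trial ends when the $300 credit is used up or 12 months
    pass, whichever happens first."""
    return credit_used_usd < credit_limit_usd and months_elapsed < trial_months

print(free_trial_active(credit_used_usd=120, months_elapsed=5))    # True
print(free_trial_active(credit_used_usd=300, months_elapsed=5))    # False: credit gone
print(free_trial_active(credit_used_usd=120, months_elapsed=12))   # False: 12 months up
```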
110:07 Lastly, before we get into the demo, I wanted to give a quick run-through of exactly what's needed to open up your Free Tier account. We're going to start off with a fresh new Gmail address, so that it doesn't conflict with any current Gmail address that you may have. You're going to need a credit card for verification; this is for Google to make sure that you're an actual human being and not a robot, and you won't be charged unless you go above the 300 dollar credit limit. As well, I highly recommend going into a private browsing session: in Chrome you would use an Incognito window, in Firefox you would use Private Browsing, and in Microsoft Edge you would use InPrivate mode.
110:53 In order to start with this free trial, you can head on over to the URL listed here, which I'll also include in the lesson text. So head on over to this URL, and I'll see you there in just a second.
111:06 Okay, so here we are at the free trial URL, in Google Chrome in an Incognito session. We're not going to sign in; we're going to go over here to Create account. Click on Create account, then For myself, because as I mentioned earlier, you're not able to create a free trial account with your business. So I'm going to click on For myself, and it's going to bring you to this page that says Create your Google Account, where you're going to choose Create a new Gmail address instead. Now you're going to fill in all the necessary information that's needed in order to open up this new Gmail account.
111:49 Once you're finished typing your password, you can hit Next. Now I got prompted for a six-digit verification code that I have to plug in, but in order to do that, Google needs my telephone number, so I'm going to type that in now. Just to let you know, this verification is done to let Google know that you're not a bot and you're a real human. Google just sent me a verification code, and this is a one-time verification code that I'm going to plug in, and then I'm going to hit Verify.
112:19 Next you can plug in the necessary information here for a recovery email address, your birthday, and your gender; this is so that Google can authenticate you in case you accidentally misplace your password. Then just hit Next. Here Google gives you a little bit more information on what your number can be used for, and I'm going to go ahead and skip that. And of course, we're going to read through the Terms of Service and the Privacy Policy, then click I agree.
112:52 As you can see, we're almost there. It shows here that we're signing up for the free trial. I'm in Canada, so depending on your country this may change. Of course, I read the Terms of Service, and I'm going to agree to them. I don't really want any updates, so you can probably skip that and just hit Continue.
113:14 This is all the necessary information that needs to be filled out for billing. Here, under Account type, be sure to click on Individual as opposed to Business, and again, fill in all the necessary information with regards to your address and your credit card details. Once you've filled that in, you can click on Start my free trial.
113:36 Once you've entered all that information, you should be brought to this page with a prompt asking you exactly what you need with regards to Google Cloud, and you can just hit Skip here. I'm going to zoom in here, just to see a little better. You're left with a checklist where you can go through all the different resources. But other than that, we're in.
114:00 And so, just to verify that we're signed up for a Free Tier account, I'm going to go over to Billing, and I can see here that I have my free trial credit, and it says 411 dollars; due to the fact that my currency is Canadian dollars, it's been converted from US dollars. We'll be going through billing in a later lesson, but right now we are actually logged in.
114:23 So that's all I wanted to cover in this lesson on how to sign up for your free trial account. You can now mark this lesson as complete, and you can join me in the next one, where we will secure the account using a method called two-step verification.
114:39 [Music]
114:43 Welcome back. In the last lesson we went ahead and created a brand new GCP account; in this lesson we'll be discussing how to secure that GCP account by following some best practices. These apply whenever any account is created in Google Cloud, both for personal accounts and for the super admin account, as it's always good to keep safety a priority. This lesson may be a refresher for those who are a bit more advanced; for everyone else, these steps could help protect you from an attack on your account.
115:18 I'd first like to run you through a scenario showing the outcome for both secure and non-secure accounts, as well as the different options that reside in Google Cloud when it comes to locking down your account. I'll then run through a hands-on demo in the console to show you how you can apply it yourself.
115:40 So in this specific scenario, a username and password alone are used to secure the account. Here, Lark, a trouble-causing manager, looks over the shoulder of Tony Bowtie while he plugs in his username and password, so that he can later access Tony's account and wreak havoc on his reputation. As Tony leaves for coffee, Lark decides to log in and send a company-wide email from Tony's account to change an already-made decision about next season's store opening in Rome, Italy. That would not look good for Tony.
116:19 It was that easy for Lark to steal Tony's password, and in a real-life scenario, it would be that easy for someone to steal yours. When someone steals your password, they could do even more devious things than what Lark did, not just sending out harmful emails: they could lock you out of your account, or even delete emails or documents.
116:41 This is where two-step verification comes in. It can help keep bad people out, even if they have your password. Two-step verification is an extra layer of security; most people have only one layer protecting their account, which is their password. With two-step verification, if a bad person gets hold of your password, they'll still need your phone or security key to get into your account.
117:10 or security key to get into your account so how two-step verification works is
117:13 so how two-step verification works is that sign-in will require something you
117:16 that sign-in will require something you know
117:17 know and something that you have
117:19 and something that you have the first one is to protect your account
117:22 the first one is to protect your account with something you know which will be
117:23 with something you know which will be your password and the second is
117:26 your password and the second is something that you have
117:28 something that you have which is your phone or security key
117:31 which is your phone or security key so whenever you sign into google you'll
117:33 so whenever you sign into google you'll enter your password as usual
117:36 enter your password as usual then a code will be sent to your phone
117:39 then a code will be sent to your phone via text
117:40 via text voice call or google's mobile app or if
117:44 voice call or google's mobile app or if you have a security key you can insert
117:46 you have a security key you can insert it into your computer's usb port
117:49 it into your computer's usb port codes can be sent in a text message or
117:52 codes can be sent in a text message or through a voice call depending on the
117:55 through a voice call depending on the setting you choose
117:56 setting you choose you can set up google authenticator or
117:59 you can set up google authenticator or another app that creates a one-time
118:02 another app that creates a one-time verification code which is great for
118:04 verification code which is great for when you're offline you would then enter
118:07 when you're offline you would then enter the verification code on the sign in
118:09 the verification code on the sign in screen to help verify that it is you
118:12 screen to help verify that it is you another way for verification is using
118:14 another way for verification is using google prompts and this can help protect
118:17 google prompts and this can help protect against sim swap or other phone number
118:20 against sim swap or other phone number based hacks google prompts are push
118:23 based hacks google prompts are push notifications you'll receive on android
118:26 notifications you'll receive on android phones that are signed into your google
118:28 phones that are signed into your google account or iphones with the gmail app or
118:31 account or iphones with the gmail app or google app that's signed into your
118:33 google app that's signed into your google account now you can actually skip
118:36 google account now you can actually skip a second step on trusted devices
118:39 a second step on trusted devices if you don't want to provide a second
118:40 if you don't want to provide a second verification step each time you sign in
118:44 verification step each time you sign in on your computer or your phone you can
118:46 on your computer or your phone you can check the box next to don't ask again on
118:49 check the box next to don't ask again on this computer and this is a great added
118:51 this computer and this is a great added feature if you are the only user on this
118:54 feature if you are the only user on this device
118:55 device this feature is not recommended if this
118:58 this feature is not recommended if this device is being used by multiple users
119:01 device is being used by multiple users security keys are another way to help
119:03 security keys are another way to help protect your google account from
119:05 protect your google account from phishing attacks when a hacker tries to
119:08 phishing attacks when a hacker tries to trick you into giving them your password
119:11 trick you into giving them your password or other personal information now a
119:13 or other personal information now a physical security key is a small device
119:16 physical security key is a small device that you can buy to help prove it's you
119:19 that you can buy to help prove it's you signing in when google needs to make
119:21 signing in when google needs to make sure that it's you
119:23 sure that it's you you can simply connect your key to your
119:25 you can simply connect your key to your computer and verify that it's you and
119:28 computer and verify that it's you and when you have no other way to verify
119:30 when you have no other way to verify your account you have the option of
119:33 your account you have the option of using backup codes and these are
119:35 using backup codes and these are one-time use codes that you can print or
119:38 one-time use codes that you can print or download and these are multiple sets of
119:40 download and these are multiple sets of eight-digit codes that you can keep in a
119:43 eight-digit codes that you can keep in a safe place in case you have no other
119:45 safe place in case you have no other options for verification i personally
119:48 options for verification i personally have found use in using these backup
119:50 have found use in using these backup codes as i have used them in past when
119:53 codes as i have used them in past when my phone died
119:55 my phone died so ever since lark's last email
119:58 so ever since lark's last email tony not only changed his password
120:01 tony not only changed his password but added a two-step verification to his
120:04 but added a two-step verification to his account so that only he would have
120:06 account so that only he would have access and would never have to worry
120:09 access and would never have to worry again about others looking over his
120:11 again about others looking over his shoulder to gain access to his account
120:14 shoulder to gain access to his account as tony leaves for coffee
120:16 as tony leaves for coffee lark tries to log in again but is
120:18 lark tries to log in again but is unsuccessful due to the two-step
120:21 unsuccessful due to the two-step verification in place tony has clearly
120:24 verification in place tony has clearly outsmarted the bad man in this scenario
120:27 outsmarted the bad man in this scenario and lark will have to look for another
120:29 and lark will have to look for another way to foil tony's plan to bring
120:32 way to foil tony's plan to bring greatness to bow ties across the globe
120:35 greatness to bow ties across the globe and this is a sure difference between
120:38 and this is a sure difference between having a secure account and a not so
120:40 having a secure account and a not so secure account and so now that i've gone
120:42 secure account and so now that i've gone through the theory of the two-step
120:44 through the theory of the two-step verification process i'm going to dive
120:47 verification process i'm going to dive into the console and implement it with
120:49 into the console and implement it with the hands-on demo just be aware that you
120:52 the hands-on demo just be aware that you can also do this through the gmail
120:55 can also do this through the gmail console but we're going to go ahead and
120:57 console but we're going to go ahead and do it through the google cloud console
120:59 do it through the google cloud console using the url you see here so whenever
121:02 using the url you see here so whenever you're ready feel free to join me in the
121:04 you're ready feel free to join me in the console
121:06 console and so here we are back in the console
121:08 and so here we are back in the console and over here on the top right hand
121:10 and over here on the top right hand corner you will find a user icon and you
121:13 corner you will find a user icon and you can simply click on it
121:14 can simply click on it and click over to your google account
121:17 and click over to your google account now i'm just going to zoom in for better
121:19 now i'm just going to zoom in for better viewing
121:20 viewing and so in order to enable two-step
121:22 and so in order to enable two-step verification we're gonna go over here to
121:24 verification we're gonna go over here to the menu on the left and click on
121:27 the menu on the left and click on security and under signing into google
121:29 security and under signing into google you will find two-step verification
121:32 you will find two-step verification currently it's off as well as using my
121:34 currently it's off as well as using my phone to sign in is off so i'm going to
121:37 phone to sign in is off so i'm going to click on this bar here for two-step
121:38 click on this bar here for two-step verification
121:40 verification and i definitely want to add an extra
121:41 and i definitely want to add an extra layer of security and i definitely want
121:44 layer of security and i definitely want to keep the bad guys out so i'm going to
121:46 to keep the bad guys out so i'm going to go ahead and click on the get started
121:48 go ahead and click on the get started button
121:49 button it'll ask me for my password
121:52 it'll ask me for my password and because i've entered my phone number
121:54 and because i've entered my phone number when i first signed up for the account
121:56 when i first signed up for the account it actually shows up here this is i
121:58 it actually shows up here this is i antony which is my iphone and so now i
122:01 antony which is my iphone and so now i can get a two-step verification here on
122:04 can get a two-step verification here on my iphone and again this is going to be
122:06 my iphone and again this is going to be a google prompt as it shows here but if
122:08 a google prompt as it shows here but if i wanted to change it to something else
122:11 i wanted to change it to something else i can simply click on show more options
122:14 i can simply click on show more options and here we have a security key as well
122:17 and here we have a security key as well as text message or voice call i highly
122:19 as text message or voice call i highly recommend the google prompt as it's
122:21 recommend the google prompt as it's super easy to use with absolutely no
122:24 super easy to use with absolutely no fuss and so as i always like to verify
122:27 fuss and so as i always like to verify what i've done i'm going to click on
122:29 what i've done i'm going to click on this try it now button and so because i
122:31 this try it now button and so because i wanted to show you exactly what a live
122:34 wanted to show you exactly what a live google prompt looks like i'm going to
122:36 google prompt looks like i'm going to bring up my phone here on the screen so
122:38 bring up my phone here on the screen so that you can take a look
122:41 that you can take a look and it actually sent me a google prompt
122:44 and it actually sent me a google prompt to my phone and i'm just going to go
122:46 to my phone and i'm just going to go ahead and open up my gmail app so i can
122:48 ahead and open up my gmail app so i can verify that it is indeed me that wants
122:51 verify that it is indeed me that wants to log in which i will accept
122:58 and so once i've accepted the google prompt another window will pop up asking
123:01 me about a backup option and so i'll
123:04 simply need my phone number
123:05 and i can either get a text message or a
123:08 phone call and again you have other
123:10 options as well so you can use the
123:12 one-time backup codes which we discussed
123:14 earlier and you can print or download
123:16 them but i usually like to use a text
123:19 message and so i'm going to use that
123:22 i'm going to send it to my phone
123:24 and so just to verify it
123:28 i'm now going to plug in the one-time code
123:30 that was sent to me
123:37 and then just hit next so the second step is the google prompt
123:40 it's my default and my backup option if
123:43 i can't get a google prompt is a voice or
123:46 text message and again this is for my
123:48 account antony gcloud ace at gmail.com
123:51 sending it to my i antony device so turn
123:55 on two-step verification absolutely
123:59 and so there you have it there is
124:01 two-step verification enabled and if i
124:03 wanted to change the available steps i
124:06 can do so here i can also
124:09 edit my phone number and i can also set
124:12 up backup codes in case i need them in
124:15 my personal opinion two-step
124:17 verification
124:19 is a must-have on any account best
124:22 practice is to always enable it for your
124:24 super admin account which would be my
124:27 gmail account that i am currently signed
124:29 up with but i find it is a necessity for
124:32 any other users too and always make it a
124:35 policy for people to add two-step
124:37 verification to their accounts i highly
124:40 recommend that you make it your best
124:42 practice to do this in your role as an
124:45 engineer in any environment at any
124:48 organization again two-step verification
124:51 will help keep you safe your users
124:54 safe and your environment safe from any
124:57 malicious activities that could happen
124:59 at any time and that's all i have for
125:01 this lesson on two-step verification and
125:04 securing your account
125:06 so you can now mark this lesson as
125:08 complete and let's move on to the next
125:10 one
125:11 [Music]
125:15 welcome back
125:17 now there are many different ways in
125:18 which you can interact with google cloud
125:21 services and resources this lesson is an
125:25 overview of the gcp console and how you
125:28 can interact with it using the graphical
125:31 user interface and so for this hands-on
125:33 demo
125:34 i will be diving into how to navigate
125:36 through the gcp console and point out
125:39 some functions and features that you may
125:41 find helpful so with that being said
125:43 let's dive in
125:46 and so here we are back in the console
125:48 up here you can see the free trial
125:50 status and i still have $410 in credit
125:54 again this is canadian dollars so i
125:56 guess consider me lucky so i'm going to
125:58 go ahead over here and dismiss this
126:01 don't activate it because otherwise this
126:03 will end your free trial status and you
126:06 don't want to do that so i'm just going
126:08 to hit dismiss so over here on the main
126:10 page you have a bunch of cards that
126:13 will give you the status of your
126:14 environment as well as the status of
126:17 what's happening within google cloud
126:19 you can customize these cards
126:22 by hitting this customize button over here
126:24 and you can turn them on or
126:26 off and you can go ahead and move these
126:28 around if you'd like
126:36 and i'm going to put this up here as well i'm going to turn on my billing card so
126:38 i can keep track of exactly what my
126:41 spend is i don't really need my getting
126:43 started card so i'm going to turn that
126:45 off as well as the documentation card i'm
126:47 going to turn that off as well
126:50 and the apis card is always nice to have
126:54 as well up here the project info card
126:56 reflects the current project which is my
126:59 first project
127:00 and the project name here is the same
127:04 the project id is showing and the
127:06 project number and i'm going to dive
127:08 deeper into those in another lesson also
127:10 note that your cards will reflect
127:12 exactly what it is that you're
127:14 interacting with and so the more
127:16 resources that you dive into the more cards
127:18 will end up showing up here
127:21 and you can add them and turn them off
127:23 at will so i'm going to go up here and
127:24 click on done because i'm satisfied with
127:27 the way that things look
127:29 here on my home page and over here to
127:31 your left i wanted to focus on
127:33 all the services that are available
127:36 grouped into their own specific topics so for
127:39 instance under compute
127:41 you will find app engine compute engine
127:43 kubernetes and so on so note that
127:46 anything compute-related you'll find
127:48 them all grouped together also another
127:51 great feature is that you can pin
127:53 exactly what it is that you use often so
127:56 if i am a big user of app engine i can
127:59 pin this and it will move its way up to
128:01 the top this way it saves me the time
128:04 of having to go and look for it every
128:06 time i need it and if i'm using it
128:08 constantly it's great to have a shortcut
128:11 to unpin it i simply go back to the pin
128:14 and click on it again as well if i'd
128:16 like to move the menu out of the way to
128:18 get more screen real estate i can simply
128:20 click on this hamburger button here
128:22 and make it disappear and to bring it
128:24 back i can just click on that again and
128:26 it'll come right back now i know that
128:28 there are a lot of resources here to go
128:30 through so if you're looking for
128:32 something specific you can always go up
128:35 to the search bar right here
128:37 and simply type it in so if i'm looking
128:39 for let's say cloud sql i can simply
128:42 type in sql
128:45 and i can find it right here
128:47 i can find the api and anything
128:50 associated with the word sql if i'm
128:53 looking for cloud sql specifically i can
128:56 simply type in
128:57 cloud sql
129:06 another thing to note is that if you want to go back to your homepage you can
129:09 simply go up to the left-hand corner
129:11 here and click on the google cloud
129:13 platform logo and it'll bring you right
129:16 back and right here under the google
129:18 cloud platform logo you'll see another
129:20 set of tabs we have dashboard we also
129:23 have activity and this will show all the
129:25 latest activity that's been done
129:28 and because this is a brand new account
129:30 i don't have much here now because this
129:32 is my first time in activity this is
129:34 going to take some time to index
129:36 and in the meantime i wanted to show you
129:38 filters
129:40 if this were a long list to go through
129:42 where activity has been happening for
129:44 months i can filter through these
129:46 activities either by user or by
129:49 categories or by resource type as well
129:52 as by date i can also combine these to
129:55 search for something really granular and
129:57 beside the activity tab we have
129:59 recommendations which is based on the
130:02 recommender service and this service
130:04 provides recommendations and insights
130:07 for using resources on google cloud
130:10 these recommendations and insights are
130:12 on a per-product or per-service basis
130:15 and they are based on machine learning
130:17 and current resource usage a great
130:19 example of a recommendation is vm
130:22 instance rightsizing so if the
130:24 recommender service detects that a vm
130:26 instance is underutilized it will
130:29 recommend changing the machine size so
130:31 that i can save some money and because
130:33 this is a fresh new account and i
130:36 haven't used any resources this is why
130:38 there are no recommendations for me
130:40 there is no recommendations for me so going back to the home page
130:42 going back to the home page i want to touch on this projects menu
130:44 i want to touch on this projects menu for a second and as you can see here i
130:46 for a second and as you can see here i can select a project now if i had many
130:50 can select a project now if i had many different projects i can simply search
130:52 different projects i can simply search from each different one and so to cover
130:54 from each different one and so to cover the last part of the console i wanted to
130:56 the last part of the console i wanted to touch on this menu on the top right hand
130:59 touch on this menu on the top right hand corner here so clicking on this present
131:01 corner here so clicking on this present icon will reveal my free trial status
131:04 icon will reveal my free trial status which i dismissed earlier next to the
131:06 which i dismissed earlier next to the present we have a cloud shell icon and
131:09 present we have a cloud shell icon and this is where you can activate and bring
131:11 this is where you can activate and bring up the cloud shell which i will be
131:13 up the cloud shell which i will be diving into deeper in a later lesson and
131:16 diving into deeper in a later lesson and right next to it is the help button in
131:18 right next to it is the help button in case you need a shortcut to any
131:20 case you need a shortcut to any documentations or tutorials as well some
131:22 documentations or tutorials as well some keyboard shortcuts
131:24 keyboard shortcuts may help you be a little bit more
131:26 may help you be a little bit more efficient and you can always click on
131:28 efficient and you can always click on this and it'll show you exactly what you
131:30 this and it'll show you exactly what you need to know and so i'm going to close
131:32 need to know and so i'm going to close this
131:33 this and to move over to the next part in the
131:35 and to move over to the next part in the menu this is the notifications so any
131:38 menu this is the notifications so any activities that happen you will be
131:40 activities that happen you will be notified here and you can simply click
131:42 notified here and you can simply click on the bell and it'll show you a bunch
131:44 on the bell and it'll show you a bunch of different notifications for either
131:46 of different notifications for either resources that are created or any other
131:49 resources that are created or any other activities that may have happened now
131:52 activities that may have happened now moving on over
131:54 moving on over three buttons over here is the settings
131:56 three buttons over here is the settings and utilities button
131:58 and utilities button and over here you will find the
131:59 and over here you will find the preferences
132:02 preferences and under communication you will find
132:04 and under communication you will find product notifications and updates and
132:06 product notifications and updates and offers and you can turn them off or on
132:09 offers and you can turn them off or on depending on whether or not you want to
132:11 depending on whether or not you want to receive these notifications as well you
132:13 receive these notifications as well you have your language and region and you
132:15 have your language and region and you can personalize the cloud console as to
132:18 can personalize the cloud console as to whether or not you want to allow google
132:20 whether or not you want to allow google to track your activity and this is great
132:22 to track your activity and this is great for when you want recommendations so i'm
132:24 for when you want recommendations so i'm going to keep that checked off getting
132:26 going to keep that checked off getting back to some other options you will find
132:28 back to some other options you will find a link to downloads as well as cloud
132:31 a link to downloads as well as cloud partners and the terms of service
132:33 partners and the terms of service privacy and project settings and so to
132:36 privacy and project settings and so to cover the last topic i wanted to touch
132:38 cover the last topic i wanted to touch on is the actual google account button
132:41 on is the actual google account button and here you can add other user accounts
132:43 and here you can add other user accounts for when you log into the console with a
132:45 for when you log into the console with a different user as well as go straight to
132:48 different user as well as go straight to your google account and of course if
132:50 your google account and of course if you're using a computer that's used by
132:51 you're using a computer that's used by multiple users you can sign out here as
132:54 multiple users you can sign out here as well and so that's just a quick
132:56 well and so that's just a quick run-through of the console and so feel
132:58 run-through of the console and so feel free to poke around and get familiar
133:01 free to poke around and get familiar with exactly what's available in the
133:02 with exactly what's available in the console so that it's a lot easier for
133:05 console so that it's a lot easier for you to use and allow you to become more
133:07 you to use and allow you to become more efficient and so that's all i have for
133:09 efficient and so that's all i have for this lesson so you can now mark this
133:11 this lesson so you can now mark this lesson as complete and let's move on to
133:14 lesson as complete and let's move on to the next one
133:20 welcome back
133:23 in this lesson i'm going to be going
133:25 through a breakdown of cloud billing and
133:28 an overview of the various resources
133:30 that are involved with billing billing is
133:32 important to know
133:34 and i'll be diving into the concepts
133:37 around billing and billing interaction
133:39 over the next few lessons
133:41 as well i'll be getting into another
133:44 demo going through the details on how to
133:46 create
133:46 edit
133:49 and delete a cloud billing account
133:51 now earlier on in the course i went over
133:54 the resource hierarchy and how google
133:58 cloud resources are broken down starting
134:00 from the domain level down to the
134:02 resource level
134:05 this lesson will focus strictly on the
134:06 billing account
134:08 and payments profile
134:10 and a breakdown of the concepts that are comprised within them
134:12 so getting right into it let's start
134:15 with the cloud billing account a cloud
134:17 billing account is a cloud level
134:19 resource managed in the cloud console
134:22 this defines who pays for a given set of
134:26 google cloud resources billing tracks
134:28 all of the costs incurred by your google
134:31 cloud usage as well it is connected to a
134:35 google payments profile which includes a
134:38 payment method defining how you pay
134:41 for your charges a cloud billing account
134:44 can be linked to one or more projects
134:47 and not to any one project specifically
134:50 cloud billing also has billing specific
134:53 roles and permissions to control
134:55 accessing and modifying billing related
134:59 functions that are established by
135:01 identity and access management cloud
135:04 billing is offered in two different
135:06 account types there is the self-service
135:09 or online account or you can also choose
135:12 the invoiced or offline account
135:14 when it comes to the self-service option
135:17 the payment method is usually a credit
135:19 or debit card and costs are charged
135:21 automatically to the specific payment
135:24 method connected to the cloud billing
135:26 account and when you need access to your
135:28 invoices you can simply go to the cloud
135:31 console and view them online now when it
135:34 comes to the invoiced account first you
135:36 must be eligible for invoiced billing
135:39 once you are made eligible the payment
135:41 method used can be check or wire
135:44 transfer your invoices are sent by mail
135:47 or electronically and they're also
135:50 available in the cloud console as well
135:53 as the payment receipts
135:55 now another cool feature of billing
135:56 accounts is subaccounts and these are
136:00 intended for resellers so if you are a
136:02 reseller you can use subaccounts to
136:05 represent your customers and make it
136:07 easy for chargebacks cloud billing
136:10 subaccounts allow you to group charges
136:12 from projects together on a separate
136:15 section of your invoice and are linked
136:17 back to the master cloud billing account
136:20 on which your charges appear
136:22 subaccounts are designed to allow for
136:24 customer separation and management so
136:27 when it comes to ownership of a cloud
136:30 billing account it is limited to a
136:32 single organization
136:34 it is possible though for a cloud
136:36 billing account to pay for projects that
136:39 belong to an organization that is
136:41 different than the organization that
136:43 owns the cloud billing account now one
136:46 thing to note is that if you have a
136:48 project that is not linked to a billing
136:51 account you will have limited use of
136:53 products and services available for your
136:56 project that is projects that are not
136:59 linked to a billing account cannot use
137:01 google cloud services that aren't free
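as a side note, the same information is visible from the command line. the sketch below only prints the relevant gcloud commands rather than running them, since they assume an installed and authenticated cloud sdk; my-first-project is a placeholder project id.

```shell
# sketch only: print the commands instead of running them, since gcloud
# needs an authenticated cloud sdk to actually return results
PROJECT_ID="my-first-project"  # placeholder, substitute your own

# lists every billing account you can access (id, display name, open/closed)
LIST_CMD="gcloud billing accounts list"

# shows whether a project is linked to a billing account and if billing is enabled
DESCRIBE_CMD="gcloud billing projects describe ${PROJECT_ID}"

echo "${LIST_CMD}"
echo "${DESCRIBE_CMD}"
```

running the printed commands yourself is a quick way to confirm which projects can use paid services.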
137:05 and so now that we've gone through an
137:07 overview of the billing account let's
137:09 take a quick step into the payments
137:11 profile now the payments profile is a
137:15 google level resource managed at
137:17 payments.google.com
137:19 the payments profile
137:21 processes payments for all google
137:23 services and not just for google cloud
137:27 it connects to all of your google
137:29 services such as google ads as well as
137:32 google cloud it stores information like
137:35 your name address and who is responsible
137:38 for the profile it stores your various
137:40 payment methods like credit cards debit
137:43 cards and bank accounts the payments
137:46 profile
137:47 functions as a single pane of glass
137:50 where you can view invoices payment
137:52 history and so on it also controls who
137:56 can view and receive invoices for your
137:59 various cloud billing accounts and
138:01 products
138:02 now one thing to note about payments
138:04 profiles is that there are two different
138:07 types of payment profiles the first one
138:11 is individual and that's when you're
138:13 using your account for your own personal
138:15 payments if you register your payments
138:18 profile as an individual then only you
138:21 can manage the profile you won't be able
138:23 to add or remove users or change
138:26 permissions on the profile now if you
138:29 choose a business profile type you're
138:31 paying on behalf of a business or
138:33 organization a business profile gives
138:36 you the flexibility to add other users
138:40 to the google payments profile you
138:42 manage so that more than one person can
138:45 access or manage a payments profile all
138:48 users added to a business profile
138:51 can then see the payment information on
138:53 that profile another thing to note is
138:56 that once the profile type has been
138:58 selected it cannot be changed afterwards
139:02 and so now that we've quickly gone
139:04 through an overview of all the concepts
139:06 when it comes to billing i am now going
139:08 to run through a short demo where i will
139:11 create a new billing account edit that
139:14 billing account and show you how to
139:16 close a billing account so whenever
139:18 you're ready join me in the console and
139:21 so here i am back in the console and so
139:23 the first thing i want to do is i want
139:25 to make sure that i have the proper
139:26 permissions in order to create and edit
139:29 a new billing account so what i'm going
139:31 to do is go over here to the hamburger
139:33 menu up here in the top left hand corner
139:36 and click on it
139:38 and go over to iam & admin and over to
139:41 iam
139:47 now don't worry i'm not going to get
139:49 really deep into this i will be going
139:52 over this in a later section where i'll
139:56 go through iam and roles but i wanted to
139:58 give you a sense of exactly what you
140:01 need with regards to permissions so now
140:03 that i'm here i'm going to be looking
140:05 for a role that has to do with billing
140:07 so i'm simply going to go over here on
140:10 the left hand menu and click on roles
140:13 and you'll have a slew of roles coming
140:14 up
140:16 and what you can do is filter through
140:19 them just by simply typing in billing
140:22 into the filter table here at the top
140:24 and as you can see here
140:26 there is billing account administrator
140:29 billing account creator and so on and so
140:31 forth and just to give you a quick
140:33 overview on these roles and so for the
140:35 billing account administrator this is a
140:37 role that lets you manage billing
140:40 accounts but not create them so if you
140:42 need to set budget alerts or manage
140:44 payment methods you can use this role
140:46 the billing account creator allows you
140:49 to create new self-serve online billing
140:51 accounts the billing account user allows
140:54 you to link projects to billing accounts
140:56 the billing account viewer allows you to
140:58 view billing account cost information
141:01 and transactions and lastly the project
141:05 billing manager allows you to link or
141:08 unlink a project to and from a billing
141:10 account so as you can see these roles
141:13 allow you to get pretty granular when it comes to billing
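for reference, a billing role can also be granted on a billing account from the command line. this sketch only prints the command instead of running it, since it assumes an authenticated cloud sdk with sufficient billing permissions; the account id and user below are placeholders, and the role ids shown (roles/billing.user and friends) are the standard cloud billing iam role names.

```shell
# sketch only: print the command instead of running it, since gcloud
# needs an authenticated cloud sdk and billing admin permissions
BILLING_ACCOUNT_ID="000000-AAAAAA-BBBBBB"  # placeholder billing account id
MEMBER="user:alice@example.com"            # placeholder user

# grant billing account user (roles/billing.user) on one billing account;
# other role ids follow the same pattern, e.g. roles/billing.viewer,
# roles/billing.admin, roles/billing.creator, roles/billing.projectManager
GRANT_CMD="gcloud billing accounts add-iam-policy-binding ${BILLING_ACCOUNT_ID} --member=${MEMBER} --role=roles/billing.user"

echo "${GRANT_CMD}"
```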
141:15 so i'm going to go back
141:18 over to the left hand menu over on iam
141:20 and click on there and i want to be able
141:23 to check my specific role and what
141:26 permissions i have or will need
141:28 in order to create a new billing account
141:31 and so if i click on this pencil it'll
141:32 show me exactly
141:36 what my role is and what it does and as
141:38 it says here i have full access to all
141:41 resources which means that i am pretty
141:43 much good to go so i'm going to cancel
141:46 out here
141:50 and i'm going to exit iam & admin
141:51 so i'm going to click on the navigation
141:53 menu
141:55 and go over to billing
141:58 and so this billing account is tied to
142:00 the current project and because it's the
142:02 only billing account it's the one that
142:05 shows up and so what i want to do is i
142:06 want to find out a little bit more
142:08 information with regards to this billing
142:10 account so i'm going to move down the
142:13 menu and click on account management
142:15 here i can see the billing account which
142:18 is my billing account i can rename it if
142:19 i'd like
142:22 and i can also see the projects that are
142:24 linked to this billing account so now
142:26 that we've viewed all the information
142:28 with regards to the my billing account
142:30 i'm going to simply click on this menu
142:31 over here
142:34 and click on the arrow and go to manage
142:36 billing accounts and here it will bring
142:39 me to all my billing accounts and
142:42 because i only have one it's shown here
142:44 as my billing account but if i had more than
142:47 one they would show up here and so now
142:48 in order for me to create a new
142:50 billing account i'm going to simply
142:53 click on create account
142:56 and i will be prompted with a name a
142:58 country and a currency for my new
143:00 billing account and i'm actually going
143:02 to rename this billing account
143:05 to gcloud-ace-billing
143:11 i'm going to leave my country as canada
143:13 and my currency in canadian dollars and i'm going to simply hit continue
143:19 and it's giving me the choice of my
143:20 payments profile
143:22 and because i want to use the same
143:24 payments profile i'm just going to
143:27 simply leave everything as is but for
143:29 demonstration purposes
143:31 over here you can click on the payments
143:32 profile
143:34 and the little arrow right beside the
143:36 current profile will give me the option
143:40 to create a new payments profile
143:42 and we're going to leave that as is
143:45 under customer info i have the option of
143:47 changing my address and i can click on
143:50 this pencil icon and change it as well i
143:53 can go to payment methods and click on
143:55 the current payment method with that
143:56 little arrow
144:00 and add a new credit or debit card and
144:01 as i said before we're going to keep
144:03 things the way they are and just hit
144:06 submit and enable billing
144:08 now as you can see here i got a prompt
144:10 saying that a confirmation email will be
144:13 sent within 48 hours now usually when
144:15 you're setting up a brand new billing
144:17 account with an already created payments
144:19 profile you'll definitely get a
144:23 confirmation email in less than 48 hours
144:24 now in order for me to finish up this
144:27 demo i'm gonna wait until the new billing account shows up and continue with the demo from then
144:29 billing account shows up and continue with the demo from then and so here i am
144:31 with the demo from then and so here i am back in the billing console and it only
144:34 back in the billing console and it only took about 20 minutes and the gcloud ace
144:36 took about 20 minutes and the gcloud ace billing account has shown up and so with
144:39 billing account has shown up and so with part of this demo what i wanted to show
144:42 part of this demo what i wanted to show is how you can take a project and attach
144:45 is how you can take a project and attach it to a different billing account and so
144:47 it to a different billing account and so currently my only project is attached to
144:50 currently my only project is attached to the my billing account so now if i
144:53 the my billing account so now if i wanted to change my first project to my
144:56 wanted to change my first project to my gcloud ace dash billing account
144:59 gcloud ace dash billing account i can simply go over here to actions
145:02 i can simply go over here to actions click on the hamburger menu
145:04 click on the hamburger menu and go to change billing
145:07 and go to change billing here i'll be prompted to choose a
145:08 here i'll be prompted to choose a billing account and i can choose g cloud
145:11 billing account and i can choose g cloud a stash billing
145:13 a stash billing and then click on set account
145:17 and then click on set account and there it is my first project is now
145:20 and there it is my first project is now linked to g cloud a stash billing so if
145:23 linked to g cloud a stash billing so if i go back over to my billing accounts
145:26 i go back over to my billing accounts you can see here that my billing account
145:28 you can see here that my billing account currently has zero projects and g cloud
145:32 currently has zero projects and g cloud a stash billing has one project now just
145:35 a stash billing has one project now just as a quick note and i really want to
145:37 as a quick note and i really want to emphasize this
145:39 emphasize this is that if you're changing a billing
145:41 is that if you're changing a billing account for a project
145:43 account for a project and you are a regular user
145:45 and you are a regular user you will need the role of the billing
145:47 you will need the role of the billing account administrator
145:49 account administrator as well as the project owner role
145:52 as well as the project owner role so these two together will allow a
145:55 so these two together will allow a regular user to change a billing account
145:58 regular user to change a billing account for a project
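the same link and unlink operations are also available from the command line. this sketch only prints the commands instead of running them, since they assume an authenticated cloud sdk and the roles just mentioned; the project and billing account ids are placeholders.

```shell
# sketch only: print the commands instead of running them, since gcloud
# needs an authenticated cloud sdk and the billing roles described above
PROJECT_ID="my-first-project"              # placeholder project id
BILLING_ACCOUNT_ID="000000-AAAAAA-BBBBBB"  # placeholder billing account id

# link (or relink) a project to a billing account
LINK_CMD="gcloud billing projects link ${PROJECT_ID} --billing-account=${BILLING_ACCOUNT_ID}"

# unlink a project from its billing account entirely
UNLINK_CMD="gcloud billing projects unlink ${PROJECT_ID}"

echo "${LINK_CMD}"
echo "${UNLINK_CMD}"
```

relinking to a different account is just the link command again with the new account id, which mirrors the change billing flow in the console.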
145:59 and so now what i want to do is i want
146:02 to take the gcloud-ace-billing account and i
146:05 want to close that account
146:07 but before i do that i need to unlink
146:10 this project and bring it back to
146:12 another billing account which in this
146:14 case would be my billing account so i'm
146:17 going to go back up here to the menu
146:18 click on my projects and we're going to
146:21 do the exact same thing that we did
146:23 before
146:24 under actions i'm going to click on the
146:26 hamburger menu and change billing
146:28 i'm going to get the prompt again and
146:30 under billing account i'm going to
146:32 choose my billing account and then click
146:34 on set account
146:36 so as you can see the project has been
146:38 moved to a different billing account i'm
146:40 going to go back to my billing accounts
146:43 and as you can see here the project is
146:45 back to my billing account and so now
146:47 that the project is unlinked from the
146:49 gcloud-ace-billing account i can now
146:52 go ahead and close out that account now
146:54 in order to do that i'm going to click
146:56 on gcloud-ace-billing i'm going to
146:59 go down here on the left hand menu all the
147:02 way to the bottom to account management
147:04 click on there and at the top here you
147:07 will see close billing account i'm going
147:10 to simply click on that and i'll get a
147:12 prompt that i've spent zero dollars and
147:15 it's linked to zero projects
147:17 now if i did have a project that was
147:19 linked to this billing account i would
147:21 have to unlink the project before i was
147:23 able to close this billing account so as
147:26 a failsafe i'm being asked to type close
147:29 in order to close this billing account
147:31 so i'm going to go ahead and do that now
147:33 and click on close billing account just
147:36 as a note google gives me the option to
147:39 reopen this billing account in case i
147:41 did this by mistake and i really needed
147:43 it
147:44 i can reopen this billing account so now
147:47 moving back over to billing you'll see
147:49 here
147:50 that i'm left with my single billing
147:52 account called my billing account with
147:55 the one project that's linked to it and
147:57 so that covers my demo on creating
148:00 editing and closing a new billing
148:02 account as well as linking and unlinking
148:05 a project to and from a different
148:08 billing account so i hope you found this
148:10 useful
148:11 and you can now mark this lesson as
148:12 complete and let's move on to the next
148:15 one
148:16 [Music]
148:20 welcome back
148:21 in this lesson i'm going to be going
148:23 over controlling costs in google cloud
148:26 along with budget alerts
148:28 i will be touching on all the available
148:30 discounts the number of ways to control
148:33 costs
148:34 and go over budget alerts to get a more
148:36 granular and programmatic approach so
148:39 starting off i wanted to touch on
148:41 committed use discounts now committed
148:44 use discounts provide discounted prices
148:47 in exchange for your commitment to use a
148:50 minimum level of resources for a
148:53 specified term the discounts are
148:55 flexible cover a wide range of resources
148:58 and are ideal for workloads with
149:01 predictable resource needs when you
149:03 purchase google cloud committed use
149:05 discounts you commit to a consistent
149:08 amount of usage for a one or three year
149:11 period there are two commitment types
149:13 available
149:14 and as you can see here they are spend
149:17 based and resource based commitment
149:19 types
149:20 and unlike most other providers the
149:23 commitment fee is billed monthly so
149:26 going over the specific commitment types
149:28 i wanted to start off with spend based
149:30 commitments now for spend based
149:32 commitments you commit to a consistent
149:35 amount of usage measured in dollars per
149:38 hour
149:39 of equivalent on-demand spend for a one
149:42 or three year term in exchange you
149:45 receive a discounted rate on the
149:47 applicable usage your commitment covers
149:50 so you can purchase committed use
149:51 discounts from any cloud billing account
149:54 and the discount applies to any eligible
149:57 usage in projects paid for by that cloud
150:00 billing account any overage is charged
150:03 at the on-demand rate spend based
150:06 commitments can give you a 25% discount
150:10 off on-demand pricing for a one-year
150:12 commitment and up to a 52% discount off
150:16 of on-demand pricing for a three-year
150:19 commitment
150:22 commitment now spend-based commitments are restricted to specific resources
150:25 are restricted to specific resources which is cloud sql database instances
150:28 which is cloud sql database instances and google cloud vmware engine
150:31 and google cloud vmware engine and this commitment
150:32 and this commitment applies to the cpu and memory usage for
150:36 applies to the cpu and memory usage for these available resources now the other
150:38 these available resources now the other committed use discount is the
150:41 committed use discount is the resource-based commitment
150:43 resource-based commitment so this discount is for a commitment to
150:45 so this discount is for a commitment to spend a minimum amount for compute
150:48 spend a minimum amount for compute engine resources in a particular region
150:51 engine resources in a particular region resource-based commitments are ideal for
150:54 resource-based commitments are ideal for predictable workloads when it comes to
150:57 predictable workloads when it comes to your vms
150:58 your vms when you purchase a committed use
151:00 when you purchase a committed use contract you purchase compute resources
151:03 contract you purchase compute resources such as vcpus
151:06 such as vcpus memory
151:07 memory gpus
151:08 gpus and local ssds and you purchase these at
151:12 and local ssds and you purchase these at a discounted price in return for
151:14 a discounted price in return for committing to paying for those resources
151:17 committing to paying for those resources for one or three years the discount is
151:20 for one or three years the discount is up to 57 percent for most resources like
151:24 up to 57 percent for most resources like machine types or gpus the discount is up
151:28 machine types or gpus the discount is up to 70 percent for memory optimized
151:30 to 70 percent for memory optimized machine types and you can purchase a
151:33 machine types and you can purchase a committed use contract for a single
151:35 committed use contract for a single project or purchase multiple contracts
151:38 project or purchase multiple contracts which you can share across many project
151:41 which you can share across many project by enabling shared discounts and sharing
151:44 by enabling shared discounts and sharing your committed use discounts across all
151:47 your committed use discounts across all your projects reduces the overhead of
151:50 your projects reduces the overhead of managing discounts on a per project
151:53 managing discounts on a per project basis
151:54 basis and maximizes your savings by pooling
151:57 and maximizes your savings by pooling all of your discounts across your
151:59 all of your discounts across your project's resource usage if you have
152:02 project's resource usage if you have multiple projects that share the same
152:04 multiple projects that share the same cloud billing account you can enable
152:07 cloud billing account you can enable committed use discount sharing so all of
152:10 committed use discount sharing so all of your projects within that cloud billing
152:12 your projects within that cloud billing account share all of your committed use
152:15 account share all of your committed use discount contracts and so your sustained
152:18 discount contracts and so your sustained use discounts are also pooled at the
152:21 use discounts are also pooled at the same time so touching on sustained use
152:24 So touching on sustained use discounts: these are automatic discounts for running specific Compute Engine resources for a significant portion of the billing month. Sustained use discounts apply to the general-purpose, compute-optimized, and memory-optimized machine types, as well as sole-tenant nodes and GPUs. Again, sustained use discounts are applied automatically to usage within a project, separately for each region, so there's no action required on your part to enable these discounts. So, for example, when you're running one of these resources for more than, let's say, 25% of the month, Compute Engine automatically gives you a discount for every incremental minute that you use that instance. Now, sustained use discounts automatically apply to VMs created by both Google Kubernetes Engine and Compute Engine, but unfortunately do not apply to VMs created using the App Engine flexible environment, Dataflow, or E2 machine types. To take advantage of the full discount, you would create your VM instances on the first day of the month, as discounts reset at the beginning of each month. The following table shows the discount you get at each usage level of a VM instance. These discounts apply to all machine types but don't apply to preemptible instances, and sustained use discounts can save you up to a maximum 30% discount.
154:04 Another great way to calculate savings in Google Cloud is by using the GCP pricing calculator. This is a quick way to get an estimate of what your usage will cost on Google Cloud. The GCP pricing calculator can help you identify the pricing for the resources that you plan to use in your future architecture, so that you can calculate how much that architecture will cost you. This calculator holds the pricing for almost all resources encapsulated within GCP, so you can get a pretty good idea of what your architecture will cost you without having to find out the hard way. The calculator can be found at the URL shown here, and I will include it in the lesson text below.
154:54 Now, moving right along to Cloud Billing budgets. Budgets enable you to track your actual spend against your planned spend. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications, and budget alert emails help you stay informed about how your spend is tracking against your budget. This example here is a diagram of a budget alert notification, and it shows the default functionality for any budget alert notification. Now, to get a little bit more granular, you can define the scope of the budget: for example, you can scope the budget to apply to the spend of an entire Cloud Billing account, or get more granular to one or more projects, and even down to a specific product. You can set the budget amount to a total that you specify, or base the budget amount on the previous month's spend. When costs exceed a percentage of your budget, based on the rules that you set, by default alert emails are sent to billing account administrators and billing account users on the target Cloud Billing account; again, this is the default behavior of a budget email notification.
156:14 Now, as said before, the default behavior of a budget is to send alert emails to billing account administrators and billing account users on the target Cloud Billing account when the budget alert threshold rules trigger an email notification. These email recipients can be customized by using Cloud Monitoring to specify other people in your organization who should receive the budget alert emails; a great example of this would be a project manager or a director who needs to know how much of the budget has been spent.
156:50 And the last concept I wanted to touch on when it comes to Cloud Billing budgets is that you can also use Pub/Sub for programmatic notifications, to automate your cost-control response based on the budget notification. You can use Pub/Sub in conjunction with billing budgets to automate cost-management tasks: it provides a real-time status of the Cloud Billing budget and allows you to do things like send notifications to Slack, disable billing to stop usage, or selectively control usage when the budget has been met.
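As a hedged sketch of what that automation might look like, the handler below parses a budget notification payload. The field names (costAmount, budgetAmount, alertThresholdExceeded, currencyCode) follow Google's documented Pub/Sub message schema, but verify them against the current docs before relying on this:

```python
import json

def handle_budget_message(data_bytes):
    """Illustrative handler for a Cloud Billing budget Pub/Sub message.

    Returns a short status string; a real handler might post to Slack
    or call the billing API to disable billing, as described above.
    """
    msg = json.loads(data_bytes)
    cost, budget = msg["costAmount"], msg["budgetAmount"]
    if cost >= budget:
        return "over budget: consider disabling billing or capping usage"
    if msg.get("alertThresholdExceeded"):  # present once a threshold rule fires
        return f"threshold hit at {cost}/{budget} {msg.get('currencyCode', '')}".strip()
    return "under budget"

# A sample payload shaped like a 50% threshold alert on a $10 budget:
sample = json.dumps({"budgetDisplayName": "ace-budget",
                     "costAmount": 5.0, "budgetAmount": 10.0,
                     "alertThresholdExceeded": 0.5,
                     "currencyCode": "USD"}).encode()
print(handle_budget_message(sample))  # threshold hit at 5.0/10.0 USD
```

In practice you would wire this into a Cloud Function or other subscriber on the budget's Pub/Sub topic, so the response runs in real time rather than waiting for a human to read an email.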
157:28 And so those are all the concepts that I wanted to cover when it came to Cloud Billing budgets. Now, I know this lesson may have been a bit dry, and not the most exciting service to dive into, but it is very important to know, both for the exam and for your role as an engineer, when it comes to cutting costs wherever your business owners deem necessary. And so that's all I had for this lesson. You can now mark this lesson as complete, and please join me in the next one, where I dive into the console and do some hands-on demos covering committed use discounts, creating and editing budget alerts, and adding a little bit of automation to the budget alerts.
158:12 [Music]
158:16 Welcome back. In the last lesson, I went over a few ways to do cost management and the behaviors of budget alerts. In this lesson, I will be doing a demo to show you committed use discounts and reservations, along with how to create budget alerts and how to edit them. So with that being said, let's dive in. I'm going to start off with committed use discounts, and in order to get there, I'm going to find it in Compute Engine. So I'm going to simply go up here to the top left-hand corner, back to the navigation menu, go down to Compute Engine, and then go over here to Committed use discounts. As we discussed earlier, these commitments for Compute Engine are resource-based, and as you can see here, we have hardware commitments and reservations. Reservations I will get into a little bit later, but with regard to hardware commitments, we're going to get into those right now. As expected, I have no current commitments, so I'm going to go up to Purchase commitment. I need to start off with a name for this commitment, and I'm going to name it demo-commitment.
159:36 It's going to ask me for a region; I'm going to keep it in us-central1. With the commitment type, here is where I can select the type of machine that I'm looking for: I can go into general-purpose N1, N2, N2D, and E2, as well as memory-optimized and compute-optimized. I'm going to keep it at general-purpose N1. Again, the duration is one or three years. When we get down to cores, I can have as many vCPUs as I'd like, so if I needed 10, I can do that, and I'll get a pop-up here on the right showing me the estimated monthly total, as well as an hourly rate, for this specific VM with 10 cores. I can also select a duration of three years, and as expected, I'll get higher savings because I'm making a bigger commitment. So, bringing it back down to one year, let's put the memory up to 64 gigabytes. Here I can add GPUs, and I have quite a few to choose from, as well as local SSDs. With the local SSDs, I can choose as many disks as I'd like, as long as it's within my quota, and each disk's size is going to be 375 gigabytes, so if you're looking into committed use discounts using local SSDs, please keep that in mind. Again, a reservation can be added here, and I'll be getting into that in just a second. Now, I don't actually want to purchase it, but I did want to show you exactly what a committed use discount would look like and how you would apply it. Again, here on the right-hand side, it shows me the details of the estimated monthly total and the hourly rate. So I'm going to go over here and hit Cancel. If I had applied it, the commitment would show up here in this table and give me all the specified configurations of that instance.
161:26 Now, touching on reservations: a reservation is when you reserve the VM instances you need. When the reservation has been placed, it ensures that those resources are always available for you. As some of you might know, when you go to spin up a new Compute Engine VM, especially when it comes to autoscaling instance groups, the instances can sometimes be delayed or unavailable. Now, the thing with reservations is that a VM instance can only use a reservation if its properties exactly match the properties of the reservation, which is why it's such a great pairing with committed use discounts. So if you're looking to make a resource-based commitment and you always want your instance available, you can simply create a reservation, attach it to the commitment, and you will never have to worry about having the resources to satisfy your workload, as they will always be there.
162:28 always be there so again going into create reservation it'll show me here
162:31 create reservation it'll show me here the name the description i can choose to
162:34 the name the description i can choose to use the reservation automatically or
162:36 use the reservation automatically or select a specific reservation the region
162:39 select a specific reservation the region and zone
162:40 and zone number of instances and here i can
162:43 number of instances and here i can specify the machine type or specify an
162:46 specify the machine type or specify an instance template and again this is
162:48 instance template and again this is another use case where if you need
162:50 another use case where if you need compute engine instances spun up due to
162:53 compute engine instances spun up due to auto scaling this is where reservations
162:56 auto scaling this is where reservations would apply so getting back to machine
162:58 would apply so getting back to machine type i can choose from vcpus
163:02 type i can choose from vcpus as well as the memory i can customize it
163:05 as well as the memory i can customize it i can add as many local ssds as my
163:08 i can add as many local ssds as my quotas will allow me and i can select my
163:10 quotas will allow me and i can select my interface type and i'm going to cancel
163:13 interface type and i'm going to cancel out of here now when it comes to
163:15 out of here now when it comes to committed use discounts and reservations
163:17 committed use discounts and reservations as it pertains to the exam
163:20 as it pertains to the exam i have not seen it but since this is an
163:22 i have not seen it but since this is an option to save money i wanted to make
163:25 option to save money i wanted to make sure that i included it in this lesson
163:28 sure that i included it in this lesson as this could be a great option for use
163:30 So now that we've covered resource-based committed use discounts, I wanted to move on to spend-based commitments, and where you would find those is over in Billing. Again, I'm going to go up to the navigation menu in the top left-hand corner and go into Billing. Now, you'd think that you would find it here under Commitments, but a commitment will only show up there once you have purchased it; as you can see, it's prompting us to go to the billing Overview page. Going back to the Overview page, you'll find it down here on the right, and so I can now purchase a commitment. As we discussed before, a spend-based commitment can be used for either Cloud SQL or VMware Engine. I select my billing account, the commitment name, and the period, either one year or three years, and it also shows me the discount, which could help sway my decision, as well as the region and the hourly on-demand commitment. Now, you're probably wondering what that is, and as explained here, this commitment is based on the on-demand price. Once this is all filled out, the commitment summary will be populated, and after you agree to all the terms of service, you can simply hit Purchase. But I'm going to cancel out of here. So that is an overview of the spend-based commitment, and again, I have not seen these committed use discounts on the exam, but I do think they're good to know for your day-to-day environment if you're looking to save money and really break down costs.
165:12 So now that I've covered committed use discounts and reservations, I wanted to move over to budgets and budget alerts, and because I'm already on the Billing page, all I need to do is go over here to the left-hand menu and click on Budgets & alerts. Now, setting up a budget for yourself for this course would be a great idea, especially for those who are cost-conscious about how much they're spending on their cloud usage. So we're going to go ahead and create a new budget right now. Let's go up here to the top, to Create budget, and I'm going to be brought to a new window where I can put in the name of the budget; I'm going to call this ace-budget. And because I want to monitor all projects and all products, I'm going to leave the scope as is, but if you did have multiple projects, you could get a little bit more granular, and the same thing with products.
166:14 So I'm going to go ahead and leave it as is and just click on Next. Under budget type, I can select either a specified amount or last month's spend, and for this demo I'm going to keep it at a specified amount. Because I want to be really conscious about how much I spend in this course, I'm going to put in $10 for my target amount, I'm going to include credits in the cost, and then I'm going to click on Next. Now, these threshold rules are where billing administrators will be emailed when a certain percentage of the budget is hit. So if my spend happens to hit five dollars, because I am a billing administrator, I will be sent an email telling me that my spend has hit five dollars. I also have the option of changing these percentages: if I decided to change it to forty percent, the amount goes to four dollars, and this is done automatically, so there's no need to do any calculations. But I'm going to keep this at 50 percent. And vice versa: if I wanted to change the amount, the percentage of the budget will change.
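The console does this percent-to-amount arithmetic for you, but to make the relationship concrete, here is a tiny illustrative helper (the function name is my own; 50%, 90%, and 100% are the console's default threshold rules):

```python
def alert_amounts(budget_total, thresholds):
    """Dollar amounts at which each budget alert threshold rule fires."""
    return [round(budget_total * t, 2) for t in thresholds]

# The default rules applied to the $10 budget from this demo:
print(alert_amounts(10, [0.5, 0.9, 1.0]))  # [5.0, 9.0, 10.0]
# Changing a rule to 40% makes it fire at $4, as shown in the console:
print(alert_amounts(10, [0.4]))  # [4.0]
```

So on a $10 budget, the default rules email the billing administrators at $5, $9, and $10 of actual (or forecasted) spend.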
167:21 Now, with the trigger, I have the option of selecting forecasted or actual, and I'm going to keep it on actual. If I want, I can add more threshold rules, but I'm going to leave everything as is and just click on Finish.
167:36 finish and now as you can see here i have a
167:38 and now as you can see here i have a budget name of ace budget now because
167:41 budget name of ace budget now because the budget name doesn't have to be
167:43 the budget name doesn't have to be globally unique in your environment you
167:46 globally unique in your environment you can name your budget exactly the same
167:48 can name your budget exactly the same and again it'll give me all the specific
167:50 and again it'll give me all the specific configurations that i filled out shows
167:52 configurations that i filled out shows me how much credits i've used and that's
167:55 me how much credits i've used and that's it and that's how you would create a
167:57 it and that's how you would create a budget alert now if i needed to edit it
168:00 budget alert now if i needed to edit it i can always go back to ace budget and
168:03 i can always go back to ace budget and here i can edit it but i'm not going to
168:05 here i can edit it but i'm not going to touch it and i'm just going to hit
168:06 touch it and i'm just going to hit cancel
168:07 cancel and so the last thing i wanted to show
168:09 and so the last thing i wanted to show you before we end this lesson is how to
168:11 you before we end this lesson is how to create another budget but being able to
168:14 create another budget but being able to send out the trigger alert emails to
168:17 send out the trigger alert emails to different users
168:18 different users and so in order to do that i'm going to
168:20 and so in order to do that i'm going to go back up here to create budget i'm
168:22 go back up here to create budget i'm going to name this to ace dash
168:26 going to name this to ace dash budget
168:27 budget dash
168:28 dash users i'm going to leave the rest as is
168:32 users i'm going to leave the rest as is i'm going to click on next again i'm
168:34 i'm going to click on next again i'm going to leave the budget type the way
168:35 going to leave the budget type the way it is the target amount i'm going to put
168:38 it is the target amount i'm going to put ten dollars
168:39 ten dollars leave the include credits and cost and
168:41 leave the include credits and cost and just click on next
168:43 just click on next and so here i'm going to leave the
168:44 and so here i'm going to leave the threshold rules the way they are and
168:46 threshold rules the way they are and right here under manage notifications
168:49 right here under manage notifications i'm going to click off link monitoring
168:51 i'm going to click off link monitoring email notification channels to this
168:53 email notification channels to this budget now because the email
168:56 budget now because the email notification channel needs cloud
168:58 notification channel needs cloud monitoring in order to work i am
169:00 monitoring in order to work i am prompted here to select a workspace
169:03 prompted here to select a workspace which is needed by cloud monitoring so
169:05 which is needed by cloud monitoring so because i have none i'm going to go
169:07 because i have none i'm going to go ahead and create one and so clicking on
169:10 ahead and create one and so clicking on managing monitoring workspaces will
169:12 managing monitoring workspaces will bring you to the documentation but in
169:14 bring you to the documentation but in order for me to get a workspace created
169:17 order for me to get a workspace created i need to go to cloud monitoring now
169:20 i need to go to cloud monitoring now workspace is the top level container
169:23 workspace is the top level container that is used to organize and control
169:25 that is used to organize and control access to your monitoring notification
169:28 access to your monitoring notification channels in order for your notification
169:30 channels in order for your notification channels to work they must belong to a
169:33 channels to work they must belong to a monitoring workspace so you need to
169:36 monitoring workspace so you need to create at least one workspace before
169:38 create at least one workspace before adding monitoring
169:40 adding monitoring notification channels and don't worry
169:42 notification channels and don't worry we'll be getting into greater depth with
169:45 we'll be getting into greater depth with regards to monitoring in a later section
169:48 regards to monitoring in a later section in this course so i'm going to go ahead
169:50 in this course so i'm going to go ahead and cancel this
169:51 and cancel this and i'm going to go up to the navigation
169:53 and i'm going to go up to the navigation menu
169:54 menu click on there
169:56 click on there and scroll down to monitoring
170:03 and then overview and this may take a minute to start up
170:05 minute to start up as the apis are being enabled and the
170:07 as the apis are being enabled and the default workspace for cloud monitoring
170:10 default workspace for cloud monitoring is being built
170:12 is being built okay and now that the monitoring api has
170:14 okay and now that the monitoring api has been enabled we are now in monitoring
170:17 been enabled we are now in monitoring the workspace that was created is my
170:19 the workspace that was created is my first project so now that we have our
170:21 first project so now that we have our monitoring workspace created i need to
170:24 monitoring workspace created i need to add the emails to the users that i want
170:27 add the emails to the users that i want the alerts to be sent out to and added
170:29 the alerts to be sent out to and added to the notification channel so in order
170:31 to the notification channel so in order to do that i'm going to go over here to
170:34 to do that i'm going to go over here to alerting and up here at the top i'm
170:36 alerting and up here at the top i'm going to click on edit notification
170:38 going to click on edit notification channels
170:39 channels and here as you can see are many
170:41 and here as you can see are many notification channels that you can
170:42 notification channels that you can enable by simply clicking on add new
170:45 enable by simply clicking on add new over here on the right so now what i'm
170:48 over here on the right so now what i'm looking for is under email i'm going to
170:51 looking for is under email i'm going to click on add new now here i can add the
170:53 click on add new now here i can add the new email address and so for me i'm
170:56 new email address and so for me i'm going to add antony
170:58 going to add antony at antonyt.com
171:00 at antonyt.com and you can add whatever email address
171:02 and you can add whatever email address you'd like
171:03 you'd like and under display name i'm going to add
171:06 and under display name i'm going to add billing admin
171:08 billing admin notification
171:14 and just click on save and as you can see my email has been
171:17 and as you can see my email has been added to the notification channel and so
171:19 added to the notification channel and so this is all i needed to do in order to
171:22 this is all i needed to do in order to move on to the next step and so now that
171:24 move on to the next step and so now that i've covered creating my monitoring
171:26 i've covered creating my monitoring workspace as well as adding another
171:29 workspace as well as adding another email to my email notification channels
171:32 email to my email notification channels i can now go back to billing and finish
171:35 i can now go back to billing and finish off my budget alert
171:44 create budget, and we're gonna go through the same steps. i'll call this billing alert, leave everything else as is, and click on next. i'm just going to change the target amount to 10 and click on next. i'm going to leave everything here as is, and i'm going to go back to click on link monitoring email notification channels to this budget. now if you notice, when i click on select workspace, my first project shows up, and here it will ask me for my notification channels. and because i've already set it up, i can simply click on it, and you'll see the billing admin notification channel. and so if i didn't have this set up, i can always go to manage notification channels, and it'll bring me back to the screen which you saw earlier. and so now that that's set up, i can simply click on finish.
172:42 and so now that i have a regular budget alert, i also have another budget alert that can go to a different email. so if you have a project manager or a director that you want to send budget alerts to, this is how you would do it. and so that about covers this demo on committed use discounts, reservations, budgets, and budget alerts, and that's all i wanted to cover for this lesson. so you can now mark this lesson as complete, and let's move on to the next one.
173:17 welcome back. in this short lesson i will be covering the exporting of your billing data, so that you're able to analyze that data and understand your spend at a more granular level. i will also be going through a short demo where i will show you how to enable the export billing feature and bring it into bigquery to be analyzed.
173:41 now cloud billing export to bigquery enables you to export granular google cloud billing data, such as usage, cost details, and pricing data, automatically to a bigquery data set that you specify. then you can access your cloud billing data from bigquery for detailed analysis, or use a tool like data studio to visualize your data. just a quick note here that billing export is not retroactive, and this should be taken into consideration when planning for analysis on this data. and so there are two types of cloud billing data that you can export: there's the daily cost detail data and the pricing data, and these can be selected right within the console depending on your use case.
174:32 and so now that we've gone through exactly what billing export is, i wanted to get into a demo and show you how to export your cloud billing data to bigquery, and go through all the necessary steps to get it enabled. so when you're ready, join me
174:51 it enabled so when you're ready join me in the console and so here we are back
174:53 in the console and so here we are back in the console and so in order to enable
174:56 in the console and so in order to enable billing export i'm going to be going to
174:58 billing export i'm going to be going to the billing page so i'm going to move up
175:01 the billing page so i'm going to move up to the top left hand corner to the
175:03 to the top left hand corner to the navigation menu
175:05 navigation menu and click on billing
175:08 and click on billing here in the left hand menu you'll see
175:10 here in the left hand menu you'll see billing export and you can just click on
175:12 billing export and you can just click on there
175:13 there and so for those just coming to billing
175:15 and so for those just coming to billing export for the first time there's a
175:17 export for the first time there's a quick summary of exactly what the
175:19 quick summary of exactly what the bigquery export is used for and as we
175:22 bigquery export is used for and as we discussed earlier there is an option for
175:24 discussed earlier there is an option for the daily cost detail and for pricing
175:28 the daily cost detail and for pricing and i'm going to use the daily cost
175:30 and i'm going to use the daily cost detail in this demo and export that data
175:33 detail in this demo and export that data to bigquery so the first step i'm going
175:35 to bigquery so the first step i'm going to do
175:36 to do is to click on edit settings and it's
175:38 is to click on edit settings and it's going to bring me to a new page where it
175:41 going to bring me to a new page where it will ask me for my project and this is
175:43 will ask me for my project and this is where my billing data is going to be
175:45 where my billing data is going to be stored but as you can see here i'm
175:47 stored but as you can see here i'm getting a prompt that says you need to
175:49 getting a prompt that says you need to create a bigquery data set first now the
175:52 create a bigquery data set first now the bigquery data set that is asking for is
175:55 bigquery data set that is asking for is where the billing data is going to be
175:57 where the billing data is going to be stored so in order to move forward with
175:59 stored so in order to move forward with my billing export i need to go to
176:01 my billing export i need to go to bigquery and set up a data set so i'm
176:04 bigquery and set up a data set so i'm going to simply click on this button
176:05 going to simply click on this button here that says go to bigquery
176:10 and it's going to bring me to the bigquery page, where i'll be prompted with a big welcome note. you can just click on done, and over here on the right-hand side, where it says create data set, i'm just going to click on there and create my new data set. and so for my data set id, i'm going to call this billing export. and just as a note, with the data set id you can't use any characters like hyphens, commas, or periods, and therefore i capitalize the b and the e.
176:41 now with the data location, the default location is the us multi-region, but i can simply click on the drop-down and have the option to store my data in a different location. but i'm going to keep it at the default. i have the option of expiring this table in either a certain amount of days or to never expire. as well, when it comes to encryption, i'm going to leave it as a google-managed key, as opposed to a customer-managed key, and i'll get into encryption and key management a little later on in this course. i'm going to go ahead and move right down to the bottom and click on create data set.
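as a side note, the same data set can be created with the bq command-line tool that ships with the cloud sdk. this is only a sketch; my-project-id is a placeholder for your own project id:

```shell
# sketch only: create the BillingExport data set in the US multi-region
bq mk --dataset \
  --location=US \
  my-project-id:BillingExport
```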
177:18 and now my data set has been created. i can now see it over here on the left-hand side menu, where subtle-poet-28400 is the id for my project. if i simply click on the arrow beside it, it'll show my billing export data set; because there's nothing in it, nothing is showing.
177:37 and so now that the data set is set up, i can now go back to the billing export page and finish setting up my billing export. so with that being said, i'm going to go back up to the navigation menu, head over to billing, and go to billing export. under daily cost detail, i'm going to click on edit settings, and because i have a data set already set up, and since it's the only one, it has been populated in my billing export data set field. if i had more data sets, then i would be able to select them here as well. so i'm going to leave the data set at billing export and simply click on save.
178:15 and so now that billing export has been enabled, i'll be able to check on my billing as it is updated each day, as it says here. and to go right to the data set, i can simply click on this link and it'll bring me right to bigquery. and so there is one last step that still needs to be done for the billing export to work, and that is to enable the bigquery data transfer service api. so in order to do that, we need to go back to the navigation menu, go into apis and services, and into the dashboard.
178:51 and now i'm going to do a search for the bigquery data transfer service. i'm going to simply go up here to the top search bar and type in bigquery, and here it is: bigquery data transfer api. i'm going to simply click on that and hit enable.
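for reference, the same api can be enabled from the command line; this is only a sketch, and it assumes you have permission to enable services on the project:

```shell
# sketch only: enable the bigquery data transfer service api for the current project
gcloud services enable bigquerydatatransfer.googleapis.com
```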
179:09 and this might take a minute, and you may be asked to create credentials over here on the top right; you can simply ignore that, as they are not currently needed. and so now that the bigquery data transfer service api has been enabled, i'm now able to go over to bigquery and take a look at my billing export data without any issues. now it's going to take time to propagate, but by the time i come here tomorrow, the data will be fully propagated and i'll be able to query the data as i see fit.
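once the export table has populated, a query like the following gives a feel for what the analysis looks like. this is only a sketch: the table name suffix is derived from your billing account id, so both the project id and the suffix below are placeholders:

```shell
# sketch only: total cost per service from the daily cost detail export
bq query --use_legacy_sql=false '
SELECT service.description AS service,
       ROUND(SUM(cost), 2) AS total_cost
FROM `my-project-id.BillingExport.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
GROUP BY service
ORDER BY total_cost DESC'
```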
179:40 and so although this is a short demo, this is necessary to know for the exam. as well, as an engineer looking to query your billing data, you will now have the knowledge to take the steps necessary to do so. and so that's all i have for this lesson and demo on exporting billing data. you can now mark this lesson as complete, and let's move on to the next one.
180:11 welcome back. in this hands-on demo i'm going to go over apis in google cloud. now the google cloud platform is pretty much run on apis: whether it's in the console or the sdk, under the hood it's hitting the apis. now some of you may be wondering, what is an api? well, this is an acronym standing for application programming interface, and it's a standard used amongst the programming community. in this specific context, it is the programming interface for google cloud services.
180:48 and as i said before, both the cloud sdk and the console are using apis under the hood, and they provide similar functionality. now, using the apis directly allows you to enable automation in your workflow by using the software libraries for your favorite programming language. now as seen in previous lessons, to use a cloud api you must enable it first. so when i went to compute engine, or when i was enabling monitoring, i had to enable the api. so no matter the service you're requesting here in google cloud, and some of them may even be linked together, it always has to be enabled in order to use it. now getting a little bit more granular, when using an api you need to have a project, so when you enable the api, you enable it for your project, using the permissions on the project and permissions on the api to enable it.
181:48 now since this is a demo, i want to go over to the navigation menu and go straight into apis and services. and so here is the dashboard of the apis and services. you can see the traffic here, the errors, and the latency with regards to these apis. as well, up here it has a time frame for the median latency that you can select for a more granular search. now when it comes to what is enabled already, you can see a list here of the apis that are enabled, and since we haven't done much, there's only a few apis that are enabled.
182:23 now this hands-on demo is not meant to go into depth with apis, but is merely an overview so that you understand what the apis are used for in the context of google cloud. if you'd like to go more in depth with regards to apis, and possibly get certified in it, the apigee certification with its corresponding lessons would be a great way to get a little bit more understanding. but for this demo we're going to stick with this overview.
182:54 and so in order to search for more apis that need to be enabled, or if you're looking for something specific, you can come up here to enable apis and services, or you can do a quick search on the search bar at the top of the page. but just as a quick glance, i'm going to go into enable apis and services. and so you will be brought to a new page where you will see the api library: on the left you will see a menu where the apis are categorized, along with all the apis that are available when it comes to google cloud and other google services. so as you saw before, when i needed to enable the api for bigquery, i would simply type in bigquery and go to the api, and since the api is enabled, there's nothing for me to do; but if i needed to enable it, i could do that right there. and just as a quick note, when going to a service that's available in the console, the api automatically gets enabled when you go and use it for the first time.
183:57 automatically gets enabled when you go and use it for the first time and so
183:59 and use it for the first time and so again this is just a quick overview of
184:02 again this is just a quick overview of apis and the api library with regards to
184:06 apis and the api library with regards to google cloud a short yet important demo
184:09 google cloud a short yet important demo to understand the under workings of the
184:12 to understand the under workings of the cloud sdk and the console so just
184:15 cloud sdk and the console so just remember that when using any service in
184:17 remember that when using any service in google cloud
184:18 google cloud again you must enable the api in order
184:22 again you must enable the api in order to start using it and so that about
184:24 to start using it and so that about wraps up this demo for cloud apis so you
184:27 wraps up this demo for cloud apis so you can now mark this lesson as complete and
184:29 can now mark this lesson as complete and let's move on to the next one
184:31 let's move on to the next one [Music]
184:35 [Music] welcome back
184:36 welcome back in this demo i'll be creating and
184:39 in this demo i'll be creating and setting up a new gmail user as an admin
184:42 setting up a new gmail user as an admin user for use moving ahead in this course
184:46 user for use moving ahead in this course as well as following google's best
184:48 as well as following google's best practices we need a user that has lesser
184:51 practices we need a user that has lesser privileges than the user account that we
184:53 privileges than the user account that we set up previously and i'll be going
184:55 set up previously and i'll be going through a full demo to show you how to
184:58 through a full demo to show you how to configure it
185:00 now in a google cloud setup that uses a
185:03 g suite or cloud identity account a
185:06 super administrator account is created
185:09 to administer the domain this super
185:11 admin account has irrevocable
185:14 administrative permissions
185:16 that should not be used for day-to-day
185:18 administration this means that no
185:21 permissions can be taken away from this
185:24 account and it has the power to grant the
185:27 organization admin role
185:29 or any other role for that matter and
185:31 recover accounts at the domain level
185:33 which makes this account extremely
185:36 powerful now since i do not have a
185:38 domain set up and am not using a g suite or cloud
185:42 identity account i don't need to worry
185:44 about a super admin account in this
185:47 specific environment gmail accounts
185:50 are standalone accounts that are meant
185:52 to be personal they hold no organization
185:55 and usually start at the project level
185:58 and so to explain it in a bit more
186:00 detail
186:01 i have a diagram here showing the two
186:04 different accounts i will be using
186:06 and the structure behind it
186:09 now as we discussed before billing
186:11 accounts have the option of paying for
186:14 projects in a different organization so
186:16 when creating new projects using the two
186:19 different gmail accounts they were
186:21 created without any organization and so
186:25 each account is standalone and can
186:27 create its own projects now what makes
186:29 them different is that the antony gcloud
186:33 ace account owns the billing account and
186:36 is set as a billing account
186:38 administrator and the tony bowtie ace
186:41 account is a billing account user that
186:44 is able to link projects to that billing
186:46 account but does not hold full access to
186:49 billing
186:51 so in the spirit of sticking to the
186:53 principle of least privilege
186:55 i will be using the tony bowtie ace
186:58 account that i had created earlier with
187:00 lesser privileges on billing it will
187:03 still give me all the permissions i need
187:05 to create edit and delete resources
187:09 without all the powerful permissions
187:11 needed for billing i will be assigning
187:14 this new gmail user the billing account
187:17 user role and it will allow you to
187:19 achieve everything you need to build for
187:22 the remainder of the course
187:25 so just as a review i will be using a
187:28 new google account that i have created
187:30 or if you'd like you can use a
187:32 pre-existing google account and as
187:35 always i recommend enabling two-step
187:38 verification on your account
187:40 as this user will hold some powerful
187:43 permissions to access a ton of different
187:46 resources in google cloud
187:49 so now that we've gone over the details
187:52 of the what and why for setting up this
187:54 second account let's head into the demo
187:57 and get things started so whenever
187:59 you're ready join me over in the console
188:02 and so here i am back in the console and
188:05 so before switching over to my new user
188:08 i need to assign the specific role that
188:10 i will need for that user which is the
188:13 billing account user role so to assign
188:16 this role to my new user i need to head
188:18 over to billing so i'm going to go back
188:20 up here to the left-hand corner
188:22 and click on the navigation menu
188:25 and go to billing
188:27 again in the left-hand menu i'm going to
188:29 move down to account management and
188:31 click on there and over here under my
188:34 billing account you will see that i have
188:36 permissions assigned to one member
188:39 the billing account administrator
188:42 and as expected i am seeing anthony g
188:45 cloud ace
188:46 gmail.com and so i want to add another
188:49 member to my billing account so i'm
188:52 going to simply click on add members and
188:54 here i will enter in my new second user
188:57 which is tony bowtie ace
189:01 gmail.com
189:09 and under select a role i'm going to move down to billing and over to billing
189:11 account user and as you can see here
189:14 this role billing account user will
189:17 allow permissions to associate projects
189:19 with billing accounts which is exactly
189:22 what i want to do
189:23 and so i'm going to simply click on that
189:25 and then click on save
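for reference, the same role grant can be scripted with the cloud sdk. this is a hedged sketch — billing commands lived under the `gcloud beta` component at the time of this course, and the billing account id and email address below are example placeholders:

```shell
# grant the billing account user role to a member
# (billing account id and email are example placeholders)
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
  --member=user:tonybowtieace@gmail.com \
  --role=roles/billing.user
```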
189:29 and so now that i've assigned my second
189:30 user the proper permissions that i
189:32 needed i am now going to log out
189:35 and log in as my new user by simply
189:38 going up to the right hand corner
189:40 clicking on the icon and going to
189:43 add account by adding the account i'll
189:45 be able to switch back and forth between
189:48 the different users and i would only
189:50 recommend this if you are the sole user
189:52 of your computer if you are on a
189:54 computer that has multiple users simply
189:57 sign out and sign back in again with
189:59 your different user
190:01 and here i'm asked for the email which
190:03 would be tony bowtie ace
190:06 gmail.com
190:08 i'm gonna plug in my password
190:14 and it's going to ask me for my two-step verification
190:24 i'm going to click on yes and i should be in
190:30 and because it's my first time logging into google cloud with this user i get a
190:32 prompt asking me to agree to the terms
190:35 of service i'm going to agree to them
190:37 and simply click on agree and continue
190:40 and so now i'm going to move back up to
190:42 overview and as you can see here i don't
190:44 have the permissions to view costs for
190:47 this billing account and so all the
190:49 permissions assigned for the billing
190:51 account administrator which is antony g
190:54 cloud ace are not applied to tony bowtie
190:57 ace and therefore things like budgets
191:00 and alerts even billing exports i do not
191:03 have access to
191:05 so moving forward in the course if you
191:07 need to access anything in billing that
191:10 you currently don't have access to like
191:13 budgets and alerts you can simply switch
191:15 over to your other account and take care
191:17 of any necessary changes but what i do
191:20 have access to is if i go up here to my
191:23 billing account click on the drop down
191:25 menu and click on manage billing
191:28 accounts as you can see here i do
191:30 have access to view all the billing
191:32 accounts along with the projects that
191:35 are linked to them now because these
191:37 gmail accounts are standalone accounts
191:39 this project here that is owned by
191:42 antony gcloud ace i do not have access
191:45 to in order to access the project i
191:48 would need permissions assigned
191:51 to me directly
191:53 to actually view the project or
191:56 create any resources within that
191:58 project now if i go back to my home page
192:01 i can see here that i have no projects
192:04 available and therefore no resources
192:07 within my environment and so to kick it
192:10 off i'm going to create a new project
192:13 and so under project name i am going to
192:15 call this
192:16 project tony
192:18 and you can name your project whatever
192:20 you'd like
192:21 under location i don't have any
192:23 organization
192:25 and so therefore i'm just going to click
192:27 on create
192:28 and this may take a minute to
192:30 create and here we are with my first
192:34 project
192:35 named project tony and my
192:37 notification came up saying that my
192:39 project has been created and so now that
192:42 this project has been created it should
192:44 be linked to my billing account so in
192:46 order to verify this i'm going to go
192:49 over into billing
192:55 and under the drop down i'm going to click on manage billing
192:56 accounts
192:58 and as you can see here the number of
193:00 projects has gone from one to two and if
193:03 i click on the menu up here under my
193:05 projects you can see that project tony
193:08 is a project that is linked to my
193:11 billing account i also have the
193:12 permissions to either disable billing or
193:15 change billing for this specific project
193:18 but in order to change billing i would
193:21 have to have another billing account and
193:23 there are no other billing accounts
193:25 available
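for reference, the project creation and billing link done in the console here can also be sketched from the command line. billing commands were under the `gcloud beta` component, and the project id and billing account id below are example placeholders:

```shell
# create a new project (project id is an example placeholder)
gcloud projects create project-tony-12345 --name="Project Tony"

# link the project to a billing account (billing account id is a placeholder)
gcloud beta billing projects link project-tony-12345 \
  --billing-account=0X0X0X-0X0X0X-0X0X0X

# confirm that the project is linked
gcloud beta billing projects describe project-tony-12345
```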
193:26 and so moving forward i will only have
193:29 this one billing account and so any
193:31 projects i decide to create will be
193:34 linked to this billing account and so
193:36 this is a great example of trimming down
193:39 the permissions needed for different
193:41 users and even though this is not a
193:44 domain owned account but a personal
193:46 account it's always recommended to
193:49 practice the principle of least
193:51 privilege whenever you
193:53 assign permissions to any user now as
193:56 i said before for any billing related tasks
193:59 that you decide to do moving forward
194:02 you can simply switch over to your other
194:04 user and make the necessary changes and so
194:08 that's all i have for this lesson
194:10 so you can now mark this lesson as
194:12 complete
194:13 and let's move on to the next one
194:15 [Music]
194:19 welcome back
194:21 in this short lesson i'm going to be
194:23 covering an overview of the cloud sdk
194:27 and the command line interface as it is
194:29 an essential component of interacting
194:32 with google cloud for the exam you will
194:35 need to get familiar with the command
194:37 line and the commands needed in order to
194:40 create
194:41 modify and delete resources this is also
194:44 an extremely valuable tool for your tool
194:48 belt in the world of being a cloud
194:50 engineer as i have found it is a very
194:53 common and easy way to implement small
194:56 operations within google cloud as well
194:59 as automating the complex ones so what
195:02 exactly is the cloud sdk
195:05 well the cloud sdk is a set of command
195:08 line tools
195:10 that allows you to manage resources
195:12 in google cloud through the terminal and
195:15 includes commands such as gcloud
195:18 gsutil bq and kubectl using these
195:23 commands
195:24 allows you to manage resources such as
195:26 compute engine
195:27 cloud storage bigquery kubernetes and so
195:31 many other resources these tools can be
195:35 run interactively or through automated
195:38 scripts giving you the power and
195:40 flexibility that you need to get the job
195:43 done the cloud sdk is so powerful that
195:47 you can do everything that the console
195:49 can do yet it has more options than the
195:52 console you can use it for
195:53 infrastructure as code autocompletion
195:56 helps you finish all of your command
195:58 line statements and for those of you who
196:00 run windows the cloud sdk has got you
196:04 covered with availability for powershell
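to make those four tools concrete, here is one typical command for each. a hedged sketch only — the bucket name is an example placeholder, and the kubectl line assumes a cluster is already configured:

```shell
# gcloud - manage compute engine and most other resources
gcloud compute instances list

# gsutil - work with cloud storage (bucket name is a placeholder)
gsutil ls gs://my-example-bucket

# bq - run bigquery operations
bq query --use_legacy_sql=false 'SELECT 1 AS example'

# kubectl - manage kubernetes workloads (assumes a configured cluster)
kubectl get pods
```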
196:08 now in order to access google cloud
196:10 platform you will usually have to
196:13 authorize the google cloud sdk tools so to
196:17 grant authorization to cloud sdk tools
196:20 you can either use a user account or a
196:24 service account now a user account is a
196:26 google account that allows end users to
196:30 authenticate directly to your
196:32 application for most common use cases on
196:35 a single machine using a user account is
196:38 best practice now going the route of a
196:40 service account this is a google account
196:43 that is associated with your gcp project
196:47 and not a specific user a service
196:49 account can be used by providing a
196:52 service account key to your application
196:54 and is recommended when scripting cloud sdk
196:58 tools for use on multiple machines
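both authorization paths can be sketched like this — the key file path is an example placeholder:

```shell
# authorize with a user account (opens a browser window to sign in)
gcloud auth login

# authorize with a service account key file (path is a placeholder)
gcloud auth activate-service-account --key-file=/path/to/key.json

# list the credentialed accounts and see which one is active
gcloud auth list
```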
197:01 now having installed the cloud sdk it comes
197:04 with some built-in commands that allow
197:07 you to configure different options using
197:09 gcloud init this initializes and
197:12 authorizes access and performs other
197:16 common cloud sdk setup steps using some
197:19 optional commands
197:21 gcloud auth login authorizes your access
197:25 for gcloud with google user credentials
197:28 and sets the current account as active
197:31 gcloud config
197:33 is another optional command that
197:35 allows you to configure accounts and
197:38 projects as well gcloud components
197:41 allows you to install
197:43 update and remove
197:45 optional components of the sdk that give
197:48 you more flexibility with different
197:51 resources now after having installed the
197:54 cloud sdk almost all gcloud commands
197:57 will follow a specific format shown here
198:01 is an example of this format and is
198:03 broken down into component
198:05 entity
198:06 operation positional arguments and flags
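that format, along with the optional configuration commands mentioned above, can be sketched like this — the instance name, project id, and zone are example placeholders:

```shell
# format: gcloud COMPONENT ENTITY OPERATION POSITIONAL-ARGS FLAGS
# here component=compute entity=instances operation=create
# positional argument=my-vm flag=--zone
gcloud compute instances create my-vm --zone=us-central1-a

# the optional configuration commands mentioned above
gcloud config set project my-project-id   # set the default project
gcloud components install kubectl         # add an optional component
gcloud components update                  # update installed components
```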
198:10 and i'll be going through some specific
198:12 examples in the demonstration a little
198:15 bit later on and so that's all i wanted
198:17 to cover in this overview of the cloud
198:20 sdk and the cli so you can now mark this
198:24 lesson as complete and you can join me
198:26 in the next one where i go ahead and
198:28 demonstrate installing the cloud sdk
198:31 [Music]
198:36 welcome back in this demonstration i will show
198:39 you how to download install and
198:41 configure the cloud sdk and i will be
198:44 using the quick start guide that lives in
198:47 the cloud sdk documentation which holds
198:51 all the steps for installing the cloud
198:53 sdk on different operating systems and i
198:57 will make sure to include it in the
198:58 lesson text below this demo will show
199:01 you how to install the cloud sdk
199:04 on each of the most common operating
199:06 systems
199:08 windows
199:09 mac os and ubuntu linux all you need to
199:12 do is follow the process on each of the
199:15 pages and you should be well on your way
199:18 so with that being said let's get this
199:20 demo started and bring the cloud sdk to
199:23 life by getting it all installed and
199:26 configured for your specific operating
199:28 system
199:30 so as i explained before i'm gonna go
199:32 ahead and install the cloud sdk
199:36 on each of the three different operating
199:38 systems
199:39 windows mac os and ubuntu linux and i
199:43 will be installing it with the help of
199:45 the quick start guide that you see here
199:47 and as i said before i'll be including
199:50 this link in the lesson text and so to
199:52 kick off this demo i wanted to start by
199:55 installing the cloud sdk on windows so
199:58 i'm going to move over to my windows
200:00 virtual machine and i'm going to open up
200:02 a browser and paste in the
200:05 link for the quick start guide
200:07 and you can click on either link for the
200:10 quick start for windows and each quick
200:12 start page will give me the instructions
200:15 of exactly what i need to do for each
200:18 operating system so now it says that we
200:20 need to have a project created which i
200:23 did in the last lesson which is project
200:25 tony so next i'm going to download the
200:28 cloud sdk installer
200:30 so i'm going to click on there
200:33 and i'll see a prompt in the bottom left
200:35 hand corner that the installer has been
200:37 downloaded i'm going to click on it to
200:39 open the file and i'm going to be
200:41 prompted to go through this wizard and
200:43 so i'm just going to click on next
200:45 i'm going to agree to the terms of the
200:47 agreement it's going to be for just me
200:49 anthony and my destination folder i'll
200:52 keep as is and here's all the
200:54 components that it's going to install
200:56 i'm going to keep the beta commands
200:58 unchecked as i don't really need them
201:00 and if i need them later then i can
201:02 install that component for those who are
201:04 more experienced or even a bit curious
201:07 you could check the beta commands and
201:09 take them for a test drive but i'm going
201:11 to keep them off and i'm going to click
201:13 install and depending on the power of
201:15 your machine
201:16 it should take anywhere from two to five
201:18 minutes to install and the google cloud
201:21 sdk has been installed and so i'm just
201:24 going to click on next and as shown here
201:26 in the documentation you want to make
201:28 sure that you have all your options
201:30 checked off to create a start menu
201:32 shortcut and a desktop shortcut you want to
201:35 start the google cloud sdk shell and
201:38 lastly you want to run gcloud init in
201:41 order to initialize and configure the
201:43 cloud sdk now i'm going to click on
201:46 finish to exit the setup and i'm going
201:48 to get a command shell that pops up and
201:50 i'm just going to zoom in for better
201:52 viewing
201:53 and so it says here my current
201:54 configuration has been set to default so
201:57 when it comes to configuration this is
202:00 all about selecting the active account
202:03 and so my current active account is
202:05 going to be set as the default account
202:07 it also needed to do a diagnostic check
202:09 just to make sure that it can connect to
202:12 the internet so that it's able to verify
202:14 the account and so now the prompt is
202:16 saying you must log in to continue would
202:18 you like to log in yes
202:21 you can just type y and then enter
202:24 and it's going to prompt me with a new
202:26 browser window where i need to log in
202:29 using my current account so that i can
202:32 authorize the cloud sdk so i'm going to
202:34 log in with my tony bowtie ace account
202:37 click on next
202:38 type in my password
202:41 again it's going to ask me for my
202:43 two-step verification
202:46 and i'm going to get a prompt saying
202:48 that the google cloud sdk wants to access my
202:50 google account
202:51 i'm going to click on allow
202:54 and success you are now authenticated
202:56 with the google cloud sdk
202:59 with the google cloud sdk and if i go back to my terminal i am
203:01 and if i go back to my terminal i am prompted to enter some values
203:03 prompted to enter some values so that i can properly configure the
203:06 so that i can properly configure the google cloud sdk so i'm going to pick a
203:09 google cloud sdk so i'm going to pick a cloud project to use
203:10 cloud project to use and i'm going to use project tony that i
203:13 and i'm going to use project tony that i created earlier so i'm going to enter 1
203:15 created earlier so i'm going to enter 1 and hit enter
203:18 and hit enter and again whatever project that you've
203:20 and again whatever project that you've created use that one for your default
203:23 created use that one for your default configuration and it states here that my
203:26 configuration and it states here that my current project has been set to project
203:28 current project has been set to project tony and again this configuration is
203:30 tony and again this configuration is called default
203:32 called default so if i have a second configuration that
203:34 so if i have a second configuration that i wanted to use i can call it a
203:36 i wanted to use i can call it a different configuration but other than
203:38 different configuration but other than that my google cloud sdk is configured
203:42 that my google cloud sdk is configured and ready to use so just to make sure
203:44 and ready to use so just to make sure that it's working i'm going to run a
203:46 that it's working i'm going to run a couple commands i'm going to run the
203:48 couple commands i'm going to run the gcloud
203:50 gcloud help
203:52 help command
203:53 command and as you can see it's given me a list
203:56 and as you can see it's given me a list of a bunch of different commands that i
203:58 of a bunch of different commands that i can run and to exit you can just hit
204:00 can run and to exit you can just hit ctrl c i'm going to run gcloud
204:04 ctrl c i'm going to run gcloud config list
204:08 and this will give me my properties in my active configuration
204:11 my properties in my active configuration so my account is tony bowtie ace
204:14 so my account is tony bowtie ace gmail.com i've disabled usage reporting
204:17 gmail.com i've disabled usage reporting and my project is project tony and my
204:20 and my project is project tony and my active configuration is set as default
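The verification step just described boils down to two commands. This is a sketch; the account and project values that `config list` prints will reflect your own setup, not the instructor's.

```shell
# Browse the top-level gcloud command groups
# (press q or Ctrl+C to leave the help viewer)
gcloud help

# Print the properties of the active configuration:
# account, project, and disable_usage_reporting
gcloud config list
```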
204:23 now don't worry i'm going to be covering all these commands in the next lesson
204:28 and i'm going to be going into detail on how you can configure and add other
204:34 users within your cloud sdk configuration so
204:38 as we go deeper into the course i'm going to be using a lot more command
204:42 line just so you can get familiar with the syntax and become a bit more
204:47 comfortable with it so now that i've installed the cloud sdk on windows the
204:52 process will be a little bit different when it comes to installation on the
204:56 other operating systems but will be very similar when it comes to the
205:01 configuration so now let's head over to mac os and install the cloud sdk there
205:08 and so here we are in mac os and so the first thing i want to do is i want to
205:13 open up a web browser and i want to go to the cloud sdk quick start page so i'm
205:18 just going to paste in the url here
205:22 and we're looking for the quick start for mac os and so you can either click
205:26 on the menu from the left hand side or the menu here on the main page
205:31 and so like i said before this installation is going to be a
205:34 little bit different than what it was in windows and so there's a few steps here
205:39 to follow and so the first step asks us if we have a project already created
205:44 which we've already done and is project tony and so the next step tells us that
205:49 the cloud sdk requires python and so we want to check
205:53 our system to see if we have a supported version so in order to check our version
205:59 we're going to use this command here
206:00 python -V
206:02 and i'm going to copy that to my clipboard
206:05 and then open up a terminal and i'm going to zoom in for better viewing and
206:09 so i'm going to paste the command in here
206:12 and simply hit enter and as you can see here i'm running python 2.7
206:19 but the starred note here says that the cloud sdk will soon move to python 3 and
206:25 so in order to avoid having to upgrade later you'd want to check your version
206:30 of python 3 and so you can use a similar command by typing in
206:37 python3 -V
206:39 and as you can see i'm running version 3.7.3
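The two version checks can be combined into one small sketch. The fallback to plain `python` is only for older systems, and the exact supported Python range changes between SDK releases, so verify against the current quickstart.

```shell
# Prefer python3; fall back to the legacy interpreter if it's absent
if command -v python3 >/dev/null 2>&1; then
  python3 -V
else
  python -V
fi
```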
206:44 and so moving back to the guide i can see here that it is a supported version
206:49 if you do not have a supported version i will include a link on how to upgrade
206:54 your version in the lesson text below and so now that
206:58 i've finished off this step let's move on to the next one
207:02 where i can download the archive file for the google cloud sdk again most
207:08 machines will run the 64-bit package so if you do have the latest operating
207:14 system for mac os you should be good to go so i'm going to
207:18 click on this package
207:20 and it'll start downloading for me and once it's finished you can click on
207:24 downloads and click on the file itself and it should extract itself in the same
207:30 folder with all the files and folders within it and so just as another quick
207:34 note google prefers that you keep the google cloud sdk in your home directory
207:40 and so following the guide i'm going to do exactly that and so the easiest way
207:45 to move the folder into your home directory is to simply drag and drop it
207:51 into the home folder
207:53 on the left hand menu it should be marked with a little house icon and
207:58 nested under favorites i can now move into my home folder and confirm that it
208:03 is indeed in here and so now moving to the last step which shows as optional
208:09 the guide asks us to run a script to add the cloud sdk tools to our path now i
208:16 highly recommend that you run this script so that you can add the tools for
208:22 command completion and i will get into command completion a little bit later on
208:27 in the next couple of lessons and so here is the command that i need to run
208:32 so i'm going to copy that to my clipboard again and i'm going to move
208:35 back over to my terminal i'm going to clear my screen and so to make sure i'm
208:40 in my home directory where the cloud sdk folder is i'm going to simply type ls
208:47 and so for those who don't know ls is a linux command that will list all
208:52 the files and folders in your current directory and as you can see here the
208:57 google cloud sdk folder is there and therefore i can run that script so i'm
209:03 going to paste it in here
209:05 and i'm going to hit enter
209:07 and so a prompt comes up asking me whether or not i want to disable usage
209:11 reporting and because i want to help improve the google cloud sdk i'm going
209:16 to type in y for yes and hit enter and so as i was explaining before
209:22 the cloud sdk tools will be installed in my path and so this is the step that
209:27 takes care of it and so i'm going to type y and enter
209:32 for yes to continue and usually the path that comes up is the right one unless
209:37 you've changed it otherwise so i'm going to leave this blank and just hit enter
209:42 and that's it i've installed the tools
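For reference, the same macOS steps can be done entirely from the terminal. This is a sketch that assumes the archive has already been downloaded and extracted into your home directory; the archive's filename varies by SDK version and architecture, so it is not shown here.

```shell
# With the extracted google-cloud-sdk folder in your home directory,
# run the bundled install script to put the tools on your PATH and
# enable shell command completion
~/google-cloud-sdk/install.sh

# Open a new terminal (or start a new login shell) so the PATH
# changes take effect, then initialize:
#   gcloud init
```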
209:44 so now in order for me to run gcloud init i have to start a new shell as it
209:49 says here for the changes to take effect so i'm going to go up here to the top
209:54 left hand menu click on terminal and quit terminal and so now i can restart
209:59 the terminal
210:00 again i'm going to zoom in for better viewing
210:03 and now i'm able to run gcloud init in order to initialize the installation
210:14 again i get the prompt to do the diagnostic tests and i can see i have no
210:18 network issues but it shows me that i have to log in to continue i would like
210:21 to log in so i'm going to type y for yes and hit enter
210:30 and so a new browser has popped open prompting me to enter my email and
210:33 password and so i'm going to do that now
210:41 i'm going to authorize my account with two-step verification
210:47 i'm not going to save this password and yes i want to allow the google cloud sdk
210:50 to access my google account so i'm going to click on allow
210:58 and it shows that i've been authenticated so now i'm going to move
211:02 back to my terminal and so just as a note before we move forward in case you
211:07 don't get a browser pop-up for you to log into your google account you can
211:13 simply highlight this url copy it into your browser and it should prompt you
211:18 just the same so moving right ahead it shows that i'm logged in as
211:21 tonybowtieace at gmail.com which is exactly what i wanted
211:26 and it's asking me to pick a cloud project to use now i want to use project
211:32 tony so i'm going to type in 1 and enter
211:34 and that's it the cloud sdk has been configured and just to double check i'm
211:39 going to run the gcloud config list command to show me my
211:44 configuration and as you can see here my account is tonybowtieace at
211:49 gmail.com my disable usage reporting setting is equal to false
211:53 and my project is project tony and again my active configuration is set as
211:58 default and so that about covers the cloud sdk install for mac os and so
212:05 finally i'm going to move over to ubuntu linux and configure the cloud sdk there
212:09 and so here we are in ubuntu and like i did in the other operating systems i'm
212:13 going to open up the browser and i'm going to paste in the url for the quick
212:17 start guide
212:19 and so we want to click on the quick start for debian and ubuntu and so again
212:24 you have your choice of either clicking on the link on the left hand
212:27 menu or the one here in the main menu
212:30 and so following the guide it is telling us that when it comes to
212:35 an ubuntu release it is recommended that the sdk should be installed on an ubuntu
212:41 release that has not reached end of life
212:44 the guide also asks us to create a project if we don't have one already
212:49 which we have already done
212:50 and so now we can continue on with the steps and so since we are not installing
212:55 it inside a docker image we're going to go ahead and use the commands right here
212:59 now you can copy all the commands at once by copying them to the clipboard
213:05 but my recommendation is to run them one by one so i'm going to copy this
213:11 and i'm going to open up my terminal i'm going to zoom in for better viewing and
213:16 i'm going to paste that command in and hit enter it's going to prompt me
213:20 for my password
213:21 and it didn't come up with any errors so that means it was successfully executed
213:26 and so i'm going to move on to the next command
213:30 i'm going to copy this go back over to my terminal and paste it in
213:37 now for those of you who do not have curl installed you will be prompted to
213:41 install it and given the command to run so i'm going to copy and paste this
213:46 command and hit enter
213:50 i'm going to type in y for yes to continue and it's going to install it
213:55 after a couple of minutes okay now that curl has been installed i'm able to run
214:00 that command again i'm going to clear the screen first
214:04 and that executed with no errors as well
214:07 and so now moving on to the last command this command will download and install
214:11 the google cloud sdk
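The three quickstart commands follow this general shape. This is a sketch of the Debian/Ubuntu instructions as they stood around the time of recording; package URLs and signing keys change, so always copy the current versions from the quickstart page itself.

```shell
# 1. Add the Cloud SDK distribution URI as an apt package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" \
  | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# 2. Import the Google Cloud public key (installing curl first if
#    it's missing, as shown in the video)
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -

# 3. Update the package index and install the Cloud SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk
```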
214:21 i am prompted to install some packages and so i'm going to type y for yes to
214:26 continue so now it's going to download and install the necessary packages
214:32 needed for the google cloud sdk and depending on the speed of your internet
214:36 and the speed of your machine this could take anywhere from two to five minutes
214:41 okay and the google cloud sdk has been installed
214:44 and so now that the cloud sdk has been installed we can now initialize the
214:48 configuration so i'm going to type in gcloud init
214:54 again i get the prompt with the network diagnostics i'm going to type y for yes
214:59 to log in
215:01 and i'm going to get the prompt for my email and password
215:06 i'm going to take care of my two-step verification and i'm going to allow the
215:11 google cloud sdk to access my google account and success i am now
215:15 authenticated and moving back to the terminal just to verify it and again i'm
215:21 going to pick project tony as the cloud project to use
215:24 and the cloud sdk has been configured as always i'm going to do a double check by
215:29 running a gcloud config list
215:39 and as expected the same details have come up and so this is a quick run
215:44 through on all three operating systems windows mac os and ubuntu linux on how
215:48 to install the google cloud sdk and this will help you get started with becoming
215:54 more familiar and more comfortable using the command line interface and so that
216:00 about wraps it up for this lesson
216:02 so you can now mark this lesson as complete
216:05 and let's move on to the next one
216:07 [Music]
216:11 welcome back in the last demo we went through a complete install of the cloud
216:16 sdk and configured our admin account to be
216:20 used within it in this demonstration i will be walking through how to manage
216:25 the cloud sdk and this will involve how to utilize it and how to customize it to
216:31 your environment as well as configuring our other user account so that we are
216:36 able to practice switching configurations from one user to another and so i will
216:42 be going through initializing and authorization
216:45 configurations and properties installing and removing components as well as a
216:51 full run-through of the gcloud interactive shell so let's kick off this
216:55 demo by diving into a pre-configured terminal with the sdk installed and
217:01 configured with my second user tonybowtieace at gmail.com
217:07 and so here i am in the mac os terminal and just be aware that it doesn't matter
217:12 which operating system you're running as long as the sdk is installed and you
217:17 have your user configured and so as you saw in the last lesson after you install
217:22 the cloud sdk the next step is typically to initialize the cloud sdk
217:29 by running the gcloud init command and this is to perform the initial setup
217:34 tasks as well as authorizing the cloud sdk to use your user account credentials
217:40 so that it can access google cloud and so in short it sets up a cloud sdk
217:45 configuration and sets a base set of properties and this usually covers the
217:51 active account the current project and if the api is enabled the default google
217:56 compute engine region and zone now as a note if you're in a remote terminal
218:02 session with no access to a browser you can still run the gcloud init command
218:07 but add the --console-only flag
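In command form this looks like the sketch below. Note that newer SDK releases have since deprecated this flag in favor of a --no-launch-browser style option, so verify against the version you have installed.

```shell
# Initialize from a headless/remote session: instead of opening a
# browser locally, gcloud prints an authorization URL you can visit
# from another machine, and you paste the verification code back in
gcloud init --console-only
```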
218:12 and this will prevent the command from launching a browser-based authorization
218:17 like you saw when setting up your last user so now even though i have a user
218:22 already set up i can still run gcloud init and it will give me a couple
218:27 different options to choose from so i can re-initialize this configuration
218:32 with some new settings or i can create a new configuration now for this demo
218:37 since we already have two users and to demonstrate how to switch between
218:42 different users i want to create a new configuration with my very first user so
218:48 i'm going to type in 2 and hit enter and it's going to ask me for a configuration
218:53 name now it asks me for a configuration name because when setting up your first
218:58 configuration it's set as default and because i know that this user account
219:04 has full access to billing as well as administration privileges i'm going to
219:09 call this configuration master and i'm going to hit enter
219:12 it did the necessary network checks and now it's asking me which account i
219:17 want to use this configuration for now if tony bowtie ace had access to two
219:23 different google cloud accounts i would be able to add a different configuration
219:27 here and so because i'm going to log in with a new account i'm going to put in
219:32 2 and hit enter
219:36 and so again it brought me to my browser window and i'm going to log in using
219:41 another account
219:42 and so here you can type in the first account that you created and for me it
219:47 was antony gcloud ace at gmail.com
219:51 i hit next and i'm going to enter my password
219:56 it's going to ask me for my two-step verification
219:59 and i don't want to save this password and i'm going to allow the google cloud
220:03 sdk to access my google account and i am now authenticated so moving back to the
220:09 console you can see here that i am currently logged in and it's asking me
220:14 to pick a cloud project to use now since i only have one project in that google
220:19 cloud account which is subtle poet i'm going to choose 1
220:24 and since i have the compute engine api enabled i am now able to configure a
220:29 default compute region and zone and so i'm going to hit y for yes to configure
220:35 it and as you can see there are 74 different options to choose from and if
220:40 you scroll up a little bit you should be able to find the zone that you're
220:43 looking for and so for this course we are going to be using us-central1-a
220:49 and so this is number 8 so i'm going to scroll back down
220:53 and type in 8
220:56 and so now my master configuration has been configured with my antony gcloud
221:02 ace account using us-central1-a as the
221:05 ace account using us central 1a as the compute engine zone now touching back on
221:08 compute engine zone now touching back on authorization if i didn't want to set up
221:11 authorization if i didn't want to set up a whole configuration i can simply type
221:14 a whole configuration i can simply type in gcloud
221:15 in gcloud auth login
221:17 auth login and this will allow me to authorize just
221:20 and this will allow me to authorize just the user account only so gcloud init
221:23 the user account only so gcloud init would authorize access and perform the
221:26 would authorize access and perform the cloud sdk setup steps and gcloud auth
221:29 cloud sdk setup steps and gcloud auth login will authorize the access only now
221:32 login will authorize the access only now as i mentioned in a previous lesson you
221:35 as i mentioned in a previous lesson you can use a service account for
221:36 can use a service account for authorization to the cloud sdk tools and
221:39 authorization to the cloud sdk tools and this would be great for a compute
221:41 this would be great for a compute instance or an application but would
221:44 instance or an application but would need a service account key file in order
221:46 need a service account key file in order to authorize it and so moving back to
221:49 to authorize it and so moving back to our user accounts when running the cloud
221:51 our user accounts when running the cloud sdk you can only have one active account
221:55 sdk you can only have one active account at any given time and so to check my
221:58 at any given time and so to check my active account i can type in the command
222:00 active account i can type in the command gcloud auth list
222:03 gcloud auth list and this will give me a list of all the
222:05 and this will give me a list of all the accounts that have been authorized and
222:07 accounts that have been authorized and so whenever you run a gcloud init it
222:10 so whenever you run a gcloud init it will use that account as the active
222:13 will use that account as the active account and as you can see here the
222:15 account and as you can see here the antony gcloud ace gmail.com has a star
222:19 antony gcloud ace gmail.com has a star beside it and this is marked as the
222:21 beside it and this is marked as the active account and so in essence the
222:24 active account and so in essence the account with the star beside it is the
222:26 account with the star beside it is the active account and so i'm looking to
222:29 active account and so i'm looking to change my active account back to tony
222:31 change my active account back to tony bowtie ace and in order for me to do
222:34 bowtie ace and in order for me to do that the command is conveniently shown
222:37 that the command is conveniently shown here and so i'm going to go ahead and
222:38 here and so i'm going to go ahead and run that
222:40 run that and the account would be the user shown
222:43 and the account would be the user shown above and so when i do a gcloud auth
222:46 above and so when i do a gcloud auth list
222:47 list i can see that my active account is now
222:49 i can see that my active account is now back to tony bowtie bowtieace gmail.com
222:53 back to tony bowtie bowtieace gmail.com now if you wanted to switch the account
222:55 now if you wanted to switch the account on a per command basis you can always do
222:58 on a per command basis you can always do that using the flag dash dash account
223:01 that using the flag dash dash account after the command and put in the user
223:03 after the command and put in the user account that you want to use and so
223:05 account that you want to use and so let's say i wanted to revoke credentials
223:07 let's say i wanted to revoke credentials from an account that i don't need
223:09 from an account that i don't need anymore i can simply use the command
223:12 anymore i can simply use the command gcloud auth revoke
223:14 gcloud auth revoke followed by the username and it will
223:16 followed by the username and it will revoke the credentials for that account
223:19 revoke the credentials for that account and so doing this
223:20 and so doing this would remove your credentials and any
223:22 would remove your credentials and any access tokens for any specific account
223:26 access tokens for any specific account that you choose that's currently on your
223:28 that you choose that's currently on your computer and so if we're looking for
223:30 computer and so if we're looking for that specific account we can always use
223:33 that specific account we can always use the gcloud info command and it will give
223:36 the gcloud info command and it will give us the path for the user config
223:38 us the path for the user config directory and it is this directory that
223:41 directory and it is this directory that holds your encrypted credentials and
223:43 holds your encrypted credentials and access tokens
223:45 access tokens alongside with your active
223:46 alongside with your active configurations and any other
223:48 configurations and any other configurations as well now as you can
223:50 configurations as well now as you can see here running the gcloud info command
223:53 see here running the gcloud info command will also give you some other
223:55 will also give you some other information everything from the account
223:58 information everything from the account the project the current properties and
224:01 the project the current properties and where the logs can be found so now
224:04 where the logs can be found so now moving on to configurations
224:06 moving on to configurations a configuration is a named set of gcloud
224:09 a configuration is a named set of gcloud cli properties
224:11 cli properties and it works kind of like a profile and
224:13 and it works kind of like a profile and so earlier on i demonstrated how to set
224:16 so earlier on i demonstrated how to set up another configuration through gcloud
224:19 up another configuration through gcloud init so now if i run a gcloud config
224:23 init so now if i run a gcloud config list command it would give me all the
224:25 list command it would give me all the information of the active configuration
224:28 information of the active configuration so as you can see here my user has
224:31 so as you can see here my user has changed but my configuration has stayed
224:33 changed but my configuration has stayed the same now as seen previously in a
224:36 the same now as seen previously in a different lesson tony bow tie ace does
224:39 different lesson tony bow tie ace does not have access to the project subtle
224:42 not have access to the project subtle poet this project belongs to antony g
224:45 poet this project belongs to antony g cloud ace and the configuration was set
224:48 cloud ace and the configuration was set for that account now if tony bowtie ace
224:51 for that account now if tony bowtie ace did have access to the subtle poet
224:53 did have access to the subtle poet project then i could use this
224:55 project then i could use this configuration but it doesn't and so i
224:58 configuration but it doesn't and so i want to switch back to my other
225:00 want to switch back to my other configuration and how i would do this is
225:03 configuration and how i would do this is type in the command
225:04 type in the command gcloud config configurations
225:08 gcloud config configurations activate and the configuration that i
225:10 activate and the configuration that i set up for tony bowtie ace is the
225:13 set up for tony bowtie ace is the default configuration
225:15 default configuration and so now that it has been activated i
225:18 and so now that it has been activated i can now run a gcloud config list and as
225:22 can now run a gcloud config list and as you can see here the configuration is
225:24 you can see here the configuration is back to default setup during the
225:26 back to default setup during the initialization process for tony bowtie
225:29 initialization process for tony bowtie ace now if i wanted to create multiple
225:32 ace now if i wanted to create multiple configurations for the same user account
225:35 configurations for the same user account i can simply type in the command gcloud
225:38 i can simply type in the command gcloud config configurations
225:41 config configurations create
225:42 create but if i wanted to just view the
225:44 but if i wanted to just view the configuration properties i can always
225:47 configuration properties i can always type in the command gcloud config
225:49 type in the command gcloud config configurations describe
225:52 configurations describe and as you can see after the describe i
225:55 and as you can see after the describe i needed the configuration name to
225:57 needed the configuration name to complete the command and so i'm going to
225:59 complete the command and so i'm going to do that now
226:04 and i've been given all the properties for this configuration now another thing
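The configuration commands from this part of the demo, gathered in one place ("default" is the demo's configuration name; "my-second-config" is a made-up example):

```shell
# Show the properties of the currently active configuration
gcloud config list

# Switch the active configuration to a different named one
gcloud config configurations activate default

# Create an additional configuration for the same user account
# (the name here is a hypothetical example)
gcloud config configurations create my-second-config

# View the properties of a specific configuration by name
gcloud config configurations describe default
```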
226:07 for this configuration now another thing that i wanted to share when it comes to
226:09 that i wanted to share when it comes to properties is that you can change the
226:11 properties is that you can change the project or the compute region and zone
226:14 project or the compute region and zone by simply typing in the command
226:17 by simply typing in the command gcloud config set now if i wanted to
226:20 gcloud config set now if i wanted to change the project i can simply type in
226:22 change the project i can simply type in project and the project name if it was
226:25 project and the project name if it was for the compute instance i can simply
226:28 for the compute instance i can simply type in compute
226:29 type in compute forward slash zone for the specific zone
226:32 forward slash zone for the specific zone and just as a note only the properties
226:35 and just as a note only the properties that are not in the core property
226:37 that are not in the core property section are the ones that can be set as
226:40 section are the ones that can be set as well when you are setting the properties
226:42 well when you are setting the properties this only applies to the active
226:45 this only applies to the active configuration if you want to change the
226:47 configuration if you want to change the configuration of one that is not active
226:50 configuration of one that is not active then you'd have to switch to it and run
226:53 then you'd have to switch to it and run the gcloud config set command and so
226:55 the gcloud config set command and so moving on i wanted to touch on
226:57 moving on i wanted to touch on components which are the installable
227:00 components which are the installable parts of the sdk and when you install
227:02 parts of the sdk and when you install the sdk the components gcloud bq gsutil
227:08 the sdk the components gcloud bq gsutil and the core libraries are installed by
227:11 and the core libraries are installed by default now you probably saw a list of
227:13 default now you probably saw a list of components when you ran the gcloud init
227:16 components when you ran the gcloud init command and so to see all the components
227:19 command and so to see all the components again you can simply type in the gcloud
227:22 again you can simply type in the gcloud components
227:23 components list command
227:25 list command and if you scroll up you're able to see
227:28 and if you scroll up you're able to see all the components that are available
227:30 all the components that are available that you can install at your convenience
227:33 that you can install at your convenience and so if i wanted to install the
227:35 and so if i wanted to install the cubectl component i can type in the
227:37 cubectl component i can type in the command gcloud components install
227:41 command gcloud components install cubectl and a prompt will come up asking
227:44 cubectl and a prompt will come up asking me if i want to continue with this i
227:46 me if i want to continue with this i want to say yes and now it will go
227:48 want to say yes and now it will go through the process of installing these
227:50 through the process of installing these components
227:52 components and so just to verify if i run the
227:54 and so just to verify if i run the command gcloud components list you can
227:58 command gcloud components list you can see here that i have the cube ctl
228:01 see here that i have the cube ctl component installed now if i wanted to
228:03 component installed now if i wanted to remove that component i can simply type
228:06 remove that component i can simply type in
228:06 in gcloud components
228:08 gcloud components remove
228:10 remove and then the component that i want to
228:12 and then the component that i want to remove
228:13 remove which is cubectl i'm going to be
228:15 which is cubectl i'm going to be prompted if i want to do this i'm going
228:17 prompted if i want to do this i'm going to say yes and it's going to go through
228:19 to say yes and it's going to go through the stages of removing this component
228:23 the stages of removing this component and it's been successfully uninstalled
228:25 and it's been successfully uninstalled and so if you're working with a resource
228:27 and so if you're working with a resource that you need a component for you can
228:29 that you need a component for you can simply install or uninstall it using the
228:33 simply install or uninstall it using the gcloud components command and so one
228:35 gcloud components command and so one last thing about components before we
228:37 last thing about components before we move on
228:38 move on is that you can update your components
228:40 is that you can update your components to make sure you have the latest version
228:42 to make sure you have the latest version and so in order to update all of your
228:44 and so in order to update all of your installed components you would simply
228:47 installed components you would simply run the command gcloud components update
228:50 run the command gcloud components update and so before i go ahead and finish off
228:52 and so before i go ahead and finish off this demonstration i wanted to touch on
228:55 this demonstration i wanted to touch on the gcloud interactive shell the gcloud
228:57 the gcloud interactive shell the gcloud interactive shell provides a richer
229:00 interactive shell provides a richer shell experience simplifying commands
229:03 shell experience simplifying commands and documentation discovery with as you
229:06 and documentation discovery with as you type autocompletion and help text
229:09 type autocompletion and help text snippets below it produces suggestions
229:12 snippets below it produces suggestions and autocompletion for gcloud bq gsutil
229:17 and autocompletion for gcloud bq gsutil and cubectl command line tools as well
229:20 and cubectl command line tools as well as any command that has a man page sub
229:24 as any command that has a man page sub commands and flags can be completed
229:27 commands and flags can be completed along with online help as you type the
229:29 along with online help as you type the command and because this is part of the
229:32 command and because this is part of the beta component i need to install it and
229:35 beta component i need to install it and so i'm going to run the command gcloud
229:38 so i'm going to run the command gcloud components install beta and i want to
229:41 components install beta and i want to hit yes to continue and this will go
229:43 hit yes to continue and this will go ahead and kick off the installation of
229:46 ahead and kick off the installation of the gcloud beta commands
229:48 the gcloud beta commands and so now that it's installed i'm going
229:50 and so now that it's installed i'm going to simply clear the screen and so now in
229:53 to simply clear the screen and so now in order to run the gcloud interactive
229:55 order to run the gcloud interactive shell i need to run the command gcloud
229:59 shell i need to run the command gcloud beta
230:00 beta interactive
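Getting into the interactive shell takes just the two commands from the demo:

```shell
# The interactive shell ships in the beta component, so install it first
gcloud components install beta

# Launch the interactive shell with as-you-type suggestions and help
gcloud beta interactive
```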
230:01 And so now, for every command that I type, I will get auto-suggestions that will help me with my commands. And so, to see it in all of its glory, I'm going to start typing, and as you can see, it's giving me the option between gcloud and gsutil, and I can use the arrow keys to choose either one; below it, it'll also show me the different flags that I can use for these specific commands and how to structure them. And so, for now, I'm going to run gsutil version -l, and as you can see here, it's giving me all the information about this command and what it can do, so I'm going to hit Enter. And as you can see, my gsutil version is 4.52, and along with the version number, I'm also given all the specific information with regard to this gsutil version. And this can be used with absolutely any command used on the Google Cloud platform.
231:00 And so, I'm going to go ahead and do that again but run a different command, so I'm just going to first clear the screen, and I'm going to type gcloud compute instances. And as you can see, the snippet on the bottom of the screen is showing me not only the command and how it's structured but also the URL for the documentation. So, continuing on from gcloud compute instances, I'm going to do a list, and I'm going to filter it by using the flag --filter on the us-east1-a zone, and I'm going to hit Enter.
231:40 And as expected, there are no instances in us-east1-a. And as you've just experienced, this is a great tool, and I highly recommend that you use it whenever you can. Now, I know this is a lot to take in, and a lot of these commands will not show up on the exam, but again, getting comfortable with the command line and the SDK will help you on your path to becoming a cloud engineer, and before you know it, you'll be running commands in the command line and preferring it over using the console. And so, that's all I have for this demo on managing the Cloud SDK. You can now mark this lesson as complete, and let's move on to the next one.
232:31 Welcome back. In this demonstration, I'm going to be talking about the always-available, browser-based shell called Cloud Shell. Cloud Shell is a virtual machine that is loaded with development tools, offers a persistent 5-gigabyte home directory, and runs on Google Cloud. Cloud Shell is what provides you command-line access to your Google Cloud resources within the console. Cloud Shell also comes with a built-in code editor, which I will be diving into, that allows you to browse file directories as well as view and edit files while still accessing the Cloud Shell. The code editor is available by default with every Cloud Shell instance and is based on the open-source editor Theia. Now, Cloud Shell is available from anywhere in the console by merely clicking on the icon shown here in the picture, positioned in the top right-hand corner of the console in the blue toolbar. So let's get started with Cloud Shell by getting our hands dirty and jumping right into it.
233:38 and jumping right into it and so here we are back in the console
233:40 and so here we are back in the console and i am logged in as tony bowtie ace
233:43 and i am logged in as tony bowtie ace gmail.com and as you can see up here in
233:46 gmail.com and as you can see up here in the right hand corner
233:48 the right hand corner as mentioned earlier you will find the
233:50 as mentioned earlier you will find the cloud shell logo and so to open it up
233:53 cloud shell logo and so to open it up you simply click on it and it'll
233:55 you simply click on it and it'll activate the cloud shell here at the
233:57 activate the cloud shell here at the bottom and because it's my first time
233:59 bottom and because it's my first time using cloud shell i'll get this prompt
234:02 using cloud shell i'll get this prompt quickly explaining an overview of what
234:04 quickly explaining an overview of what cloud shell is and i'm going to simply
234:06 cloud shell is and i'm going to simply hit continue
234:09 hit continue and i'm going to make the terminal a
234:10 and i'm going to make the terminal a little bit bigger by dragging this line
234:13 little bit bigger by dragging this line up to the middle of the screen and so
234:15 up to the middle of the screen and so when you start cloud shell it provisions
234:17 when you start cloud shell it provisions an e2 small
234:19 an e2 small google compute engine instance running a
234:22 google compute engine instance running a debian-based linux operating system now
234:24 debian-based linux operating system now this is an ephemeral pre-configured vm
234:27 this is an ephemeral pre-configured vm and the environment you work with is a
234:30 and the environment you work with is a docker container running on that vm
234:33 docker container running on that vm cloud shell instances are provisioned on
234:35 cloud shell instances are provisioned on a per user per session basis the
234:38 a per user per session basis the instance persists while your cloud shell
234:40 instance persists while your cloud shell session is active and after an hour of
234:43 session is active and after an hour of inactivity your session terminates and
234:46 inactivity your session terminates and the vm is discarded you can also
234:48 the vm is discarded you can also customize your environment automatically
234:51 customize your environment automatically on boot time and it will allow you to
234:53 on boot time and it will allow you to have your preferred tools when cloud
234:56 have your preferred tools when cloud shell boots up so when your cloud shell
234:58 shell boots up so when your cloud shell instance is provision it's provisioned
235:01 instance is provision it's provisioned with 5 gigabytes of free persistent disk
235:04 with 5 gigabytes of free persistent disk storage and it's mounted at your home
235:06 storage and it's mounted at your home directory on the virtual machine
235:09 directory on the virtual machine instance and you can check your disk
235:11 instance and you can check your disk storage by simply typing in the command
235:14 storage by simply typing in the command df minus h and here where it shows dev
235:17 df minus h and here where it shows dev disk by id google home part one it shows
235:21 disk by id google home part one it shows here the size as 4.8 gigabytes and this
235:25 here the size as 4.8 gigabytes and this would be the persistent disk storage
235:27 would be the persistent disk storage that's mounted on your home directory
235:30 that's mounted on your home directory now if you've noticed it shows here that
235:32 now if you've noticed it shows here that i'm logged in as tony bowtie ace at
235:35 i'm logged in as tony bowtie ace at cloud shell and that my project id is
235:38 cloud shell and that my project id is set at project tony so the great thing
235:40 set at project tony so the great thing about cloud shell
235:42 about cloud shell is that you're automatically
235:43 is that you're automatically authenticated as the google account
235:46 authenticated as the google account you're logged in with so here you can
235:48 you're logged in with so here you can see i'm logged in as tony bowtie ace and
235:51 see i'm logged in as tony bowtie ace and so picture it like running gcloud auth
235:54 so picture it like running gcloud auth login and specifying your google account
235:57 login and specifying your google account but without having to actually do it now
235:59 but without having to actually do it now when the cloud shell is started the
236:01 when the cloud shell is started the active project in the console is
236:04 active project in the console is propagated to your gcloud configuration
236:07 propagated to your gcloud configuration inside cloud shell so as you can see
236:09 inside cloud shell so as you can see here my project is set at project tony
236:13 here my project is set at project tony now if i wanted to change it to a
236:14 now if i wanted to change it to a different project i could simply use the
236:16 different project i could simply use the command stated up here
236:18 command stated up here gcloud config set project along with the
236:21 gcloud config set project along with the project id and this will change me to a
236:24 project id and this will change me to a different project now behind the scenes
236:26 different project now behind the scenes cloud shell is globally distributed
236:29 cloud shell is globally distributed across multiple regions so when you
236:31 across multiple regions so when you first connect to cloud shell you'll be
236:34 first connect to cloud shell you'll be automatically assigned to the closest
236:36 automatically assigned to the closest available region and thus avoiding
236:39 available region and thus avoiding any unnecessary latency you do not have
236:42 any unnecessary latency you do not have the option to choose your own region and
236:44 the option to choose your own region and so cloud shell does that for you by
236:46 so cloud shell does that for you by optimizing it to migrate to a closer
236:49 optimizing it to migrate to a closer region whenever it can so if you're ever
236:51 region whenever it can so if you're ever curious where your cloud shell session
236:54 curious where your cloud shell session is currently active
236:55 is currently active you can simply type in this command
236:58 you can simply type in this command curl metadata slash compute metadata
237:02 curl metadata slash compute metadata slash version one slash instance slash
237:05 slash version one slash instance slash zone
237:06 zone and this will give me the zone where my
237:08 and this will give me the zone where my instance is located and as shown here it
237:11 instance is located and as shown here it is in us east 1b now as you've probably
237:14 is in us east 1b now as you've probably been seeing every time i highlight
237:16 been seeing every time i highlight something that there is a picture of
237:18 something that there is a picture of scissors coming up the cloud shell has
237:20 scissors coming up the cloud shell has some automated and available tools that
237:23 some automated and available tools that are built in and so one of those
237:25 are built in and so one of those available tools is that whenever i
237:27 available tools is that whenever i highlight something it will
237:29 highlight something it will automatically copy it to the clipboard
237:31 automatically copy it to the clipboard for me cloud shell also has a bunch of
237:34 for me cloud shell also has a bunch of very powerful pre-installed tools that
237:36 very powerful pre-installed tools that come with it such as the cloud sdk bash
237:40 come with it such as the cloud sdk bash vim helm
237:42 vim helm git
237:43 git docker and more as well cloud shell has
237:46 docker and more as well cloud shell has support for a lot of major different
237:48 support for a lot of major different programming languages like java go
237:52 programming languages like java go python node.js
237:54 python node.js ruby and net core for those who run
237:57 ruby and net core for those who run windows now if you're looking for an
237:59 windows now if you're looking for an available tool that is not pre-installed
238:02 available tool that is not pre-installed you can actually customize your
238:04 you can actually customize your environment when your instance boots up
238:07 environment when your instance boots up and automatically run a script that will
238:09 and automatically run a script that will install the tool of your choice and the
238:12 install the tool of your choice and the script runs as root and you can install
238:14 script runs as root and you can install any package that you please and so in
238:17 any package that you please and so in order for this environment customization
238:19 order for this environment customization to work there needs to be a file labeled
238:22 to work there needs to be a file labeled as dot customize underscore environment
238:25 as dot customize underscore environment now if we do an ls here you can see that
238:28 now if we do an ls here you can see that all we have is the readme dash cloud
238:30 all we have is the readme dash cloud shell text file if we do ls space minus
238:34 shell text file if we do ls space minus al to show all the hidden files as well
238:37 al to show all the hidden files as well you can see that the dot customize
238:40 you can see that the dot customize underscore environment file does not
238:42 underscore environment file does not exist and this is because we need to
238:44 exist and this is because we need to create it ourselves and so for this
238:47 create it ourselves and so for this example i want terraform installed as an
238:50 example i want terraform installed as an available tool when my instance boots up
238:53 available tool when my instance boots up and so i have to create this file so i'm
238:56 and so i have to create this file so i'm going to do so by using the touch
238:58 going to do so by using the touch command and then the name of the file
239:01 command and then the name of the file dot customize
239:02 dot customize underscore environment hit enter and if
239:06 underscore environment hit enter and if i clear the screen and do another ls
239:09 i clear the screen and do another ls space minus al i can see that my dot
239:12 space minus al i can see that my dot customize underscore environment file
239:15 customize underscore environment file has been created and so now i'm going to
239:17 and so now i'm going to need the script to install terraform
239:20 which means i would have to edit it
239:22 and so another great feature of cloud shell is that it comes with a code editor
239:27 and i can do it one of two ways: i can either come up here and click on the
239:29 open editor button which will open up a new tab
239:34 or i can simply use the edit command with the file name
239:37 and i'm going to do just that, so edit .customize_environment
239:46 and i'm just going to hit enter
239:48 and as you can see i got a prompt saying that it's unable to load the code editor
239:53 and this is because when using the code editor you need cookies
239:56 enabled on your browser
239:59 and because i am using a private browser session, cookies are disabled
240:04 and because my cloud shell environment persists
240:07 i'm going to open up a regular browser window and continue where i left off
240:11 and so here i am back with a new browser window
240:13 again logged in as tony bowtie ace
240:18 and so just to show you the persistence that happens in cloud shell
240:20 i'm going to run the command ls -al
240:23 and as you can see here the .customize_environment file is still here
240:29 and so again i wanted to install terraform as an extra tool
240:31 to have in my environment
240:34 and so i'm going to open up the editor by typing in
240:36 edit .customize_environment and i'm going to hit enter
240:42 and here is the editor that popped up
240:46 as you can see here it's built with eclipse theia
240:47 and this is an open source code editor that you can download from eclipse
240:52 and this is what the editor is built on
240:55 now this menu here on the left, i can make it a little bit bigger
241:00 and because the only viewable file on my persistent disk
241:02 is the README-cloudshell.txt file
241:05 i'm not able to see my .customize_environment file
241:11 so in order to open it and edit it
241:13 i'm going to go to the menu at the top of the editor and click on file, open
241:18 and here i'll be able to select the file that i need
241:20 so i'm going to select .customize_environment and click on open
241:24 and so i'm going to paste in my script to install terraform
241:29 from my clipboard
241:32 and i'll be including the script in the github repo
241:34 for those of you who use terraform
241:36 and i'm going to move over to the menu on the left
241:38 click on file and then hit save
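the exact install script ships in the course's github repo; as a rough sketch of what a .customize_environment for terraform might look like, assuming a specific version number and the standard hashicorp release URL pattern (both are my assumptions, not taken from the video):

```shell
#!/bin/sh
# .customize_environment runs as root each time the Cloud Shell VM boots,
# so anything installed here is available in every new session.
TF_VERSION="0.13.0"  # assumed version; bump as needed

apt-get update -y
apt-get install -y unzip

# Download the release zip and drop the terraform binary onto the PATH
curl -sLo /tmp/terraform.zip \
  "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
unzip -o /tmp/terraform.zip -d /usr/local/bin
```

since the file only runs at boot, changes to it (like the version bump later in this demo) take effect after a restart of the cloud shell vm.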
241:44 and so now in order for this to work
241:46 the .customize_environment file needs to be loaded into my cloud shell
241:48 so i'm going to have to restart it
241:53 and so in order to accomplish this i'm going to move over to the menu on the right
241:57 i'm going to click on the icon with the three dots and click on restart
242:02 and you'll be presented with a prompt saying that
242:04 it will immediately terminate my session
242:06 and then a new vm will be provisioned for me
242:08 and you'll also be presented with an optional prompt from google
242:11 asking why you're restarting the vm
242:16 and this is merely for statistical purposes
242:19 so i'm going to click on restart
242:21 and i'm going to wait till a new cloud shell is provisioned
242:23 and my new cloud shell is provisioned and up and running
242:25 and so i want to double check to see if terraform has been installed
242:31 so i'm going to go over here to the open terminal button
242:33 on the right hand side toolbar
242:35 and i'm going to move back to my terminal
242:39 and i'm going to simply run the command terraform --version
242:42 and so it looks like terraform has been installed
242:44 and as you can see i'm running version 0.12
242:46 but it says my terraform version is out of date
242:49 and that the latest version is 0.13
242:54 and so because i really want to be up to date with terraform
242:56 i want to be able to go into my .customize_environment file
242:59 and edit my version of terraform
243:01 so that when my cloud shell is initiated terraform 0.13 can be installed
243:07 and so i'm going to simply type in the command edit .customize_environment
243:15 and i'm back in my editor
243:17 and i'm going to change the terraform version from 0.12 to 0.13
243:20 and then go over here to the left-hand menu, click on file and then save
243:26 and now i'm going to restart my machine again
243:31 and come back when it's fully provisioned
243:32 and i'm back again, my machine has been provisioned
243:37 and i'm going to go back to my terminal by clicking on the open terminal button
243:41 and so i'm going to type in the command terraform --version
243:46 and as you can see i'm at version 0.13
243:49 and i'm going to run a simple terraform command to see if it's working
243:54 and as you can see i am successful in running terraform on cloud shell
243:56 now customizing the environment is not on the exam
244:02 but it is such an amazing feature that i wanted to highlight it for you
244:04 with a real world example like terraform
244:07 in case you're away from your computer and you're logged into a browser
244:13 and you need some special tools to use in cloud shell
244:16 this is the best way to do it
244:19 now as i mentioned before, the cloud sdk is pre-installed on this
244:25 and so everything that i've shown you in the last lesson
244:27 with regards to the cloud sdk can be done in the cloud shell as well
244:32 so if i run the command gcloud beta interactive
244:38 i'd be able to bring up the interactive shell
244:40 and i'll be able to run the same commands
244:42 so now if i go ahead and run the command gcloud components list
244:48 i'll be able to see all the components installed
244:50 and as you can see, with the cloud shell there are more components installed
244:54 than what's installed on the default installation of the sdk
244:56 i can also run the gcloud config list command
245:03 to see all the properties in my active configuration
245:04 and so this goes to show you that the sdk installation that's on cloud shell
245:10 is just as capable as the one that you've installed on your computer
245:15 the only difference here is that the sdk
245:17 along with all the other tools that come installed in cloud shell
245:22 is updated every week
245:25 so you can always depend on them being up to date
245:27 and so moving on to a few more features of cloud shell
245:32 i wanted to point out the obvious ones
245:33 up here in the cloud shell toolbar, right beside the open terminal
245:38 i can open brand new tabs, opening up different projects
245:42 or even the same project but just a different terminal
245:44 and moving over to the right hand menu of cloud shell
245:48 this keyboard icon can send key combinations
245:52 that you would normally not have access to
245:54 moving on to the gear icon, with this you're able to change your preferences
246:01 and looking at the first item on the list, when it comes to color themes
246:05 you can go from a dark theme to a light theme
246:07 or if you prefer a different color
246:10 in my case i prefer the dark theme
246:13 as well, you have the option of changing your text size
246:16 we can go to largest
246:21 but i think we'll just keep things back down to medium
246:25 and as well we have the different fonts
246:26 the copy settings, which i showed you earlier
246:31 as well as keyboard preferences
246:33 you also have the option of showing your scroll bar
246:35 now moving on to this icon right beside the gear
246:39 is the web preview button
246:41 and so the web preview button is designed so that you can run any
246:44 web application that listens to http requests on the cloud shell
246:51 and be able to view it in a new web browser tab
246:53 when running these web applications
246:56 web preview also supports applications run in app engine
246:59 now mind you, these ports are only available to the secure
247:04 cloud shell proxy service
247:06 which restricts access over https to your user account only
247:10 and so to demonstrate this feature
247:13 i am going to run a simple http server serving a hello world page
247:20 so first i'm going to clear my screen
247:22 and then i'm going to exit the interactive shell
247:24 and again i'm going to paste in from my clipboard
247:26 a simple script that will run my simple http server
247:32 and as you can see it's running on port 8080
247:35 and now i'm able to click on the web preview button
247:39 and i'm able to preview it on port 8080
247:41 and a new web browser tab will open up
247:44 and here i'll see my hello world page
247:47 now this is just a simple example
247:49 and so i'm sure that many of you can find great use for this
247:51 and so i'm going to stop this http server now by hitting ctrl+c
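the pasted script itself isn't shown on screen, but a minimal stand-in with the same behavior (a hello world page served on port 8080, the web preview default) could be something like this, using python's built-in web server, which is preinstalled in cloud shell:

```shell
# Serve a one-page "Hello World" site on port 8080 so it can be
# opened through the Web Preview button.
mkdir -p ~/hello && cd ~/hello
echo "<h1>Hello World</h1>" > index.html
python3 -m http.server 8080 &   # run in the background; kill it to stop
```

anything that listens on an allowed port works the same way; web preview just proxies your browser to that port over https.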
247:57 and just as a quick note, web preview can also run on a different port
248:02 anywhere from port 2000 all the way up to 65000
248:04 now moving on to the rest of the features
248:09 hitting on the more button here with the three dots
248:11 starting from the top, we covered restart earlier
248:14 when we had to restart our cloud shell
248:16 you're able to both upload and download a file
248:22 within cloud shell whenever needed
248:24 as well, if i have a broken configuration
248:26 i can boot into safe mode and fix the issue
248:29 instead of having to start from scratch again
248:32 moving on to boost, cloud shell boost, also known as boost mode
248:37 is a feature that increases your cloud shell vm
248:39 from the default e2-small to an e2-medium
248:43 so in essence a memory bump from 2 gigabytes to 4 gigabytes
248:50 and once it's activated, all your sessions will be boosted
248:52 for the next 24 hours
248:55 and just as a quick note
248:57 enabling boost mode restarts your cloud shell
249:00 and immediately terminates your session
249:02 but don't worry, the data in your home directory will persist
249:04 but any of the processes that you are running will be lost
249:09 now when it comes to usage quota
249:12 cloud shell has a 50 hour weekly usage limit
249:16 so if you reach your usage limit
249:18 you'll need to wait until your quota is reset
249:20 before you can use cloud shell again
249:23 so it's always good to keep your eyes on this
249:25 in case you're a heavy user of cloud shell
249:30 and moving back to the menu again
249:32 you have your usage statistics, which collects statistics on commands
249:37 that come pre-installed in the vm, and you can turn them on or off
249:41 and as well, help for cloud shell is available here
249:43 as well, if you wanted to give feedback to the google cloud team
249:49 with regards to cloud shell, this is the place to do it
249:51 and so one last thing about cloud shell before we end this demo
249:55 is that if you do not access cloud shell for 120 days
249:58 your home disk will be deleted
250:02 now don't worry, you'll receive an email notification before its deletion
250:07 and if you just log in and start up a session
250:10 you'll prevent it from being removed
250:12 now moving ahead in this course i will be using cloud shell quite a bit
250:17 and so feel free to use either cloud shell
250:20 or the cloud sdk installed on your computer
250:23 or feel free to follow along with me in the cloud shell
250:25 within your google cloud environment
250:28 and so if you are following along
250:30 please make sure that you keep an eye on your quota
250:35 and so i hope this demonstration has given you some really good insight
250:37 as to what you can do with cloud shell and its limitations
250:42 and so that's pretty much all i wanted to cover
250:44 in this demonstration of cloud shell
250:46 so you can now mark this as complete and let's move on to the next one
250:52 [Music]
251:00 welcome back, in this lesson and demonstration
251:02 i am going to go over limits and quotas
251:05 and how they affect your cloud usage within google cloud
251:11 i'm going to quickly go over some theory
251:13 followed by a demonstration on where to find the quotas
251:16 and how to edit them accordingly
251:18 so google cloud enforces quotas on resource usage for project owners
251:25 setting a hard limit on how much of a particular
251:28 google cloud resource your project can use
251:34 and so there are two types of resource usage that google limits with quota
251:40 the first one is rate quota, such as api requests per day
251:44 this quota resets after a specified time, such as a minute or a day
251:51 the second one is allocation quota
251:53 an example is the number of virtual machines
251:55 or load balancers used by your project
251:59 and this quota does not reset over time
252:02 but must be explicitly released when you no longer want to use the resource
252:08 for example by deleting a gke cluster
252:12 now quotas are enforced for a variety of reasons
252:16 for example, they protect other google
252:19 for example they protect other google cloud users by preventing unforeseen
252:22 cloud users by preventing unforeseen usage spikes
252:24 usage spikes quotas also help with resource
252:26 quotas also help with resource management so you can set your own
252:28 management so you can set your own limits on service usage within your
252:31 limits on service usage within your quota while developing and testing your
252:33 quota while developing and testing your applications
252:35 applications each quota limit is expressed in terms
252:38 each quota limit is expressed in terms of a particular countable resource from
252:41 of a particular countable resource from requests per day to an api to the number
252:45 requests per day to an api to the number of load balancers used by your
252:47 of load balancers used by your application not all projects have the
252:50 application not all projects have the same quotas for the same services and so
252:53 same quotas for the same services and so using this free trial account you may
252:55 using this free trial account you may have very limited quota compared to a
252:59 have very limited quota compared to a higher quota on a regular account as
253:01 higher quota on a regular account as well with your use of google cloud over
253:04 well with your use of google cloud over time your quotas may increase
253:06 time your quotas may increase accordingly and so you can also request
253:09 accordingly and so you can also request more quota if you need it and set up
253:12 more quota if you need it and set up monitoring and alerts
253:14 monitoring and alerts and cloud monitoring to warn you about
253:16 and cloud monitoring to warn you about unusual quota usage behavior or when
253:19 unusual quota usage behavior or when you're actually running out of quota now
253:22 you're actually running out of quota now in addition to viewing basic quota
253:24 in addition to viewing basic quota information in the console
253:26 information in the console google cloud lets you monitor quota
253:28 google cloud lets you monitor quota usage
253:30 usage limits and errors in greater depth using
253:33 limits and errors in greater depth using the cloud monitoring api and ui along
253:37 the cloud monitoring api and ui along with quota metrics appearing in the
253:40 with quota metrics appearing in the metrics explorer you can then use these
253:42 metrics explorer you can then use these metrics to create custom dashboards and
253:45 metrics to create custom dashboards and alerts
253:46 alerts letting you monitor quota usage over
253:49 letting you monitor quota usage over time
253:50 time and receive alerts when for example
253:53 and receive alerts when for example you're near a quota limit only your
253:56 you're near a quota limit only your services that support quota metrics are
253:58 services that support quota metrics are displayed and so popular supported
254:01 displayed and so popular supported services include compute engine
254:04 services include compute engine data flow
254:05 data flow cloud spanner cloud monitoring and cloud
254:08 cloud spanner cloud monitoring and cloud logging common services that are not
254:10 logging common services that are not supported include app engine cloud
254:13 supported include app engine cloud storage and cloud sql now as a note be
254:17 storage and cloud sql now as a note be aware that quota limits are updated once
254:19 aware that quota limits are updated once a day
254:20 a day and hence new limits may take up to 24
254:23 and hence new limits may take up to 24 hours to be reflected in the google
254:26 hours to be reflected in the google cloud console if your project exceeds a
254:29 cloud console if your project exceeds a particular quota while using a service
254:32 particular quota while using a service the platform will return an error
254:35 the platform will return an error in general
254:36 in general google cloud will return an http
254:40 google cloud will return an http 429 error code if you're using http or
254:45 429 error code if you're using http or rest to access the service
254:48 rest to access the service or resource exhausted if you're using
254:51 or resource exhausted if you're using grpc if you're using cloud monitoring
254:54 grpc if you're using cloud monitoring you can use it to identify the quota
254:56 you can use it to identify the quota associated with the error
254:59 associated with the error and then create custom alerts upon
255:01 and then create custom alerts upon getting a quota error and we will be
255:04 getting a quota error and we will be going into greater depth with regards to
255:07 going into greater depth with regards to monitoring later on in the course now
255:10 monitoring later on in the course now there are two ways to view your current
255:12 there are two ways to view your current quota limits in the google cloud console
255:15 quota limits in the google cloud console the first is using the quotas page which
255:18 the first is using the quotas page which gives you a list of all of your
255:20 gives you a list of all of your project's quota usage and limits the
255:23 project's quota usage and limits the second is using the api dashboard which
255:27 second is using the api dashboard which gives you the quota information for a
255:29 gives you the quota information for a particular api
255:31 particular api including resource usage over time quota
255:35 including resource usage over time quota limits are also accessible
255:37 limits are also accessible programmatically through the service
255:39 programmatically through the service usage api and so let's head into the
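As a sketch of what that programmatic access could be used for, the snippet below flags quota metrics that are close to their limit; the dict only imitates the kind of per-service quota data the service usage api returns, and the field names here are simplified assumptions:

```python
# simplified sketch: flag quota metrics that are close to their limit.
# the entries below imitate service usage api quota data; the metric
# names, usage, and limit values are illustrative.
quota_metrics = [
    {"metric": "compute.googleapis.com/networks", "usage": 4, "limit": 5},
    {"metric": "compute.googleapis.com/cpus", "usage": 10, "limit": 24},
]

def near_limit(metrics, threshold=0.8):
    """Return the metric names whose usage is at or above threshold * limit."""
    return [m["metric"] for m in metrics if m["usage"] >= threshold * m["limit"]]

print(near_limit(quota_metrics))  # the networks quota is at 80% of its limit
```

A check like this is the kind of logic you would put behind a custom alert, rather than waiting for a 429 at request time.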
255:43 and so let's head into the console where i will provide a
255:45 demonstration
255:46 on where to look for quotas
255:48 and how to increase them when you need
255:50 to
255:51 and so here we are back in the console
255:54 and so as i explained before there are
255:56 two main ways to view your current quota
255:59 limits
256:00 in the console and so the first one is
256:02 using the quotas page and so in order to
256:05 get to the quotas page i need to go to
256:07 iam so i'm going to do that now by going
256:10 up to the navigation menu in the top
256:12 left hand corner
256:14 i'm going to go to iam & admin and
256:16 over to quotas
256:19 and so here i am shown all the quotas of
256:22 the current apis that i have enabled as
256:25 you can see here it shows me the service
256:28 the limit name
256:30 the quota status and the details in this
256:33 panel here on the right hand side shows
256:36 me a little bit more information with
256:38 regards to the service and the quota
256:40 itself and so let's say i wanted to
256:43 increase my quota on the compute engine
256:46 api
256:47 within networks so i'm going to select
256:50 this service and over here on the right
256:52 hand panel i'm going to tick the box
256:55 that says global and i'm going to go
256:57 back over here to the top left
257:00 and click on the edit quotas button and
257:02 a panel will pop up and i am prompted to
257:05 enter a new quota limit
257:07 along with a description explaining to
257:10 google why i need this quota limit
257:12 increase and so once i've completed my
257:15 request
257:16 i can click on done and then submit
257:18 request
257:21 and like i said before once the request has been submitted it will go to
257:24 somebody at google to evaluate the
257:26 request for approval and don't worry
257:29 these quota limit increases are usually
257:32 approved within two business days and
257:35 can oftentimes be sooner than that also
257:38 a great way to enter multiple quota
257:40 changes is to click on the selected apis
257:44 let's do bigquery api
257:46 and cloud datastore api and so i've
257:49 clicked off three and now i can go back
257:51 up to the top and click on the edit
257:54 quotas button and as you can see in the
257:56 panel i have all three apis that i want
258:00 to increase my quotas on so i can enter
258:03 all my new limit requests for each api
258:06 and then i can submit it as a bulk
258:09 request with all my new quota limit
258:11 changes and so doing it this way would
258:14 increase the efficiency
258:16 instead of increasing the quotas for
258:18 each service one by one and because i'm
258:21 not going to submit any quota changes
258:24 i'm going to close this panel and so
258:26 again using the quotas page will give
258:29 you a list of all your project quota
258:31 usage and its limits and allow you to
258:34 request changes accordingly
258:37 and so now moving on to the second way in which you
258:39 can view your current quota limits i'm
258:41 going to go to the api dashboard which
258:44 will give me a more granular view
258:46 including the resource usage over time
258:49 so to get there i'm going to go back up
258:51 to the left hand side to the navigation
258:54 menu i'm going to go to apis and
258:56 services and click on dashboard
259:00 and here i will see all the names of the
259:02 apis and i'm going to click on compute
259:05 engine api for this demonstration
259:08 and over here on the left hand menu you
259:11 will see quotas
259:12 and in here as i said before
259:15 you can get some really granular data
259:18 with regards to queries read requests
259:21 list requests and a whole bunch of other
259:23 requests i'm going to drill down into
259:26 queries here and i can see my queries
259:28 per day per 100 seconds per user and per
259:32 100 seconds and i can see here that my
259:35 queries per 100 seconds is at a limit of
259:38 2,000 so if i wanted to increase that
259:41 limit i can simply click on the pencil
259:43 icon
259:44 and a panel on the right hand side will
259:47 prompt me to enter a new quota limit but
259:50 i currently see that my quota limit is
259:52 at its maximum and that i need to apply
259:55 for a higher quota so when i click on
259:58 the link it will bring me back to my iam
260:01 page where my services are filtered and
260:04 i can easily find the service that i was
260:07 looking at to raise my quota limit and i
260:10 can increase the quota by checking off
260:12 this box and clicking on the edit quotas
260:15 button at the top of the page and so as
260:18 you can see the quotas page as well as
260:21 the api dashboard work in tandem so that
260:24 you can get all the information you need
260:27 with regards to quotas and limits and to
260:30 edit them accordingly and so i hope this
260:33 gave you a good idea and some great
260:36 insight
260:37 on how you can view and edit your quotas
260:40 and quota limits according to the
260:42 resources you use and so that about
260:45 wraps up this brief yet important demo
260:48 on limits and quotas so you can now mark
260:51 this as complete and let's move on to
260:53 the next section
260:54 [Music]
260:58 welcome back
261:00 and in this section we're going to be
261:02 going through in my opinion one of the
261:04 most important services in google cloud
261:08 identity and access management also
261:11 known as iam for short and i'll be
261:14 diving into identities roles and the
261:17 architecture of policies that will give
261:19 you a very good understanding of how
261:22 permissions are granted and how policies
261:25 are inherited so before i jump into iam
261:28 i wanted to touch on the principle of
261:30 least privilege just for a second now
261:33 the principle of least privilege states
261:35 that a user program or process
261:38 should have access to the bare minimum
261:41 privileges necessary
261:42 or the exact resources it needs in order
261:46 to perform its function so for example
261:49 if lisa is performing a create function
261:52 on a cloud storage bucket
261:54 lisa should be restricted to create
261:56 permissions only on exactly one cloud
261:59 storage bucket
262:01 she doesn't need read edit or even
262:04 delete permissions on a cloud storage
262:06 bucket to perform her job and so this is
262:09 a great illustration of how this
262:11 principle works
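Lisa's create-only access can be sketched as a single iam binding; the member address is illustrative, and roles/storage.objectCreator is google's predefined role that comes closest to create-only access on cloud storage objects:

```python
# sketch: least privilege expressed as one narrow iam binding.
# the member email is made up for illustration.
binding = {
    "role": "roles/storage.objectCreator",
    "members": ["user:lisa@example.com"],
}

# the role bundles only the create permission, not read/edit/delete
object_creator_permissions = {"storage.objects.create"}

def can(permission):
    return permission in object_creator_permissions

print(can("storage.objects.create"), can("storage.objects.delete"))  # True False
```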
262:13 and this is something that happens in
262:15 not only google cloud but in every cloud
262:18 environment as well as any on-premises
262:21 environment so note that the principle
262:24 of least privilege is something that i
262:26 have previously and will continue to be
262:29 talking about a lot in this course and
262:32 this is a key term that comes up quite a
262:35 bit
262:35 in any major exam
262:37 and is a rule that most apply in their
262:40 working environment to avoid any
262:42 unnecessarily granted permissions a
262:45 well-known yet unspoken rule when it comes
262:48 to security hence me wanting to touch on
262:51 this for a brief moment so now with that
262:53 out of the way i'd like to move on to
262:55 identity and access management or iam
262:59 for short so what is it really well with
263:02 iam you manage access control by
263:05 defining who the identity
263:08 has what access which is the role for
263:12 which resource and this also includes
263:14 organizations
263:16 folders and projects
263:18 in iam permission to access a resource
263:21 isn't granted directly to the end user
263:24 instead permissions are grouped into
263:27 roles and roles are then granted to
263:30 authenticated members an iam policy
263:33 defines and enforces what roles are
263:36 granted to which members
263:38 and this policy is attached to a
263:40 resource so when an authenticated member
263:43 attempts to access a resource iam checks
263:46 the resource's policy to determine
263:49 whether the action is permitted and so
263:52 with that being said i want to dive into
263:54 the policy architecture breaking it down
263:57 by means of its components which
263:59 will give you a better
264:01 understanding of how policies are put
264:04 together so now what is a policy a
264:07 policy is a collection of bindings audit
264:10 configuration and metadata now the
264:12 binding specifies how access should be
264:15 granted on resources
264:17 and it binds one or more members with a
264:20 single role and any context-specific
264:23 conditions that change how and when the
264:26 role is granted now the metadata
264:29 includes additional information about
264:31 the policy such as an etag and version
264:34 to facilitate policy management and
264:37 finally the audit config field specifies
264:40 the configuration data of how access
264:43 attempts should be audited
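The three components just described can be sketched as the json-style object an iam policy actually is; the field names match the real policy shape, while the members, etag value, and role choice are illustrative:

```python
# sketch of an iam policy: bindings, audit config, and metadata (etag/version).
policy = {
    "bindings": [
        {
            "role": "roles/storage.objectViewer",
            "members": ["user:lisa@example.com", "group:devs@example.com"],
        }
    ],
    "auditConfigs": [
        {"service": "storage.googleapis.com",
         "auditLogConfigs": [{"logType": "DATA_READ"}]}
    ],
    "etag": "BwWKmjvelug=",  # concurrency token, returned by the api
    "version": 1,
}

# a binding ties one role to one or more members
first = policy["bindings"][0]
print(first["role"], len(first["members"]))  # roles/storage.objectViewer 2
```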
264:46 and so now i wanted to take a moment to dive deeper
264:48 into each component starting with members
264:52 now when it comes to members this is an
264:54 identity that can access a resource
264:57 so the identity of a member is an email
265:00 address associated with a user
265:03 service account or google group or even
265:06 a domain name associated with a g suite
265:09 or cloud identity domain now when it
265:12 comes to a google account this
265:14 represents any person who interacts with
265:17 google cloud any email address that is
265:20 associated with a google account can be
265:23 an identity including gmail.com or other
265:26 domains now a service account is an
265:29 account that belongs to your application
265:32 instead of an individual end user
265:35 so when you run your code that is hosted
265:37 on gcp
265:38 this is the identity you would specify
265:41 to run your code a google group is a
265:44 named collection of google accounts and
265:47 can also include service accounts now
265:50 the advantage of using google groups is
265:52 that you can grant and change
265:54 permissions
265:55 for the collection of accounts all at
265:57 once instead of changing access one by
266:00 one google groups can help you manage
266:03 users at scale and each member of a
266:05 google group inherits the iam roles
266:08 granted to that group this inheritance
266:11 means that you can use a group's
266:13 membership to manage users' roles instead
266:16 of granting iam roles to individual
266:19 users
266:22 moving on to g suite domains this represents your organization's internet
266:24 domain name such as
266:26 antonyt.com and when you add a user to
266:29 your g suite domain a new google account
266:32 is created for the user inside this
266:34 virtual group such as antony@antonyt.com
266:37 a g suite domain in
266:40 actuality represents a virtual group of
266:43 all of the google accounts that have
266:45 been created like google groups g suite
266:48 domains cannot be used to establish
266:50 identity but they simply enable
266:53 permission management now a cloud
266:55 identity domain is like a g suite domain
266:58 but the difference is that domain users
267:00 don't have access to g suite
267:03 applications and features so a couple
267:05 more members that i wanted to address
267:08 are the all authenticated users and the
267:10 all users members the all authenticated
267:13 users is a special identifier that
267:15 represents anyone who is authenticated
267:18 with a google account or a service
267:20 account users who are not authenticated
267:23 such as anonymous visitors are not
267:25 included and finally the all users
267:28 member is a special identifier that
267:30 represents anyone and everyone so any
267:34 user who is on the internet including
267:36 authenticated and unauthenticated users
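In a policy binding each of the member types above gets a distinct identifier format; a small sketch, where the addresses and domain are illustrative examples:

```python
# sketch: the identifier formats iam uses for each member type described above.
members = [
    "user:lisa@example.com",            # google account
    "serviceAccount:app@my-project.iam.gserviceaccount.com",
    "group:devs@example.com",           # google group
    "domain:example.com",               # g suite / cloud identity domain
    "allAuthenticatedUsers",            # anyone signed in with a google account
    "allUsers",                         # anyone on the internet
]

def member_type(member):
    """Return the prefix before ':' or the special identifier itself."""
    return member.split(":", 1)[0]

print([member_type(m) for m in members])
```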
267:40 and this covers the slew of the
267:42 different types of members now touching
267:44 on the next component of policies which is
267:47 roles now diving into roles this is a
267:50 named collection of permissions that
267:53 grant access to perform actions on
267:56 google cloud resources
267:58 so at the heart of it permissions are
268:00 what determine what operations are
268:03 allowed on a resource they usually but
268:06 not always correspond one-to-one with
268:09 rest methods that is each google cloud
268:12 service has an associated permission for
268:15 each rest method that it has so to call
268:18 a method the caller needs that
268:20 permission now these permissions are not
268:22 granted to users directly but are
268:25 grouped together within a role you
268:27 would then grant roles which contain one
268:30 or more permissions
268:32 you can also create a custom role by
268:34 combining one or more of the available
268:37 iam permissions and again permissions
268:40 allow users to perform specific actions
268:44 on google cloud resources so you will
268:46 typically see a permission such as the
268:48 one you see here
268:50 compute.instances.list
268:53 and within google cloud iam permissions
268:56 are represented in this form
268:59 service.resource.verb
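Since iam permissions follow the service.resource.verb pattern, a permission string can be split into its three parts; a minimal sketch:

```python
# sketch: split an iam permission string into service, resource, and verb.
def parse_permission(permission):
    service, resource, verb = permission.split(".")
    return {"service": service, "resource": resource, "verb": verb}

print(parse_permission("compute.instances.list"))
# {'service': 'compute', 'resource': 'instances', 'verb': 'list'}
```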
269:01 so just as a recap on roles this is a
269:04 collection of permissions
269:06 and you cannot grant a permission
269:08 directly to a user but you grant a
269:11 role to a user and all the permissions
269:14 that the role contains so an example is
269:17 shown here where the compute instances
269:20 permissions are grouped together in a
269:22 role now you can grant permissions by
269:24 granting roles to a user a group or a
269:28 service account so moving up to a
269:31 broader level there are three types of
269:33 roles in iam
269:35 there are the primitive roles the
269:37 predefined roles
269:39 and the custom roles
269:41 with the primitive roles
269:43 these are roles that existed prior to
269:46 the introduction of iam
269:48 and they consist of three specific roles
269:51 owner editor and viewer and these roles
269:54 are concentric which means that the
269:57 owner role includes the permissions in
269:59 the editor role and the editor role
270:01 includes the permissions in the viewer
270:03 role
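The concentric relationship can be sketched with sets, where each broader role is a superset of the narrower one; the permission names below are illustrative, not the actual permission lists:

```python
# sketch: primitive roles are concentric, so viewer ⊆ editor ⊆ owner.
viewer = {"get", "list"}
editor = viewer | {"create", "update", "delete"}
owner = editor | {"setIamPolicy", "manageBilling"}

print(viewer <= editor <= owner)  # True: each role contains the one below it
```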
270:06 and you can apply primitive roles at the project or service resource
270:08 levels by using the console the api and
270:12 the gcloud tool just as a note you
270:15 cannot grant the owner role to a member
270:18 for a project using the iam api or the
270:22 gcloud command line tool you can only
270:24 add owners to a project using the cloud
270:27 console as well
270:29 google recommends avoiding these roles
270:31 if possible due to the broad
270:34 access granted by the permissions in
270:37 these specific roles google recommends
270:40 that you use predefined roles over
270:42 primitive roles and so moving into
270:44 predefined roles
270:46 these are roles that give granular and
270:49 finer-grained access control than the
270:51 primitive roles to specific google cloud
270:54 resources and prevent any unwanted
270:57 access to other resources predefined
271:00 roles are created and maintained by
271:03 google their permissions are
271:05 automatically updated as necessary when
271:08 new features or services are added to
271:10 google cloud now when it comes to custom
271:13 roles these are user-defined and allow
271:16 you to bundle one or more supported
271:18 permissions to meet your specific needs
271:22 unlike predefined roles custom roles are
271:24 not maintained by google so when new
271:27 permissions features or services are
271:29 added to google cloud your custom roles
271:32 will not be updated automatically when
271:35 you create a custom role you must choose
271:37 an organization or project to create it
271:40 in you can then grant the custom role on
271:43 the organization or project as well as
271:46 any resources within that organization
271:48 or project and just as a note you cannot
271:51 create custom roles at the folder level
271:54 if you need to use a custom role within
271:56 a folder define the custom role on the
271:59 parent of that folder as well the custom
272:02 roles user interface is only available
272:05 to users who have permissions to create
272:08 or manage custom roles by default only
272:12 project owners can create new roles
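As a sketch, a custom role like the ones described here can be defined in a yaml file; the role title, description, and permission list below are illustrative choices, and the `stage` field holds the role's launch stage:

```yaml
# instance-lister.yaml — illustrative custom role definition
title: Instance Lister
description: Can list and read compute instances only
stage: GA              # launch stage: ALPHA, BETA, or GA
includedPermissions:
- compute.instances.list
- compute.instances.get
```

A role defined this way could then be created at the project level with something like `gcloud iam roles create instanceLister --project=my-project --file=instance-lister.yaml` (the role id and project id are illustrative).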
272:15 project owners can create new roles now there is one limitation that i wanted to
272:17 there is one limitation that i wanted to point out
272:18 point out and that is that some predefined roles
272:21 and that is that some predefined roles contain permissions that are not
272:23 contain permissions that are not permitted in custom roles so i highly
272:26 permitted in custom roles so i highly recommend that you check whether you can
272:28 recommend that you check whether you can use a specific permission when making a
272:30 use a specific permission when making a custom role custom roles also have a
272:34 custom role custom roles also have a really cool feature that includes a
272:36 really cool feature that includes a launch stage which is stored in the
272:38 launch stage which is stored in the stage property for the role the stage is
272:42 stage property for the role the stage is informational and helps you keep track
272:44 informational and helps you keep track of how close each role is to being
272:47 of how close each role is to being generally available and these launch
272:49 generally available and these launch stages are available in the stages shown
272:52 stages are available in the stages shown here
272:53 here alpha which is in testing beta which is
272:56 alpha which is in testing beta which is tested and awaiting approval and of
272:58 tested and awaiting approval and of course ga which is generally available
273:02 course ga which is generally available and i'll be getting hands-on later with
273:04 and i'll be getting hands-on later with these roles in an upcoming demonstration
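Since the launch stage is just a string field stored on the role, a quick sketch can make it concrete. This is an illustrative Python snippet, not the real IAM API; the role title and permission list are invented for the example (the gcloud custom-role commands expose the same stage field):

```python
# illustrative sketch only: a custom role modeled as a plain dict with the
# stage property described above (alpha, beta, ga); field names loosely
# mirror the iam custom role schema but this is not the real api
LAUNCH_STAGES = {"ALPHA", "BETA", "GA"}  # other stages such as deprecated also exist

def has_valid_stage(role):
    """return True when the role carries one of the launch stages above."""
    return role.get("stage") in LAUNCH_STAGES

bucket_viewer = {
    "title": "Bucket Metadata Viewer",               # hypothetical role
    "includedPermissions": ["storage.buckets.get"],  # permissions to bundle
    "stage": "BETA",                                 # tested, awaiting approval
}

print(has_valid_stage(bucket_viewer))  # -> True
```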
273:07 so now moving on to the next component
273:09 is conditions and so a condition is a
273:13 logic expression and is used to define
273:16 and enforce
273:17 conditional
273:18 attribute-based access control for
273:21 google cloud resources conditions allow
273:23 you to grant
273:25 resource access to identities
273:28 also known as members only if the configured
273:31 conditions are met for example this
273:34 could be done to configure temporary
273:36 access for users that are contractors
273:39 and have been given specific access for
273:42 a certain amount of time a condition
273:44 could be put in place to remove the
273:47 access they needed once the contract has
273:50 ended
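As a rough sketch of that contractor scenario, here is a toy evaluator for an expiry-style condition. The real service evaluates a CEL expression such as `request.time < timestamp("2022-01-01T00:00:00Z")`; the date below is made up for illustration:

```python
from datetime import datetime, timezone

# made-up contract end date for the illustration
CONTRACT_END = datetime(2022, 1, 1, tzinfo=timezone.utc)

def access_allowed(request_time):
    """grant access only while the contractor's window is still open."""
    return request_time < CONTRACT_END

print(access_allowed(datetime(2021, 12, 31, tzinfo=timezone.utc)))  # -> True
print(access_allowed(datetime(2022, 6, 1, tzinfo=timezone.utc)))    # -> False
```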
273:52 conditions are specified in the
273:56 role bindings of a resource's iam policy
273:59 so when a condition exists the access
274:02 request is only granted if the condition
274:04 expression is true so now moving on to
274:07 metadata this component carries both
274:11 etags and version so first touching on etags
274:14 when multiple systems try to write to
274:17 the same iam policy at the same time
274:20 there is a risk that those systems might
274:22 overwrite each other's changes and the
274:25 risk exists because updating an iam
274:29 policy involves multiple operations so
274:32 in order to help prevent this issue iam
274:34 supports concurrency control through the
274:38 use of an etag field in the policy the
274:41 value of this field changes each time a
274:43 policy is updated
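The read-modify-write pattern the etag protects can be sketched with a toy in-memory store. This is a stand-in for the real setIamPolicy endpoint, only to show the mechanic; real IAM etags are opaque strings, not counters:

```python
import copy

class PolicyStore:
    """toy stand-in for getIamPolicy/setIamPolicy, showing how an etag
    field can reject concurrent writes (not the real google api)."""
    def __init__(self, policy):
        self._policy = policy

    def get_iam_policy(self):
        return copy.deepcopy(self._policy)

    def set_iam_policy(self, new_policy):
        # reject the write if the caller read a now-stale etag
        if new_policy["etag"] != self._policy["etag"]:
            raise RuntimeError("etag mismatch: policy changed since it was read")
        new_policy["etag"] += 1  # simulate the server issuing a fresh etag
        self._policy = new_policy

store = PolicyStore({"bindings": [], "etag": 1})
mine = store.get_iam_policy()    # first writer reads the policy
theirs = store.get_iam_policy()  # second writer reads the same version
mine["bindings"].append({"role": "roles/viewer", "members": ["user:a@example.com"]})
store.set_iam_policy(mine)       # succeeds: etag still current
# store.set_iam_policy(theirs) would now raise: its etag is stale
```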
274:43 now when it comes to a version
274:44 this is a version number that is used
274:46 to determine features such as a
274:49 condition
274:50 and for future releases of new features
274:53 it is also used to avoid breaking your
274:56 existing integrations that rely on
274:58 consistency in the
275:01 policy structure
275:03 when new policy schema versions are
275:05 introduced and lastly we have the
275:08 auditconfig component and this is used
275:11 in order to configure audit logging for
275:13 the policy it determines which
275:15 permission types are logged and what
275:18 identities if any are exempted from
275:20 logging
275:23 and so to sum it up this is a policy in its entirety each
275:26 component as you can see plays a
275:28 different part and i will be going
275:30 through policies and how they are
275:32 assembled in statements in a later
275:34 lesson and so there is one more thing
275:36 that i wanted to touch on before ending
275:39 this lesson and that is policy
275:41 inheritance when it comes to the resource
275:44 hierarchy and so as explained in an
275:46 earlier lesson you can set an iam policy
275:49 at any level in the resource hierarchy
275:52 the organization level the folder level
275:55 the project level or the resource level
275:57 and resources inherit the policies of
276:01 all their parent resources the effective
276:03 policy for a resource
276:06 is the union of the policy set on that
276:09 resource and the policies inherited from
276:12 higher up in the hierarchy and so again
276:14 i wanted to reiterate that this policy
276:17 inheritance is transitive in other words
276:20 resources inherit policies from the
276:23 project
276:24 which inherits policies from folders
276:26 which inherit policies from the
276:28 organization therefore the organization
276:31 level policies
276:33 also apply at the resource level and so
276:36 just a quick example if i apply a policy
276:39 on project x
276:41 then on any resources within that project the
276:43 effective policy is going to be a union
276:46 of these policies as the resources will
276:49 inherit the policy that is granted to
276:51 project x
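That union rule can be sketched in a few lines of Python. The bindings below are made up for illustration; real evaluation also takes conditions into account:

```python
# hypothetical bindings granted at two levels of the hierarchy
org_policy = [{"role": "roles/viewer",
               "members": ["group:everyone@example.com"]}]
project_policy = [{"role": "roles/storage.admin",
                   "members": ["user:tony@example.com"]}]

def effective_bindings(*levels):
    """union of the bindings granted at each level of the hierarchy."""
    merged = []
    for bindings in levels:
        merged.extend(bindings)
    return merged

# a resource inside the project is governed by both sets of bindings
print(len(effective_bindings(org_policy, project_policy)))  # -> 2
```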
276:54 so i hope this gave you a
276:57 better understanding of how policies are granted as well as how they are structured
277:00 and so that's all i have for this lesson
277:02 so you can now mark this lesson as
277:04 complete and let's move on to the next
277:06 one
277:07 [Music]
277:11 welcome back and in this lesson i wanted
277:13 to build on the last lesson where we
277:16 went through iam and policy architecture
277:19 and dive deeper into policies and
277:21 conditions when it comes to putting them
277:24 together in policy statements as cloud
277:26 engineers you should be able to read and
277:29 decipher policy statements and
277:31 understand how they're put together by
277:34 using all the components that we
277:36 discussed earlier so just as a refresher
277:39 i wanted to go over the policy
277:40 architecture again now as i discussed
277:43 previously a policy is a collection of
277:45 statements that define who has what type
277:48 of access it is attached to a resource
277:51 and is used to enforce access control
277:54 whenever that resource is accessed now
277:57 the binding within that policy binds one
278:00 or more members with a single role and
278:03 any context-specific conditions so in
278:06 other words the members roles and
278:08 conditions are bound together using a
278:11 binding and combined with the metadata and
278:13 audit config we have a policy so now
278:17 taking all of this and putting it
278:18 together in a policy statement shown
278:21 here you can see the bindings which have
278:24 the role the members and conditions the
278:28 first member being tony bowtie ace at
278:30 gmail.com holding the role of storage
278:33 admin and the second member being
278:36 larkfetterlogin at gmail.com
278:38 holding the role of storage object
278:41 viewer now because lark only needs to
278:43 view the files for this project in cloud
278:46 storage till the new year a condition
278:48 has been applied that does not grant
278:51 access for lark to view these files
278:53 after january the 1st an etag has been
278:56 put in and the version is numbered 3 due
278:59 to the condition which i will get into a
279:02 little bit later this policy statement
279:05 has been structured in json format
279:08 which is a common format used in policy
279:10 statements
279:13 statements moving on we have the exact same policy statement but has been
279:15 same policy statement but has been formatted in yaml as you can see the
279:18 formatted in yaml as you can see the members roles and conditions in the
279:20 members roles and conditions in the bindings are exactly the same as well as
279:23 bindings are exactly the same as well as the etag and version but due to the
279:26 the etag and version but due to the formatting it is much more condensed so
279:29 formatting it is much more condensed so as you can see policy statements can be
279:31 as you can see policy statements can be written in both json or yaml depending
279:34 written in both json or yaml depending on your preference my personal
279:36 on your preference my personal preference is to write my policy
279:39 preference is to write my policy statements in yaml due to the shorter
279:41 statements in yaml due to the shorter and cleaner format so i will be moving
279:44 and cleaner format so i will be moving ahead in this course with more
279:46 ahead in this course with more statements written in yaml when you are
279:48 statements written in yaml when you are looking to query your projects for its
279:51 looking to query your projects for its granted policies
279:52 granted policies an easy way to do this would be to query
279:55 an easy way to do this would be to query it from the command line as shown here
279:58 it from the command line as shown here here i've taken a screenshot from tony
280:00 here i've taken a screenshot from tony bowtie ace in the cloud shell and have
280:03 bowtie ace in the cloud shell and have used the command gcloud projects get
280:07 used the command gcloud projects get dash iam
280:08 dash iam policy with the project id and this
280:12 policy with the project id and this brought up all the members and roles
280:14 brought up all the members and roles within the bindings
280:15 within the bindings as well as the etag and version for the
280:18 as well as the etag and version for the policy that has been attached to this
280:21 policy that has been attached to this project and as you can see here i have
280:23 project and as you can see here i have no conditions in place for any of my
280:26 no conditions in place for any of my bindings and so again using the command
280:29 bindings and so again using the command gcloud projects
280:31 gcloud projects get dash iam dash policy along with the
280:35 get dash iam dash policy along with the project id will bring up any policies
280:37 project id will bring up any policies that are attached to this resource and
280:40 that are attached to this resource and the resource being the project id if the
280:42 the resource being the project id if the resource were to be the folder id then
280:44 resource were to be the folder id then you could use the command gcloud
280:47 you could use the command gcloud resource dash manager
280:49 resource dash manager folders get dash iam-policy
280:53 folders get dash iam-policy with the folder id and for organizations
280:56 with the folder id and for organizations the command would be gcloud
280:58 the command would be gcloud organizations get dash
281:00 organizations get dash iam-policy along with the organization
281:03 iam-policy along with the organization id now because we don't have any folders
281:06 id now because we don't have any folders or organizations in our environment
281:09 or organizations in our environment typing these commands in wouldn't bring
281:11 typing these commands in wouldn't bring up anything and just as a note using
281:13 up anything and just as a note using these commands in the cloud shell or in
281:15 these commands in the cloud shell or in the sdk will bring up the policy
281:18 the sdk will bring up the policy statement formatted in yaml so now i
281:21 statement formatted in yaml so now i wanted to just take a second to dive
281:23 wanted to just take a second to dive into policy versions now as i haven't
281:26 into policy versions now as i haven't covered versions in detail i wanted to
281:28 covered versions in detail i wanted to quickly go over it and the reasons for
281:31 quickly go over it and the reasons for each numbered version now version one of
281:34 each numbered version now version one of the i am syntax schema for policies
281:37 the i am syntax schema for policies supports binding one role to one or more
281:41 supports binding one role to one or more members
281:42 members it does not support conditional role
281:44 it does not support conditional role bindings and so usually with version 1
281:47 bindings and so usually with version 1 you will not see any conditions version
281:50 you will not see any conditions version 2 is used for google's internal use and
281:53 2 is used for google's internal use and so querying policies
281:55 so querying policies usually you will not see a version 2.
281:58 usually you will not see a version 2. and finally with version 3 this
282:00 and finally with version 3 this introduces the condition field in the
282:03 introduces the condition field in the role binding which constrains the role
282:06 role binding which constrains the role binding via contact space and attributes
282:09 binding via contact space and attributes based rules so just as a note if your
282:12 based rules so just as a note if your request does not specify a policy
282:14 request does not specify a policy version
282:15 version iam will assume that you want a version
282:18 iam will assume that you want a version 1 policy and again if the policy does
282:21 1 policy and again if the policy does not contain any conditions
282:23 not contain any conditions then iam always returns a version one
282:26 then iam always returns a version one policy regardless of the version number
282:29 policy regardless of the version number in the request so moving on to some
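Those version rules can be condensed into a small decision helper. This is only a sketch of the rules as stated in the lesson (unspecified requests default to version 1, condition-free policies always come back as version 1, conditional bindings need the version 3 schema), not the actual API behavior in every edge case:

```python
def returned_policy_version(requested, has_conditions):
    """sketch of the version rules described above (illustrative only)."""
    if requested is None:
        requested = 1   # unspecified -> treated as a version 1 request
    if not has_conditions:
        return 1        # no conditions -> always a version 1 policy
    return 3            # conditional bindings use the version 3 schema

print(returned_policy_version(None, False))  # -> 1
print(returned_policy_version(3, True))      # -> 3
```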
282:32 so moving on to some policy limitations each resource can
282:34 only have one policy and this includes
282:38 organizations folders and projects
282:41 another limitation
282:42 is that each iam policy can contain up
282:46 to 1500 members
282:48 and up to 250 of these members
282:51 can be google groups now when making
282:54 policy changes it will take up to seven
282:57 minutes to fully propagate across the
282:59 google cloud platform this does not
283:02 happen instantaneously as iam is global
283:06 as well there is a limit of 100
283:08 conditional role bindings per policy
283:11 now getting a little bit deeper into
283:13 conditions
283:14 these are attributes that are either
283:16 based on the resource or based on details
283:19 about the request and this could vary
283:21 from a timestamp to an originating or
283:24 destination ip address now as you
283:27 probably heard me use the term earlier
283:29 conditional role bindings are another
283:32 name for a policy that holds a condition
283:35 within the binding conditional role
283:37 bindings can be added to new or existing
283:40 iam policies
283:42 to further control access to google
283:44 cloud resources so when it comes to
283:46 resource attributes this would enable
283:49 you to create conditions that evaluate
283:52 the resource in the access request
283:55 including the resource type the resource
283:57 name and the google cloud service being
284:00 used request attributes allow you to
284:03 manage access based on days or hours of
284:06 the week a conditional role binding can
284:09 be used to grant time-bounded access to
284:12 a resource ensuring that a user can no
284:15 longer access that resource after the
284:18 specified expiry date and time and this
284:21 sets temporary access to google cloud
284:24 resources using conditional role
284:26 bindings in iam policies by using the
284:29 date and time attributes shown here you can
284:32 enforce time-based controls when
284:35 accessing a given resource now showing
284:38 another example of a time-based
284:40 condition it is possible to get even
284:43 more granular and scope the geographic
284:45 region
284:46 along with the day and time for access
284:49 in this policy lark only has access
284:52 during business hours to view any
284:55 objects within cloud storage lark can
284:57 only access these objects from monday to
285:00 friday nine to five this policy can also
285:03 be used as a great example for
285:06 contractors coming into your business
285:09 yet only needing access during business
285:11 hours
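As a rough sketch, the nine-to-five monday-to-friday window from this example can be evaluated like so. IAM actually expresses this as a CEL expression over the request's date/time attributes; the window below is just the lesson's example:

```python
from datetime import datetime

def business_hours_access(ts):
    """allow access monday to friday, 9am to 5pm (toy evaluator)."""
    return ts.weekday() < 5 and 9 <= ts.hour < 17  # weekday(): monday == 0

print(business_hours_access(datetime(2021, 6, 7, 10, 30)))  # monday 10:30 -> True
print(business_hours_access(datetime(2021, 6, 5, 10, 30)))  # saturday -> False
```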
285:14 now an example of a resource-based condition is shown here a group member has
285:18 a condition tied to it where dev-only
285:21 access has been implemented any
285:23 developers that are part of this group
285:25 will only have access to vm resources
285:29 within project cat bowties and tied to
285:32 any resources whose name starts with
285:35 the word development now some
285:36 limitations when it comes to conditions
285:39 are that conditions are limited to
285:41 specific services
285:43 primitive roles are unsupported and
285:46 members cannot be of the all users or
285:49 all authenticated users types
285:52 conditions also hold a limit of 100
285:55 conditional role bindings per policy as
285:58 well as 20 role bindings for the same
286:01 role and same member
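The dev-only resource condition above boils down to a name-prefix check. IAM would express it as a CEL expression along the lines of `resource.name.startsWith("development")`; the resource names here are invented for the illustration:

```python
def dev_only(resource_name):
    """grant access only to resources whose name starts with development."""
    return resource_name.startswith("development")

print(dev_only("development-web-1"))  # -> True
print(dev_only("production-db-1"))    # -> False
```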
286:04 and so for the last part of the policy statements i wanted
286:07 to touch on auditconfig logs and this
286:10 specifies the audit configuration for a
286:12 service the configuration determines
286:15 which permission types are logged and
286:18 what identities if any are exempted from
286:21 logging and when specifying audit
286:23 configs they must have one or more audit
286:26 log configs now as shown here
286:29 this policy enables data read data write
286:33 and admin read logging on all services
286:37 while exempting tony bowtie ace at
286:40 gmail.com
286:42 from admin read logging on cloud storage
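An audit configuration along those lines can be reconstructed like this. The log types and service names follow the shape of the IAM policy's auditConfigs section, and the exempted email is a placeholder for the lesson's account:

```python
# illustrative auditConfigs: log data read/write and admin read everywhere,
# but exempt one user from admin read logging on cloud storage
audit_configs = [
    {
        "service": "allServices",
        "auditLogConfigs": [
            {"logType": "DATA_READ"},
            {"logType": "DATA_WRITE"},
            {"logType": "ADMIN_READ"},
        ],
    },
    {
        "service": "storage.googleapis.com",
        "auditLogConfigs": [
            {"logType": "ADMIN_READ",
             "exemptedMembers": ["user:tonybowtieace@gmail.com"]},  # placeholder
        ],
    },
]
```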
286:45 and so that's pretty much all i wanted
286:47 to cover in this lesson
286:49 on policies policy statements
286:52 and conditions and so i highly recommend
286:55 as you come across more policy
286:57 statements take the time to read through
287:00 them and get to know exactly
287:02 what the statement is referring to and
287:04 what type of permissions are given
287:07 and this will help you not only in the
287:09 exam but will also help you in reading
287:12 and writing policy statements in the future
287:15 and so that's all i have for this lesson
287:17 so you can now mark this lesson as
287:19 complete and let's move on to the next
287:21 one
287:22 [Music]
287:26 welcome back
287:27 and in this demonstration i'm going to
287:30 do a hands-on tour working with iam here
287:34 in the google cloud console we're going
287:36 to go through the available services in
287:38 the iam console as well as touching on
287:41 the command line in the cloud shell to
287:44 show how policies can be both added and
287:46 edited we're also going to be bringing
287:48 in another new user to really bring this
287:51 demo to life and to show you how to edit
287:54 existing policies so with that being
287:57 said let's dive in so if i go over here
288:00 to my user icon in the top right hand
288:03 corner i can see that i am logged in as
288:05 tony bowtie ace
288:07 at gmail.com and as you can see at the top
288:10 i'm here in project tony so now to get
288:13 to iam i'm going to go over to the
288:15 navigation menu
288:17 and i'm going to go to iam & admin and
288:19 over to iam now moving over here to the
288:22 menu on the left i wanted to go through
288:24 the different options that we have in
288:27 iam so under iam itself this is where
288:30 you would add or edit permissions
288:33 with regards to members and roles for
288:36 the policy added to your given project
288:39 which in my case is project tony and
288:42 i'll be coming back in just a bit to go
288:44 into greater depth with regards to adding
288:47 and editing the policy permissions
288:49 moving on to identity and organization
288:52 now although we haven't touched on cloud
288:54 identity yet i will be covering this in
288:56 high-level detail in a different lesson
288:59 but for now know that cloud identity is
289:02 google cloud's identity as a service
289:04 solution and it allows you to create and
289:07 manage users and groups within google
289:09 cloud now if i was signed into cloud
289:12 identity i would have a whole bunch of
289:14 options here but since this is a
289:16 personal account i cannot create or
289:18 manage any users as well i do not have a
289:21 domain tied to any cloud identity
289:24 account or any g suite account
289:27 so just know that if you had cloud
289:29 identity or g suite set up you would
289:32 have a bunch of different options to
289:34 choose from in order to help you manage
289:36 your users and groups and here under
289:39 organization policies
289:41 i'm able to manage organization policies
289:44 but since i am not an organization
289:46 policy administrator and i don't have an
289:48 organization there's not much that i can
289:51 do here just know that when you have an
289:53 organization set up you are able to come
289:56 here in order to manage and edit your
289:59 organization policies now moving on to
290:02 quotas we went over this in a little bit
290:04 of detail in a previous lesson and again
290:08 this is to edit any quotas for any of
290:10 your services in case you need a limit
290:13 increase moving on to service accounts
290:16 i will be covering this topic in great
290:18 depth in a later lesson and we'll be
290:21 going through a hands-on demonstration
290:23 as well
290:25 as well now i know i haven't touched much on labels as of yet but know that
290:28 much on labels as of yet but know that labels are a key value pair that helps
290:30 labels are a key value pair that helps you organize and then filter your
290:33 you organize and then filter your resources based on their labels these
290:36 resources based on their labels these same labels are also forwarded to your
290:38 same labels are also forwarded to your billing system so you can then break
290:40 billing system so you can then break down your billing charges by label and
290:43 down your billing charges by label and you can also use labels based on teams
290:46 you can also use labels based on teams cost centers components and even
290:48 cost centers components and even environments
290:50 So for example, if I wanted to label my virtual machines by environment, I can simply use environment as the key, and as the value I can use anything from development to qa to testing to production. I could simply add this label across all the different environments, and later I'd be able to query based on these specific labels. Now, a good rule of thumb is to label all of your resources, so that you're able to find them and query them a lot easier.
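As a rough sketch of how that labeling and querying works from the command line: the instance name and zone below are made up for illustration, and the commands are printed rather than executed, since actually running them needs an authenticated gcloud SDK.

```shell
# Hypothetical instance name and zone; the label key/value follow the
# environment=development example from the lesson.
INSTANCE="bowtie-vm"
ZONE="us-central1-a"

# Attach a label at creation time.
echo "gcloud compute instances create ${INSTANCE} --zone=${ZONE} --labels=environment=development"

# Later, query resources by that label.
echo "gcloud compute instances list --filter=labels.environment=development"
```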
291:26 So moving forward, with any of the resources that you create, be sure to add some labels to give you maximum flexibility.
291:35 I'm going to discard these changes, and we're going to move on to Settings. We touched on settings in an earlier lesson with regards to projects, and so here I could change the project name; it also gives me the project ID and the project number, and I'm able to migrate or shut down the project.
291:52 Now, when it comes to Access Transparency, this provides you with logs that capture the actions that Google personnel take when they're accessing your content for troubleshooting, so they're like Cloud Audit Logs, but for Google support. In order to enable Access Transparency for your Google Cloud organization, your Google Cloud account must have a Premium support plan, or a minimum of a $400-a-month support plan, and because I don't have this, I wouldn't be able to enable Access Transparency. Although Access Transparency is not on the exam, this is a great feature to know about in case you are working in any bigger environments that have these support plans and where compliance is of the utmost importance.
292:42 Now moving into Privacy & Security, this is where Google supplies all of its Google Cloud clients the compliance information they need in order to meet regulations across the world and across various industries, such as healthcare and education. And because Google has a broad base in Europe, Google provides capabilities and contractual commitments created to meet data protection recommendations, which is why you can see here EU Model Contract Clauses and EU representative contacts as well.
293:18 Under Transparency and Control, I'm able to disable the usage data that Google collects in order to provide better data insights and recommendations; this is done at the project level. As well, I have the option of going over to my billing account and selecting a different billing account, linked to some other projects, to get recommendations on.
293:42 Continuing forward, Identity-Aware Proxy is something that I will be covering in a later lesson, so I won't be getting into any detail about it right now. What I really wanted to dig into is Roles. Now, this may look familiar, as I touched on it very briefly in a previous lesson, and here's where I can create roles. I can create custom roles from different selections, and here I have access to all the permissions. If I wanted to, I can filter down by the different types, the names, the permissions, even the status. So let's say I was looking for a specific permission, and I'm looking at all the permissions for projects: these filters allow me to get really granular, so I can find the exact permission I'm looking for. You can get really granular with regards to your permissions and create roles that are custom to your environment.
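As a sketch of the same idea from the command line, a custom role can be built from an explicit permission list with gcloud iam roles create. The role ID, title, and permission list here are illustrative, not from the video, and the command is printed rather than run, since executing it needs an authenticated gcloud SDK.

```shell
# Hypothetical custom role granting just two Cloud Storage permissions.
PROJECT_ID="project-tony-286016"   # assumed spelling of the project ID spoken in the video
ROLE_ID="bucketViewer"             # made-up role ID

echo "gcloud iam roles create ${ROLE_ID} --project=${PROJECT_ID} --title=BucketViewer --permissions=storage.buckets.get,storage.buckets.list"
```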
294:40 Now moving on to Audit Logs, here I can enable the audit logs without having to use a specific policy, by simply clicking on the default audit config, and here I can turn on and off all the selected logging, as well as add any exempted users. Now, I don't recommend that you turn these on, as audit logging can create an extremely large amount of data and can quickly blow through all of your $300 credit, so I'm going to keep that off and move back to the main screen of the audit logs. Here as well, I'm able to get really granular about what I want to log.
295:18 Now, quickly touching on audit logs in the command line, I wanted to open up Cloud Shell and show you an example of how I can edit the policy in order to enable audit logging. I'm just going to make this a little bit bigger.
295:32 And I'm going to paste in my command, gcloud projects get-iam-policy, with the project ID, which is project tony 286016, and I'm going to just hit enter.
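Written out, the command looks like this. The project ID is an assumed spelling of what's spoken in the video, and the command is printed here rather than executed, since running it requires an authenticated gcloud SDK:

```shell
PROJECT_ID="project-tony-286016"  # assumed spelling of the spoken project ID

# Print the project's IAM policy (YAML output by default).
echo "gcloud projects get-iam-policy ${PROJECT_ID}"
```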
295:47 286016 and i'm gonna just hit enter and as you can see here this is my
295:49 and as you can see here this is my current policy and as well as expected
295:52 current policy and as well as expected audit logs are not enabled due to the
295:55 audit logs are not enabled due to the fact that the audit config field is not
295:58 fact that the audit config field is not present so in order for me to enable the
296:00 present so in order for me to enable the audit config logs i'm going to have to
296:03 audit config logs i'm going to have to edit the policy and so the easiest way
296:05 edit the policy and so the easiest way for me to do that is for me to run the
296:07 for me to do that is for me to run the same command and output it to a file
296:10 same command and output it to a file where i can edit it and i'm going to
296:12 where i can edit it and i'm going to call this
296:14 call this new dash policy dot yaml and so now that
296:17 new dash policy dot yaml and so now that my policy has been outputted to this
296:19 my policy has been outputted to this file i'm going to now go into the editor
296:22 file i'm going to now go into the editor and as you can see my new policy.yaml is
296:25 and as you can see my new policy.yaml is right here and so for me to enable the
296:27 right here and so for me to enable the autoconfig logs i'm going to simply
296:30 autoconfig logs i'm going to simply append it to the file and then i'm going
296:32 append it to the file and then i'm going to go over here to the top menu
296:35 to go over here to the top menu and click on file and save and so now
296:38 and click on file and save and so now for me to apply this new policy i'm
296:40 for me to apply this new policy i'm going to go back over to the terminal
296:42 going to go back over to the terminal and now i'm going to paste in the
296:44 and now i'm going to paste in the command
296:45 command gcloud projects set dash
296:48 gcloud projects set dash iam-policy with the project id and the
296:51 iam-policy with the project id and the file name new dash policy dot yaml and
296:54 file name new dash policy dot yaml and i'm just going to hit enter
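The block appended to new-policy.yaml and the apply step look roughly like this. The three log types shown enable admin and data access logging for all services, which matches the "enabled for all services" result; the project ID is an assumed spelling of the spoken one, and the set-iam-policy command is printed rather than run, since it needs an authenticated gcloud SDK.

```shell
# Append a top-level auditConfigs block to the exported policy file.
cat >> new-policy.yaml <<'EOF'
auditConfigs:
- auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: allServices
EOF

# Apply the edited policy (printed here; running it requires gcloud auth).
PROJECT_ID="project-tony-286016"
echo "gcloud projects set-iam-policy ${PROJECT_ID} new-policy.yaml"
```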
296:57 And as you can see, the audit log configs have been enabled for all services. Because this may take some time to reflect in the console, it will not show up right away, but either way, audit logs usually take up a lot of data and I don't want to blow through my $300 credit, so I'm going to disable them. The easiest way to do this is to output the policy to another file, edit it, and set it again. So I'm going to go ahead and do that: I'll first clear the screen, then paste in my command while outputting it to a new file called updated-policy.yaml, and hit enter. Now I'm going to go into the editor so I can edit the file.
297:40 The one thing I wanted to point out is that I could have overwritten the file new-policy.yaml, but if you look here in the updated policy, the etag is different than the etag in the old policy, and this let me highlight etags when it comes to editing and creating policies. When editing policies, make sure that the etag is correct; otherwise you will receive an error and not be able to set the new policy.
298:11 So going back to the updated policy file, I'm going to take out the audit log configs, leaving the auditConfigs field there, go to the menu, and click on File and then Save. Now I'm going to go back to the terminal and paste in the new command, and this will update my policy. As you can see, the audit config logs have been disabled and the policy has been updated. This is the same process that you can use when you want to update any part of the policy, whether that's your members or roles, or even adding conditions.
298:48 Now, moving on to the last item on the menu, which is Groups.
298:53 And as you can see here, because I do not have an organization, I'm not able to view any groups; if I did have an organization, I could manage my groups right here on this page.
299:06 Now, moving back over to IAM, I wanted to dig into policies in a little further detail. What we see here are the permissions and roles that have been granted to selected members in this specific project, which is Project Tony. Remember, an IAM policy is the total collection of members that have roles granted to them in what's known as a binding, and the binding is applied to that layer and all other layers underneath it. Since I'm at the project layer, this policy is inherited by all the resources underneath it. So just to verify through the command line, I'm going to open up Cloud Shell.
299:52 I'm going to paste in the command gcloud projects get-iam-policy with my project ID, and I'm going to hit enter. As you can see, the policy is a reflection of exactly what you see in the console: here's the service agent, which you will find here, the other two service accounts, which you will find above, as well as the tony bowtie ace Gmail account, and all the other roles that accompany those members.
300:21 So, as I mentioned earlier, I've gone ahead and created a new user, and for those who are following along, feel free to create a new Gmail user. Going ahead with this demonstration, the user I created is named Laura Delightful. Tony needed an extra hand and decided to bring her onto the team from another department. Unfortunately, in order for Laura to help Tony on the project, she needs access to this project, and as you can see, she doesn't have any access, so we're going to go ahead and change that and give her access.
300:59 I'm going to go back over to my open tab for tony bowtie ace, and we're going to give Laura permissions. I'm going to click on the Add button at the top of the page, and the prompt will ask me to add a new member, so I'm going to add Laura in here, and here she is. I'm going to select the role of Project Viewer, I'm not going to add any conditions, and I'm simply going to click on Save. The policy has been updated, and as you can see, Laura has been granted the role of Project Viewer. So I'm going to move over to the other open tab, where Laura's console is open, and simply do a refresh.
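For reference, the same Project Viewer grant could be made from the command line with a single binding. The member email is an assumed rendering of the spoken address, and the command is printed rather than run, since it needs an authenticated gcloud SDK:

```shell
PROJECT_ID="project-tony-286016"            # assumed spelling of the spoken project ID
MEMBER="user:lauradelightful@gmail.com"     # assumed rendering of the spoken address

echo "gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=${MEMBER} --role=roles/viewer"
```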
301:45 And now Laura has access to view all the resources within Project Tony. Laura is able to view everything in the project, but she isn't actually able to do anything. In order for Laura to get things done, a big part of her job is going to be creating files with new ideas for the fall/winter line of bow ties in 2021. Because Laura holds the Project Viewer role, she is able to see everything in Cloud Storage, but she is unable to create buckets, or to upload, edit, or delete any files or even folders. As you can see here, there is a folder marked bowtie inc fallwinter 2021 ideas, but Laura cannot create any new buckets, because she doesn't have the required permissions. As well, drilling down into this bucket, Laura is unable to create any folders, as explained earlier, and the same stands for uploading any files, so I'm going to cancel out of this.
302:45 In order to give Laura the proper permissions to do her job, we're going to give her the Storage Admin role. Moving back over to the open console for tony bowtie, I'm going to give Laura access by using the command line, so I'm going to go up to the top right and open up Cloud Shell. The command I need to run to give Laura the role of Storage Admin would be the following: gcloud projects add-iam-policy-binding, with the project ID, then --member with user, followed by a colon and then the username, which is Laura Delightful's Gmail address, then --role and the role, which is Storage Admin. I'm going to go ahead and hit enter.
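Spelled out, that command would look roughly like this. The member email is an assumed rendering of the spoken address, and roles/storage.admin is the predefined Storage Admin role; the command is printed rather than executed, since running it needs an authenticated gcloud SDK:

```shell
PROJECT_ID="project-tony-286016"            # assumed spelling of the spoken project ID
MEMBER="user:lauradelightful@gmail.com"     # assumed rendering of the spoken address
ROLE="roles/storage.admin"                  # predefined Storage Admin role

echo "gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=${MEMBER} --role=${ROLE}"
```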
303:34 And as you can see, it has been executed successfully, so if I do a refresh of the web page here, I'm going to be able to see the changes reflected in the console. After a refresh, you can see that Storage Admin has been added to the roles for Laura Delightful's Gmail account. So if I go over to the open tab where Laura has her console open, I can simply do a refresh, and if I go back to the home page for Cloud Storage, you can see that Laura now has the permissions to create a bucket. Laura also now has permissions to create new folders and to create, edit, and delete files, on top of being able to create new storage buckets.
304:20 And so that about wraps up this demonstration on getting hands-on with IAM in both the console and the command line. I hope that this demo has given you a bit more confidence working in the shell and running the commands needed to create new bindings and edit existing policies, and that it will get you comfortable for when you need to assign roles to new and existing users added to your GCP environment. So you can now mark this lesson as complete, and let's move on to the next one.
305:03 Welcome back. In this lesson, I'm going to take a deep dive into service accounts. Service accounts play a powerful part in Google Cloud and allow a different approach for application interaction with the resources in Google Cloud. Now, service accounts being both an identity and a resource can cause some confusion, so I really wanted to spend some time breaking it down for better understanding.
305:30 I'm first going to start off by explaining what exactly a service account is. A service account is a special kind of account that is used by an application or a virtual machine instance, and not a person. An application uses the service account to authenticate between the application and GCP services, so that the users aren't directly involved. In short, it is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs. This way, the service account is the identity of the service, and the service account's permissions control which resources the service can access. As a note, a service account is identified by its email address, which is unique to the account.
306:27 Now, service accounts come in three different flavors: user-managed, default, and Google-managed service accounts. When it comes to user-managed service accounts, these are service accounts that you create, and you're responsible for managing and securing them. By default, you can create up to 100 user-managed service accounts in a project, or you can request a quota increase in case you need more. When you create a user-managed service account in your project, it is you that chooses a name for the service account. This name appears in the email address that identifies the service account, which uses the following format seen here: the service account name, at the project ID, dot iam.gserviceaccount.com.
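As a sketch, creating a user-managed service account and the email it yields would look like this. The account name is made up, the project ID is an assumed spelling of the spoken one, and the command is printed rather than executed, since it needs an authenticated gcloud SDK:

```shell
PROJECT_ID="project-tony-286016"  # assumed spelling of the spoken project ID
SA_NAME="bowtie-app"              # hypothetical service account name

echo "gcloud iam service-accounts create ${SA_NAME} --display-name=bowtie-app"
# The resulting identity follows the format described above:
echo "resulting identity: ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```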
307:22 now moving on to the default service accounts
307:23 when you use some google cloud services
307:26 they create user managed service
307:28 accounts that enable the service to
307:31 deploy jobs that access other google
307:34 cloud resources these accounts are known
307:37 as default service accounts so when it
307:39 comes to production workloads google
307:42 strongly recommends that you create your
307:45 own user managed service accounts and
307:48 grant the appropriate roles to each
307:50 service account when a default service
307:52 account is created it is automatically
307:55 granted the editor role on your project
307:58 now following the principle of least
308:00 privilege google strongly recommends
308:04 that you disable the automatic role
308:06 grant
308:07 by adding a constraint to your
308:09 organization policy
308:11 or by revoking the editor role manually
308:14 the default service account will be
308:16 assigned an email address
308:18 following the format you see here
308:21 project id
308:22 at appspot.gserviceaccount.com
308:28 for any service accounts created by app engine and project number dash compute
308:32 at developer.gserviceaccount.com for compute engine
308:38 and so lastly when it
308:41 comes to google managed service accounts
308:44 these are created and managed by google
308:47 and they are used by google services the
308:49 display name of most google managed
308:50 service accounts
308:52 ends with a
308:55 gserviceaccount.com address now some of
308:57 these service accounts are visible but
308:59 others are hidden so for example
309:03 google api service agent is a service
309:05 account named with an email address that
309:07 uses the following format
309:12 project number at cloudservices.gserviceaccount.com
309:16 and this runs internal google processes on your behalf and this is just one
309:18 example of the many google managed
309:20 services that run in your environment
309:23 and just as a warning it is not
309:25 recommended to change or revoke the
309:28 roles that are granted to the google api
309:31 service agent or to any other google
309:33 managed service accounts for that matter
309:35 if you change or revoke these roles
309:38 some google cloud services will no
309:41 longer work
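[Editor's sketch] The default and google-managed address formats covered in this part of the lesson can be derived the same way; the project id and project number below are hypothetical placeholders:

```shell
# hypothetical project identifiers
PROJECT_ID="cat-bowties-fall-2021"
PROJECT_NUMBER="123456789012"

# app engine default service account
APPENGINE_SA="${PROJECT_ID}@appspot.gserviceaccount.com"
# compute engine default service account
COMPUTE_SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
# google apis service agent (google-managed, do not revoke its roles)
API_AGENT="${PROJECT_NUMBER}@cloudservices.gserviceaccount.com"

printf '%s\n' "$APPENGINE_SA" "$COMPUTE_SA" "$API_AGENT"
```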
309:43 now when it comes to authentication for service accounts they
309:45 authenticate using service account keys
309:48 so each service account is associated
309:51 with two sets of public and private rsa
309:54 key pairs that are used to authenticate
309:57 to google they are the google managed
309:59 keys and the user managed keys with the
310:02 google managed keys google stores both
310:05 the public and private portion of the
310:08 key rotates them regularly and the private
310:11 key is always held in escrow and is
310:14 never directly accessible iam provides
310:17 apis to use these keys to sign on behalf
310:21 of the service account now when using
310:24 user managed key pairs this implies that
310:27 you own both the public and private
310:29 portions of a key pair you can create
310:32 one or more user managed key pairs also
310:35 known as external keys that can be used
310:38 from outside of google cloud google only
310:41 stores the public portion of a user
310:43 managed key
310:44 so you are responsible for the security
310:48 of the private key as well as the key
310:51 rotation private keys cannot be
310:53 retrieved by google so if you're using a
310:56 user managed key
310:58 please be aware that if you lose your
311:00 key your service account will
311:02 effectively stop working google
311:04 recommends storing these keys in cloud
311:07 kms for better security and better
311:10 management user managed keys are
311:12 extremely powerful credentials and they
311:15 can represent a security risk if they
311:18 are not managed correctly and as you can
311:20 see here a user managed key has many
311:23 different areas that need to be
311:25 addressed when it comes to key
311:27 management
311:29 management now when it comes to service account permissions
311:31 account permissions in addition to being an identity a
311:33 in addition to being an identity a service account is a resource which has
311:36 service account is a resource which has im policies attached to it and these
311:39 im policies attached to it and these policies determine who can use the
311:42 policies determine who can use the service account so for instance
311:44 service account so for instance lark can have the editor role on a
311:46 lark can have the editor role on a service account and laura can have a
311:49 service account and laura can have a viewer role on a service account so this
311:52 viewer role on a service account so this is just like granting roles for any
311:54 is just like granting roles for any other google cloud resource just as a
311:57 other google cloud resource just as a note
311:58 note the default compute engine and app
312:00 the default compute engine and app engine service accounts are granted
312:02 engine service accounts are granted editor roles on the project when they
312:05 editor roles on the project when they are created so that the code executing
312:08 are created so that the code executing in your app or vm instance has the
312:11 in your app or vm instance has the necessary permissions now you can grant
312:14 necessary permissions now you can grant the service account user role at both
312:17 the service account user role at both the project level for all service
312:19 the project level for all service accounts in the project or at the
312:21 accounts in the project or at the service account level now granting the
312:24 service account level now granting the service account user role to a user for
312:27 service account user role to a user for a project
312:28 a project gives the user access to all service
312:31 gives the user access to all service accounts in the project
312:33 accounts in the project including service accounts that may be
312:35 including service accounts that may be created in the future granting the
312:37 created in the future granting the service account user role to a user for
312:41 service account user role to a user for a specific
312:42 a specific service account gives a user access to
312:45 service account gives a user access to only that service account so please be
312:48 only that service account so please be aware when granting the service account
312:50 aware when granting the service account user role to any member now users who
312:54 user role to any member now users who are granted the service account user
312:55 are granted the service account user role on a service account can use it to
312:59 role on a service account can use it to indirectly access
313:01 indirectly access all the resources to which the service
313:03 all the resources to which the service account has access
313:05 account has access when this happens the user impersonates
313:08 when this happens the user impersonates the service account to perform any tasks
313:12 the service account to perform any tasks using its granted roles and permissions
313:15 using its granted roles and permissions and is known as service account
313:17 and is known as service account impersonation now when it comes to
313:19 impersonation now when it comes to service account permissions there is
313:22 service account permissions there is also another method use called access
313:24 also another method use called access scopes service account scopes are the
313:28 scopes service account scopes are the legacy method of specifying permissions
313:31 legacy method of specifying permissions for your instance
313:33 for your instance and they are used in substitution of iam
313:36 and they are used in substitution of iam roles these are used specifically for
313:39 roles these are used specifically for default or automatically created service
313:42 default or automatically created service accounts based on enabled apis now
313:45 accounts based on enabled apis now before the existence of iam roles access
313:49 before the existence of iam roles access scopes were the only way for granting
313:52 scopes were the only way for granting permissions to service accounts and
313:54 permissions to service accounts and although they are not the primary way of
313:56 although they are not the primary way of granting permissions now
313:58 granting permissions now you must still set service account
314:00 you must still set service account scopes when configuring an instance to
314:04 scopes when configuring an instance to run as a service account however when
314:07 run as a service account however when you are using a custom service account
314:09 you are using a custom service account you will not be using scopes rather you
314:12 you will not be using scopes rather you will be using iam roles
314:15 will be using iam roles so when you are using a default service
314:17 so when you are using a default service account for your compute instance it
314:20 account for your compute instance it will default to using scopes instead of
314:23 will default to using scopes instead of iam roles and so i wanted to quickly
314:26 iam roles and so i wanted to quickly touch on how service accounts are used
314:29 touch on how service accounts are used now one way of using a service account
314:31 now one way of using a service account is to attach this service account to a
314:34 is to attach this service account to a resource
314:35 resource so if you want to start a long-running
314:37 so if you want to start a long-running job that authenticates as a service
314:40 job that authenticates as a service account you need to attach a service
314:42 account you need to attach a service account to the resource that will run
314:45 account to the resource that will run the job and this will bind the service
314:48 the job and this will bind the service account to the resource now the other
314:50 account to the resource now the other way of using a service account is
314:52 way of using a service account is directly impersonating a service account
314:55 directly impersonating a service account which i had explained a little bit
314:57 which i had explained a little bit earlier so once granted they require
314:59 earlier so once granted they require permissions a user or a service can
315:02 permissions a user or a service can directly impersonate the identity of a
315:06 directly impersonate the identity of a service account in a few common
315:08 service account in a few common scenarios you can impersonate the
315:10 scenarios you can impersonate the service account without requiring the
315:12 service account without requiring the use of a downloaded external service
315:15 use of a downloaded external service account key as well a user may get
315:18 account key as well a user may get artifacts signed by the google managed
315:20 artifacts signed by the google managed private key of the service account
315:23 private key of the service account without ever actually retrieving a
315:26 without ever actually retrieving a credential for the service account and
315:28 credential for the service account and this is an advanced use case and is only
315:31 this is an advanced use case and is only supported for programmatic access now
315:34 now although i'm going to be covering
315:36 best practices
315:37 at the end of this section i wanted to
315:40 go over some best practices for service
315:42 accounts specifically so you should
315:44 always look at auditing the service
315:46 accounts
315:47 and their keys
315:49 using either the serviceaccounts.keys.list method
315:54 or the logs viewer page in the console
315:57 now if your service accounts don't need
315:59 external keys
316:01 you should definitely delete them you
316:03 should always grant the service account
316:05 only the minimum set of permissions
316:08 required to achieve the goal
316:10 service accounts should also be created
316:13 for each specific service with only the
316:16 permissions required for that service
316:19 and finally when it comes to
316:21 implementing key rotation you should
316:23 take advantage of the iam service
316:25 account api to get the job done
316:28 account api to get the job done and so that's all i have for this lesson on
316:30 that's all i have for this lesson on service accounts
316:32 service accounts so you can now mark this lesson as
316:34 so you can now mark this lesson as complete
316:35 complete and please join me in the next one where
316:37 and please join me in the next one where we go hands-on in the console
316:40 we go hands-on in the console [Music]
316:44 [Music] welcome back
316:45 welcome back so in this demonstration i'm going to
316:47 so in this demonstration i'm going to take a hands-on tour diving through
316:49 take a hands-on tour diving through various aspects of working with both
316:52 various aspects of working with both default and custom-made service accounts
316:55 default and custom-made service accounts we're going to start off fresh
316:56 we're going to start off fresh observing a new service account being
316:59 observing a new service account being automatically created along with viewing
317:02 automatically created along with viewing scopes observing how to edit them and
317:05 scopes observing how to edit them and creating custom service accounts that
317:07 creating custom service accounts that get a little bit more granular with the
317:09 get a little bit more granular with the permissions assigned so with that being
317:12 permissions assigned so with that being said let's dive in so as you can see
317:14 so as you can see here from the top right hand corner that
317:17 i am logged in under tony bowtie ace
317:20 gmail.com and looking over here from the
317:22 top drop down menu you can see that i am
317:25 in the project of cat bow ties fall 2021
317:29 and this is a brand new project that i
317:31 had created specifically for this demo
317:34 and so i currently have no resources
317:37 created along with no apis enabled so
317:40 now i want to navigate over to iam so
317:43 i'm going to go up to the left hand
317:44 corner to the navigation menu
317:46 and i'm going to go to iam & admin
317:49 and over to iam and as expected i have
317:52 no members here other than myself tony
317:55 bowtie ace gmail.com with no other
317:58 members and if i go over here to the
318:00 left hand menu under service accounts
318:03 you can see that i have no service
318:05 accounts created so now in order to
318:07 demonstrate a default service account
318:10 i'm going to go over to the navigation
318:12 menu and go into compute engine and as
318:15 you can see the compute engine api is
318:18 starting up and so this may take a
318:20 couple minutes to get ready okay and the
318:22 compute engine api has been enabled so
318:25 now if i go back over to iam to take a
318:29 look at my service accounts as expected
318:32 i have my compute engine default service
318:34 account now again i did not create this
318:37 manually this service account was
318:40 automatically created when i had enabled
318:42 the compute engine api along with the
318:45 google apis service agent and the compute
318:48 engine service agent and the same would
318:50 happen with other various apis that are
318:53 enabled as well and so now that i have
318:56 my default service account i want to go
318:58 back over to compute engine
319:00 and i'm going to go ahead and create a
319:03 vm instance so i'm going to just click
319:05 on create
319:07 i'm going to keep everything as the
319:08 default except i'm going to change the
319:11 machine type from an e2 medium to an e2
319:14 micro and so now i'm going to scroll
319:16 down to where it says identity and api
319:20 access
319:21 now here under service account you can
319:23 see that the compute engine default
319:26 service account has been highlighted and
319:28 this is because i don't have any other
319:30 service accounts that i am able to
319:33 select from now when a default service
319:35 account is the only service account you
319:37 have access to
319:39 access scopes are the only permissions
319:41 that will be available for you to select
319:44 from now remember access scopes are the
319:46 legacy method of specifying permissions
319:49 in google cloud now under access scopes
319:52 i can select from allow default
319:54 access
319:55 allow full access to all cloud apis
319:59 and set access for each api and so i
320:02 want to click on set access for each api
320:04 for just a second and so as you can see
320:07 here i have access to set permissions
320:10 for each api the difference being that
320:13 i only have access to primitive
320:16 roles and so now that i'm looking to
320:18 grant access to my service account i'm
320:21 going to grant access to cloud storage
320:24 in a read-only capacity
320:26 and so now that i have granted
320:27 permissions for my service account i'm
320:30 going to now create my instance by
320:32 simply clicking on the create button
320:34 and so now that my instance is created
320:37 i want to head over to cloud storage to
320:40 see exactly what my service account will
320:42 have access to so i'm going to go over
320:44 to my navigation menu and scroll down
320:47 and click on storage and as you can see
320:49 here i have created a bucket in advance
320:52 called bow tie inc fall winter 2012
320:55 designs and this is due to bow tie inc
320:58 bringing back some old designs from 2012
321:01 and making them relevant for today and
321:04 within that bucket there are a few files
321:07 of different design ideas that were best
321:09 sellers back in 2012
321:11 that tony bowtie wanted to re-release
321:15 for the fall winter 2012 collection and
321:18 so with the new granted access to my
321:20 default service account i should have
321:22 access to view these files so in order
321:25 to test this i'm going to go back over
321:28 to the navigation menu and go back to
321:30 compute engine and i'm going to ssh into
321:33 my instance
321:38 and so now that i've sshed into my
321:40 virtual machine
321:43 i wanted to first check to see who it is
321:45 that's running the commands is it my
321:47 user account or is it my service account
321:49 and so i'll be able to do this very
321:52 easily by checking the configuration and
321:54 i can do this by running the command
321:57 gcloud config list and as you can see my
322:00 current configuration is showing that my
322:02 service account is the member that is
322:04 being used to run this command in the
322:08 project of cat bow ties fall 2021 now if
322:11 i wanted to run any commands using my
322:13 tony bowtie ace
322:16 gmail.com user account i can simply run
322:19 the command gcloud auth login and it
322:21 will bring me through the login process
322:24 that we've seen earlier on in the course
322:26 for my tony bowtie ace
322:29 gmail.com account but now since i'm
322:31 running all my commands using my service
322:33 account from this compute engine
322:36 instance i'm using the permissions
322:38 granted to that service account that we
322:41 saw earlier and so since i set the
322:43 storage scope for the service account to
322:46 read only we should be able to see the
322:48 cloud storage bucket and all the files
322:51 within it by simply running the gsutil
322:53 command so to list the contents of the
322:56 bucket i'm going to type in the command
322:59 gsutil ls for list and the name of the
323:01 bucket and the syntax for that would be
323:02 gs
323:05 colon forward slash forward slash
323:07 followed by the name of the bucket which
323:09 would be bowtie inc
323:11 fw2012
323:13 designs
323:15 and as you can see we're able to view
323:18 all the files that are in the bucket and
323:20 so it is working as expected and so now
323:22 because i've only granted viewing
323:25 permissions for this service account i
323:28 cannot create any files due to the lack
323:30 of permissions so for instance if i was
323:32 to create a file
323:35 using the command touch
323:37 file1 i have now created that file
323:40 here on the instance so now i want to
323:43 copy this file to my bucket and so i'm
323:46 going to run the gsutil command
323:48 cp for copy
323:50 file1 which is the name of my file and
323:52 gs
323:54 colon forward slash forward slash along
323:57 with the name of the bucket which is bow
323:59 tie inc fw
324:00 2012
324:03 designs and as expected i am getting an
324:06 access denied exception with a prompt
324:08 telling me that i have insufficient
324:10 permissions
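[Editor's sketch] The VM session just walked through boils down to four commands; the bucket name is a hypothetical hyphenated rendering of the spoken name, so substitute your own:

```shell
# run inside the instance's ssh session
gcloud config list                              # active account is the service account
gsutil ls gs://bowtie-inc-fw2012-designs        # read succeeds with the read-only scope
touch file1                                     # create a local file on the instance
gsutil cp file1 gs://bowtie-inc-fw2012-designs  # fails: AccessDeniedException, scope is read-only
```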
324:13 you how to create a default service account and give it permissions using
324:15 account and give it permissions using access scopes let's now create a custom
324:18 access scopes let's now create a custom service account and assign it proper
324:20 service account and assign it proper permissions
324:21 permissions to not only read files from cloud
324:23 to not only read files from cloud storage but be able to write files to
324:26 storage but be able to write files to cloud storage as well so i'm going to
324:28 cloud storage as well so i'm going to now close down this tab
324:31 now close down this tab and i'm going to go back over to the
324:32 navigation menu
324:34 and go back to iam where we can go in
324:36 and create our new service account
324:39 under service accounts
324:41 and so as you can see here this is the
324:43 default service account and since we
324:45 want to create a custom one i'm going to
324:47 go ahead and go up to the top here and
324:49 click on the button that says create
324:51 service account
324:53 and so now i'm prompted to enter some
324:55 information with regards to the details of
324:58 this service account including the
324:59 service account name the account id
325:02 along with a description and so i'm
325:04 going to call this service account
325:07 sa-bowtie-demo and as you can
325:10 see it automatically populated the
325:13 service account id and i'm going to give
325:15 this service account a description
325:18 storage read write access
325:20 and i'm going to click on the button
325:21 create
325:23 and so now i've been prompted to grant
325:25 permissions to the service account and i
325:27 can do that by simply clicking on the
325:29 drop-down and selecting a role but i'm
325:32 looking to get a little bit more
325:33 granular and so i'm going to simply type
325:36 in storage and as you can see i'm coming
325:38 up with some more granular roles as
325:40 opposed to the primitive roles that i
325:43 only had access to prior to the search
325:45 so i'm going to click on storage object
325:47 viewer for read access to cloud storage
325:50 i'm not going to add any conditions and
325:52 i'm going to add another role and this
325:54 time
325:55 i'm going to add storage object creator
325:58 and so those are all the permissions i
326:00 need for read write access to cloud
326:02 storage and so now i can simply click on
326:05 continue and so now i'm being prompted
326:08 to add another user to act as a service
326:10 account and this is what we discussed in
326:12 the last lesson about service accounts
326:15 being both a member and a resource now
326:18 notice that i have an option for both
326:20 the service account users role and the
326:22 service account admins role now as
326:25 discussed earlier the service account
326:27 admins role has the ability to grant
326:29 other users the role of service account
326:32 user and so because we don't want to do
326:35 that i'm going to leave both of these
326:36 fields blank
326:38 and simply click on done now i know in
326:40 the last lesson i talked about creating
326:43 custom keys for authentication
326:46 in case you're hosting your code
326:48 on-premises or on another cloud and so if i
326:51 wanted to do that i can simply go to the
326:53 actions menu and click on create key and
326:56 it'll give me the option of creating a
326:59 private key either using json or p12
327:02 format and because i'm not creating any
327:04 keys i'm going to simply click on cancel
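for reference, that same key creation can be done from the command line as well. this is only a sketch — the key file path is arbitrary, and the service account address assumes the sa-bowtie-demo name from this demo plus a placeholder project id:

```shell
# Sketch: create a JSON private key for the demo service account.
# PROJECT_ID is a placeholder; replace it with your own project ID.
gcloud iam service-accounts keys create ~/sa-bowtie-demo-key.json \
    --iam-account=sa-bowtie-demo@PROJECT_ID.iam.gserviceaccount.com
```

keep in mind that anyone holding this key file can authenticate as the service account, so treat it like a password.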
327:07 and so in order for me to apply this
327:09 service account
327:10 to our vm instance i'm going to now go
327:13 back over to the navigation menu and go
327:16 back into compute engine and so now in
327:18 order for me to change this service
327:20 account that's currently assigned to
327:23 this instance i'm going to go ahead and
327:25 check off this instance and click on
327:27 stop
327:28 now please note that in order to change
327:31 service accounts on any instance you
327:34 must stop it first before you can edit
327:36 the service account and so now that the
327:38 instance has stopped i'm going to drill
327:40 down into this instance instance-1
327:43 and i'm going to click on edit
327:45 now i'm going to scroll down to the
327:47 bottom
327:48 and at the bottom you will find the
327:50 service account field and clicking on
327:52 the drop-down i'll find my custom
327:54 service account sa-bowtie-demo so i
327:57 want to select this and simply click on
327:59 save and so now that i've selected my
328:02 new service account to be used in this
328:04 vm instance i can now start up the
328:07 instance again to test out the
328:09 permissions that were granted
328:11 and so just as a quick note here i
328:13 wanted to bring your attention to the
328:15 external ip whenever stopping and
328:18 starting an instance with an ephemeral
328:20 ip in other words one that is not assigned a
328:23 static ip your vm instance will receive
328:26 a new ip address and i'll be getting
328:28 into this in much deeper detail in the
328:31 compute engine section of the course and
328:34 so now i'm going to ssh into this
328:36 instance
328:37 now i'm going to run the same gsutil
328:40 command that i did previously to list
328:42 all the files in the bucket so i'm going
328:45 to run the command gsutil ls for list
328:48 followed by gs://
328:50 and the bucket name
328:52 bowtie inc fw 2012 designs
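written out as a single command it looks like this — the hyphenated bucket name is an assumption based on how it is read aloud in the video:

```shell
# Sketch: list all objects in the demo bucket (bucket name assumed).
gsutil ls gs://bowtie-inc-fw-2012-designs
```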
328:57 and as you can see i'm able to read all
329:00 the files in the bucket now the
329:02 difference in the permissions granted
329:04 for the service account is that i'm able
329:06 to write files to cloud storage and so
329:10 in order to test that i'm going to use
329:12 the touch command again and i'm going to
329:14 name the file file2 and so now i'm going
329:17 to copy this file to the cloud storage
329:20 bucket by using the command gsutil cp
329:25 file2 and the bucket name
329:27 gs://
329:30 bowtie inc fw 2012 designs
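put together, the two commands look roughly like this — again, the hyphenated bucket name is an assumption based on how it is spoken:

```shell
# Sketch: create an empty local file, then copy it into the bucket.
touch file2
gsutil cp file2 gs://bowtie-inc-fw-2012-designs
```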
329:35 and as expected the file copied over successfully as we
329:37 do have permissions to write to cloud
329:40 storage and so before i end this
329:42 demonstration i wanted to quickly go
329:45 over exactly how to create service
329:48 accounts
329:49 using the command line and so i'm going
329:50 to close down this tab and i'm going to
329:53 head up to the top right-hand corner and
329:55 activate my cloud shell i'm going to
329:58 make this window a little bit bigger and
330:00 so now in order to view the service
330:02 accounts i currently have
330:04 i'm going to run the command
330:06 gcloud
330:07 iam
330:08 service-accounts
330:12 list
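on one line, that command is:

```shell
# List the service accounts in the current project.
gcloud iam service-accounts list
```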
330:17 and so as expected the compute engine default service account along with the
330:20 custom service account that i created
330:22 earlier called sa-bowtie-demo is now
330:25 displaying and in order to just verify
330:28 that i'm going to go over to iam
330:30 under service accounts and as you can
330:33 see it is reflecting exactly the same in
330:35 the console so now in order for me to
330:37 create a new service account using the
330:40 command line i'm going to run the
330:42 command
330:43 gcloud iam service-accounts create
330:47 and the name of the service account
330:49 which i'm going to call sa-tony-bowtie
330:52 along with the display name
330:55 sa-tony-bowtie as well and i'm going to hit
330:57 enter
330:59 and my service account has been created
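the full command, written out, follows this general shape:

```shell
# Sketch: create the service account with a matching display name.
gcloud iam service-accounts create sa-tony-bowtie \
    --display-name="sa-tony-bowtie"
```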
331:01 so now if i run the command gcloud iam
331:04 service-accounts list
331:06 i should see my new service account and
331:08 as well if i did a refresh here on the
331:11 console i can see that it is reflecting
331:14 the same so now that we've created our
331:17 new service account we need to assign
331:19 some permissions to it in order for us
331:22 to be able to use it and so if i go over
331:24 here to iam in the console i can see
331:27 here that my service account has not
331:29 been assigned any permissions and so in
331:31 order to do that i am going to simply
331:34 run the command
331:35 gcloud projects
331:37 add-iam-policy-binding
331:40 so we're adding a policy binding and
331:42 then the name of the project cat bowties
331:45 fall 2021 we need to add the member
331:48 which is the new service account email
331:50 address along with the role of storage
331:53 object viewer i'm going to hit enter
331:56 and as you can see
331:57 my member sa-tony-bowtie has been
332:00 assigned the storage object viewer role
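the full binding command looks roughly like this — the project id is a placeholder here, and the member email is derived from the service account name and project:

```shell
# Sketch: grant Storage Object Viewer on the project to the new
# service account (PROJECT_ID is a placeholder).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:sa-tony-bowtie@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```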
332:03 and so if i wanted to grant some other
332:05 roles to the service account i can do
332:07 that as well and so if i did a refresh
332:09 here i can see that the console reflects
332:12 exactly the same and so in order for me
332:15 to use this account in my instance i'm
332:17 going to first have to stop my instance
332:20 attach my service account and then start
332:22 up my instance again so i'm going to go
332:24 over to my cloud shell i'm just going to
332:26 clear the screen and i'm going to paste
332:28 in the command gcloud compute instances
332:31 stop the name of the instance along with
332:34 the zone and now that the instance has
332:36 stopped i can now add my service account
332:39 to the instance and so i'm going to use
332:42 the command gcloud compute instances
332:45 set-service-account instance-1
332:48 along with the zone and the service
332:50 account email address i'm going to go
332:52 ahead and hit enter
332:55 and it has now been successfully added
332:57 and so now that that's done i can now
332:59 start up the instance by using the
333:01 command gcloud compute instances start
333:04 along with the instance name and the
333:06 zone and so now if i go over to my
333:09 navigation menu and go over to compute
333:11 engine and drill down on the instance if
333:14 i scroll down to the bottom
333:16 i'll be able to see that my new service
333:19 account has been added
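the whole stop, set, and start sequence looks roughly like this — the zone and project id are placeholders, not values confirmed in the demo:

```shell
# Sketch: swap the service account on a stopped instance, then restart it.
gcloud compute instances stop instance-1 --zone=ZONE
gcloud compute instances set-service-account instance-1 \
    --zone=ZONE \
    --service-account=sa-tony-bowtie@PROJECT_ID.iam.gserviceaccount.com
gcloud compute instances start instance-1 --zone=ZONE
```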
333:20 and so this is a great demonstration for
333:23 when you want to add different service
333:25 accounts for your different applications
333:28 on different instances or even on
333:30 different resources and so that's pretty
333:32 much all i wanted to cover in this
333:34 demonstration so you can now mark this
333:36 lesson as complete and let's move on to
333:39 the next one
333:45 welcome back in this lesson i'm going to dive into
333:47 cloud identity google's identity as a
333:50 service offering for google cloud that
333:53 maximizes end-user efficiency protects
333:56 company data
333:58 and so much more
334:00 now cloud identity as i said before is
334:02 an identity as a service solution that
334:05 centrally manages users and groups this
334:08 would be the sole system for
334:10 authentication and it provides a
334:12 single sign-on experience for all
334:15 employees of an organization to be used
334:19 for all your internal and external
334:21 applications cloud identity also gives
334:24 you more control over the accounts that
334:27 are used in your organization for
334:29 example if developers in your
334:31 organization use personal accounts such
334:34 as gmail accounts those accounts are
334:36 outside of your control
334:38 so when you adopt cloud identity you can
334:41 manage access and compliance across all
334:44 the users in your domain now when you
334:47 adopt cloud identity you create a cloud
334:50 identity account for each of your users
334:53 and groups you can then use iam to
334:55 manage access to google cloud resources
334:58 for each cloud identity account and you
335:01 can also configure cloud identity to
335:04 federate identities between google and
335:07 other identity providers such as active
335:10 directory and azure active directory and
335:13 i'll be getting more into that a little
335:15 bit later
335:17 so now when it comes to cloud identity
335:19 it gives you so much more than just user
335:23 and group management it provides a slew
335:25 of features such as device management
335:29 security
335:31 single sign-on
335:33 reporting
335:34 and directory management
335:36 and i will be diving deeper into each
335:39 one of these features of cloud identity
335:41 now starting with device management
335:44 this lets people in any organization
335:47 access their work accounts from mobile
335:49 devices while keeping the organization's
335:52 data more secure in today's world
335:55 employees want to access business
335:57 applications from wherever they are
336:00 whether at home at work
336:03 or even traveling
336:04 and many even want to use their own
336:06 devices which is also known as bring
336:09 your own device or byod for short using
336:13 mobile device management there are
336:15 several ways that you can provide the
336:18 business applications employees need
336:21 on their personal devices while
336:23 implementing policies that keep the
336:25 corporate data safe you can create a
336:27 whitelist of approved applications
336:30 where users can access corporate data
336:32 securely through those applications
336:35 you can enforce work profiles on android
336:37 devices and require managed
336:40 applications on ios devices policies can
336:44 also be pushed out to these devices to
336:46 protect corporate data and identities as
336:50 well as keeping an inventory of devices
336:53 with corporate data present then when
336:55 these devices are either no longer being
336:57 used for corporate use or are stolen the
337:00 device can then be wiped of all its
337:03 corporate data device management also
337:06 gives organizations the power to enforce
337:09 passcodes
337:10 as well as auditing now moving into the
337:13 security component of cloud identity
337:15 this is where two-step verification
337:18 steps in now as explained earlier
337:20 two-step verification or 2sv
337:24 is a security feature that requires
337:26 users to verify their identity through
337:29 something they know such as a password
337:32 plus something they have such as a
337:34 physical key or access code and this can
337:36 be anything from security keys to google
337:39 prompt
337:40 the authenticator app and backup codes
337:44 so cloud identity helps by applying
337:47 security best practices along with being
337:50 able to deploy
337:52 two-step verification for the whole
337:54 company along with enforcement controls
337:57 and can also manage passwords to make
338:00 sure they are meeting the enforced
338:03 password requirements automatically so
338:06 single sign-on is where users can access
338:10 many applications
338:11 without having to enter their username
338:14 and password for each application single
338:17 sign-on also known as sso can provide a
338:21 single point of authentication through
338:23 an identity provider also known as idp
338:27 for short you can set up sso using
338:30 google as an identity provider to access
338:33 a slew of third-party applications
338:36 as well as any on-premises or custom
338:39 in-house applications you can also
338:42 access a centralized dashboard for
338:44 conveniently accessing your applications
338:47 so now when lisa logs in with her
338:50 employee credentials she will then have
338:52 access to many cloud applications that
338:56 bowtie inc's it department has approved
338:59 through a catalog of sso applications
339:02 and this will increase both security and
339:05 productivity for lisa and bowtie inc as
339:09 lisa won't have to enter
339:11 a separate username and password for
339:13 separate applications now getting into
339:16 reporting this covers audit logs for
339:19 logins groups devices and even tokens
339:24 you're even able to export these logs to
339:27 bigquery for analysis
339:29 and then you can create reports from
339:31 these logs that cover security
339:34 applications and activity
339:36 now moving on to the last component of
339:38 cloud identity which is directory management
339:42 and this provides profile information
339:45 for users in your organization
339:47 email
339:48 and group addresses and shared external
339:51 contacts in the directory using google
339:54 cloud directory sync or gcds you can
339:58 synchronize the data in your google
340:00 account with your microsoft active
340:02 directory or ldap server gcds doesn't
340:06 migrate any content such as your email
340:09 your calendar events or your files to
340:12 your google account gcds is used to
340:15 synchronize all your users groups and
340:18 shared contacts to match the information
340:21 in your ldap server which could be your
340:23 active directory server or your azure
340:26 active directory domain now getting
340:28 deeper into google cloud directory sync
340:31 i'd like to touch on active directory
340:34 for just a minute now active directory
340:36 is a very common directory service
340:39 developed by microsoft and is a
340:42 cornerstone in most big corporate
340:44 on-premises environments it
340:46 authenticates and authorizes all users
340:49 and computers in a windows domain type
340:52 network assigning and enforcing security
340:56 policies for all computers and
340:58 installing or updating software as
341:01 necessary now as you can see here in the
341:03 diagram the active directory forest
341:06 contains the active directory domain
341:09 bowtieinc.co
341:11 and the active directory federation
341:13 services of bowtieinc.co where the
341:16 active directory forest is the
341:18 hierarchical structure for active
341:21 directory the active directory domain is
341:24 responsible for storing information
341:26 about members of the domain including
341:29 devices and users and it verifies their
341:32 credentials and defines their access
341:35 rights active directory federation
341:37 services or adfs
341:40 is a single sign-on service where
341:42 federation is the means of linking a
341:45 person's electronic identity and
341:47 attributes stored across multiple
341:50 distinct identity management systems so
341:54 you can think of it as a subset of sso
341:57 as it relates only to authentication
342:00 technologies used for federated identity
342:03 include some common terms that you may
342:05 hear me or others in the industry use
342:08 from time to time such as saml which
342:10 stands for security assertion markup
342:13 language oauth openid and even security
342:17 tokens such as simple web tokens json
342:20 web tokens and saml assertions and so
342:25 Now, when you have identities already in your on-premises environment that live in Active Directory, you need a way to tie these identities to the cloud, and here's where you would use Google Cloud Directory Sync to automatically provision users and groups from Active Directory to Cloud Identity or G Suite. 342:45 Google Cloud Directory Sync is a free Google-provided tool that implements the synchronization process and can be run on Google Cloud or in your on-premises environment. Synchronization is one-way, so that Active Directory remains the source of truth. 343:03 Cloud Identity or G Suite uses Active Directory Federation Services, or AD FS, for single sign-on, and any existing corporate applications and other SaaS services can continue to use your AD FS as an identity provider.
343:23 Now, I know this may be a review for some who are advanced in this topic, but for those who aren't, this is a very important topic to know, as Google Cloud Directory Sync is a big part of Cloud Identity and is a common way used in many corporate environments to sync Active Directory, or any other LDAP server, to Google Cloud, especially when you want to keep your Active Directory as the single source of truth. 343:51 And so that's pretty much all I wanted to cover when it comes to Cloud Identity and Google Cloud Directory Sync, so you can now mark this lesson as complete, and let's move on to the next one.
344:08 Welcome back. Now, I wanted to close out this section by briefly going over the best practices to follow when working with Identity and Access Management. 344:19 The phrase that was discussed earlier in this section, and that will continuously come up in the exam, is the principle of least privilege. Again, this is where you apply only the minimal access level required for what needs to be done, and this can be done using predefined roles, which are more granular than primitive roles, the very widely scoped roles that are applied to the whole project. 344:46 Roles should also be granted at the smallest scope necessary.
344:52 So, for instance, when assigning somebody the permissions needed for managing pre-existing compute instances, assigning the Compute Instance Admin role might be sufficient for what they need to do, as opposed to assigning them the broader Compute Admin role, which has full control of all Compute Engine resources. 345:12 Now, when it comes to child resources, they cannot restrict access granted on their parent, so always remember to check the policy granted on every resource and make sure you understand the hierarchical inheritance.
345:29 You also want to make sure that you restrict members' abilities to create and manage service accounts, as users who are granted the Service Account Actor role for a service account can access all the resources to which that service account has access. 345:48 Granting someone the Owner role should be done with caution, as they will have access to modify almost all resources project-wide, including IAM policies and billing; granting the Editor role might be sufficient for the needs of most when using primitive roles.
346:09 Now, when dealing with the resource hierarchy, to make it easy to structure your environment, you should look at mirroring your Google Cloud resource hierarchy to your organizational structure; in other words, the Google Cloud resource hierarchy should reflect how your company is organized. 346:27 You should also use projects to group resources that share the same trust boundary, as well as set policies at the organization level and at the project level rather than at the resource level. 346:42 Going back to what we discussed earlier about the principle of least privilege, you should use this guideline when granting IAM roles; that is, only give the least amount of access necessary to your resources, and when granting roles across multiple projects, it is recommended to grant them at the folder level instead of at the project level.
347:05 Now, diving back into service accounts: a separate trust boundary should always be applied for any given application; in other words, create a new service account when multiple components are involved in your application. 347:19 You also want to make sure that you don't delete any service accounts that are in use by running instances, as your application is likely to fail; schedule any deletions during planned downtime to avoid outages.
347:39 Earlier on in this section we discussed service account keys and how they interact with Google Cloud; keys are the main authentication mechanism for service accounts used outside of Google Cloud, so you want to make sure that any user-managed keys are rotated periodically to avoid their being compromised. 347:56 You can rotate a key by creating a new key, switching applications to use the new key, and then deleting the old key. Be sure to create the new key first, before deleting the old one, as deleting first will result in parts of, or even your entire, application failing.
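That ordering is the whole trick, so here is a minimal sketch of the create-switch-delete sequence using an in-memory stand-in. The `ServiceAccount` class and `app_config` dict are hypothetical, for illustration only; real rotation would go through the gcloud CLI or the IAM API.

```python
# Hypothetical stand-in for a service account's key store.
class ServiceAccount:
    def __init__(self):
        self.keys = {"key-old"}          # currently active key IDs

    def create_key(self, key_id):
        self.keys.add(key_id)

    def delete_key(self, key_id):
        self.keys.remove(key_id)

sa = ServiceAccount()
app_config = {"key": "key-old"}          # the app still uses the old key

# 1. create the new key FIRST
sa.create_key("key-new")
# 2. switch the application over to the new key
app_config["key"] = "key-new"
# 3. only then delete the old key -- at no point does the app
#    reference a key that no longer exists
sa.delete_key("key-old")

assert app_config["key"] in sa.keys      # the app's key is always valid
```

Deleting before switching would leave the application holding a key ID that no longer exists, which is exactly the outage described above.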
348:14 Also, when working with service account keys, it's always good practice to name your keys in a way that reflects their use and their permissions, so you know what they are for when you're looking at them. 348:26 When you are giving access to service accounts, you want to make sure that only those who truly need access are the ones that have it; others in your environment should be restricted, to avoid any misuse. 348:40 And when it comes to keeping your service account keys safe, I can't stress this enough: you never want to check these keys into source code or leave them in your Downloads directory, as this is a prime way of not only getting your keys compromised, but of exposing your entire environment to public access.
349:01 Now, we touched a bit on auditing, and while we haven't really gone into it in detail (we'll be doing that later on in the course), touching on best practices: you want to be sure to check your Cloud Audit Logs regularly and audit all IAM policy changes. Whenever you edit any IAM policy, a log is generated that records the change, so you always want to periodically check these logs to make sure there are no changes that fall outside your security scope. 349:32 You also want to check who has editing permissions on these IAM policies and make sure that those who hold them have the right to do so; the point being, you want to restrict who has the ability to edit policies. 349:50 Once these audit logs have been generated, you want to export them to Cloud Storage so that you can store them for long-term retention, as these logs are typically held for weeks, not years. 350:02 Getting back to service account keys:
350:06 service account key access should be periodically audited to catch any misuse or unauthorized access. 350:14 And lastly, audit logs themselves should be restricted to only those who need access; others should have no permissions to view them, which you can enforce by granting the log-viewing role only to those who need it.
350:30 Now, touching on policy management: you want to grant access to all projects in your organization by using an organization-level policy. You also want to grant roles to a Google group instead of to individual users, as it is easier to add or remove members from a Google group than to update an IAM policy. 350:53 And finally, when you need to grant multiple roles for a task, you should create a Google group, as it is a lot easier to grant the roles to that group and then add the users to the group, as opposed to adding the roles to each individual user.
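To see why groups are easier to manage, it helps to look at the shape of an IAM policy, which is a JSON document containing a list of bindings, each tying one role to a list of members. The sketch below uses a hypothetical group address; `roles/compute.instanceAdmin.v1` is a real predefined role, but this is a simplified local model of the policy document, not a call to the IAM API:

```python
# A simplified IAM policy document: one binding per role, each with
# a member list. Members are prefixed by type (user:, group:, etc.).
policy = {
    "bindings": [
        {
            "role": "roles/compute.instanceAdmin.v1",
            "members": ["group:ops-team@example.com"],  # hypothetical group
        },
    ]
}

def members_with_role(policy, role):
    # return the member list bound to a given role, if any
    for binding in policy["bindings"]:
        if binding["role"] == role:
            return binding["members"]
    return []

print(members_with_role(policy, "roles/compute.instanceAdmin.v1"))
```

Because the binding points at the group, onboarding or offboarding an engineer touches only the group's membership; the policy document itself never needs another edit.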
351:10 And so that's all I wanted to cover in this short yet very important lesson on best practices when it comes to IAM. Now, I know this is not the most exciting topic, but it becomes extremely necessary when you are managing users, groups, and policies in environments that require you to use IAM securely, so please keep these practices in mind whenever you are working in any environment, as they will help you grant the proper permissions. 351:43 So now I highly recommend that you take a break and grab a tea or coffee before moving on to the next section. For now, you can mark this lesson as complete, and whenever you're ready, please join me in the next section.
351:59 Welcome back. Now, I wanted to make this as easy as possible for those students who do not have a background in networking, or any networking knowledge in general, which is why I wanted to add this quick networking refresher to kick off the networking section of this course. So, with that being said, let's dive in. 352:20 Before the internet, computers were standalone and didn't have the capability to send emails, transfer files, or share any information. Fast forward some time, and people started to connect their computers together to share, and to be able to do the things that modern networks can do today. 352:42 Part of being on a network is being able to identify each computer, to know where to send and receive files.
352:49 This problem was solved by using an address to identify each computer on the network, much like humans use a street address to identify where they live so that mail and packages can be delivered to them. 353:01 An IP address is used to identify a computer or device on any network, so communication between machines is done by the use of an IP address: a numerical label assigned to each device connected to a computer network that uses the Internet Protocol, also known as IP for short, for communication.
353:26 Now, for this system to work, a communication system was put in place that defined how the network would function. This system was put together as a consistent model of protocol layers, defining interoperability between network devices and software, and standardizing how the different protocols in the stack would communicate. This stack is referred to as the Open Systems Interconnection model, though you may hear many refer to it as the seven-layer OSI model.
354:04 Now, this is not a deep-dive networking course, but I did feel the need to cover what is necessary for understanding the elements taught in this course. For those wanting to learn more about the OSI model and the layers within it, please check out the links that I have included in the lesson text below. 354:22 So, for this lesson and the next, I will be covering the specific layers, and their protocols, that are highlighted here, which will help you understand the networking concepts in this course with a bit better clarity. 354:37 I'll be covering layer 3, the network layer; layer 4, the transport layer; and layer 7, the application layer. First up, I will be covering layer 3, the network layer, along with the Internet Protocol.
354:54 Now, there are two versions of the Internet Protocol, and they are managed globally by the Regional Internet Registries, also known as the RIRs. The first one, IPv4, is the original version of the Internet Protocol, which first came on the scene in 1981. The second version is IPv6, a newer version designed to deal with the problem of IPv4 address exhaustion, meaning that the supply of usable IPs was slowly being used up; IPv6 was ratified as an Internet Standard in 2017. 355:28 I will be covering both versions of the Internet Protocol in a little bit of depth, so let's first dive into IP version 4.
355:36 depth so let's first dive into ipv version 4. so ipv4 can be read in a
355:39 version 4. so ipv4 can be read in a human readable notation represented in
355:43 human readable notation represented in dotted decimal notation consisting of
355:46 dotted decimal notation consisting of four numbers
355:47 four numbers each ranging from 0 to 255 separated by
355:52 each ranging from 0 to 255 separated by dots each part between the dots
355:55 dots each part between the dots represents a group of 8 bits also known
355:58 represents a group of 8 bits also known as an octet a valid range for an ip
356:01 as an octet a valid range for an ip address starts from 0.0.0.0
356:13 and this would give you a total number of over 4.2 billion ip addresses now
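Since each of the four octets is 8 bits, an IPv4 address is really a single 32-bit number, which is where the 4.2 billion figure comes from. A quick sketch of the conversion:

```python
def to_int(dotted: str) -> int:
    # pack four 0-255 octets into one 32-bit integer
    octets = [int(o) for o in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for o in octets:
        value = (value << 8) | o   # shift 8 bits per octet
    return value

print(to_int("0.0.0.0"))           # 0, the lowest address
print(to_int("255.255.255.255"))   # 4294967295, the highest
print(2 ** 32)                     # 4294967296 total addresses
```

Two raised to the 32nd power is 4,294,967,296, the "over 4.2 billion" total mentioned above.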
356:20 Now, this range was viewed as extremely large back then, until the number of available IP addresses began quickly dwindling due to the many IP-connected devices that we have today. 356:31 This is when a new addressing architecture was introduced, called classful addressing, where the address space was split into smaller ranges, and a range was originally assigned to you, when you needed IP addresses, by one of the registries noted before.
356:50 For any given IP address, there are typically two separate components: the first part of the address is used to identify the network that the address is a part of, and the part that comes afterwards is used to specify a particular host within that network. 357:06 Now, the first part was assigned to you and your business by the registries, and the second part was for you to do with as you'd like.
357:17 And so these IP addresses were assigned from the smaller ranges explained earlier, called classes. The first range of classes is Class A, and it started at 0.0.0.0 and ended at 127.255.255.255. 357:35 This would give a total of over 2.1 billion addresses across 128 different networks; Class A IP addresses can support over 16 million hosts per network. Those who were assigned addresses in this class had a fixed value for the first octet, while the second, third, and fourth octets were free for the business to assign as they chose. 357:57 Class A IP addresses were to be used by huge networks, like those deployed by internet service providers, and so when IPs started to dwindle, many companies returned these Class A network blocks to the registries to assist with extending addressing capacity.
358:18 The next range is Class B, and this is half the size of the Class A range. The Class B network range started at 128.0.0.0 and ended at 191.255.255.255, and carries a total of over 1 billion IP addresses across over 16,000 networks. The fixed values in this class are the first and second octets; the third and fourth octets can be done with as you like. IP addresses in this class were to be used for medium and large size networks in enterprises and organizations.
358:57 The next range is Class C, and this is half the size of the Class B range. The Class C network range starts at 192.0.0.0 and ends at 223.255.255.255, and carries a total of over half a billion addresses across over two million networks, with each network supporting up to 256 hosts. The fixed values in this class are the first, second, and third octets, and the fourth can be done with as you like. IP addresses in this class were the most common, and were to be used in small business and home networks. 359:41 Now, there are a couple more classes that were not commonly used, called Class D and Class E, but these are beyond the scope of this course, so we won't be discussing them.
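The network and host counts quoted for each class follow directly from how many bits the class fixes for the network part versus the host part. A quick check in Python:

```python
# (network bits, host bits) for each class: Class A fixes 1 leading bit
# and uses 7 network bits + 24 host bits, Class B 2 fixed + 14 + 16,
# Class C 3 fixed + 21 + 8.
classes = {
    "A": (7, 24),
    "B": (14, 16),
    "C": (21, 8),
}

for name, (net_bits, host_bits) in classes.items():
    networks = 2 ** net_bits
    hosts = 2 ** host_bits
    total = networks * hosts
    print(f"Class {name}: {networks} networks x {hosts} hosts = {total}")

# Class A: 128 networks x 16777216 hosts = 2147483648
# Class B: 16384 networks x 65536 hosts = 1073741824
# Class C: 2097152 networks x 256 hosts = 536870912
```

These totals are exactly the "over 2.1 billion", "over 1 billion", and "over half a billion" figures from the lesson, each class being half the size of the previous one.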
359:55 And so this was the way that public IP addresses were assigned to devices on the internet, allowing communication between devices. 360:01 Now, the problem with classful addressing was that businesses that needed larger address blocks than a Class C network provided received a Class B block, which in most cases was much larger than required, and the same thing happened when requiring more IPs than Class B offered and getting a Class A network block. This problem introduced a lot of wasted IPs, as there was no real middle ground.
360:33 so this was a way to address any publicly routable ips now there were
360:36 publicly routable ips now there were certain ranges that were allocated for
360:38 certain ranges that were allocated for private use and were designed to be used
360:41 private use and were designed to be used in private networks whether on-premises
360:44 in private networks whether on-premises or in cloud and again they are not
360:47 or in cloud and again they are not designed for public use and also didn't
360:50 designed for public use and also didn't have the need to communicate over the
360:52 have the need to communicate over the public internet and so these private ip
360:55 public internet and so these private ip address spaces were standardized using
360:58 address spaces were standardized using the rfc standard 1918
361:01 Again, these IP addresses are designed for private use and can be used anywhere you like, as long as they are kept private. Chances are, a network that you've come across, whether it be a cloud provider, your home network, or public Wi-Fi, will use one of these classes to define its network. These are split into three ranges: first, a single Class A range from 10.0.0.0 to 10.255.255.255; the Class B range from 172.16.0.0 to 172.31.255.255; and lastly the Class C range, from 192.168.0.0 to 192.168.255.255.
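If you'd like to check these ranges yourself, Python's standard `ipaddress` module can tell you whether an address falls inside the RFC 1918 private space; the sample addresses here are just illustrative:

```python
import ipaddress

# One sample address from each RFC 1918 private range, plus one public address.
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr:<12} {'private' if ip.is_private else 'public'}")
```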
361:57 Now, for those networks that use these private IPs to reach the public internet, the process they would use is called network address translation, or NAT for short, and I will be covering this in a different lesson later on in the section.
362:11 This method of classful addressing has since been replaced with something a bit more efficient, where network blocks can be defined more granularly. This was done because the internet was running out of IPv4 addresses and we needed to allocate these IPs more efficiently. This method is called Classless Inter-Domain Routing, or CIDR for short. With CIDR-based networks you aren't limited to only the three classes of networks; Class A, B, and C have been replaced by something more efficient, which allows you to create networks in any one of those ranges. A CIDR range is represented by its starting IP address, called the network address, followed by what is called a prefix, which is a slash and then a number. This slash number represents the size of the network: the bigger the number, the smaller the network, and the smaller the number, the bigger the network.
363:18 Given the example here, 192.168.0.0 is the network address and the prefix is /16. Now, at this high level it is not necessary to understand the math behind this, but I will include a link in the lesson text for those of you who are interested in learning more about it. All you need to keep in mind is, as I said before, the bigger the prefix number, the smaller the network, and the smaller the prefix number, the bigger the network. So just as an example, the size of this /16 network is represented here by this circle. Its IP range is 192.168.0.0 to 192.168.255.255, and once you understand the math you will be able to tell that a /16 range means the network is the fixed value in the first and second octets, and the hosts on the network are the values of anything in the third or fourth octets. So this network in total will provide us with 65,536 IP addresses.
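You don't have to do this math by hand; as a quick sketch, the standard `ipaddress` module can work out the size and bounds of a CIDR range for you:

```python
import ipaddress

# The /16 network from the example above.
net = ipaddress.ip_network("192.168.0.0/16")
print(net.network_address)    # 192.168.0.0
print(net.prefixlen)          # 16
print(net.num_addresses)      # 65536
print(net.broadcast_address)  # 192.168.255.255 -- the last address in the range
```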
364:26 Now, let's say you decided to create a large network such as this, and you wanted to allocate part of it to another part of your business. You can simply do so by splitting it in two and be left with two /17 networks. So instead of one /16 network you will now have two /17 networks, and each network will be assigned 32,768 IP addresses. Just to break it down: the previous network, 192.168.0.0/16, with the first two octets being the network part, which is 192.168, leaves the third and fourth octets to distribute as you like, and these third and fourth octets are what you're halving to create these two networks. Looking at the blue half, the address range will start at 0.0 and end at 127.255; the green half will start halfway through the /16 network, which will be 128.0, and end at 255.255.
365:34 So now, what if I was looking to break this network down even further, into four networks? Well, using CIDR ranges this makes things fairly easy, as I can halve it again. As shown here, I would split the two /17 networks to create four /18 networks: taking the blue half circle and splitting it into two, and then splitting the green half circle into two, would leave me with four /18 networks. As seen here, the blue quarter would start from 192.168.0.0, ending with the last two octets of 63.255. The red quarter starts where the blue left off, at the last two octets of 64.0, and ends at 127.255. The green quarter again starts with the previously defined 128.0, which is where the red quarter left off, and ends with the last two octets being 191.255. And lastly, the yellow quarter starts from where the green quarter left off at 192.0, with the last two octets ending at 255.255. So this would leave us with four smaller /18 networks, broken down from the previous two /17 networks, with each of these networks consisting of 16,384 IP addresses. We can continue this process, taking networks and breaking them down into smaller networks. This process of dividing a network into two smaller networks is known as subnetting, and each time you subnet a network and create two smaller networks, the number in the prefix will increase.
367:27 prefix will increase and so i know this is already a lot to take in so this
367:30 is already a lot to take in so this would be a perfect time for you to grab
367:32 would be a perfect time for you to grab a coffee or a tea and i will be ending
367:34 a coffee or a tea and i will be ending part one here and part two will be
367:37 part one here and part two will be continuing immediately after part one so
367:40 continuing immediately after part one so you can now mark this lesson as complete
367:43 you can now mark this lesson as complete and i'll see you in the next one for
367:44 and i'll see you in the next one for part two
367:46 part two [Music]
367:50 Welcome back. In this lesson I'm going to be covering the second part of the networking refresher. Part two of this lesson starts immediately from the end of part one, so with that being said, let's dive in. Now, I know this network refresher has been filled with a ton of numbers, with an underlying current of math, but I wanted you to focus on the why so that things will make sense later. I wanted to introduce the hard stuff first so that over the length of this course you will be able to digest this information and understand where it fits in when discussing the different networking parts of Google Cloud. This will also help you immensely in the real world, as well as on the exam, when configuring networks and knowing how to do the job of an engineer.
368:44 So getting right into it, I wanted to do a quick review of Classless Inter-Domain Routing, or CIDR. As discussed in the first refresher, an IPv4 address is referenced in dotted decimal notation, and alongside it the /16 is the prefix, which defines how large the network is. Before I move on, I wanted to give you some references that I found helpful for determining the size of a network, so here I have referenced three of the most common prefixes that I continuously run into and that I think will be an extremely helpful reference for you. If you look at the first IP address, 192.168.0.0 with /8 as the prefix: a /8 would fall under a Class A network, so 192, being the first octet as well as the network part of the address, would be fixed, and the host part would be anything after that. The address could be 192.anything, and this CIDR range would give you over 16 million IP addresses. The second most common network that I see is a /16 network, which would make this IP fall under a Class B network, making the first two octets fixed as the network part, meaning that anything after 192.168 would be the host part. So the address could be 192.168.anything, and this would give you 65,536 IP addresses. The third IP address, which is probably the most common one that I see, is a /24 network, which falls under a Class C network, meaning that the first three octets are fixed and the fourth octet could be anything from 0 to 255, giving you 256 IP addresses. Another common one, and the smallest that you will see, is a /32 prefix, and this is one that I use constantly for whitelisting my IP address. Because a /32 is a single IP address, this is a good one to know when you are configuring a VPN for yourself or whitelisting your IP address from home or work. And the last reference, as well as being the biggest network, is the IP address 0.0.0.0/0, which covers all IP addresses, and you will see this commonly used for the internet gateway in any cloud environment. So these are some common prefixes that come up very frequently, and I hope this reference will help you.
371:35 will help you now moving back to the osi model i've covered ipv4 in the network
371:38 model i've covered ipv4 in the network layer and so now it's time to discuss
371:41 layer and so now it's time to discuss ipv6 now as i noted earlier
371:44 ipv6 now as i noted earlier ipv4 notation is called dotted decimal
371:48 ipv4 notation is called dotted decimal and each number between the dots is an
371:51 and each number between the dots is an octet with a value of 0 to 255. now
371:55 octet with a value of 0 to 255. now underneath it all each octet is made up
371:58 underneath it all each octet is made up of an 8-bit value and having four
372:01 of an 8-bit value and having four numbers in an ip address that would make
372:04 numbers in an ip address that would make it a 32-bit value ipv6 is a much longer
372:09 it a 32-bit value ipv6 is a much longer value and is represented in hexadecimal
372:12 value and is represented in hexadecimal and each grouping is two octets which is
372:16 and each grouping is two octets which is 16 bits and is often referred to as a
372:19 16 bits and is often referred to as a hextet now as these addresses are very
372:22 hextet now as these addresses are very long as you can see you're able to
372:24 long as you can see you're able to abbreviate them by removing redundant
372:27 abbreviate them by removing redundant zeros so this example shown here is the
372:30 zeros so this example shown here is the same address as the one above it so if
372:33 same address as the one above it so if there is a sequence of zeros you can
372:35 there is a sequence of zeros you can simply replace them with one zero so in
372:38 simply replace them with one zero so in this address each grouping of four zeros
372:42 this address each grouping of four zeros can be represented by one zero and if
372:44 can be represented by one zero and if you have multiple groups of zeros in one
372:47 you have multiple groups of zeros in one address
372:48 address you can remove them all and replace them
372:50 you can remove them all and replace them with double colons so each of these ipv6
372:54 with double colons so each of these ipv6 addresses that you see here are exactly
372:57 addresses that you see here are exactly the same now each ipv6 address is 128
373:02 the same now each ipv6 address is 128 bits long and is represented in a
373:05 bits long and is represented in a similar way to
373:06 similar way to ipv4 starting with the network address
373:10 ipv4 starting with the network address and ending with the prefix each hextet
373:13 and ending with the prefix each hextet is 16 bits and the prefix number is the
373:16 is 16 bits and the prefix number is the number of bits that represent the
373:18 number of bits that represent the network with this example
373:20 network with this example slash 64 refers to the network address
373:24 slash 64 refers to the network address underlined in green which is 2001
373:27 underlined in green which is 2001 colon de3 each hextet is 16 bits and the
373:32 colon de3 each hextet is 16 bits and the prefix is 64. so that's four groups of
373:36 prefix is 64. so that's four groups of 16 and so this is how we know which part
373:39 16 and so this is how we know which part is the network part of the address and
373:42 is the network part of the address and which is the host part of the address
373:44 which is the host part of the address again notice the double colon here and
373:47 again notice the double colon here and as i explained previously any unneeded
373:50 as i explained previously any unneeded zeros can be replaced by a double colon
373:53 zeros can be replaced by a double colon and so this address would represent a
373:56 and so this address would represent a slew of zeros and so adding in all the
373:58 slew of zeros and so adding in all the zeros the ipv6 starting network address
374:02 zeros the ipv6 starting network address would look like this now because the
374:05 would look like this now because the network address starts at 2001 colon de3
374:09 network address starts at 2001 colon de3 with another two hextets of zeros as the
374:12 with another two hextets of zeros as the network address that was determined by
374:14 network address that was determined by the slash 64 prefix which is four
374:17 the slash 64 prefix which is four hextets it means a network finishes at
374:21 hextets it means a network finishes at that network address followed by all fs
374:24 that network address followed by all fs and so that's the process of how we can
374:27 and so that's the process of how we can determine the start and end of every
374:30 determine the start and end of every ipv6 network now as i've shown you
374:32 ipv6 network now as i've shown you before with all ipv4 addresses they are
374:36 before with all ipv4 addresses they are represented with a 0.0.0.0.0
374:45 and because ipv6 addresses are represented by the same network address
374:47 represented by the same network address and prefix we can represent ipv6
374:50 and prefix we can represent ipv6 addresses as double colon slash zero and
374:55 addresses as double colon slash zero and you will see this frequently when using
374:57 you will see this frequently when using ipv6 and so i know this is really
375:00 ipv6 and so i know this is really complicated but i just wanted to give
375:02 complicated but i just wanted to give you the exposure of ipv6 i don't expect
375:06 you the exposure of ipv6 i don't expect you to understand this right away
375:09 you to understand this right away in the end it should become a lot
375:11 in the end it should become a lot clearer as we go through the course and
375:13 clearer as we go through the course and i promise you it will become a lot
375:16 i promise you it will become a lot easier i had a hard time myself trying
375:18 easier i had a hard time myself trying to understand this network concept
375:21 to understand this network concept but after a few days i was able to
375:23 but after a few days i was able to digest it and as i went back and did
375:26 digest it and as i went back and did some practice it started to make a lot
375:29 some practice it started to make a lot more sense to me and so i know as we
375:32 more sense to me and so i know as we move along with the course
375:33 move along with the course that it will start making sense to you
375:36 that it will start making sense to you as well so now that we've discussed
375:38 as well so now that we've discussed layer 3 in the osi model i wanted to get
375:41 layer 3 in the osi model i wanted to get into layer 4 which is the transport
375:44 into layer 4 which is the transport layer with ip packets discussing tcp and
375:48 layer with ip packets discussing tcp and udp and so in its simplest form a packet
375:52 udp and so in its simplest form a packet is the basic unit of information in
375:54 is the basic unit of information in network transmission so most networks
375:57 network transmission so most networks use tcpip as the network protocol or set
376:01 use tcpip as the network protocol or set of rules for communication between
376:04 of rules for communication between devices and the rules of tcpip require
376:08 devices and the rules of tcpip require information to be split into packets
376:11 information to be split into packets that contain a segment of data to be
376:13 that contain a segment of data to be transferred along with the protocol and
376:16 transferred along with the protocol and its port number the originating address
376:18 its port number the originating address and the address of where the data is to
376:21 and the address of where the data is to be sent now udp is another protocol that
376:24 be sent now udp is another protocol that is sent with ip and is used in specific
376:27 is sent with ip and is used in specific applications but mostly in this course i
376:30 applications but mostly in this course i will be referring to tcpip and so as you
376:33 will be referring to tcpip and so as you can see in this diagram of the ip packet
376:36 can see in this diagram of the ip packet this is a basic datagram of what a
376:39 this is a basic datagram of what a packet would look like again with this
376:41 packet would look like again with this source and destination ip address
376:44 source and destination ip address the protocol port number and the data
376:47 the protocol port number and the data itself now this is mainly just to give
376:49 itself now this is mainly just to give you a high level understanding of tcpip
376:53 you a high level understanding of tcpip and udpip and is not a deep dive into
376:56 and udpip and is not a deep dive into networking now moving on to layer 7 of
376:59 networking now moving on to layer 7 of the osi model
377:01 the osi model this layer is used by networked
377:03 this layer is used by networked applications or applications that use
377:06 applications or applications that use the internet and so there are many
377:08 the internet and so there are many protocols that fall under this layer now
377:12 protocols that fall under this layer now these applications do not reside in this
377:14 these applications do not reside in this layer but use the protocols in this
377:16 layer but use the protocols in this layer to function so the application
377:19 layer to function so the application layer provides services for networked
377:22 layer provides services for networked applications with the help of protocols
377:25 applications with the help of protocols to perform user activities and you will
377:28 to perform user activities and you will see many of these protocols being
377:30 see many of these protocols being addressed as we go through this course
377:33 addressed as we go through this course through resources in google cloud like
377:36 through resources in google cloud like http or https for load balancing dns
377:41 http or https for load balancing dns that uses udp on port 53 and ssh on port
377:45 that uses udp on port 53 and ssh on port 22 for logging into hosts and so these
377:49 22 for logging into hosts and so these are just a few of the many scenarios
377:52 are just a few of the many scenarios where layer 7 and the protocols that
377:54 where layer 7 and the protocols that reside in that layer come up in this
377:56 reside in that layer come up in this course and we will be diving into many
377:59 course and we will be diving into many more in the lessons to come and so that
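Those well-known port numbers are registered in the system services database, and you can look them up from Python; this sketch assumes a standard services database (e.g. /etc/services) is present:

```python
import socket

# Look up the registered port for each layer 7 protocol mentioned above.
# "domain" is the registered service name for DNS.
for name, proto in [("ssh", "tcp"), ("domain", "udp"), ("https", "tcp")]:
    print(name, proto, socket.getservbyname(name, proto))
```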
378:02 And so that about wraps up this networking refresher lesson. Don't worry, like I said before, I'm not expecting you to pick things up on this first go; things will start to make more sense as we go through the course and start putting these networking concepts into practice. Also, feel free to go back and review the last couple of lessons again if things didn't make sense to you the first time, or if you come across some networking challenges in future lessons. And so that's everything I wanted to cover, so you can now mark this lesson as complete, and let's move on to the next one.
[Music]
378:42 Welcome back. In this lesson we will be discussing the core networking service of GCP: Virtual Private Cloud, or VPC for short. It is the service that allows you to create networks inside Google Cloud with both private and public connectivity options, both for in-cloud deployments and on-premises hybrid cloud deployments. This is a service that you must know well, as there are many questions that come up on the exam with regard to VPCs. So with that being said, let's dive in. Now, VPCs are what manage the networking functionality for your Google Cloud resources. A VPC is a software-defined network and is not confined to the physical limitations of networking in a data center; this has been abstracted for you. VPC networks, including their associated routes and firewall rules, are global resources. They are not associated with any particular region or zone; they are global resources and span all available regions across the globe. As explained earlier, VPCs are also encapsulated within projects; projects are the logical container where your VPCs live.
380:04 your vpcs live now these vpcs do not have ip ranges but
380:08 now these vpcs do not have ip ranges but are simply a construct of all of the
380:11 are simply a construct of all of the individual ip addresses and services
380:14 individual ip addresses and services within that network the ip addresses and
380:17 within that network the ip addresses and ranges are defined within the
380:19 ranges are defined within the subnetworks that i will be diving into a
380:22 subnetworks that i will be diving into a bit later as well
380:24 bit later as well traffic to and from instances can be
380:27 traffic to and from instances can be controlled with network firewall rules
380:30 controlled with network firewall rules rules are implemented on the vms
380:33 rules are implemented on the vms themselves so traffic can be controlled
380:36 themselves so traffic can be controlled and logged as it leaves or arrives at a
380:38 and logged as it leaves or arrives at a vm
380:40 vm now resources within a vpc network
380:43 now resources within a vpc network can communicate with one another by
380:45 can communicate with one another by using internal or private ipv4 addresses
380:50 using internal or private ipv4 addresses and these are subject to applicable
380:52 and these are subject to applicable network firewall rules these resources
380:55 network firewall rules these resources must be in the same vpc for
380:58 must be in the same vpc for communication
380:59 communication otherwise they must traverse the public
381:02 otherwise they must traverse the public internet with an assigned public ip or
381:05 internet with an assigned public ip or use a vpc peering connection or
381:08 use a vpc peering connection or establish a vpn connection another
381:11 establish a vpn connection another important thing to note is that vpc
381:13 important thing to note is that vpc networks only support ipv4 unicast
381:17 networks only support ipv4 unicast traffic they do not support ipv6 traffic
381:21 traffic they do not support ipv6 traffic within the network vms in the vpc
381:23 within the network vms in the vpc network can only send to ipv4
381:26 network can only send to ipv4 destinations
381:28 destinations and only receive traffic from ipv4
381:31 and only receive traffic from ipv4 sources however it is possible to create
381:34 sources however it is possible to create an ipv6 address for a global load
381:38 an ipv6 address for a global load balancer now unless you choose to
381:40 Now, unless you choose to disable it, each new project starts with a default network in a VPC. The default network is an auto mode VPC network with predefined subnets; a subnet is allocated for each region, with non-overlapping CIDR blocks.
382:00 Also, each default network has default firewall rules. These rules are configured to allow ingress ICMP, RDP, and SSH traffic from anywhere, as well as ingress traffic from within the default network for all protocols and ports. And so there are two different types of VPC networks: auto mode and custom mode.
382:23 An auto mode network has one subnet per region; the default network is actually an auto mode network, as explained earlier. Now, these automatically created subnets use a set of predefined IP ranges with a /20 CIDR block that can be expanded to a /16 CIDR block, and all of these subnets fit within the default 10.128.0.0/9 CIDR block. As new GCP regions become available, new subnets in those regions are automatically added to auto mode networks, using an IP range from that block.
383:07 Now, a custom mode network does not automatically create subnets. This type of network provides you with complete control over its subnets and IP ranges. As another note, an auto mode network can be converted to a custom mode network to gain more control, but please be aware that this conversion is one-way, meaning that custom mode networks cannot be changed back to auto mode networks. So when deciding on the different types of networks you want to use, make sure that you review all of your considerations. Custom mode VPC networks are more flexible and better suited to production, and Google recommends that you use custom mode VPC networks in production.
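Following that recommendation from the command line looks roughly like this; a minimal sketch, where the network name, subnet name, region, and range are made-up examples, and an authenticated gcloud session is assumed:

```shell
# Create a custom mode VPC; no subnets are created automatically.
gcloud compute networks create prod-net --subnet-mode=custom

# Add a single subnet in a region of your choice.
# "prod-subnet" and the 10.0.0.0/20 range are hypothetical examples.
gcloud compute networks subnets create prod-subnet \
    --network=prod-net \
    --region=us-east1 \
    --range=10.0.0.0/20
```

These commands need an authenticated project, so they are shown here for reference rather than as something to run blindly.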
383:49 So here is an example of a project that contains three networks. All of these networks span multiple regions across the globe, as you can see here on the right-hand side, and each network contains separate VMs. This diagram demonstrates that VMs in the same network, or VPC, can communicate privately even when placed in separate regions. Because the VMs in network A are in the same network, they can communicate over internal IP addresses even though they're in different regions. Essentially, your VMs can communicate even if they exist in different locations across the globe, as long as they are within the same network.
384:35 The VMs in network B and network C are not in the same network; therefore, by default, these VMs must communicate over external IPs, even though they're in the same region, as no internal IP communication is allowed between networks unless you set up VPC network peering or use a VPN connection.
384:58 Now, I wanted to bring the focus back to the default VPC for just a minute. Unless you create an organizational policy that prohibits it, new projects will always start with a default network that has one subnet in each region, and again, this is an auto mode VPC network. In this particular example, I am showing a default VPC with seven of its default regions displayed, along with their IP ranges.
385:24 And again, I want to stress that VPC networks, along with their associated routes and firewall rules, are global resources; they are not associated with any particular region or zone, while the subnets within them are regional. When an auto mode VPC network is created, one subnet for each region is automatically created within it. These automatically created subnets use a set of predefined IP ranges that fit within the CIDR block you see here, 10.128.0.0/9, and as new Google Cloud regions become available, new subnets in those regions are automatically added to auto mode VPC networks by using an IP range from that block. In addition to the automatically created subnets, you can add more subnets manually to auto mode VPC networks, in regions that you choose, by using IP ranges outside of 10.128.0.0/9.
386:35 Now, if you're using a default VPC or have already created an auto mode VPC, you can switch the VPC network from auto mode to custom mode. This is a one-way conversion only, as custom mode VPC networks cannot be changed to auto mode VPC networks.
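That one-way switch can also be done with a single command; a sketch, assuming an authenticated gcloud session and that the network is still named default:

```shell
# Convert the auto mode network to custom mode (irreversible).
gcloud compute networks update default --switch-to-custom-subnet-mode
```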
386:51 Now, bringing this theory into practice with regards to the default VPC, I wanted to take the time to do a short demo, so whenever you're ready, join me in the console.
387:02 And so here we are, back in the console. If I go here to the top right-hand corner, I am logged in as tonybowties at gmail.com, and in the top drop-down project menu, I'm logged in under Project Tony. Because this demo is geared around the default VPC, I want to navigate to VPC networks, so I'm going to go over here to the top left-hand corner, to the navigation menu, click on it, and scroll down to VPC network under Networking.
387:35 As you can see here in the left-hand menu, there are a bunch of different options that I can choose from, but I won't be touching on any of these topics, as I have other lessons that will deep dive into them. So in this demo, I'd like to strictly touch on the default VPC, and as you can see, in Project Tony, a default VPC has been created for me, with one subnet in every region, each having its own IP address range.
388:04 And so, just as a reminder: whenever you create a new project, a default VPC will be automatically created for you. When these subnets were created, each of them was given a route out to the public internet, and so the internet gateway is listed here. Its corresponding firewall rules, along with global dynamic routing and flow logs, are turned off, and again, I will be getting deeper into routing and flow logs in later lessons in this section.
388:34 Now, earlier I pointed out that an auto mode VPC can be converted to a custom VPC, and it's as simple as clicking this button, but we don't want to do that just yet. What I'd like to do is drill down into the default VPC and show you all the different options. As you can see here, the DNS API has not been enabled, and for most of you, a good idea would be to enable it, so I'm going to go ahead and do that now.
388:59 good idea would be to enable it and so i'm going to go ahead and do that now as
389:01 i'm going to go ahead and do that now as well you can see here that i can make
389:03 well you can see here that i can make adjustments to each of the different
389:05 adjustments to each of the different subnets or i can change the
389:07 subnets or i can change the configuration of the vpc itself so if i
389:10 configuration of the vpc itself so if i click on this edit button here at the
389:12 click on this edit button here at the top i'm able to change the subnet
389:15 top i'm able to change the subnet creation mode along with the dynamic
389:17 creation mode along with the dynamic routing mode which i will get into in a
389:19 routing mode which i will get into in a later lesson and the same thing with the
389:21 later lesson and the same thing with the dns server policy and so to make this
389:24 dns server policy and so to make this demo a little bit more exciting i want
389:26 demo a little bit more exciting i want to show you the process on how to expand
389:29 to show you the process on how to expand a subnet so i'm going to go into us
389:31 a subnet so i'm going to go into us central one i'm going to drill down here
389:34 central one i'm going to drill down here and here's all the configuration
389:36 and here's all the configuration settings for the default subnet in the
389:39 settings for the default subnet in the us central one region and so for me to
389:41 us central one region and so for me to edit this subnet i can simply click on
389:44 edit this subnet i can simply click on the edit button up here at the top and
389:46 the edit button up here at the top and so right below the ip address range i am
389:49 so right below the ip address range i am prompted with a note saying that the ip
389:52 prompted with a note saying that the ip ranges must be unique and
389:54 ranges must be unique and non-overlapping as we stated before and
389:57 non-overlapping as we stated before and this is a very important point to know
389:59 this is a very important point to know when you're architecting any vpcs or its
390:02 when you're architecting any vpcs or its corresponding sub networks and so i'm
390:05 corresponding sub networks and so i'm going to go ahead and change the subnet
390:07 going to go ahead and change the subnet from a cider range of 20
390:09 from a cider range of 20 and i'm going to change it to 16. i'm
390:11 and i'm going to change it to 16. i'm not going to add any secondary ip ranges
390:14 not going to add any secondary ip ranges i'm going to leave private google access
390:16 i'm going to leave private google access off and so i'm going to leave everything
390:18 off and so i'm going to leave everything else as is
390:20 else as is and simply click on save and so once
390:22 and simply click on save and so once this has completed i'll be able to see
390:25 this has completed i'll be able to see that my subnet range will go from a
390:27 that my subnet range will go from a slash 20 to a slash 16. and so here you
390:31 slash 20 to a slash 16. and so here you can see the ip address range has now
390:34 can see the ip address range has now changed to a slash 16. if i go back to
390:37 changed to a slash 16. if i go back to the main page of the vpc network i can
390:39 the main page of the vpc network i can see that the ip address range is
390:42 Now, you're probably asking: why can't I just change the IP address range on all the subnets at once? Even though I'd love to do that, unfortunately, Google does not give you the option; each subnet must be configured one by one to change its IP address range.
391:02 Now, I wanted to quickly jump into the default firewall rules. As discussed earlier, the rules for incoming SSH, RDP, and ICMP have been pre-populated, along with a default rule that allows incoming connections for all protocols and ports among instances within the same network.
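If you'd rather check those pre-populated rules from Cloud Shell, something like the following should list them; the filter expression is one common way to scope the output to the default network, so treat the exact syntax as an assumption to verify against your gcloud version:

```shell
# List firewall rules attached to the default network.
# Expect default-allow-icmp, default-allow-internal,
# default-allow-rdp, and default-allow-ssh.
gcloud compute firewall-rules list --filter="network:default"
```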
391:24 So, when it comes to routes with regards to the VPC network, the only one I really wanted to touch on is the default route to the internet. Without this route, none of the subnets in this VPC would be able to route traffic to the internet, and so when the default VPC is created, the default internet gateway is also created.
391:46 And so now, going back to the main page of the VPC network, I wanted to go through the process of making an IP address range bigger, but doing it through the command line. So I'm going to go up to the right-hand corner and open up Cloud Shell, and I'm going to make this a little bit bigger.
392:01 For this demo, I'm going to increase the address range for the subnet in us-west1 from a /20 to a /16, so I'm going to paste in the command, which is gcloud compute networks subnets expand-ip-range, then the name of the subnet, which is default, as well as the region, which is us-west1, along with the prefix length, which is going to be 16. So I'm going to hit Enter.
392:31 I've been prompted to make sure that this is what I want to do, and yes, I do want to continue, so I'm going to type in Y for yes and hit Enter. Within a few seconds, I should get some confirmation, and as expected, my subnet has been updated.
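Written out in full, the command narrated above looks like this (subnet and region as in the demo; an authenticated gcloud session is assumed):

```shell
# Expand the default subnet in us-west1 from /20 to /16.
# Subnet expansions cannot be undone.
gcloud compute networks subnets expand-ip-range default \
    --region=us-west1 \
    --prefix-length=16
```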
392:47 And so, because I like to verify everything, I'm going to clear the screen and paste in the command gcloud compute networks subnets describe, then the subnet name, which is default, along with the region, which would be us-west1. I'm going to hit Enter, and as you can see here, the IP CIDR range is consistent with what we changed. If I do a quick refresh in the browser, I'll be able to see that the console has reflected the same thing, and as expected, the IP address range here for us-west1 in the console matches what we see in Cloud Shell.
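And the verification command from the demo, written out in full:

```shell
# Describe the subnet; the ipCidrRange field should now show the /16.
gcloud compute networks subnets describe default --region=us-west1
```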
393:26 And so now, to end this demo, I wanted to quickly show you how I can delete the default VPC and recreate it. All I need to do is drill into the settings and then click on Delete VPC Network, right here at the top. I'm going to get a prompt asking me if I'm sure, and I'm going to simply click on Delete. Now, just as a note: if you have any resources in a VPC network, you will not be able to delete that VPC; you would have to delete the resources first and then delete the VPC afterwards. Okay, it has been successfully deleted, and as you can see, there are no longer any VPC networks in this current project.
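The same delete-and-recreate cycle can be done from the command line; a sketch, assuming no resources still use the network and an authenticated gcloud session:

```shell
# Delete the default network (fails if any resources still use it),
# then recreate it as an auto mode network. Note that recreating it
# this way does not add any firewall rules; create those separately.
gcloud compute networks delete default
gcloud compute networks create default --subnet-mode=auto
```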
394:07 And so I want to go ahead and recreate the default VPC, so I'm going to simply click on Create VPC Network. Here, I'm prompted to enter a bunch of information for creating this new VPC network, and so, keeping with the spirit of default VPCs, I'm going to name this VPC "default" and put "default" in the description. Under subnet creation mode, I'm going to click on Automatic, and as you can see, a prompt came up telling me that these IP address ranges will be assigned to each region in your VPC network, and I'm able to review the IP address ranges for each region. As stated before, the IP address ranges for each region will always be the same every time I create this default VPC, or create any VPC in automatic subnet creation mode.
395:02 Now, as a note here under firewall rules: if I don't select these firewall rules, none will actually be created, so if you're creating a new default VPC, be sure to check these off. I'm going to leave everything else as is, go to the bottom, and click on the Create button, and within about a minute, I should have the new default VPC created.
395:24 Okay, and we are back in business. The default VPC has been recreated, with all of its subnets in their corresponding regions, all the IP address ranges, the firewall rules, everything that we saw earlier in the default VPC. And so that's pretty much all I wanted to cover in this demo on the default VPC network, along with the lesson on VPCs, so you can now mark this lesson as complete, and let's move on to the next one.
396:02 Welcome back. In this lesson, I'm going to be discussing VPC network subnets. Now, the terms subnet and subnetwork are synonymous and are used interchangeably in Google Cloud, so while you'll hear me using either one in this lesson, I am referring to the same thing. When you create a resource in Google Cloud, you choose a network and a subnet, and because a subnet is needed before creating resources, some good knowledge behind it is necessary, both for building in Google Cloud as well as for the exam. So in this lesson, I'll be covering subnets at a deeper level, with all of their features and functionality. With that being said, let's dive in.
396:45 Now, each VPC network consists of one or more useful IP range partitions called subnets, also known in Google Cloud as subnetworks. Each subnet is associated with a region, and VPC networks do not have any IP address ranges associated with them; IP ranges are defined for the subnets. A network must have at least one subnet before you can use it, and as mentioned earlier, when you create a project, it will automatically create a default VPC network with subnets in each region; any auto mode network works the same way. Custom mode VPC networks, on the other hand, start with no subnets, giving you full control over subnet creation, and you can create more than one subnet per region.
397:37 You cannot change the name or region of a subnet after you've created it; you would have to delete the subnet and replace it, which you can do as long as no resources are using it. Primary and secondary ranges for subnets cannot overlap with any allocated range, any primary or secondary range of another subnet in the same network, or any IP ranges of subnets in peered networks. In other words, each must be a unique, valid CIDR block.
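The no-overlap rule is easy to sanity-check offline before you create anything; a quick sketch using Python's ipaddress module from a shell, where the two ranges are made-up examples:

```shell
# 10.0.0.0/20 ends at 10.0.15.255, so a subnet starting at 10.0.16.0
# does not overlap it and would be a valid neighbor.
python3 -c '
import ipaddress
a = ipaddress.ip_network("10.0.0.0/20")
b = ipaddress.ip_network("10.0.16.0/20")
print(a.overlaps(b))'   # prints False
```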
398:08 now when it comes to ip addresses of a subnet google
398:11 cloud vpc has an amazing feature that
398:15 lets you increase the ip space of any
398:18 subnet without any workload shutdown or
398:21 downtime as demonstrated in the
398:24 previous lesson and this gives you the
398:26 flexibility and growth options to meet
398:29 your needs but unfortunately there are
398:32 some caveats the new subnet range must not
398:35 overlap with other subnets in the same
398:38 vpc network in any region also the new
398:42 range must stay inside the rfc 1918
398:46 address space the new network range must
398:49 be larger than the original which means
398:52 the prefix length must be smaller in
398:54 number and once a subnet has been
398:56 expanded you cannot undo the expansion
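the expansion caveats just listed can be checked mechanically; here is a small sketch using python's ipaddress module, with hypothetical ranges that are not from the lesson:

```python
# hypothetical example ranges -- not values from the lesson
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def can_expand(old: str, new: str, other_subnets=()) -> bool:
    """check the caveats above: the new range must contain the old one
    (a smaller prefix number means a larger range), stay inside
    rfc 1918 space, and overlap no other subnet in the network"""
    old_net = ipaddress.ip_network(old)
    new_net = ipaddress.ip_network(new)
    return (
        old_net.subnet_of(new_net)
        and new_net.prefixlen < old_net.prefixlen
        and any(new_net.subnet_of(block) for block in RFC1918)
        and not any(new_net.overlaps(ipaddress.ip_network(s))
                    for s in other_subnets)
    )

print(can_expand("10.0.0.0/24", "10.0.0.0/20"))   # True  - /24 grows to /20
print(can_expand("10.0.0.0/24", "10.0.0.0/20",
                 other_subnets=["10.0.8.0/24"]))  # False - would swallow it
```

note how a /24 expanding to a /20 is a smaller prefix number but a bigger block of addresses — that inversion is the usual point of confusion on the exam.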
399:00 now an auto mode network starts with a
399:02 slash 20 range that can be expanded to a
399:06 slash 16 range but not larger you can also
399:11 convert the auto mode network to a
399:13 custom mode network to increase the ip
399:16 range even further and again this is a
399:19 one-way conversion
399:20 custom mode vpc networks cannot be
399:23 changed to auto mode vpc networks
399:26 now in any network that is created in
399:29 google cloud
399:30 there will always be some ip addresses
399:33 that you will not be able to use and
399:35 these are reserved for google and so
399:38 every subnet has four reserved ip
399:40 addresses in its primary ip range and
399:44 just as a note there are no reserved ip
399:47 addresses in the secondary ip ranges and
399:50 these reserved ips can be looked at as
399:53 the first two and the last two ip
399:55 addresses in the cidr range now the
399:58 first address in the primary ip range
400:01 for the subnet is reserved for the
400:03 network the second address in the
400:05 primary ip range for the subnet is
400:08 reserved for the default gateway and
400:10 allows you access to the internet the
400:13 second to last address in the primary ip
400:16 range for the subnet is reserved for
400:18 google cloud for potential future use
400:21 and the last address in the ip range
400:23 for the subnet is for broadcast
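those four reserved addresses are easy to compute yourself; this sketch uses a hypothetical 10.0.0.0/24 subnet, not one from the course:

```python
# the four reserved addresses of a subnet's primary range,
# shown on a hypothetical 10.0.0.0/24 subnet
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")

reserved = {
    "network":         subnet.network_address,        # first address
    "default gateway": subnet.network_address + 1,    # second address
    "future use":      subnet.broadcast_address - 1,  # second-to-last
    "broadcast":       subnet.broadcast_address,      # last address
}

for purpose, address in reserved.items():
    print(f"{address}  reserved for {purpose}")
# everything from 10.0.0.2 through 10.0.0.253 is usable by vms
```

so on a /24 you get 256 addresses but only 252 of them are assignable to instances.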
400:26 and so that about covers this short yet
400:29 important lesson on vpc network subnets
400:32 these features and functionalities of
400:35 subnets that have been presented to you
400:37 will help you make better design
400:40 decisions and give you a bit more
400:42 knowledge and flexibility when it comes
400:45 to assigning ip space within your vpc
400:48 networks and so that's all i have to
400:50 cover for this lesson so you can now
400:52 mark this lesson as complete and let's
400:55 move on to the next one
400:57 [Music]
401:01 welcome back and in this lesson i'm
401:03 going to be going through
401:05 routing and private google access
401:08 now although routing doesn't really show
401:10 up in the exam i wanted to give you an
401:12 inside look at how traffic is routed so
401:16 when you're building in google cloud
401:18 you'll know exactly what you will need
401:20 to do if you need to edit these routes
401:22 in any way or if you need to build new
401:25 ones to satisfy your particular need now
401:28 private google access does pop its head
401:30 up in the exam but only at a high level but
401:33 i wanted to get just a bit deeper with
401:36 the service and get into the data flow
401:38 of when the service is enabled so with
401:41 that being said let's dive in now google
401:44 cloud routes define the paths that
401:47 network traffic takes from a vm instance
401:50 to other destinations these destinations
401:53 can be inside your google cloud vpc
401:56 network for example in another vm or
401:59 outside it a route
402:01 consists of a single destination
402:04 and a single next hop when an instance
402:07 in a vpc network sends a packet google
402:09 cloud delivers the packet to the route's
402:12 next hop if the packet's destination
402:15 address is within the route's
402:17 destination range
402:18 and so all these routes are stored in
402:21 the routing table for the vpc now for
402:24 those of you who are not familiar with a
402:26 routing table in computer networking a
402:28 routing table is a data table stored in
402:31 a router or a network host that lists
402:34 the routes to particular network
402:37 destinations and so in this case the vpc
402:40 is responsible for storing the routing
402:43 table as well each vm instance has a
402:46 controller that is kept informed of all
402:49 applicable routes
402:51 from the network's routing table each
402:53 packet leaving a vm
402:55 is delivered to the appropriate next hop
402:58 of an applicable route based on the
403:00 routing order now i wanted to take a
403:02 couple minutes to go through the
403:04 different routing types that are
403:06 available on google cloud now in google
403:09 cloud there are two types of routing
403:12 there is the system generated which
403:14 offers the default and subnet route and
403:17 then there are the custom routes which
403:20 support static routes and dynamic routes
403:23 and so i first wanted to cover system
403:26 generated routes in a little bit of
403:28 depth and so every new network whether
403:31 it be an automatic vpc or a custom vpc
403:35 has two types of system generated routes
403:38 a default route which you can remove or
403:40 replace and one subnet route for each of
403:42 its subnets now when you create a vpc
403:45 network google cloud creates a system
403:48 generated default route and this route
403:51 serves two purposes it defines the path
403:54 out of the vpc network including the
403:57 path to the internet in addition to
403:59 having this route instances must meet
404:02 additional requirements if they need
404:04 internet access the default route also
404:07 provides a standard path for private
404:10 google access and if you want to
404:12 completely isolate your network from the
404:14 internet or if you need to replace the
404:17 default route with a custom route you
404:19 can delete the default route now if you
404:22 remove the default route and do not
404:23 replace it packets destined to ip ranges
404:27 that are not covered by other routes are
404:29 dropped lastly the system generated
404:32 default route has a priority of 1000
404:36 because its destination is the broadest
404:39 possible which covers all ip addresses
404:42 in the 0.0.0.0/0 range
404:48 google cloud only uses it
404:50 if a route with a more specific
404:53 destination does not apply to a packet
404:56 and i'll be getting into priorities in just a little bit and so now that we've
404:58 covered the default route i wanted to
405:00 get into the subnet route now subnet
405:02 routes are system generated routes that
405:05 define paths to each subnet in the vpc
405:08 network each subnet has at least one
405:11 subnet route whose destination matches
405:14 the primary ip range of the subnet if
405:18 the subnet has secondary ip ranges
405:20 google cloud creates a subnet route with
405:23 a corresponding destination for each
405:25 secondary range no other route can have
405:28 a destination that matches
405:30 or is more specific than the destination
405:33 of a subnet route but you can create a
405:36 custom route that has a broader
405:38 destination range that contains the
405:40 subnet route's destination range now
405:43 when a subnet is created a corresponding
405:46 subnet route for the subnet's primary
405:48 and secondary ip range is also created
405:52 auto mode vpc networks create a subnet
405:55 route for the primary ip ranges of each
405:58 of their automatically created subnets
406:01 you can delete these subnets but only if
406:03 you convert the auto mode vpc network to
406:06 custom mode and you cannot delete a
406:08 subnet route unless you modify or delete
406:12 the subnet so when you delete a subnet
406:14 all subnet routes for both primary and
406:17 secondary ranges are deleted
406:19 automatically you cannot delete the
406:21 subnet route for the subnet's primary
406:24 range in any other way and just as a
406:26 note when networks are connected by
406:29 using vpc network peering which i will
406:31 get into a little bit later some subnet
406:34 routes from one network are imported
406:37 into the other network and vice versa
406:39 and cannot be removed unless you break
406:42 the peering relationship and so when you
406:44 break the peering relationship all
406:46 imported subnet routes from the other
406:49 network are automatically removed so now
406:52 that we've covered the system generated
406:54 routes i wanted to get into custom
406:56 routes now custom routes are either
406:59 static routes that you can create
407:01 manually or dynamic routes maintained
407:04 automatically by one or more of your
407:07 cloud routers and these are created on
407:10 top of the already created system
407:12 generated routes destinations for custom
407:15 routes cannot match or be more specific than
407:18 any subnet route in the network now
407:21 static routes can use any of the static
407:23 route next hops and these can be created
407:27 manually if you use the google cloud
407:29 console to create a cloud vpn tunnel
407:31 that uses policy-based routing or one
407:34 that is a route-based vpn static routes
407:37 for the remote traffic selectors are
407:40 created for you and so just to give you
407:42 a little bit more clarity and a little
407:44 bit of context i've included a
407:46 screenshot here for all the different
407:48 routes that are available for the next
407:51 hop we have the default internet gateway
407:54 to define a path to external ip
407:56 addresses specify an instance and this
407:59 is where traffic is directed to the
408:01 primary internal ip address of the vm's
408:04 network interface in the vpc network
408:07 where you define the route specify ip
408:09 address is where you provide an internal
408:12 ip address assigned to a google cloud vm
408:16 as a next hop for cloud vpn tunnels that
408:19 use policy-based routing and route-based
408:22 vpns you can direct traffic to the vpn
408:25 tunnel by creating routes whose next
408:28 hops refer to the tunnel by its name and
408:31 region and just as a note google cloud
408:34 ignores routes whose next hops are cloud
408:37 vpn tunnels that are down and lastly for
408:40 internal tcp and udp load balancing you
408:44 can use a load balancer's ip address as
408:47 a next hop that distributes traffic
408:50 among healthy back-end instances custom
408:52 static routes that use this next hop
408:55 cannot be scoped to specific instances
408:58 by network tags and so when creating
409:01 static routes you will always be asked
409:04 for different parameters that are needed
409:07 in order to create this route and so
409:09 here i've taken a screenshot from the
409:11 console to give you a bit more context
409:14 with regards to the information that's
409:16 needed so first up is the name and
409:18 description
409:20 so these fields identify the route a
409:22 name is required but a description is
409:25 optional and every route in your project
409:28 must have a unique name next up is the
409:31 network and each route must be
409:33 associated with exactly one vpc network
409:36 in this case it happens to be the
409:38 default network but if you have other
409:40 networks available you're able to click
409:42 on the drop down arrow and choose a
409:45 different network the destination range
409:47 is a single ipv4 cidr block that
409:50 contains the ip addresses of systems
409:53 that receive incoming packets and the ip
409:56 range must be entered as a valid ipv4
410:00 cidr block as shown in the example
410:02 below the field now if multiple routes
410:05 have identical destinations priority is
410:09 used to determine which route should be
410:11 used so a lower number would indicate a
410:14 higher priority for example a route with
410:17 a priority value of 100 has a higher
410:21 priority than one with a priority value
410:23 of 200 so the highest route priority
410:27 means the smallest possible non-negative
410:29 number as well another great example is
410:32 if you look back on your default routes
410:35 all your subnet routes are of a priority
410:37 of zero and the default internet gateway
410:40 is of a priority of 1000 and therefore
410:44 the subnet routes will take priority
410:46 over the default internet gateway and
410:48 this is due to the smaller number so
410:50 remember a good rule of thumb is that
410:53 the lower the number the higher the
410:55 priority the higher the number the lower
410:58 the priority
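the selection logic described here — most specific destination first, then the lowest priority number — can be sketched in a few lines of python; the routes below are hypothetical examples, not the real gcp control plane:

```python
# a sketch of how a packet picks a route: the most specific
# destination wins first, then the lowest priority number breaks
# ties (hypothetical routes, not values from the lesson)
import ipaddress

routes = [
    # (name, destination, priority)
    ("default-internet-gateway", "0.0.0.0/0",   1000),
    ("subnet-route",             "10.0.0.0/20", 0),
    ("custom-static-route",      "10.0.0.0/8",  100),
]

def pick_route(dest_ip: str):
    ip = ipaddress.ip_address(dest_ip)
    matches = [(name, ipaddress.ip_network(dest), prio)
               for name, dest, prio in routes
               if ip in ipaddress.ip_network(dest)]
    if not matches:
        return None  # packet dropped: network unreachable
    # longest prefix first, then the smallest priority number
    matches.sort(key=lambda r: (-r[1].prefixlen, r[2]))
    return matches[0][0]

print(pick_route("10.0.0.5"))  # subnet-route
print(pick_route("10.9.0.5"))  # custom-static-route
print(pick_route("8.8.8.8"))   # default-internet-gateway
```

notice the default internet gateway only wins when nothing more specific matches, exactly because its 0.0.0.0/0 destination has the shortest possible prefix.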
411:00 now to get a little bit more granular you can specify a list of
411:03 network tags so that the route only
411:05 applies to instances that have at least
411:08 one of the listed tags and if you don't
411:11 specify any tags then google cloud
411:14 applies the route to all instances in
411:17 the network and finally next hop which
411:20 was shown previously this is dedicated
411:22 to static routes that have next hops
411:25 that point to the options shown earlier
411:28 so now that i've covered static routes
411:30 in a bit of detail i want to get into
411:32 dynamic routes now dynamic routes are
411:35 managed by one or more cloud routers and
411:38 this allows you to dynamically exchange
411:40 routes between a vpc network and an
411:43 on-premises network with dynamic routes
411:46 their destinations always represent ip
411:49 ranges outside of your vpc network
411:52 and their next hops are always bgp peer
411:56 addresses a cloud router can manage
411:59 dynamic routes for cloud vpn tunnels
412:02 that use dynamic routing as well as
412:04 cloud interconnect and don't worry i'll
412:07 be getting into cloud routers in a bit
412:09 of detail in a later lesson now i wanted
412:12 to take a minute to go through routing
412:14 order and the routing order deals with
412:17 priorities that i touched on a little
412:19 bit earlier now subnet routes are always
412:22 considered first because google cloud
412:24 requires that subnet routes have the
412:27 most specific destinations matching the
412:30 ip address ranges of their respective
412:32 subnets if no applicable destination is
412:36 found google cloud drops the packet and
412:39 replies with a network unreachable error
412:42 system generated routes apply to all
412:45 instances in the vpc network the scope
412:48 of instances to which subnet routes
412:50 apply cannot be altered although you can
412:53 replace the default route and so just as
412:56 a note custom static routes apply to all
412:59 instances or specific instances so if
413:02 the route doesn't have a network tag the
413:04 route applies to all instances in the
413:07 network now vpc networks have special
413:10 routes that are used for certain
413:12 services and these are referred to as
413:14 special return paths in google cloud
413:17 these routes are defined outside of your
413:19 vpc network in google's production
413:22 network they don't appear in your vpc
413:24 network's routing table you cannot
413:26 remove or override them even if you
413:29 delete or replace a default route in
413:31 your vpc network although you can
413:33 control traffic to and from these
413:36 services by using firewall rules and the
413:39 services that are covered are load
413:41 balancers identity-aware proxy or iap as
413:45 well as cloud dns and so before i end
413:48 this lesson i wanted to touch on private
413:51 google access now vm instances that only
413:54 have internal ip addresses can use
413:58 private google access and this allows
414:00 them to reach the external ip addresses
414:03 of google's apis and services the source
414:07 ip address of the packet can be the
414:09 primary internal ip address of the
414:12 network interface or an address in an
414:15 alias ip range that is assigned to the
414:18 interface if you disable private google
414:20 access the vm instances can no longer
414:24 reach google apis and services and will
414:27 only be able to send traffic within the
414:29 vpc network private google access has no
414:33 effect on instances that have
414:36 external ip addresses and can still
414:39 access the internet they don't need any
414:41 special configuration to send requests
414:44 to the external ip addresses of google
414:47 apis and services you enable private
414:50 google access on a subnet by subnet
414:52 basis and it's a setting for subnets in
414:55 a vpc network and i will be showing you
414:57 this in an upcoming demo where we'll be
415:00 building our own custom vpc network now
415:03 even though the next hop for the
415:04 required routes is called the default
415:07 internet gateway
415:08 and the ip addresses for google apis and
415:11 services are external requests to google
415:14 apis and services from vms that only
415:17 hold internal ip addresses in subnet 1
415:20 where private google access is enabled
415:23 are not sent through the public internet
415:25 those requests stay within google's
415:28 network as well vms that only have
415:31 internal ip addresses do not meet the
415:34 internet access requirements
415:36 for access to other external ip
415:39 addresses
415:40 beyond those for google apis and
415:42 services now touching on this diagram
415:45 here
415:46 firewall rules in the vpc network have
415:48 been configured to allow internet access
415:52 vm1 can access google apis and services
415:56 including cloud storage because its
415:58 network interface is located in subnet 1
416:01 which has private google access enabled
416:04 and because this instance only has an
416:06 internal ip address
416:08 private google access applies to this
416:10 instance now with vm2 it can also access
416:14 google apis and services including cloud
416:16 storage because it has an external ip
416:19 address private google access has no
416:22 effect on this instance as it has an
416:25 external ip address and private google
416:27 access has not been enabled on that
416:30 subnet and because both of these
416:32 instances are in the same network they
416:34 are still able to communicate with each
416:36 other over an internal subnet route and
416:39 so this is just one way where private
416:41 google access can be applied there are
416:43 some other options for private access as
416:46 well you can use private google access
416:48 to connect to google apis and services
416:51 from your on-premises network through a
416:54 cloud vpn tunnel or cloud interconnect
416:57 without having any external ip addresses
417:00 you also have the option of using
417:02 private google access through a vpc
417:05 network peering connection which is
417:07 known as private services access and
417:10 finally the last option available for
417:12 private google access is connecting
417:14 directly from serverless google services
417:17 through an internal vpc connection now i
417:20 know this has been a lot of theory to
417:22 take in but i promise it'll become a lot
417:27 easier and concepts will become less complicated when we start putting this
417:30 complicated when we start putting this into practice coming up soon in the demo
417:33 into practice coming up soon in the demo of building our own custom vpc and so
417:36 of building our own custom vpc and so that's pretty much all i wanted to cover
417:38 that's pretty much all i wanted to cover when it comes to routing and private
417:40 when it comes to routing and private google access so you can now mark this
417:42 google access so you can now mark this lesson as complete and let's move on to
417:45 lesson as complete and let's move on to the next one
417:46 the next one [Music]
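As a sketch of the behavior described in this lesson: private google access is a per-subnet setting, and it can be toggled from the command line roughly like this (the subnet and region names here are placeholders, not values from the course):

```shell
# Enable Private Google Access on an existing subnet (names are placeholders).
# VMs in this subnet that only have internal IPs can then reach Google APIs
# and services without their traffic leaving Google's network.
gcloud compute networks subnets update subnet-1 \
    --region=us-east1 \
    --enable-private-ip-google-access

# Verify the setting on the subnet.
gcloud compute networks subnets describe subnet-1 \
    --region=us-east1 \
    --format="get(privateIpGoogleAccess)"
```

Note that, as the lesson says, this has no effect on instances that already hold an external ip address.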
417:50 welcome back and in this lesson i'm
417:52 going to be discussing ip addressing now
417:55 in the network refresher lesson i went
417:57 into a bit of depth on how ip addresses
418:00 are broken down and used for
418:02 communication in computer networks in
418:05 this lesson i'll be getting into the
418:07 available types of ip addressing in
418:10 google cloud and how they are used in
418:13 each different scenario please note for
418:15 the exam only a high level overview is
418:18 needed when it comes to ip
418:21 addressing but the details behind it
418:23 will give you a better understanding of
418:25 when to use each type of ip address so
418:28 with that being said let's dive in
418:31 now ip addressing in google cloud holds
418:34 quite a few categories
418:36 and you really start by determining whether
418:39 you are planning for communication
418:41 internally within your vpc or for
418:44 external use to communicate with the
418:46 outside world through the internet once
418:49 you determine the type of communication
418:52 that you're looking to apply between
418:54 resources some more decisions need to be
418:57 made with regards to the other options
418:59 and i will be going through these
419:01 options in just a sec now in order to
419:03 make these options a little bit more
419:05 digestible i wanted to start off with
419:07 the options available for internal ip
419:10 addresses
419:11 now internal ip addresses are not
419:14 publicly advertised they are used only
419:17 within a network now every vpc network
419:20 or on-premises network has at least one
419:23 internal ip address range resources with
419:26 internal ip addresses communicate with
419:29 other resources as if they're all on the
419:32 same private network now every vm
419:35 instance can have one primary internal
419:38 ip address that is unique to the vpc
419:41 network and you can assign a specific
419:43 internal ip address when you create a vm
419:47 instance or you can reserve a static
419:50 internal ip address for your project and
419:53 assign that address to your resources if
419:56 you don't specify an address one will be
419:58 automatically assigned to the vm in
420:01 either case the address must belong to
420:04 the ip range of the subnet and so if
420:06 your network is an auto mode vpc network
420:10 the address comes from the region's subnet
420:12 if your network is a custom mode vpc
420:15 network you must specify which subnet
420:17 the ip address comes from now all
420:20 subnets have a primary cidr range which
420:23 is the range of internal ip addresses
420:26 that defines the subnet each vm instance
420:29 gets its primary internal ip address
420:32 from this range you can also allocate
420:35 alias ip ranges from that primary range
420:39 or you can add a secondary range to the
420:41 subnet and allocate alias ip ranges from
420:45 the secondary range use of alias ip
420:48 ranges does not require secondary subnet
420:51 ranges these secondary subnet ranges
420:54 merely provide an organizational tool
420:57 now when using ip aliasing you can
421:00 configure multiple internal ip addresses
421:03 representing containers or applications
421:06 hosted in a vm without having to define
421:09 a separate network interface and you can
421:11 assign vm alias ip ranges from either
421:14 the subnet's primary or secondary ranges
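The alias ip setup just described can be sketched from the command line roughly as follows (the network, subnet, range, and instance names here are all placeholders, not values from the course):

```shell
# Create a custom-mode subnet with a secondary range (names are placeholders).
# The secondary range is purely organizational; aliases can also come from
# the primary range.
gcloud compute networks subnets create subnet-1 \
    --network=my-custom-vpc \
    --region=us-east1 \
    --range=10.12.4.0/24 \
    --secondary-range=container-range=172.16.0.0/20

# Create a VM whose single network interface also carries an alias IP range
# allocated from that secondary range, e.g. for containers on the VM.
gcloud compute instances create instance-1 \
    --zone=us-east1-b \
    --network-interface="subnet=subnet-1,aliases=container-range:/28"
```

This is the mechanism gke relies on: pods get addresses from an alias range on the node's interface rather than from extra network interfaces.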
421:17 when alias ip ranges are configured
421:20 google cloud automatically installs vpc
421:23 network routes for primary and alias ip
421:27 ranges for the subnet of your primary
421:30 network interface your container
421:32 orchestrator such as gke does not need to
421:35 specify vpc network connectivity for
421:38 these routes and this simplifies routing
421:41 traffic and managing your containers now
421:44 when choosing either an auto mode vpc or
421:47 a custom vpc you will have the option to
421:50 choose either an ephemeral ip or a
421:53 static ip now an ephemeral ip address is
421:56 an ip address that doesn't persist
421:59 beyond the life of the resource for
422:01 example when you create an instance or
422:03 forwarding rule without specifying an ip
422:06 address google cloud will automatically
422:08 assign the resource an ephemeral ip
422:11 address and this ephemeral ip address is
422:14 released when you delete the resource
422:17 when the ip address is released it is
422:20 free to eventually be assigned to
422:22 another resource so it's never a great
422:24 option if you depend on this ip to
422:26 remain the same this ephemeral ip
422:29 address can be automatically assigned
422:32 and will be assigned from the selected
422:34 region's subnet as well if you have
422:37 ephemeral ip addresses that are
422:39 currently in use
422:41 you can promote these addresses to
422:43 static internal ip addresses so that
422:46 they remain with your project until you
422:48 actively remove them and just as a note
422:51 before you reserve an existing ip
422:53 address you will need the value of the
422:56 ip address that you want to promote now
422:58 reserving a static ip address
423:01 assigns the address to your project
423:03 until you explicitly release it this is
423:06 useful if you are dependent on a
423:08 specific ip address for a specific
423:11 service and need to prevent another
423:14 resource from being able to use the same
423:16 address static addresses are also useful
423:19 if you need to move an ip address from
423:22 one google cloud resource to another and
423:25 you also have the same options when
423:27 creating an internal load balancer as
423:30 you do with vm instances and so now that
423:33 we've covered all the options for
423:35 internal ip addresses i would like to
423:38 move on to cover all the available
423:40 options for external ip addresses now
423:43 you can assign an external ip address to
423:46 an instance or a forwarding rule if you
423:49 need to communicate with the internet
423:51 with resources in another network or
423:54 need to communicate with a public google
423:56 cloud service sources from outside a
423:59 google cloud vpc network can address a
424:02 specific resource by the external ip
424:05 address as long as firewall rules enable
424:08 the connection and only resources with
424:11 an external ip address can send and
424:14 receive traffic directly to and from
424:16 outside the network and like internal ip
424:19 addresses external ip addresses have the
424:22 option of choosing from an ephemeral or
424:25 static ip address now an ephemeral
424:28 external ip address is an ip address
424:32 that doesn't persist beyond the life of
424:34 the resource and so follows the same
424:36 rules as ephemeral internal ip addresses
424:40 so when you create an instance or
424:42 forwarding rule without specifying an ip
424:44 address the resource is automatically
424:47 assigned an ephemeral external ip
424:49 address and this is something that you
424:51 will see quite often ephemeral external
424:54 ip addresses are released from a
424:56 resource if you delete the resource for
424:59 vm instances the ephemeral external ip
425:03 address is also released if you stop the
425:05 instance so after you restart the
425:08 instance it is assigned a new ephemeral
425:10 external ip address and if you have an
425:13 existing vm that doesn't have an
425:15 external ip address you can assign one
425:18 to it forwarding rules always have an ip
425:20 address whether external or internal so
425:23 you don't need to assign an ip address
425:26 to a forwarding rule after it is created
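As a sketch of assigning an external ip to an existing vm that has none, you can add an access config from the command line (instance name, zone, and the example address are placeholders, not values from the course):

```shell
# Attach an ephemeral external IP to an existing VM that has none
# (instance and zone names are placeholders).
gcloud compute instances add-access-config instance-1 \
    --zone=us-east1-b

# Or attach a previously reserved static external IP by passing its value
# (203.0.113.10 is a documentation-range placeholder).
gcloud compute instances add-access-config instance-1 \
    --zone=us-east1-b \
    --address=203.0.113.10
```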
425:29 and if your instance has an ephemeral
425:31 external ip address and you want to
425:33 permanently assign the ip to your
425:36 project like ephemeral internal ip
425:38 addresses you have the option to promote
425:41 the ip address from ephemeral to static
425:44 in this case you would be promoting an ephemeral
425:47 external ip address to a static external
425:50 ip address now when assigning a static
425:53 ip address these are assigned to a
425:56 project long term until they are
425:58 explicitly released from that assignment
426:01 and remain attached to a resource until
426:04 they are explicitly detached for vm
426:07 instances static external ip addresses
426:10 remain attached to stopped instances
426:13 until they are removed and this is
426:15 useful if you are dependent on a
426:17 specific ip address for a specific
426:20 service like a web server or a global
426:23 load balancer that needs access to the
426:25 internet static external ip addresses
426:28 can be either a regional or global
426:31 resource
426:32 a regional static ip address allows
426:35 resources of that region
426:37 or resources of zones within that region
426:40 to use the ip address and just as a note
426:43 you can use your own publicly routable
426:46 ip address prefixes as google cloud
426:49 external ip addresses and advertise them
426:52 on the internet the only caveat is that
426:55 you must own and bring at the minimum a
426:58 /24 cidr block and so now that we've
427:01 discussed internal and external ip
427:04 addressing options i wanted to move into
427:06 internal ip address reservations now
427:09 static internal ips provide the ability
427:12 to reserve internal ip addresses
427:15 from the ip range configured in the
427:18 subnet then assign those reserved
427:20 internal addresses to resources as
427:23 needed reserving an internal ip address
427:26 takes that address out of the dynamic
427:28 allocation pool and prevents it from
427:30 being used for automatic allocations
427:33 with the ability to reserve static
427:35 internal ip addresses you can always use
427:39 the same ip address for the same
427:41 resource even if you have to delete and
427:44 recreate the resource so when it comes
427:47 to internal ip address reservation you
427:50 can either reserve a static internal ip
427:53 address before creating the associated
427:55 resource or you can create the resource
427:58 with an ephemeral internal ip address
428:01 and then promote that ephemeral ip
428:04 address to a static internal ip address
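Both internal reservation options just described can be sketched from the command line (subnet, region, instance names, and the addresses shown are placeholders, not values from the course):

```shell
# Option 1: reserve a static internal IP up front, then attach it when
# creating the resource (all names and addresses are placeholders).
gcloud compute addresses create static-internal \
    --region=us-east1 \
    --subnet=subnet-1 \
    --addresses=10.12.4.3

gcloud compute instances create instance-1 \
    --zone=us-east1-b \
    --network-interface="subnet=subnet-1,private-network-ip=10.12.4.3"

# Option 2: promote an in-use ephemeral internal IP to static by reserving
# its current value (you need the address value, as the lesson notes).
gcloud compute addresses create promoted-internal \
    --region=us-east1 \
    --subnet=subnet-1 \
    --addresses=10.12.4.7
```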
428:07 and so just to give you a bit more
428:09 context i have a diagram here to run you
428:11 through it so in the first example you
428:14 would create a subnet from your vpc
428:16 network you would then reserve an
428:18 internal ip address from that subnet's
428:22 primary ip range which in this diagram is
428:25 marked as 10.12.4.3
428:28 and will be held as reserved for later
428:31 use with a resource and then when you
428:33 decide to create a vm instance or an
428:36 internal load balancer you can use the
428:39 reserved ip address that was created in
428:42 the previous step that ip address then
428:45 becomes marked as reserved and in use
428:48 now touching on the second example you
428:50 would first create a subnet from your
428:52 vpc network
428:54 you would then create a vm instance or
428:57 an internal load balancer with either an
429:00 automatically allocated ephemeral ip
429:02 address or a specific ip address that
429:05 you've chosen from within that specific
429:08 subnet and so once the ephemeral ip
429:10 address is in use you can then promote
429:13 the ephemeral ip address to a static
429:16 internal ip address and it would then
429:18 become reserved and in use now when it
429:21 comes to the external ip address
429:24 reservation
429:25 you are able to obtain a static external
429:28 ip address by using one of the following
429:30 two options you can either reserve a new
429:33 static external ip address and then
429:35 assign the address to a new vm instance
429:38 or you can promote an existing ephemeral
429:41 external ip address to become a static
429:44 external ip address now in the case of
429:46 external ip addresses you can reserve
429:49 two different types
429:51 a regional ip address which can be used
429:54 by vm instances with one or more network
429:57 interfaces or by network load balancers
430:01 these ip addresses can be created either
430:03 in the console or through the command
430:06 line with the limitation that you will
430:08 only be allowed to create ipv4
430:11 addresses the other type is a global ip
430:15 address which can be used for global
430:17 load balancers and can be created either
430:20 in the console or through the command
430:22 line as shown here the limitation here
430:26 is that you must choose the premium
430:28 network service tier in order to create
430:30 a global ip address
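The two external reservation types can be sketched from the command line like this (the address names and region are placeholders, not values from the course):

```shell
# Reserve a regional static external IPv4 address (name and region are
# placeholders); usable by VMs and network load balancers in that region.
gcloud compute addresses create static-external-regional \
    --region=us-east1

# Reserve a global static external IP address for a global load balancer;
# this requires the Premium network service tier.
gcloud compute addresses create static-external-global \
    --global \
    --ip-version=IPV4

# List reserved addresses to confirm their status (RESERVED vs IN_USE).
gcloud compute addresses list
```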
430:32 a global ip address and after reserving the address you can
430:35 and after reserving the address you can finally assign it to an instance during
430:37 finally assign it to an instance during instance creation or to an existing
430:40 instance creation or to an existing instance and so as you can see there is
430:43 instance and so as you can see there is a lot to take in when it comes to
430:45 a lot to take in when it comes to understanding ip addressing and i hope
430:48 understanding ip addressing and i hope this lesson has given you some better
430:50 this lesson has given you some better insight as to which type of ips should
430:52 insight as to which type of ips should be used in a specific scenario now don't
430:55 be used in a specific scenario now don't worry the options may seem overwhelming
430:58 worry the options may seem overwhelming but once you start working with ip
431:00 but once you start working with ip addresses more often the options will
431:03 addresses more often the options will become so much clearer on what to use
431:06 become so much clearer on what to use and when and as i said in the beginning
431:08 and when and as i said in the beginning only high level concepts are needed to
431:11 only high level concepts are needed to know for the exam but knowing the
431:13 know for the exam but knowing the options will allow you to make better
431:15 options will allow you to make better decisions
431:16 decisions in your daily role as a cloud engineer
431:19 in your daily role as a cloud engineer and so that's pretty much all i wanted
431:21 and so that's pretty much all i wanted to cover when it comes to ip addressing
431:24 to cover when it comes to ip addressing in google cloud and so now that we've
431:26 in google cloud and so now that we've covered the theory behind ip addressing
431:29 covered the theory behind ip addressing in google cloud i wanted to bring this
431:31 in google cloud i wanted to bring this into the console for a demo where we
431:33 into the console for a demo where we will get hands-on with creating both
431:36 will get hands-on with creating both internal and external static ip
431:39 internal and external static ip addresses so as i explained before there
431:41 addresses so as i explained before there was a lot to take in with this lesson so
431:44 was a lot to take in with this lesson so now would be a perfect opportunity to
431:46 now would be a perfect opportunity to get up and have a stretch grab yourself
431:49 get up and have a stretch grab yourself a tea or a coffee and whenever you're
431:51 a tea or a coffee and whenever you're ready join me back in the console so you
431:54 ready join me back in the console so you can now mark this lesson as complete and
431:56 can now mark this lesson as complete and i'll see you in the next
431:58 i'll see you in the next [Music]
432:02 [Music] welcome back in this demonstration i'm
432:05 welcome back in this demonstration i'm going to be going over how to create and
432:07 going to be going over how to create and apply both internal and external static
432:11 apply both internal and external static ip addresses i'm going to show how to
432:14 ip addresses i'm going to show how to create them in both the console and the
432:16 create them in both the console and the command line as well as how to promote
432:19 command line as well as how to promote ip addresses from ephemeral ips to
432:22 ip addresses from ephemeral ips to static ips and once we're done creating
432:25 static ips and once we're done creating all the ip addresses i'm going to show
432:28 all the ip addresses i'm going to show you the steps on how to delete them now
432:30 you the steps on how to delete them now there's a lot to get done here so let's
432:33 there's a lot to get done here so let's dive in now for this demonstration i'm
432:35 dive in now for this demonstration i'm going to be using a project that has the
432:38 going to be using a project that has the default vpc created and so in my case i
432:41 default vpc created and so in my case i will be using project bowtieinc dev and
432:45 will be using project bowtieinc dev and so before you start make sure that your
432:47 so before you start make sure that your default vpc is created in the project
432:50 default vpc is created in the project that you had selected so in order to do
432:53 that you had selected so in order to do that i'm going to head over to the
432:54 that i'm going to head over to the navigation menu i'm going to scroll down
432:57 navigation menu i'm going to scroll down to vpc network and we're going to see
432:59 to vpc network and we're going to see here that the default vpc has been
433:02 here that the default vpc has been created and so i can go ahead and start
433:05 created and so i can go ahead and start the demonstration and so the first thing
433:07 the demonstration and so the first thing i wanted to demonstrate is how to create
433:10 i wanted to demonstrate is how to create a static internal ip address and so in
433:13 a static internal ip address and so in order for me to demonstrate this i'm
433:15 order for me to demonstrate this i'm going to be using a vm instance and so
433:17 going to be using a vm instance and so i'm going to head over to the navigation
433:19 i'm going to head over to the navigation menu again and i'm going to scroll down
433:21 menu again and i'm going to scroll down to compute engine
433:23 and so here i'm going to create my new instance by simply clicking on create instance
433:28 under name i'm going to keep it as instance-1 and under region you want to select us-east1
433:36 i'm going to keep the zone as the default selected and under machine type i'm going to select the drop-down and choose e2-micro
433:43 i'm going to leave everything else as the default and scroll down here to management, security, disks, networking, sole tenancy and select the networking tab
433:55 and under network interfaces i'm going to select the default network interface and here is where i can create my static internal ip
434:06 clicking on the drop-down under primary internal ip you will see ephemeral (automatic), ephemeral (custom) and reserve static internal ip address
434:17 you're going to select reserve static internal ip address and you'll get a pop-up prompting you with some fields to fill out to reserve a static internal ip address
434:27 under name i'm going to call this static-internal and for the purposes of this demo i'm going to leave the subnet and the static ip address as currently selected
434:40 if i wanted to select a specific ip address i can click on this drop-down and select let me choose and this will give me the option to enter a custom ip address within the subnet range selected for this specific subnetwork
434:57 because i'm not going to do that i'm going to select assign automatically, leave the purpose as non-shared and simply click on reserve and this is going to reserve this specific ip address
435:11 and now as you can see here i have the primary internal ip marked as static-internal and so this is going to be my first static internal ip address
435:22 once you've done these steps you can simply click on done, head on down to the bottom and simply click on create to create the instance
435:30 and when the instance finishes creating you will see the internal static ip address
435:37 and as you can see here your static internal ip address has been assigned to the default network interface on instance-1
435:45 and so in order for me to view this static internal ip address in the console i can go to vpc networks, drill down into the specific vpc and find it under static internal ip addresses
435:57 but i wanted to show you how to view it by querying it through the command line
436:05 so in order to do this i'm going to simply go up to the menu bar on the right-hand side and open up cloud shell
436:12 and once cloud shell has come up you're going to simply paste in the command gcloud compute addresses list and this will give me a list of the internal ip addresses that are available
436:24 and so now i'm going to be prompted to authorize this api call using my credentials and i definitely do so i'm going to click on authorize
436:31 and as expected the static internal ip address that we created earlier has shown up
436:39 it's marked as internal in the region of us-east1 in the default subnet and the status is in use
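when a project has many addresses, the plain list can get noisy. the standard gcloud --filter and --format flags can narrow it down; a sketch, assuming the us-east1 region from this demo:

```shell
# List only the addresses in us-east1, showing just the columns
# discussed in the demo: name, IP, internal/external type and status
gcloud compute addresses list \
    --filter="region:us-east1" \
    --format="table(name, address, addressType, status)"
```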
436:48 and so as we discussed in the last lesson static ip addresses persist even after the resource has been deleted
436:55 so to demonstrate this i'm going to now delete the instance i'm going to simply check off the instance, go up to the top and click on delete
437:04 you're going to be prompted to make sure you want to delete this yes i do so i'm going to click on delete
437:10 and so now that the instance has been deleted i'm going to query the ip addresses again using the same command gcloud compute addresses list and hit enter
437:20 and as you can see here the ip address static-internal still persists but the status is now marked as reserved
437:29 and so if i wanted to use this ip address for another instance i can do so by simply clicking on create instance up here at the top menu and then selecting static-internal as my ip address
437:42 so i'm going to quickly close down cloud shell
437:45 and i'm going to leave the name as instance-1 and for the region select us-east1
437:51 we're going to keep the zone as the default selected and under machine type you're going to select the e2-micro machine type
437:58 i'm going to scroll down to management, security, disks, networking, sole tenancy and select the networking tab
438:06 and under network interfaces i'm going to select the default network interface
438:11 and under primary internal ip if i click on the drop-down i have the option of selecting the static-internal static ip address
438:22 and so i wanted to move on to demonstrate how to promote an internal ephemeral ip address to an internal static ip address
438:31 and so in order to do this i'm going to select ephemeral (automatic), scroll down and click on done, and go ahead and create the instance
438:40 and once the instance is ready i'll be able to go in and edit the network interface
438:46 and so the instance is up and ready so i'm going to drill down into the instance, go up to the top and click on edit
438:52 i'm going to scroll down to network interfaces and edit the default network interface
438:59 i'm going to scroll down a little bit more and here under internal ip type i'm going to click on the drop-down and select static
439:08 and so here you are taking the current ip address which is 10.142.0.4 and promoting it to a static internal ip address
439:17 and so you're going to be prompted with a pop-up confirming the reservation for that static internal ip address
439:24 and notice that i don't have any other options so all i'm going to do is type in a name and i'm going to call this promoted-static
439:33 i'm going to click on reserve and this will promote the internal ip address from an ephemeral ip address to a static ip address
439:45 and so now i'm just going to click on done and then scroll down and click on save
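the same promotion works from the command line too, mirroring what we will do later for external ips. a sketch, assuming the ephemeral address 10.142.0.4 seen in the demo and the default subnet in us-east1:

```shell
# Promote an in-use ephemeral internal IP to a static internal address;
# passing the current IP via --addresses keeps it instead of allocating a new one
gcloud compute addresses create promoted-static \
    --addresses 10.142.0.4 \
    --region us-east1 \
    --subnet default
```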
439:53 and so now because i want to verify the ip address i'm going to go ahead and open up cloud shell again
439:58 and i'm going to use the same command that i used earlier which is gcloud compute addresses list and hit enter
440:05 as expected the promoted-static ip address is showing as an internal ip address in the region of us-east1 in the default subnet and its status is in use
440:17 and so just as a recap we've created a static internal ip address for the first instance
440:24 and for the second instance we promoted an ephemeral internal ip address into a static internal ip address
440:31 and we were able to verify this through cloud shell using the gcloud compute addresses list command
440:39 and so this is the end of part one of this demo it was getting a bit long so i decided to break it up
440:46 and this would be a great opportunity for you to get up and have a stretch, get yourself a coffee or tea
440:53 and whenever you're ready join me in part two where we will be starting immediately from the end of part one
440:58 so you can now mark this as complete and i'll see you in the next one
441:03 [Music]
441:07 welcome back this is part two of the creating internal and external ip addresses demo
441:13 and we will be starting immediately from the end of part one so with that being said let's dive in
441:21 and so now that we've gone through how to both create static ip addresses and promote ephemeral ip addresses to static ip addresses for internal ips
441:33 i want to go ahead and go through the same with external ips
441:38 and so i'm going to first start off by deleting this instance i'm going to go ahead and click on delete
441:43 and so instead of doing it through the compute engine interface i want to go into the external ip address interface which can be found in the vpc network menu
441:54 so i'm going to go up to the left-hand corner, click on the navigation menu and scroll down to vpc network
442:02 and from the menu here on the left-hand side you can simply click on external ip addresses
442:08 and here you will see the console where you can create a static external ip address
442:15 and so to start the process you can simply click on reserve static address
442:20 and here you'll be prompted with a bunch of fields to fill out to create this new external static ip address
442:26 and so for the name of this static ip address you can simply call this external-static and i'm going to use the same in the description
442:35 now here under network service tier i can choose either premium or standard and as you can see i'm currently using the premium network service tier
442:45 and if i hover over the question mark over here it tells me a little bit more about this network service tier
442:52 as you can see the premium tier allows me higher performance as well as lower latency routing but this premium routing comes at a cost
443:01 whereas the standard network service tier offers lower performance compared to the premium tier and is a little bit more cost effective while still delivering performance that's comparable with other cloud providers
443:16 and so i'm just going to leave it as the default selected
443:20 and as we discussed in the previous lesson ipv6 external static ip addresses can only be used for global load balancers and so since we're only using it for an instance an ipv4 address will suffice
443:36 and so just as a note for network service tier if i click on standard, ipv6 is grayed out as well as the global selection
443:45 and this is because in order to use global load balancing you need to be using the premium network service tier
443:50 so whenever you're creating a global load balancer please keep this in mind as your cost may increase
443:58 so i'm going to switch this back to premium
444:00 and so under type i'm going to keep it as regional and under region i'm going to select the same region that my instance is going to be in which is us-east1
444:10 and because i haven't created the instance yet there is nothing to attach it to so i'm going to click on the drop-down and click on none
444:18 and so just as another note i wanted to quickly highlight this caution point that static ip addresses not attached to an instance or load balancer are still billed at an hourly rate
444:30 so if you're not using any static ip addresses please remember to delete them otherwise you will be charged
444:38 and so everything looks good here to create my external static ip address so i'm going to simply click on reserve and this will create my external static ip address
444:50 and put its status as reserved
444:53 so as you can see here the external static ip address has been created and you will find all of the external static ip addresses that you create in future right here in this menu
445:06 and you will still be able to query all these external ip addresses from the command line
445:11 and so now in order to assign this ip address to a network interface i'm going to go back over to the navigation menu, scroll down to compute engine and create a new instance
445:22 so you can go ahead and click on create instance
445:26 i'm going to go ahead and keep the name of this instance as instance-1 and in the region i'm going to select us-east1
445:33 i'm going to keep the zone as the selected default and under machine type i'm going to select the e2-micro machine type
445:40 i'm going to scroll down to management, security, disks, networking, sole tenancy and select the networking tab
445:47 and here under network interfaces i'm going to select the default network interface
445:53 i'm going to scroll down a little bit here and under external ip, ephemeral has been selected
445:57 but if i click on the drop-down i will have the option to select the ip that we had just created which is the external-static ip
446:07 and so i'm going to select that, click on done and you can go down and click on create
446:14 and so now when the instance is created i will see the external ip address of external-static as the assigned external ip and as expected here it is
446:26 and because i always like to verify my work i'm going to go ahead and open up cloud shell and verify it through the command line
446:34 and so now i'm going to query all my available static ip addresses using the command gcloud compute addresses list and hit enter
446:44 and as you can see here the external static ip address of 34.75.76 in the us-east1 region is now in use and this is because it is assigned to the network interface on instance-1
447:01 and so before we go ahead and complete this demo there's one more step that i wanted to go through and this is to promote an ephemeral external ip address to a static external ip address
447:14 and so i'm going to go up here to the top menu and create a new instance
447:18 i'm going to leave the name here as instance-2 and under the region i'm going to select us-east1
447:23 i'm going to keep the zone as the selected default and under machine type i'm going to select the e2-micro machine type
447:30 i'm going to leave everything else as the default and scroll down to management, security, disks, networking, sole tenancy and select the networking tab
447:40 and i'm going to verify that i'm going to be using an ephemeral external ip upon the creation of this instance
447:45 if i scroll down here a little bit i can see that an external ephemeral ip address will be used upon creation and this will be the ip address that i will be promoting to a static ip through the command line
447:58 so i'm going to go ahead and scroll down, click on done, and then scroll down and click on create
448:04 and once this instance is created then i can go ahead and promote the ephemeral external ip address
448:11 okay and the instance has been created along with its external ephemeral ip address and so now i can go ahead and promote this ephemeral ip address
448:21 so in order for me to do this i'm going to move back to my cloud shell and quickly clear my screen
448:28 and i'm going to use the command gcloud compute addresses create and then the name that we want to use for this static external ip address so i'm going to call this promoted-external
448:38 i'm going to use the flag --addresses and so here i will need the external ip address that i am promoting which is going to be 104.196.219.42
448:57 and so i'm going to copy this to my clipboard and paste it here in the command line
448:59 and now i'm going to add the --region flag along with the region of us-east1 and go ahead and hit enter
449:08 and success my ephemeral external ip address has been promoted to a static external ip address
449:14 and of course to verify it i'm going to simply type in the gcloud compute addresses list command and hit enter
449:22 and as expected here it is the promoted external ip of 104.196.219.42 marked as external in the us-east1 region and the status is marked as in use
449:40 and so i wanted to take a moment to congratulate you on making it through this demonstration of creating internal and external ip addresses as well as promoting them
449:52 so just as a recap you've created a static internal ip address in conjunction with creating a new instance and assigning it to that instance
450:01 you then created another instance, used an ephemeral ip and then promoted it to a static internal ip address
450:12 you then created an external static ip address using the console and assigned it to a brand new instance
450:18 you then created another instance using an external ephemeral ip address and promoted it to a static external ip address
450:28 and you did this all using both the console and the command line so i wanted to congratulate you on a great job
450:39 job now before we end this demonstration i wanted to go through the steps of
450:41 i wanted to go through the steps of cleaning up any leftover resources so
450:44 cleaning up any leftover resources so the first thing you want to do is delete
450:46 the first thing you want to do is delete these instances so you can select them
450:48 these instances so you can select them all and go up to the top and click on
450:50 all and go up to the top and click on delete it's going to ask you if you want
450:52 delete it's going to ask you if you want to delete the two instances yes we do
450:54 to delete the two instances yes we do click on delete and this will delete
450:56 click on delete and this will delete your instances and free up the external
450:59 your instances and free up the external ip addresses so that you're able to
451:01 ip addresses so that you're able to delete them and so now that the
451:03 delete them and so now that the instances have been deleted i'm going to
451:05 instances have been deleted i'm going to go over to the vpc network menu and i'm
451:09 go over to the vpc network menu and i'm going to head on over to the external ip
451:12 going to head on over to the external ip address console
451:13 and here i'm able to delete the external ip addresses and so i'm going to select all of them
451:20 and i'm going to go up to the top menu and click on release static address
451:23 and you should get a prompt asking you if you want to delete both these addresses the answer is yes click on delete
451:29 and within a few seconds these external ip addresses should be deleted
451:34 and so now all that's left to delete are the two static internal ip addresses
451:40 and as i said before because there is no console to be able to view any of these static internal ip addresses i have to do it through the command line
451:50 so i'm going to go back to my cloud shell i'm going to clear the screen and i'm going to list the ip addresses currently in my network
451:58 and so here they are promoted-static and static-internal
452:02 and so the command to delete any static ip addresses is as follows gcloud compute addresses delete the name of the ip address that i want to delete which is promoted-static
452:14 and then i will need the region flag and it'll be the region of us-east1
452:17 i'm going to go ahead and hit enter it's going to prompt me if i want to continue with this and i'm going to type y for yes hit enter
452:26 and success it has been deleted and so just as a double check i'm going to do a quick verification
452:32 and yes it has been deleted and so all that's left to delete is the static internal ip address
452:39 and so i'm going to paste in the command gcloud compute addresses delete the name of the ip address that i want to delete which is static-internal along with the region flag of us-east1
452:51 i'm going to go ahead and hit enter y for yes to continue
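the cleanup commands just described look like this in cloud shell, using the address names and region from this demo:

```shell
# delete each reserved static address by name -- each delete prompts
# for confirmation, or add --quiet to skip the prompt
gcloud compute addresses delete promoted-static --region=us-east1
gcloud compute addresses delete static-internal --region=us-east1

# verify that no static addresses are left reserved
gcloud compute addresses list
```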
452:56 and success and one last verification to make sure that it's all cleared up
453:01 and as you can see i have no more static ip addresses
453:03 and so this concludes this demonstration on creating assigning and deleting both static internal and static external ip addresses
453:16 and so again i wanted to congratulate you on a great job
453:18 and so that's pretty much all i wanted to cover in this demo on creating internal and external static ip addresses
453:25 so you can now mark this as complete and i'll see you in the next one
453:31 [Music]
453:35 welcome back
453:37 in this lesson i will be diving into some network security by introducing vpc firewall rules
453:43 a service used to filter incoming and outgoing network traffic based on a set of user-defined rules
453:52 a concept that you should be fairly familiar with for the exam and comes up extremely often when working as an engineer in google cloud
454:03 it is definitely an essential security layer that prevents unwanted access to your cloud infrastructure
454:11 now vpc firewall rules apply to a given project and network
454:17 and if you'd like you can also apply firewall rules across an organization but i will be sticking strictly to vpc firewall rules in this lesson
454:25 now vpc firewall rules let you allow or deny connections to or from your vm instances based on a configuration that you specify
454:39 and these rules apply to either incoming connections or outgoing connections but never both at the same time
454:44 enabled vpc firewall rules are always enforced regardless of the instance's configuration and operating system even if the instance has not started up
454:57 now every vpc network functions as a distributed firewall
455:03 when firewall rules are defined at the network level connections are allowed or denied on a per-instance basis
455:12 so you can think of vpc firewall rules as existing not only between your instances and other networks but also between individual instances within the same network
455:23 now when you create a vpc firewall rule you specify a vpc network and a set of components that define what the rule does
455:35 the components enable you to target certain types of traffic based on the traffic's protocol ports sources and destinations
455:43 when you create or modify a firewall rule you can specify the instances to which it is intended to apply by using the target component of the rule
455:56 now in addition to firewall rules that you create google cloud has other rules that can affect incoming or outgoing connections
456:05 so for instance google cloud doesn't allow certain ip protocols such as egress traffic on tcp port 25 within a vpc network
456:16 and protocols other than tcp udp icmp and gre to external ip addresses of google cloud resources are blocked
456:27 google cloud always allows communication between a vm instance and its corresponding metadata server at 169.254.169.254
456:45 and this server is essential to the operation of the instance
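as a quick sketch of what talking to the metadata server looks like from inside a vm -- the Metadata-Flavor header is required on every request:

```shell
# query the metadata server from inside a vm -- it answers regardless
# of any firewall rules you configure
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/name"

# the hostname metadata.google.internal resolves to 169.254.169.254
curl -H "Metadata-Flavor: Google" \
    "http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip"
```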
456:47 so the instance can access it regardless of any firewall rules that you configure
456:53 the metadata server provides some basic services to the instance like dhcp dns resolution instance metadata and network time protocol or ntp
457:03 now just as a note every network has two implied firewall rules that permit outgoing connections and block incoming connections
457:14 firewall rules that you create can override these implied rules
457:20 now the first implied rule is the allow egress rule and this is an egress rule whose action is allow and the destination is all ips
457:33 its priority is the lowest possible and it lets any instance send traffic to any destination except for traffic blocked by google cloud
457:41 the second implied firewall rule is the deny ingress rule and this is an ingress rule whose action is deny and the source is all ips
457:53 its priority is the lowest possible and it protects all instances by blocking incoming connections to them
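the two implied rules cannot be listed or deleted, but as a hedged sketch, if you wrote them out as ordinary rules at the lowest priority they would look roughly like this (the rule names here are made up, and this is illustrative only, not something you need to create):

```shell
# a conceptual equivalent of the implied allow-egress rule
gcloud compute firewall-rules create implied-allow-egress \
    --network=default --direction=EGRESS --action=ALLOW \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=65535

# a conceptual equivalent of the implied deny-ingress rule
gcloud compute firewall-rules create implied-deny-ingress \
    --network=default --direction=INGRESS --action=DENY \
    --rules=all --source-ranges=0.0.0.0/0 --priority=65535
```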
457:59 now i know we touched on this earlier on in a previous lesson but i felt the need to bring it up as these are pre-populated rules
458:10 and the rules that i'm referring to are with regards to the default vpc network
458:15 and as explained earlier these rules can be deleted or modified as necessary
458:20 the rules as you can see here in the table allow ingress connections from any source to any instance on the network when it comes to icmp rdp on port 3389 for windows remote desktop protocol and ssh on port 22
458:39 and as well the last rule allows ingress connections for all protocols and ports among instances in the network and it permits incoming connections to vm instances from others in the same network
458:56 and all of these have a rule priority of 65534 which is the second-to-lowest priority
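you can inspect these pre-populated rules yourself from cloud shell; a quick sketch:

```shell
# list the pre-populated firewall rules on the default network --
# the default-allow-* rules should all show a priority of 65534
gcloud compute firewall-rules list --filter="network=default"
```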
459:02 so breaking down firewall rules there are a few characteristics that google puts in place that help define these rules and the characteristics are as follows
459:13 each firewall rule applies to incoming or outgoing connections and not both
459:20 firewall rules only support ipv4 connections so when specifying a source for an ingress rule or a destination for an egress rule by address you can only use an ipv4 address or an ipv4 block in cidr notation
459:40 as well each firewall rule's action is either allow or deny you cannot have both at the same time
459:46 and the rule applies to connections as long as it is enforced so for example you can disable a rule for troubleshooting purposes and then enable it back again
459:57 now when you create a firewall rule you must select a vpc network
460:02 while the rule is enforced at the instance level its configuration is associated with a vpc network
460:07 this means you cannot share firewall rules among vpc networks including networks connected by vpc network peering or by using cloud vpn tunnels
460:23 another major thing to note about firewall rules is that they are stateful
460:28 and so that means when a connection is allowed through the firewall in either direction return traffic matching this connection is also allowed
460:36 you cannot configure a firewall rule to deny associated response traffic
460:43 return traffic must match the five-tuple of the accepted request traffic but with the source and destination addresses and ports reversed
460:54 so just as a note for those who may be wondering what a five-tuple is i was referring to the set of five different values that comprise a tcp/ip connection and this would be source ip destination ip source port destination port and protocol
461:11 google cloud associates incoming packets with corresponding outbound packets by using a connection tracking table
461:20 google cloud implements connection tracking regardless of whether the protocol supports connections
461:27 if a connection is allowed between a source and a target or between a target and a destination all response traffic is allowed as long as the firewall's connection tracking state is active
461:41 and as well as a note a firewall rule's tracking state is considered active if at least one packet is sent every 10 minutes
461:50 now along with the multiple characteristics that make up a firewall rule there are also firewall rule components that go along with it
461:57 here i have a screenshot from the console with the configuration components of a firewall rule and i wanted to take a moment to highlight these components for better clarity
462:09 so now the first component is the network and this is the vpc network that you want the firewall rule to apply to
462:17 the next one is priority which we discussed earlier and this is the numerical priority which determines whether the rule is applied as only the highest priority rule whose other components match traffic is applied
462:33 and remember the lower the number the higher the priority the higher the number the lower the priority
462:39 now the next component is the direction of traffic and this is where ingress rules apply to incoming connections from specified sources to google cloud targets
462:59 and egress rules apply to connections going to specified destinations from targets
463:04 and the next one up is action on match and this component either allows or denies which determines whether the rule permits or blocks the connection
463:16 now a target is what defines which instances the rule applies to and you can specify a target by using one of the following three options
463:27 the first option is all instances in the network and this is the firewall rule that does exactly what it says it applies to all the instances in the network
463:38 the second option is instances by target tags and this is where the firewall rule applies only to instances with a matching network tag
463:47 and so i know i haven't explained it earlier but a network tag is simply a character string added to a tags field in a resource
463:54 so let's say i had a bunch of instances that were considered development i can simply throw a network tag on them using a network tag of dev
464:04 and apply the necessary firewall rule for all the development servers holding the network tag dev
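the dev-tag scenario just described can be sketched on the command line; the instance name, zone, and rule name here are assumptions for illustration:

```shell
# tag an existing instance as a development server
gcloud compute instances add-tags dev-server-1 \
    --tags=dev --zone=us-east1-b

# allow ssh only to instances carrying the dev network tag
gcloud compute firewall-rules create dev-allow-ssh \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=dev
```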
464:12 and so the third option is instances by target service accounts and this is where the firewall rule applies only to instances that use a specific service account
464:26 and so the next component is the source filter and this is a source for ingress rules or a destination for egress rules
464:34 the source parameter is only applicable to ingress rules and it must be one of the following three selections
464:39 source ip ranges and this is where you specify ranges of ip addresses as sources for packets either inside or outside of google cloud
464:50 the second one is source tags and this is where the source instances are identified by a matching network tag
464:58 and source service accounts where source instances are identified by the service accounts they use
465:05 you can also use service accounts to create firewall rules that are a bit more granular
465:10 and so one of the last components of the firewall rule is the protocols and ports
465:18 you can specify a protocol or a combination of protocols and their ports
465:21 if you omit both protocols and ports the firewall rule is applicable for all traffic on any protocol and any port
465:30 and so when it comes to the enforcement status of the firewall rule there is a drop-down right underneath all the components where you can enable or disable the enforcement
465:44 and as i said before this is a great way to enable or disable a firewall rule without having to delete it and is great for troubleshooting or to grant temporary access to any instances
465:58 and unless you specify otherwise all firewall rules are enabled when they are created but you can also choose to create a rule in a disabled state
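a quick sketch of creating a rule disabled and toggling its enforcement later -- the rule name here is an assumption:

```shell
# create a rule in a disabled state -- it exists but is not enforced
gcloud compute firewall-rules create temp-allow-icmp \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=icmp --source-ranges=0.0.0.0/0 --disabled

# flip enforcement on and off without deleting the rule
gcloud compute firewall-rules update temp-allow-icmp --no-disabled
gcloud compute firewall-rules update temp-allow-icmp --disabled
```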
466:09 and so this covers vpc firewall rules in their entirety
466:12 and i will be showing you how to implement vpc firewall rules along with building a custom vpc custom routes and even private google access all together in a demo following this lesson to give you some hands-on skills of putting it all into practice
466:34 and so that's pretty much all i wanted to cover when it comes to vpc firewall rules
466:39 so you can now mark this lesson as complete and let's move on to the next one where we dive in and build our custom vpc
466:47 so now is a perfect time to grab a coffee or tea and whenever you're ready join me in the console
466:58 welcome back in this demonstration i want to take all the concepts that we've learned so far in this networking section and put it all into practice
467:06 this diagram shown here is the architecture of exactly what we will be building in this demo
467:15 we're going to start by creating a custom vpc and then we're going to create two subnets one public and one private in two separate regions
467:26 we're then going to create a cloud storage bucket with some objects in it
467:31 and then we will create some instances to demonstrate access to cloud storage as well as communication between instances
467:38 and finally we're going to create some firewall rules for routing traffic to all the right places
467:45 we're also going to implement private google access and demonstrate accessibility to the files in cloud storage from the private instance without an external ip
467:57 so this may be a little bit out of the comfort zone for some of you but don't worry i'll be with you every step of the way
468:03 and other than creating the instances all the steps here have been covered in previous lessons
468:10 now there's a lot to get done here so whenever you're ready join me in the console
468:16 console and so here we are back in the console and as you can see up here in
468:18 console and as you can see up here in the right hand corner i am logged in as
468:21 the right hand corner i am logged in as tony bowtie ace gmail.com and currently
468:24 tony bowtie ace gmail.com and currently i am logged in under project tony and so
468:27 i am logged in under project tony and so in order to start off on a clean slate
468:29 in order to start off on a clean slate i'm going to create a new project and so
468:32 i'm going to create a new project and so i'm going to simply click on the project
468:34 i'm going to simply click on the project menu drop-down and click on new project
468:37 menu drop-down and click on new project i'm going to call this project
468:39 i'm going to call this project bowtie inc and i don't have any
468:41 bowtie inc and i don't have any organizations so i'm going to simply
468:43 organizations so i'm going to simply click on create and as well for those of
468:46 click on create and as well for those of you doing this lesson i would also
468:48 you doing this lesson i would also recommend for you to create a brand new
468:51 recommend for you to create a brand new project so that you can start off anew
468:53 project so that you can start off anew again i'm going to go over to the
468:55 again i'm going to go over to the project drop down and i'm going to
468:57 project drop down and i'm going to select bow tie ink as the project and
468:59 select bow tie ink as the project and now that i have a fresh new project i
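If you prefer the command line, the same project setup can be sketched with gcloud (the project ID below is an assumption for illustration; project IDs must be globally unique, so yours will differ):

```shell
# Create a new project (the ID is hypothetical; IDs must be globally unique)
gcloud projects create bowtie-inc-demo --name="bowtie inc"

# Point subsequent gcloud commands at the new project
gcloud config set project bowtie-inc-demo
```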
469:01 Now that I have a fresh new project, I can create my VPC network. I'm going to go over to the navigation menu in the top-left corner and scroll down to VPC Network. Because VPC networks are tied to the Compute Engine API, we need to enable it before we can create any VPC networks, so you can go ahead and enable this API. Once the API has finished enabling, we'll be able to create our VPC network.
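Enabling an API from the command line is a one-liner. As a sketch, this also covers the Cloud DNS API, which we turn on a bit later in this demo:

```shell
# Enable the Compute Engine API (required before creating VPC networks)
gcloud services enable compute.googleapis.com

# The Cloud DNS API, enabled later in this demo, works the same way
gcloud services enable dns.googleapis.com
```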
469:31 Okay, the API has been enabled, and as you can see, the default VPC network has been created with a subnet in every region, along with its corresponding IP address ranges. For this demo we're going to create a brand new VPC network along with some custom subnets, so I'm going to go up to the top and click on Create VPC Network. Here I'm prompted with some fields to fill out.
470:00 Under Name, I'm going to think of a creative name for my VPC network, so I'm going to simply call it custom. Under Description, I'll put custom VPC network. Moving down to Subnets: because I'm creating custom subnets, I'm going to keep the subnet creation mode set to Custom. I'm going to need a public subnet and a private subnet, and you'll be able to get the values from the text file in the GitHub repository, within the subnetworks folder under Networking Services. I'm going to create my public subnet first and simply call it public. For Region I'm going to use us-east1, and the IP address range will be 10.0.0.0/24. I'm going to leave Private Google Access off and simply click on Done.
470:56 Now I can create the private subnet. Underneath the public subnet you'll see Add Subnet; you can simply click on that. The name of the new subnet will be, as you guessed it, private. Under Region I'm going to use us-east4, and for the IP address range, be sure to use 10.0.5.0/24. We're going to leave Private Google Access off for now; we'll be turning that on a little later in the demo. You can now click on Done. Before we click on Create, we want to enable the DNS API: clicking on Enable will bring you to the DNS API home page, and you can click Enable to enable the API. Okay, so now that we have our network configured, along with our public and private subnets, and DNS enabled, we can now simply click Create.
471:48 But before I do that, I wanted to give you some insight into the command line. As I've shared before, everything that can be done in the console can be done through the command line. So if you ever want to do that, or want to get to know the command line a little better: after filling out all the fields for creating a resource in the console, you will be given a command line link that you can simply click on, and it shows you all the commands to create the same resources, with the same preferences, through the command line. I will be providing these commands in the lesson text so that you can familiarize yourself with the commands used to build networks from the command line, but this is a great reference for you to use at any time. So I'm going to click on Close, and now I'm going to click on Create.
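As a sketch of what that command line link shows, creating this network and its two subnets with gcloud would look roughly like the following (names, regions, and ranges match this demo; treat it as an approximation of the console-generated commands, not a verbatim copy):

```shell
# Create the custom-mode VPC network
gcloud compute networks create custom \
    --subnet-mode=custom \
    --description="custom vpc network"

# Public subnet in us-east1
gcloud compute networks subnets create public \
    --network=custom \
    --region=us-east1 \
    --range=10.0.0.0/24

# Private subnet in us-east4 (Private Google Access stays off for now)
gcloud compute networks subnets create private \
    --network=custom \
    --region=us-east4 \
    --range=10.0.5.0/24
```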
472:44 Within a minute or two the custom VPC network will be created and ready to use. Okay, the custom VPC network has been created along with its public and private subnets. Just to get a little more insight into this custom VPC network, I'm going to drill down into it. As you can see here, the subnets are respectively labeled private and public, along with their region, IP address range, gateway, and Private Google Access setting. The routes you see here are the system-generated routes I discussed in an earlier lesson: there are subnet routes to each subnet's respective IP range, along with the default route providing a path to the internet as well as a path for Private Google Access. Now, we don't have any firewall rules here yet, but we'll be adding those in just a few minutes.
473:36 but we'll be adding those in just a few minutes and so now that you've created
473:38 minutes and so now that you've created the vpc network with its respective
473:41 the vpc network with its respective subnets we're going to head on over to
473:43 subnets we're going to head on over to cloud storage and create a bucket along
473:46 cloud storage and create a bucket along with uploading the necessary files so
473:48 with uploading the necessary files so i'm going to go again over to the
473:50 i'm going to go again over to the navigation menu
473:51 navigation menu and i'm going to scroll down to storage
473:54 and i'm going to scroll down to storage and so as expected there are no buckets
473:57 and so as expected there are no buckets present here in cloud storage and so
473:59 present here in cloud storage and so we're just going to go ahead and create
474:01 we're just going to go ahead and create our first bucket by going up here to the
474:04 our first bucket by going up here to the top menu and clicking on create bucket
474:07 top menu and clicking on create bucket and so here i've been prompted to name
474:09 and so here i've been prompted to name my bucket and for those of you who are
474:12 my bucket and for those of you who are here for the first time when it comes to
474:14 here for the first time when it comes to naming a storage bucket the name needs
474:17 naming a storage bucket the name needs to be globally unique and this means
474:19 to be globally unique and this means that the name has to be unique across
474:22 that the name has to be unique across all of the google cloud platform now
474:25 all of the google cloud platform now don't worry i'm going to get into
474:26 don't worry i'm going to get into further detail with this in the cloud
474:29 further detail with this in the cloud storage lesson with all of these
474:31 storage lesson with all of these specific details when it comes to names
474:34 specific details when it comes to names storage classes and permissions and so
474:37 storage classes and permissions and so in the meantime you can come up with a
474:39 in the meantime you can come up with a name for your bucket something that
474:41 name for your bucket something that resonates with you and so for me i'm
474:43 resonates with you and so for me i'm going to name my bucket bowtie inc dash
474:46 going to name my bucket bowtie inc dash file dash access and so now i'm going to
474:48 file dash access and so now i'm going to simply click continue and so just as a
474:50 simply click continue and so just as a note for those who are unable to
474:52 note for those who are unable to continue through it is because the name
474:55 continue through it is because the name for your bucket is not globally unique
474:57 for your bucket is not globally unique so do try to find one that is now when
475:00 so do try to find one that is now when it comes to location type i'm just going
475:02 it comes to location type i'm just going to click on region and you can keep the
475:04 to click on region and you can keep the default location as used one and i'm
475:07 default location as used one and i'm going to leave all the other options as
475:09 going to leave all the other options as default and i'm going to go down to the
475:11 default and i'm going to go down to the bottom and click create and so for those
475:14 For those of you who have created your bucket, you can now upload the files, which can be found in the GitHub repository in the cloud storage bucket folder under Networking Services. I'm going to click on Upload Files. Under the Networking Services section, under cloud storage bucket, you will find three JPEG files; you can simply select them and click Open, and they are now uploaded into the bucket. So now I'm ready to move on to the next step.
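Equivalently, the bucket and uploads can be sketched with gsutil (the bucket name is the one from this demo and must be globally unique, so yours will differ; run the copy from wherever you cloned the repository):

```shell
# Create a regional bucket in us-east1 (name must be globally unique)
gsutil mb -l us-east1 gs://bowtie-inc-file-access/

# Upload the three JPEG files from the repository folder
gsutil cp *.jpg gs://bowtie-inc-file-access/

# Confirm the objects landed in the bucket
gsutil ls gs://bowtie-inc-file-access/
```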
475:45 So you should now have created the VPC network with a private and public subnet, created your own bucket in Cloud Storage, and uploaded the three JPEG files. Now that this is done, we can create the instances that will have access to these files. Again, I'll go over to the navigation menu in the top-left corner, scroll down to Compute Engine, and here I will click on Create.
476:14 Again I'll be prompted with some fields to fill out, and for this instance I'm going to first create the public instance. Getting really creative again, I'll call it public-instance. Under Labels I'm going to add a label: under Key I'll type environment, and under Value, public; then I'll go down to the bottom and click Save. Under Region I'm going to select us-east1, and you can leave the zone as us-east1-b. Moving down to Machine Type, I'm going to select e2-micro, just because I'm being cost-conscious and want to keep the cost down.
476:57 keep the cost down and so i'm going to scroll down to identity and api access
477:00 scroll down to identity and api access and under service account you should
477:02 and under service account you should have the compute engine default service
477:04 have the compute engine default service account already pre-selected now under
477:07 account already pre-selected now under access scopes i want to be able to have
477:09 access scopes i want to be able to have the proper permissions to be able to
477:12 the proper permissions to be able to read and write to cloud storage along
477:15 read and write to cloud storage along with read and write access to compute
477:17 with read and write access to compute engine and so you can click on set
477:19 engine and so you can click on set access for each api and you can scroll
477:22 access for each api and you can scroll down to compute engine
477:24 down to compute engine click on the drop down menu and select
477:26 click on the drop down menu and select read write and this will give the public
477:29 read write and this will give the public instance the specific access that it
477:31 instance the specific access that it needs to ssh into the private instance
477:34 needs to ssh into the private instance and so now i'm going to set the access
477:36 and so now i'm going to set the access for cloud storage so i'm going to scroll
477:38 for cloud storage so i'm going to scroll down to storage i'm going to click on
477:40 down to storage i'm going to click on the drop down menu and select read write
477:43 the drop down menu and select read write and this will give the instance read
477:45 and this will give the instance read write access to cloud storage scrolling
477:47 Scrolling down a little further, I'm going to click on Management, Security, Disks, Networking, Sole Tenancy, then scroll up a bit and click on the Networking tab, which presents a bunch of options you can configure for the instance's networking. Under Network Tags I want to type in public and hit Enter. You can then scroll down to Network Interfaces and click on the current interface, which is the default; this opens up all your options. Under Network, click the drop-down and change it from default to custom. The public subnet will automatically be populated, so you can leave it as is, and you also want to make sure that both your primary internal IP and your external IP are set to Ephemeral. Leave all the other options as default and simply click Done. Again, before clicking Create, you can click the command line link and it will show you all the commands needed to create this instance through the command line; I'm going to go ahead and close this. With all the other options left as default, I'm going to click Create.
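As a rough gcloud equivalent of what we just clicked through (the console's command line link is the authoritative version; the scope aliases and defaults here are a sketch):

```shell
# Public instance: custom network's public subnet, ephemeral external IP,
# read/write scopes for Compute Engine and Cloud Storage
gcloud compute instances create public-instance \
    --zone=us-east1-b \
    --machine-type=e2-micro \
    --subnet=public \
    --tags=public \
    --labels=environment=public \
    --scopes=compute-rw,storage-rw
```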
479:00 Now that my public instance is being created, I'm going to create my private instance using the same steps as the last one. I'll click Create Instance here at the top, and the first thing I'm prompted for is the name of the instance: I'm going to call this one private-instance. Here I'll add a label, the key being environment and the value being private, then go down to the bottom and click Save. Under Region I'm going to select us-east4, and you can keep the default selected zone. Under Machine Type, we're going to select e2-micro. Again scrolling down to Identity and API Access, under the access scopes for the default service account, I'm going to click Set Access For Each API, scroll down to Storage, click the drop-down menu, and select Read Write. For the last step, I'm going to go into the Networking tab under Management, Security, Disks, Networking, Sole Tenancy. Under Network Tags I'm going to give this instance a tag of private, and under Network Interfaces we want to edit this and change it from default to the custom network; as expected, it selected the private subnet by default. Because this is going to be a private instance, we are not going to give it an external IP, so I'm going to click on the drop-down and select None. With all the other options left as default, I'm going to simply click Create.
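A gcloud sketch of the private instance follows the same shape; the key difference is `--no-address`, which creates it without an external IP (the zone below is an assumption, since the demo keeps whatever default zone the console selected in us-east4):

```shell
# Private instance: private subnet, no external IP (--no-address),
# storage read/write scope only
gcloud compute instances create private-instance \
    --zone=us-east4-a \
    --machine-type=e2-micro \
    --subnet=private \
    --no-address \
    --tags=private \
    --labels=environment=private \
    --scopes=storage-rw
```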
480:37 This will create my private instance alongside my public instance. Just as a recap: we've created a new custom VPC network along with a private and a public subnet; we've created a storage bucket and added some files to it to be accessed; and we've created a private and a public instance, assigning the service account on the public instance read/write access to both Compute Engine and Cloud Storage, along with a public IP address, and assigning the service account on the private instance read/write access only to Cloud Storage, with no public IP. And so this is the end of part one of this demo. This would be a great opportunity for you to get up and have a stretch, get yourself a coffee or tea, and whenever you're ready, you can join me in part two, where we will be starting immediately from the end of part one. You can go ahead and complete this video, and I will see you in part two.
481:40 [Music]
481:44 Welcome back. This is part two of the custom VPC demo, and we will be starting exactly where we left off in part one. With that being said, let's dive in.
481:56 being said let's dive in and so now the last thing that needs to be done is to
481:58 last thing that needs to be done is to simply create some firewall rules and so
482:01 simply create some firewall rules and so with these firewall rules this will give
482:04 with these firewall rules this will give me ssh access into the public instance
482:07 me ssh access into the public instance as well as allowing private
482:09 as well as allowing private communication from the public instance
482:12 communication from the public instance to the private instance as well as
482:15 to the private instance as well as giving ssh access
482:17 giving ssh access from the public instance to the private
482:19 from the public instance to the private instance and this will allow us to
482:21 instance and this will allow us to access the files in the bucket from the
482:24 access the files in the bucket from the private instance and so in order to
482:26 private instance and so in order to create these firewall rules i need to go
482:29 create these firewall rules i need to go back to my vpc network so i'm going to
482:31 back to my vpc network so i'm going to go up to the left hand corner again to
482:33 go up to the left hand corner again to the navigation menu
482:35 the navigation menu and scroll down to vpc network over here
482:38 and scroll down to vpc network over here on the left hand menu you'll see
482:40 on the left hand menu you'll see firewall i'm going to click on that
482:43 firewall i'm going to click on that and here you will see all the default
482:45 and here you will see all the default firewall rules for the default network
482:48 firewall rules for the default network so for us to create some new ones for
482:50 so for us to create some new ones for the custom vpc i'm going to go up here
482:52 the custom vpc i'm going to go up here to the top and click on create firewall
482:55 to the top and click on create firewall and so the first rule i want to create
482:58 and so the first rule i want to create is for my public instance and i want to
483:00 is for my public instance and i want to give it public access as well as ssh
483:03 give it public access as well as ssh access and so i'm going to name this
483:05 access and so i'm going to name this accordingly as public dash access i'm
483:08 accordingly as public dash access i'm going to give this the same description
483:10 going to give this the same description always a good idea to turn on logs but
483:13 always a good idea to turn on logs but for this demonstration i'm going to keep
483:15 for this demonstration i'm going to keep them off under network i'm going to
483:17 them off under network i'm going to select the custom network i'm going to
483:19 select the custom network i'm going to keep the priority at 1000 the direction
483:22 keep the priority at 1000 the direction of traffic will be ingress and the
483:25 of traffic will be ingress and the action on match will be allow and so
483:27 action on match will be allow and so here is where the target tags come into
483:29 here is where the target tags come into play when it comes to giving access to
483:32 play when it comes to giving access to the network so targets we're going to
483:34 the network so targets we're going to keep it as specified target tags
483:36 keep it as specified target tags and under target tags you can simply
483:39 and under target tags you can simply type in public
483:40 type in public under source filter you can keep it
483:42 under source filter you can keep it under ip ranges
483:44 under ip ranges and the source ip range will be 0.0.0.0
483:49 and the source ip range will be 0.0.0.0 forward slash 0. and we're not going to
483:51 forward slash 0. and we're not going to add a second source filter here so
483:54 add a second source filter here so moving down to protocols and ports under
483:56 moving down to protocols and ports under tcp i'm going to click that off and add
483:59 tcp i'm going to click that off and add in port 22. and because i want to be
484:01 in port 22. and because i want to be able to ping the instance i'm going to
484:04 able to ping the instance i'm going to have to add another protocol which is
484:06 have to add another protocol which is icmp
484:08 icmp and again as explained earlier the
484:10 and again as explained earlier the disable rule link will bring up the
484:12 disable rule link will bring up the enforcement and as you can see it is
484:15 enforcement and as you can see it is enabled but if you wanted to create any
484:17 enabled but if you wanted to create any firewall rules in future and have them
484:20 firewall rules in future and have them disabled you can do that right here but
484:22 disabled you can do that right here but we're gonna keep this enabled and we're
484:24 we're gonna keep this enabled and we're gonna simply click on create and this
484:27 gonna simply click on create and this will create the public firewall rule for
484:29 will create the public firewall rule for our public instance in our custom vpc
484:32 our public instance in our custom vpc network and so we're going to now go
484:34 network and so we're going to now go ahead and create the private firewall
484:37 ahead and create the private firewall rule and so i'm going to name this
484:39 rule and so i'm going to name this private dash access
484:41 private dash access respectively i'm going to put the
484:43 respectively i'm going to put the description as the same under network
484:46 description as the same under network i'm going to select our custom network
484:48 i'm going to select our custom network keep the priority at 1000 direction of
484:51 keep the priority at 1000 direction of traffic should be at ingress and the
484:53 traffic should be at ingress and the action on match should be allow for
484:55 action on match should be allow for target tags you can type in private and
484:58 target tags you can type in private and then hit enter and because i want to be
485:00 then hit enter and because i want to be able to reach the private instance from
485:02 able to reach the private instance from the public instance the source ip range
485:05 the public instance the source ip range will be
485:06 will be 10.0.0.1
485:08 10.0.0.1 forward slash 24. we're not going to add
485:11 forward slash 24. we're not going to add a second source filter and under
485:13 a second source filter and under protocols and ports we're going to
485:15 protocols and ports we're going to simply add tcp port 22
485:19 simply add tcp port 22 and again i want to add icmp
485:22 and again i want to add icmp so that i'm able to ping the instance
485:24 so that i'm able to ping the instance and i'm going to click on create
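the same two rules built in the console above can also be created from the command line. this is a hedged sketch, not part of the demo itself: the network name, rule names, and tags below are assumed from the demo and should be adjusted to match your own project.

```shell
# Hypothetical gcloud equivalents of the two console-built firewall rules.
# Network name, rule names, and tags are assumed from the demo; adjust to yours.
gcloud compute firewall-rules create public-access \
    --network=custom-network \
    --direction=INGRESS --action=ALLOW --priority=1000 \
    --target-tags=public \
    --source-ranges=0.0.0.0/0 \
    --rules=tcp:22,icmp

gcloud compute firewall-rules create private-access \
    --network=custom-network \
    --direction=INGRESS --action=ALLOW --priority=1000 \
    --target-tags=private \
    --source-ranges=10.0.0.0/24 \
    --rules=tcp:22,icmp
```

note how each flag maps to a console field: target tags, source filter ip ranges, and the protocols and ports list.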
485:28 and so we now have our two firewall
485:30 rules private access and public access
485:34 and if i go over to the custom vpc
485:36 network and i drill into it
485:39 i'll be able to see these selective
485:41 firewall rules under the respective
485:43 firewall rules tab and so now that we've
485:46 created our vpc network along with the
485:49 public and private subnet we've created
485:52 the cloud storage bucket with the files
485:54 that we need to access the instances
485:56 that will access those files along with
485:59 the firewall rules that will allow the
486:02 proper communication we can now go ahead
486:04 to test everything that we built and
486:07 make sure that everything is working as
486:09 expected so let's kick things off by
486:12 first logging into the public instance
486:14 so you can head on over to the
486:15 navigation menu and scroll down to
486:18 compute engine
486:20 and you can ssh into the public instance
486:24 by clicking on ssh under connect
486:26 and this should open up a new tab or a
486:29 new window logging you in with your
486:31 currently authenticated credentials okay
486:34 and we are logged into our instance and
486:37 i'm going to zoom in for better viewing
486:39 and so just to make sure that everything
486:41 is working as expected we know that our
486:43 firewall rule is correct because we are
486:46 able to ssh into the instance and now i
486:49 want to see if i have access to my files
486:51 in the bucket and so in order to do that
486:54 i'm going to run the gsutil command ls
486:57 for list and then gs colon forward slash
487:00 forward slash along with my bucket name
487:02 which is bow tie inc
487:09 hyphen file hyphen access and i'm going to hit enter and as you can see i have
487:11 access to all the files in the bucket
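typed out, the listing command from the demo looks like the sketch below. the exact bucket name is an assumption spelled out from the spoken "bow tie inc hyphen file hyphen access"; bucket names are globally unique, so yours will differ.

```shell
# List the objects in the demo bucket (bucket name assumed from the demo;
# substitute your own globally unique bucket name)
gsutil ls gs://bowtie-inc-file-access
```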
487:13 access to all the files in the bucket and the last thing i wanted to check is
487:15 and the last thing i wanted to check is if i can ping the private instance so
487:18 if i can ping the private instance so i'm going to first clear my screen and
487:20 i'm going to first clear my screen and i'm going to head on over back to the
487:21 i'm going to head on over back to the console i'm going to copy the ip address
487:23 console i'm going to copy the ip address of the private instance to my clipboard
487:26 of the private instance to my clipboard and then i'm going to head back on over
487:28 and then i'm going to head back on over to my terminal and i'm going to type in
487:30 to my terminal and i'm going to type in ping i'm going to paste the ip address
487:32 ping i'm going to paste the ip address and success
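as a sketch, the ping test looks like this. the internal ip shown is assumed for illustration (use the address you copied from the console), and the `-c 4` flag makes ping stop after four packets instead of waiting for ctrl c.

```shell
# Ping the private instance's internal IP (address assumed for illustration);
# -c 4 sends four echo requests and then exits on its own
ping -c 4 10.0.1.5
```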
487:34 i am able to successfully ping the
487:36 private instance
487:38 from the public instance
487:40 using the icmp protocol and you can hit
487:42 control c to stop the ping so now that i
487:45 know that my public instance has the
487:47 proper permissions to reach cloud
487:49 storage
487:50 as well as being able to ping my private
487:53 instance i want to be able to check if i
487:55 can ssh
487:57 into the private instance from my public
487:59 instance and so i'm going to first clear
488:01 my screen and next i'm going to paste in
488:04 this command in order for me to ssh into
488:07 the private instance gcloud compute ssh
488:11 dash dash project and my project name
488:14 which is bow tie inc dash dash zone and
488:17 the zone that my instance is in which is
488:20 us east 4c along with the name of the
488:23 instance which is private dash instance
488:26 and along with the flag dash dash
488:28 internal dash ip stating that i am using
488:32 the internal ip in order to ssh into the
488:35 instance and i'm going to hit enter and
488:37 so now i've been prompted for a
488:39 passphrase in order to secure my rsa key
488:43 pair as one is being generated to log
488:45 into the private instance now it's
488:48 always good practice when it comes to
488:50 security to secure your key pair with a
488:53 passphrase but for this demo i'm just
488:56 going to leave it blank
488:58 and so i'm just going to hit enter
489:01 i'm going to hit enter again
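written out, the ssh command the demo dictates looks roughly like this. the project id, zone, and instance name below are assumptions based on the demo (a real project id cannot contain spaces, so "bow tie inc" is rendered here as a hypothetical id); substitute your own values.

```shell
# SSH to the private instance over its internal IP from inside the VPC;
# project ID, zone, and instance name are assumed from the demo
gcloud compute ssh private-instance \
    --project=bowtie-inc \
    --zone=us-east4-c \
    --internal-ip
```

the dash dash internal dash ip flag spoken in the demo is `--internal-ip`, which tells gcloud to connect to the instance's internal address rather than an external one.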
489:02 now i don't want to get too deep into it
489:05 but i did want to give you some context
489:07 on what's happening here so when you log
489:09 into an instance on google cloud with os
489:13 login disabled google manages the
489:15 authorized keys file
489:17 for new user accounts based on ssh keys
489:21 in metadata and so the keys that are
489:23 being generated that are being used for
489:25 the first time are currently being
489:28 stored within the instance metadata so
489:30 now that i'm logged into my private
489:32 instance i'm going to quickly clear my
489:34 screen and just as a note you'll be able
489:37 to know whether or not you're logged
489:38 into your private instance by looking
489:41 here at your prompt and so now i want to
489:43 make sure that i can ping my public
489:45 instance so i'm going to quickly type
489:47 the ping command i'm going to head on
489:49 over to the console i'm going to grab
489:51 the ip address of the public instance
489:53 i'm going to go back to my terminal and
489:55 paste it in and as expected i'm able to
489:58 ping my public instance from my private
490:00 instance i'm just going to go ahead and
490:02 hit control c to stop and i'm going to
490:04 clear the screen so now we'd like to
490:06 verify whether or not we have access to
490:10 the files in the cloud storage bucket
490:12 that we created earlier
490:14 and so now i'm going to use the same
490:15 command
490:16 that i used in the public instance to
490:19 list all the files in the cloud storage
490:21 bucket so i'm going to use the gsutil
490:23 command ls for list along with gs colon
490:27 forward slash forward slash and the
490:30 bucket name which is bow tie inc hyphen
490:32 file hyphen access and i'm going to hit
490:35 enter
490:37 and as you can see here i'm not getting
490:39 a response and the command is hanging
490:42 and this is due to the fact that
490:44 external access is needed in order to
490:47 reach cloud storage and this instance
490:49 only has an internal or private ip so
490:53 accessing the files in the cloud storage
490:55 bucket is not possible now in order to
490:58 access cloud storage and the set of
491:00 external ip addresses used by google
491:03 apis and services we can do this by
491:06 enabling private google access on the
491:08 subnet used by the vm's network interface
491:12 and so we're going to go ahead and do
491:13 that right now so i'm going to hit
491:15 control c to stop and i'm going to go
491:17 back into the console i'm going to go to
491:19 the navigation menu and i'm going to
491:21 scroll down to vpc network
491:24 and then i'm going to drill down into
491:25 the private subnet and i'm going to edit
491:28 it under private google access i'm going
491:30 to turn it on and i'm going to go down
491:32 to the bottom and click on save and by
491:34 giving this subnet private google access
491:37 i will allow the private instance and
491:39 any instances with private ip addresses
491:42 to access any public apis
491:45 such as cloud storage so now when i go
491:48 back to my instance i'm going to clear
491:50 the screen here and i'm going to run the
491:52 gsutil command again
491:56 and success
491:57 we are now able to access cloud storage
492:00 due to enabling private google access on
492:03 the respective private subnet
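the console toggle used above also has a command-line equivalent. this is a sketch only: the subnet name and region below are assumed from the demo, so substitute the names of your own private subnet and its region.

```shell
# Enable Private Google Access on the private subnet so VMs with only
# internal IPs can reach Google APIs such as Cloud Storage
# (subnet name and region assumed from the demo)
gcloud compute networks subnets update private-subnet \
    --region=us-east4 \
    --enable-private-ip-google-access
```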
492:05 so i first wanted to congratulate you on
492:08 making it to the end of this demo
492:10 and hope that this demo has been
492:12 extremely useful as this is a real life
492:15 scenario that can come up and so just as
492:17 a recap you've created a custom network
492:20 with two custom subnets you've created a
492:22 cloud storage bucket and uploaded some
492:25 files to it you've created a public
492:27 instance and a private instance and then
492:30 created some firewall rules to route the
492:32 traffic you then tested it all by using
492:35 the command line for communication you
492:38 also enabled private google access for
492:40 the instance with only the internal ip
492:43 to access google's public apis so that
492:47 it can access cloud storage and so again
492:50 fantastic job on your part as this was a
492:52 pretty complex demo
492:54 and you can expect things like what
492:56 you've experienced in this demo to pop
492:58 up in your role of being a cloud
493:00 engineer at any time so before you go be
493:04 sure to delete all the resources you've
493:06 created
493:07 and again congrats on the great job so
493:10 you can now mark this as complete and
493:12 i'll see you in the next one
493:20 welcome back in this lesson i will be going over vpc network peering and how
493:23 you can privately communicate across
493:26 vpcs in the same or different
493:29 organization vpc network peering and vpc
493:32 peering are used interchangeably in this
493:35 lesson as they are used to communicate
493:37 the same thing now for instances in one
493:40 vpc to communicate with an instance in
493:43 another vpc they would route traffic via
493:46 the public internet however to
493:48 communicate privately between two vpcs
493:52 google cloud offers a service called vpc
493:55 peering and i will be going through the
493:57 theory and concepts of vpc peering
494:00 throughout this lesson so with that
494:02 being said let's dive in
494:05 now vpc peering enables you to peer vpc
494:08 networks
494:09 so that workloads in different vpc
494:11 networks can communicate in a private
494:14 space that follows the rfc
494:17 1918 standard thus allowing private
494:20 connectivity across two vpc networks
494:24 traffic stays within google's network
494:26 and never traverses the public internet
494:29 vpc peering gives you the flexibility of
494:32 peering networks that are of the same or
494:35 different projects along with being able
494:38 to peer with other networks in different
494:40 organizations vpc peering also gives you
494:44 several advantages over using external
494:47 ip addresses or vpns to connect the
494:50 first one is reducing network latency as
494:54 all peering traffic stays within
494:56 google's high-speed network vpc peering
494:59 also offers greater network security as
495:02 you don't need to have services exposed
495:04 to the public internet and deal with
495:06 greater risks of having your traffic
495:09 getting compromised or if you're trying
495:11 to achieve compliance standards for your
495:13 organization vpc peering will allow you
495:16 to achieve the standards that you need
495:19 and finally vpc network peering reduces
495:22 network costs as you save on egress
495:24 costs for traffic leaving gcp so in a
495:27 regular network google charges you for
495:30 traffic communicating using public ips
495:33 even if the traffic is within the same
495:35 zone now you can bypass this and save
495:38 money by using internal ips to
495:41 communicate and keeping the traffic
495:43 within the gcp network
495:46 now there are certain properties or
495:47 characteristics that peered vpcs follow
495:51 and i wanted to point these out for
495:53 better understanding first off peered vpc
495:56 networks remain administratively
495:58 separate so what exactly does this mean
496:01 well it means that routes firewalls vpns
496:05 and other traffic management tools are
496:07 administered and applied separately in
496:10 each of the vpc networks so this applies
496:13 to each vpc independently which also
496:17 means that each side of a peering
496:19 association
496:20 is set up independently as well so when
496:23 you connect one vpc to the other you
496:26 have to go into each vpc that you are
496:28 connecting to both initiate and
496:30 establish the connection peering becomes
496:33 active only when the configuration from
496:35 both sides match this also means that
496:38 each vpc can delete the peering
496:41 association at any given time now during
496:44 vpc peering the vpc peers always
496:47 exchange all subnet routes you also have
496:50 the option of exchanging custom routes
496:53 subnet and static routes are global and
496:55 dynamic routes can be regional or global
496:58 a given vpc network can peer with
497:01 multiple vpc networks but there is a
497:04 limit that you can reach in which you
497:06 would have to reach out to google and
497:08 ask the limit to be increased now when
497:10 peering with vpc networks there are
497:12 certain restrictions in place that you
497:15 should be aware of first off a subnet
497:18 cidr range in one peered vpc network
497:21 cannot overlap with a static route in
497:23 another peered network this rule covers
497:26 both subnet routes and static routes so
497:30 when a vpc subnet is created or a subnet
497:33 ip range is expanded google cloud
497:36 performs a check to make sure that the
497:38 new subnet range does not overlap with
497:41 ip ranges of subnets in the same vpc
497:44 network or in directly peered vpc
497:47 networks if it does the creation or
497:50 expansion will fail google cloud also
497:53 ensures that no overlapping subnet ip
497:56 ranges
497:57 are allowed across vpc networks that
498:00 have a peered network in common and
498:02 again if it does the creation or
498:05 expansion will fail now speaking of
498:07 routing when you create a new subnet in
498:10 a peered vpc network
498:12 vpc network peering doesn't provide
498:15 granular route controls to filter out
498:18 which subnet cidr ranges are reachable
498:20 across peered networks these are handled
498:23 by firewall rules so to allow ingress
498:26 traffic from vm instances in a peer
498:29 network you must create ingress allow
498:31 firewall rules by default ingress
498:34 traffic to vms is blocked by the implied
498:38 deny ingress rule another key point to
498:40 note is that transitive peering is not
498:43 supported and only directly peered
498:45 networks can communicate so they have to
498:48 be peered directly in this diagram
498:51 network a is peered with network b and
498:54 network b is peered with network c and
498:57 so if one instance is trying to
498:58 communicate from network a to network c
499:02 this cannot be done unless network a is
499:05 directly peered with network c an
499:07 extremely important point to note for
499:10 vpc peering another thing to note is
499:12 that you cannot use a tag or service
499:15 account from one peered network in the
499:18 other peered network they must each have
499:20 their own as again they are each
499:22 independently operated as stated earlier
499:25 and so the last thing that i wanted to
499:27 cover is that internal dns is not
499:30 accessible for compute engine in peered
499:33 networks as they must use an ip to
499:36 communicate and so that about covers
499:38 this short yet important lesson on the
499:41 theory and concepts of vpc peering and
499:44 so now that we've covered all the theory
499:46 i'm going to be taking these concepts
499:48 into a demo where we will be peering two
499:50 networks together and verifying the
499:53 communication between them and so you
499:55 can now mark this lesson as complete and
499:57 whenever you're ready join me in the
499:59 console
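the two-sided setup described in the lesson, where each network must independently initiate the peering before it becomes active, can be sketched with gcloud. all project and network names below are assumed for illustration.

```shell
# Hypothetical sketch of two-sided VPC peering: each side is created in its
# own project/network, and the peering only becomes ACTIVE once both exist.
# Project and network names are assumed for illustration.
gcloud compute networks peerings create peer-a-to-b \
    --project=project-tony \
    --network=bowtie-inc-a \
    --peer-project=project-bowtie-inc \
    --peer-network=bowtie-inc-b

gcloud compute networks peerings create peer-b-to-a \
    --project=project-bowtie-inc \
    --network=bowtie-inc-b \
    --peer-project=project-tony \
    --peer-network=bowtie-inc-a
```

until the second command runs, the first peering sits in an inactive state, which is the "configuration from both sides must match" behavior the lesson describes.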
500:06 welcome back in this hands-on demonstration
500:07 we're going to go through the steps to
500:09 create a peering connection from two
500:11 vpcs
500:12 in two separate projects as shown here
500:15 in the diagram and then to verify that
500:17 the connection works we're going to
500:19 create two instances one in each network
500:23 and ping one instance from the other
500:25 instance this demo is very similar to
500:28 the custom vpc demo that you had done
500:31 earlier but we are adding in another
500:34 layer of complexity by adding in vpc
500:37 network peering and so there's quite a
500:39 bit to do here so let's go ahead and
500:41 just dive in
500:42 okay so here we are back in the console
500:44 as you can see up in the top right hand
500:46 corner i am logged in as tony bowties
500:49 gmail.com and for this specific demo i
500:53 will be using two projects both project
500:56 tony and project bowtie inc and if you
500:59 currently do not have two projects you
501:02 can go ahead and create yourself a new
501:04 project or the two projects if you have
501:06 none and so i'm going to continue here
501:09 with project tony and the first thing i
501:11 want to do is create the two networks in
501:13 the two separate projects so i'm going
501:16 to go up to the navigation menu in the
501:17 top left hand corner and i'm going to
501:19 scroll down to vpc network
501:22 here i'm going to create my first vpc
501:24 network and i'm going to name this
501:27 bowtie inc
501:29 dash a i'm going to give it the same
501:31 description
501:32 and then under subnets i'm going to
501:33 leave the subnet creation mode under
501:35 custom under the subnet name you can
501:38 call this subnet dash a
501:40 i'm going to use the us east one region
501:43 and for the ip address range i'm going
501:46 to use 10.0.0.0 forward slash 20
501:50 and i'm going to leave all the other
501:51 options as default and i'm going to go
501:53 down to the bottom and click on create
501:56 down to the bottom and click on create now as this network is being created i'm
501:58 now as this network is being created i'm going to go over to the project bowtie
502:00 going to go over to the project bowtie inc and i'm going to create the vpc
502:03 inc and i'm going to create the vpc network there so under name i'm going to
502:05 network there so under name i'm going to call this bowtie inc
502:07 call this bowtie inc b and under description i'm going to use
502:10 b and under description i'm going to use the same under subnets i'm going to keep
502:12 the same under subnets i'm going to keep subnet creation mode as custom and under
502:15 subnet creation mode as custom and under new subnet i'm going to call this subnet
502:18 new subnet i'm going to call this subnet subnet b the region will be used 4
502:22 subnet b the region will be used 4 and the ip address range will be
502:24 and the ip address range will be 10.4.0.0
502:27 10.4.0.0 forward slash 20. you can leave all the
502:29 forward slash 20. you can leave all the other options as default and scroll down
502:31 other options as default and scroll down to the bottom and click on create as
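If you prefer the command line over the console, the same two networks and subnets can be sketched with the gcloud CLI. This is not shown in the video; it assumes the project IDs are project-tony and bowtie-inc, so substitute your own:

```shell
# Project Tony: custom-mode VPC with one subnet in us-east1
gcloud compute networks create bowtie-inc-a \
    --project=project-tony --subnet-mode=custom
gcloud compute networks subnets create subnet-a \
    --project=project-tony --network=bowtie-inc-a \
    --region=us-east1 --range=10.0.0.0/20

# Project Bowtie Inc: custom-mode VPC with one subnet in us-east4
gcloud compute networks create bowtie-inc-b \
    --project=bowtie-inc --subnet-mode=custom
gcloud compute networks subnets create subnet-b \
    --project=bowtie-inc --network=bowtie-inc-b \
    --region=us-east4 --range=10.4.0.0/20
```

Note the non-overlapping CIDR ranges, which peering will require later on.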
502:34 as this network is being created i'm
502:36 going to go back to project tony and i'm going
502:38 to create the firewall rule for
502:41 bowtie-inc-a
502:44 and this firewall rule as explained in
502:46 the last lesson will allow
502:48 communication from one instance to the
502:51 other and so i'm going to click on
502:52 create firewall
502:54 and under name i'm going to call this
502:56 project-tony-a
502:57 under description i'm going
503:00 to use the same
503:02 under the network i'm going to choose
503:04 the source network which will be
503:06 bowtie-inc-a priority i'm going to keep at
503:09 1000 direction of traffic should be
503:12 ingress and action on match should be
503:14 allow under targets i'm going to select
503:17 all instances in the network and under
503:20 source filter i'm going to keep ip
503:22 ranges selected and the source ip range
503:25 specifically for this demo is going to
503:27 be 0.0.0.0/0
503:30 and again this is
503:32 specifically used for this demo and
503:34 should never be used in a
503:36 production-like environment in
503:38 production you should only use the
503:40 source ip ranges that you are
503:42 communicating with and under protocols
503:44 and ports because i need to log into the
503:46 instance to be able to ping the other
503:49 instance i'm going to have to open up
503:51 tcp on port 22 under other protocols
503:55 you can add icmp and this will allow the
503:58 ping command to be used i'm going to
504:00 leave all the other options as default
504:02 and i'm going to click on create and now
504:04 that this firewall rule has been created
504:07 i need to go back over to project bowtie
504:09 inc and create the firewall rule there
504:11 as well
504:13 i'm going to call this firewall rule
504:15 bowtie-inc-b i'm going to give it
504:18 the same description under network i'm
504:20 going to select bowtie-inc-b i'm
504:23 going to keep the priority as 1000 and
504:25 the direction of traffic should be
504:27 ingress as well the action on match
504:29 should be allow scrolling down under
504:32 targets i'm going to select all
504:34 instances in the network and again under
504:37 source filter i'm going to keep ip
504:39 ranges selected and under source ip
504:41 ranges i'm going to enter in
504:44 0.0.0.0/0 and under
504:47 protocols and ports i'm going to select
504:49 tcp with port 22 as well under other
504:53 protocols i'm going to type in icmp i'm
504:56 going to leave everything else as
504:57 default and i'm going to click on create
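The two firewall rules can also be sketched with gcloud, again assuming the project IDs project-tony and bowtie-inc:

```shell
# Project Tony: allow SSH (tcp:22) and ICMP into bowtie-inc-a
# 0.0.0.0/0 is for this demo only; in production restrict
# the source ranges to the networks you communicate with
gcloud compute firewall-rules create project-tony-a \
    --project=project-tony --network=bowtie-inc-a \
    --direction=INGRESS --action=ALLOW \
    --rules=tcp:22,icmp --source-ranges=0.0.0.0/0

# Project Bowtie Inc: the matching rule for bowtie-inc-b
gcloud compute firewall-rules create bowtie-inc-b \
    --project=bowtie-inc --network=bowtie-inc-b \
    --direction=INGRESS --action=ALLOW \
    --rules=tcp:22,icmp --source-ranges=0.0.0.0/0
```

Omitting --target-tags here applies each rule to all instances in the network, matching the "all instances in the network" selection in the console.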
505:00 now once you've created both networks
505:02 and have created both firewall rules you
505:05 can now start creating the instances so
505:07 because i'm already in project bowtie
505:09 inc i'm going to go to the left-hand
505:11 navigation menu and i'm going to scroll
505:13 down to compute engine and create my
505:16 instance so i'm just going to click on
505:18 create
505:19 and to keep with the naming convention
505:21 i'm going to call this instance
505:23 instance-b i'm not going to add any labels for
505:25 now under region i'm going to choose
505:28 us-east4 and you can leave the zone as the
505:31 default selection and under machine type
505:34 i'm going to select e2-micro and i'm
505:36 going to scroll down to the bottom and
505:38 i'm going to click on management
505:40 security disks networking and sole
505:42 tenancy so that i'm able to go into the
505:44 networking tab to change the network on
505:47 the default network interface so i'm
505:49 going to click on the default network
505:51 interface and under network i'm going to
505:53 select bowtie-inc-b and the subnet has
505:56 already been selected for me and then
505:58 i'm going to scroll down click on done
506:00 and i'm going to leave all the other
506:02 options as default and click on create
506:05 and so as this is creating i'm going to
506:07 go over to project tony
506:10 and i'm going to create my instance
506:12 there
506:14 and i'm going to name this instance
506:16 instance-a under region i am going to
506:18 select us-east1 you can leave the zone
506:21 as the default selected under machine
506:23 type i'm going to select e2-micro and
506:26 scrolling down here to the bottom i'm
506:28 going to go into the networking tab
506:30 under management security disks
506:33 networking and sole tenancy
506:35 and here i'm going to edit the network
506:37 interface and change it from the default
506:40 network to bowtie-inc-a and as you
506:42 can see the subnet has been
506:44 automatically selected for me
506:46 so now i can just simply click on done
506:49 i'm going to leave all the other options
506:51 as default and i'm going to click on
506:53 create
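A rough gcloud equivalent for the two instances looks like this; the zones are assumptions (any zone in the matching region works), and the project IDs are again placeholders:

```shell
# Project Bowtie Inc: instance-b attached to bowtie-inc-b / subnet-b
gcloud compute instances create instance-b \
    --project=bowtie-inc --zone=us-east4-a \
    --machine-type=e2-micro \
    --network=bowtie-inc-b --subnet=subnet-b

# Project Tony: instance-a attached to bowtie-inc-a / subnet-a
gcloud compute instances create instance-a \
    --project=project-tony --zone=us-east1-b \
    --machine-type=e2-micro \
    --network=bowtie-inc-a --subnet=subnet-a
```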
506:54 so just as a recap we've created two
506:57 separate networks in two separate
506:59 projects along with their corresponding
507:02 subnets and the firewall rules along
507:05 with creating an instance in each
507:07 network and so now that we have both
507:10 environments set up it's now time to
507:12 create the vpc peering connection and so
507:15 because i'm in project tony i'm going to
507:17 start off with this project and i'm
507:19 going to go up to the navigation menu
507:21 and scroll down to vpc network and under
507:24 vpc network on the left hand menu you're
507:27 going to click on vpc network peering
507:29 and through the interface shown here
507:31 we'll be able to create our vpc network
507:34 peering so now you're going to click on
507:36 create connection and i'm prompted with
507:38 some information that i will need and
507:40 because we are connecting to another vpc
507:43 in another project you're going to need
507:46 the project id as well as the name of
507:48 the vpc network you want to peer with
507:50 and just as explained in the earlier
507:52 lesson the subnet ip ranges in both
507:55 networks cannot overlap so please make
507:58 sure that if you are using ip ranges
508:01 outside of the ones that are given for
508:03 this demonstration the ip ranges that
508:05 you are using do not overlap so once you
508:08 have that information you can then click
508:11 continue
508:12 and so here you will be prompted with
508:14 some fields to fill out with the
508:16 information that you were asked to
508:17 collect in the previous screen and so
508:20 since we have that information already
508:22 we can go ahead and start filling in the
508:23 fields so i'm going to call this peering
508:25 connection
508:27 peering-ab and under vpc network i'm going to
508:30 select bowtie-inc-a under peered
508:33 vpc network we're going to select the
508:35 other project which should be bowtie inc
508:38 and the vpc network name will be
508:41 bowtie-inc-b and i'm going to leave all
508:44 the other options as default and so
508:46 under vpc network name you will see
508:49 exchange custom routes and here i can
508:52 select to import and export custom
508:55 routes that i have previously created so
508:57 any special routes that i have created
509:00 before the actual peering connection i
509:02 can bring them over to the other network
509:05 to satisfy my requirements and so i'm
509:07 not going to do that right now i'm going
509:09 to close this up and i'm going to simply
509:11 click on create and so this is finished
509:13 creating and is marked as inactive and
509:16 this is because the corresponding
509:18 peering connection in project bowtie has
509:21 yet to be configured the status will
509:23 change to a green check mark in both
509:25 networks and marked as active once they
509:28 are connected if this status remains as
509:30 inactive then you should recheck your
509:33 configuration and edit it accordingly
509:36 so now i'm going to head on over to project
509:38 bowtie inc and i'm going to create the
509:40 corresponding peering connection i'm
509:42 going to click on create connection once
509:44 you have your project id and the vpc
509:46 network you can click on continue and
509:49 for the name of this peering connection
509:51 i'm going to call this peering-ba
509:54 respectively under vpc network i'm going
509:57 to select bowtie-inc-b and under peered
510:00 vpc network i'm going to select in
510:02 another project here you want to type in
510:04 your project id for me i'm going to
510:07 paste in my project tony project id and
510:10 under vpc network name i'm going to type
510:12 in bowtie-inc-a
510:14 and i'm going to leave all the other
510:16 options as default and i'm going to
510:18 click on create and so now that we've
510:20 established the peering
510:22 connections in each vpc if the
510:25 information that we've entered is
510:27 correct then we should receive a green
510:30 check mark stating that the peering
510:32 connection is connected and success here
510:35 we have status as active and if i head
510:38 on over to project tony i should have
510:41 the same green check mark under status
510:43 for the peering connection and as
510:45 expected the status has a green check
510:48 mark and is marked as active
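For reference, both halves of the peering can be sketched with gcloud as well, with the same caveat that project-tony and bowtie-inc stand in for your real project IDs:

```shell
# Project Tony: peer bowtie-inc-a with bowtie-inc-b in the other project
gcloud compute networks peerings create peering-ab \
    --project=project-tony --network=bowtie-inc-a \
    --peer-project=bowtie-inc --peer-network=bowtie-inc-b

# Project Bowtie Inc: the matching half of the peering
gcloud compute networks peerings create peering-ba \
    --project=bowtie-inc --network=bowtie-inc-b \
    --peer-project=project-tony --peer-network=bowtie-inc-a

# once both halves exist, each side should report the peering as ACTIVE
gcloud compute networks peerings list --project=project-tony
```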
510:50 so now in order to do the peering connectivity
510:52 test i'm going to need to grab the
510:54 internal ip of the instance in the other
510:58 network that resides in project tony
511:01 and so because it doesn't matter which
511:02 instance i log into as both of them have
511:06 ssh and ping access i'm going to simply
511:09 go over to the navigation menu i'm going
511:11 to head on over to compute engine and
511:13 i'm going to record the internal ip of
511:16 instance-a and now i'm going to head
511:18 over to project bowtie and log into
511:21 instance-b and ping instance-a and so in
511:24 order to ssh into this instance i'm
511:26 going to click on the ssh button under
511:28 connect and it should open a new browser
511:31 tab for me logging me into the instance
511:34 okay i'm logged in here and i'm going to
511:36 zoom in for better viewing and so now
511:39 i'm going to run a ping command against
511:42 instance-a using the internal ip that i
511:45 had copied earlier and i'm going to hit
511:47 enter and as you can see ping is working
511:51 and so now we can confirm that the vpc
511:54 peering connection is established and
511:56 the two instances in the different vpc
511:59 networks are communicating over their
512:02 private ips and you can go ahead and hit
512:04 control c to stop the ping
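From the SSH session on instance-b the test looks something like this; the address is a placeholder for whatever internal IP you copied from instance-a, and -c 4 just stops the ping after four packets instead of Ctrl+C:

```shell
# ping instance-a over the peering using its internal IP
# (10.0.0.2 is a placeholder; use the address you recorded)
ping -c 4 10.0.0.2
```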
512:07 and so just as a recap you've created two separate
512:09 vpc networks with their own separate
512:12 subnets in two separate projects you've
512:15 created the necessary firewall rules in
512:17 each of these networks along with
512:19 creating instances in each of those
512:22 networks you then established a vpc
512:24 peering connection completing the
512:26 configuration in each vpc you then did a
512:30 connectivity test by logging into one of
512:33 the instances and pinging the other
512:35 instance and so i hope this helps cement
512:37 the theory of vpc peering that you
512:40 learned in the previous lesson and has
512:42 given you some context when it comes to
512:45 configuring each end of the peering
512:47 connection so i wanted to take a moment
512:49 to congratulate you on completing this
512:51 demo and so all that's left now is to
512:54 clean up all the resources that we
512:56 created throughout this demo and you can
512:58 start by selecting the instances and
513:00 deleting them in each network as well as
513:03 the firewall rules and the networks
513:06 themselves i'm going to go over to
513:07 project tony and i'm going to do the
513:09 same thing there and so you can do
513:11 exactly what you did with the last
513:12 instance here you can select it click on
513:15 delete and delete the instance and so
513:17 next we're going to delete the peering
513:19 connection so we're going to go up to
513:21 the navigation menu we're going to
513:23 scroll down to vpc network and on the
513:25 left hand menu we're going to scroll
513:26 down to vpc network peering and so we're
513:29 going to select the peering connection
513:31 we're going to go to the top and click
513:32 on delete and then delete the peering
513:34 connection
513:35 and so now we're going to delete the
513:37 firewall rule so we're going to go up to
513:39 firewall
513:44 we're going to select the firewall rule at the top we're going to click delete
513:45 and then delete the firewall rule and
513:47 last but not least we want to delete the
513:49 vpc network that we created so we're
513:52 going to go up to vpc networks we're
513:54 going to drill down into the custom vpc
513:56 up at the top we're going to click on
513:57 delete vpc network and then we're going
514:00 to click on delete and so now that we've
514:01 deleted all the resources in project
514:03 tony we're going to go back over to our
514:06 second project project bowtie and do the
514:08 same thing and so we're first going to
514:10 start off with the vpc peering
514:12 connection so we're going to go over to
514:13 vpc network peering we're going to
514:15 select the peering connection we're
514:17 going to click on delete at the top and
514:19 delete the peering connection next we're
514:21 going to go into firewall we're going to
514:22 select the firewall rule go up to the
514:24 top and click on delete and then delete
514:26 the firewall rule and finally we're
514:28 going to go over to vpc networks we're
514:31 going to drill down into the custom
514:32 network we're going to click on delete
514:34 vpc network at the top and delete the
514:36 vpc network
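The same cleanup can be sketched with gcloud, deleting in roughly the reverse order of creation; project IDs and zones are the same assumptions used throughout, and --quiet just skips the confirmation prompts:

```shell
# Project Tony: tear everything down
gcloud compute instances delete instance-a \
    --project=project-tony --zone=us-east1-b --quiet
gcloud compute networks peerings delete peering-ab \
    --project=project-tony --network=bowtie-inc-a
gcloud compute firewall-rules delete project-tony-a \
    --project=project-tony --quiet
gcloud compute networks subnets delete subnet-a \
    --project=project-tony --region=us-east1 --quiet
gcloud compute networks delete bowtie-inc-a \
    --project=project-tony --quiet

# Project Bowtie Inc: the same cleanup on the other side
gcloud compute instances delete instance-b \
    --project=bowtie-inc --zone=us-east4-a --quiet
gcloud compute networks peerings delete peering-ba \
    --project=bowtie-inc --network=bowtie-inc-b
gcloud compute firewall-rules delete bowtie-inc-b \
    --project=bowtie-inc --quiet
gcloud compute networks subnets delete subnet-b \
    --project=bowtie-inc --region=us-east4 --quiet
gcloud compute networks delete bowtie-inc-b \
    --project=bowtie-inc --quiet
```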
514:39 and so now that you've successfully
514:41 deleted all your resources you can now
514:43 mark this lesson as complete and i'll
514:45 see you in the next one and congrats
514:47 again on the great job of completing
514:49 this demo
514:51 [Music]
514:54 welcome back and in this lesson i'm
514:57 going to be discussing the concepts and
514:59 terminology of shared vpcs i'm also
515:02 going to go into some detailed use cases
515:05 and how shared vpcs would be used in
515:08 different scenarios so with that being
515:10 said let's dive in
515:13 now when a vpc is created it is usually
515:16 tied to a specific project now what
515:18 happens when you want to share resources
515:21 across different projects but still have
515:24 separate billing and access within the
515:26 projects themselves
515:28 this is where shared vpcs come into play
515:31 shared vpcs allow an organization to
515:34 connect resources from multiple projects
515:37 to a common vpc network so that way they
515:40 can communicate with each other securely
515:43 and efficiently using internal ips from
515:46 that network when you use shared vpcs
515:49 you designate a project as a host
515:52 project and attach one or more other
515:55 service projects to it the vpc networks
515:58 in the host project are considered the
516:00 shared vpc networks so just as a
516:03 reminder a project that participates in
516:05 a shared vpc is either a host project or
516:09 a service project a host project can
516:12 contain one or more shared vpc networks
516:15 a service project is any project that
516:18 has been attached to a host project by a
516:21 shared vpc admin this attachment allows
516:24 it to participate in the shared vpc and
516:27 just as a note a project cannot be both
516:30 a host and a service project
516:32 simultaneously it has to be one or the
516:35 other and you can create and use
516:37 multiple host projects however each
516:40 service project can only be attached to
516:43 a single host project it is also a
516:46 common practice to have multiple service
516:49 projects administered by different
516:51 departments or teams in the organization
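The host and service project relationship described above can be sketched with gcloud too. The project IDs host-project and service-project here are hypothetical, and the caller is assumed to hold the shared vpc admin role at the organization level:

```shell
# Designate a project as a shared VPC host project
gcloud compute shared-vpc enable host-project

# Attach a service project to that host project
gcloud compute shared-vpc associated-projects add service-project \
    --host-project=host-project

# List the service projects attached to the host
gcloud compute shared-vpc list-associated-resources host-project
```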
516:54 and so just for clarity for those who
516:56 are wondering a project that does not
516:58 participate in a shared vpc
517:01 is called a standalone project
517:03 and this is to emphasize that it is
517:05 neither a host project nor a service
517:08 project now when it comes to
517:10 administering these shared vpcs we
517:13 should be adhering to the principle of
517:15 least privilege and only assigning the
517:18 necessary access needed to specific
517:21 users so here i've broken down the roles
517:24 that are needed to enable and administer
517:27 the shared vpcs a shared vpc admin has
517:31 the permissions to enable host projects
517:34 attach service projects to host projects
517:37 and delegate access to some or all of
517:40 the subnets in shared vpc networks to
517:43 service project admins when it comes to
517:46 a service project admin this is a shared
517:48 vpc admin for a given service project and
517:52 is typically its project owner as well
517:54 although when defining each service
517:56 project admin a shared vpc admin can
517:59 grant permission to use the whole host
518:02 project or just some subnets and so when
518:05 it comes to service project admins there
518:08 are two separate levels of permissions
518:10 that can be applied the first is project
518:12 level permissions where a service
518:15 project admin can be defined to
518:18 have permission to use all subnets in
518:20 the host project when it comes to subnet
518:22 level permissions a service project
518:25 admin can be granted a more restrictive
518:27 set of permissions to use only some
518:30 subnets now i wanted to move into some
518:33 use cases which will give you a bit more
518:35 context on how shared vpcs are used in
518:38 specific environments illustrated here
518:42 is a simple shared vpc scenario here a
518:45 host project has been created with
518:47 two service projects attached to it the
518:50 service project admin in service project
518:53 a
518:54 can be configured to access all or some
518:56 of the subnets in the shared vpc network
519:00 a service project admin with at least
519:02 subnet level permissions to the 10.0.2.0
519:07 /24 subnet has created vm1 in a zone
519:11 located in the us-west1 region this
519:14 instance receives its internal ip
519:17 address
519:19 10.0.2.15 from the 10.0.2.0
519:24 /24 cidr block now service project
519:26 admins in service project b can be
519:29 configured to access all or some of the
519:31 subnets in the shared vpc network a
519:34 service project admin with at least
519:36 subnet level permissions to the
519:40 10.10.4.0/24 subnet has
519:43 created vm2 in a zone located in the
519:47 us-central1 region this instance receives
519:50 its internal ip address
519:52 10.10.4.1
519:54 from the
519:56 10.10.4.0/24 cidr block
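to ground the addressing in this scenario python's standard ipaddress module can verify which cidr block each internal address falls inside

```python
# Checking the example addresses against their subnets' CIDR ranges
# using Python's standard-library ipaddress module.
import ipaddress

subnet_a = ipaddress.ip_network("10.0.2.0/24")
subnet_b = ipaddress.ip_network("10.10.4.0/24")

print(ipaddress.ip_address("10.0.2.15") in subnet_a)  # True: vm1's internal IP
print(ipaddress.ip_address("10.10.4.1") in subnet_b)  # True: vm2's internal IP
print(subnet_a.num_addresses)                         # 256 addresses in a /24
```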
519:59 and of course the standalone project
520:01 does not participate in the shared vpc
520:04 at all as it is neither a host nor a
520:06 service project and the last thing to
520:08 note is that
520:09 instances in service projects attached
520:12 to a host project using the same shared
520:15 vpc network
520:17 can communicate with one another using
520:20 either ephemeral or reserved static
520:23 internal ip addresses and i will be
520:25 covering both ephemeral and static ip
520:29 addresses in a later section under
520:31 compute engine external ip addresses
520:34 defined in the host project are only
520:36 usable by resources in that project they
520:39 are not available for use in service
520:42 projects moving on to the next use case
520:45 is multiple host projects for this use
520:48 case an organization is using two
520:50 separate host projects development and
520:53 production
520:54 and each host project has two service
520:56 projects attached to it both host
520:59 projects have one shared vpc network
521:02 with subnets configured to use the same
521:04 cidr ranges both the development and
521:07 production networks have been purposely
521:09 configured in the same way so this way
521:12 when you work with resources tied to a
521:14 subnet range it will automatically
521:17 translate over from one environment to
521:19 the other moving on to the next use case
521:22 is the hybrid environment now in this
521:24 use case the organization has a single
521:27 host project with a single shared vpc
521:30 network the shared vpc network is
521:33 connected via cloud vpn to an
521:36 on-premises network some services and
521:38 applications are hosted in gcp while
521:42 others are kept on premises and this way
521:45 separate teams can manage each of their
521:47 own service projects
521:49 and each project has no permissions to
521:51 the other service projects as well each
521:54 service project can also be billed
521:56 separately subnet level or project level
521:59 permissions have been granted to the
522:01 necessary service project admins
522:04 so they can create instances that use
522:06 the shared vpc network and again
522:09 instances in these service projects can
522:11 be configured to communicate with
522:14 internal services
522:16 such as database or directory servers
522:18 located on premises and finally the last
522:21 use case is a two-tier web service here
522:25 an organization has a web service that
522:27 is separated into two tiers and
522:30 different teams manage each tier the
522:32 tier 1 service project represents the
522:35 externally facing component behind an
522:38 http or https load balancer the tier 2
522:42 service project represents an internal
522:45 service upon which tier 1 relies and
522:48 it is balanced using an internal tcp or
522:52 udp load balancer the shared vpc allows
522:55 mapping of each tier of the web service
522:58 to different projects so that they can
523:01 be managed by different teams while
523:03 sharing a common vpc network to host
523:06 resources that are needed for both tiers
523:09 now we covered quite a bit in this lesson
523:11 when it comes to all the concepts of
523:13 shared vpcs we covered both host and
523:16 service projects and the roles that they
523:18 play and their limitations we also went
523:21 over the different roles that are needed
523:23 to administer these shared vpcs and we
523:26 went over different use cases on how to
523:28 use shared vpcs for different scenarios
523:32 and so that about covers everything i
523:33 wanted to discuss in this lesson
523:36 so you can now mark this lesson as
523:37 complete and let's move on to the next
523:40 one
523:40 [Music]
523:44 welcome back and in this lesson i'm
523:47 going to be discussing vpc flow logs
523:50 flow logs are an essential tool for
523:52 monitoring and analyzing traffic
523:55 coming in and going out of vpcs from vm
523:59 instances flow logs are essential to
524:01 know for the exam as you should know the
524:04 capabilities and use cases and so with
524:06 that being said let's dive in
524:09 so vpc flow logs record a sample of
524:12 network flows
524:14 sent from and received by vm instances
524:18 including instances used as google
524:21 kubernetes engine nodes these logs can
524:24 be used for network monitoring forensics
524:27 real-time security analysis and expense
524:30 optimization when you enable vpc flow
524:33 logs you enable them for all vms in a subnet
524:37 so basically you would be enabling vpc
524:40 flow logs on a subnet by subnet basis
524:43 flow logs are aggregated by connection
524:46 from compute engine vms and exported in
524:49 real time these logs can be exported to
524:53 cloud logging previously known as
524:55 stackdriver for 30 days if logs need to
524:58 be stored for longer than 30 days they
525:01 can be exported to a cloud storage
525:03 bucket for longer term storage and then
525:07 read and queried by cloud logging google
525:10 cloud samples packets that leave and
525:13 enter a vm to generate flow logs now not
525:17 every packet is captured into its own
525:19 log record about one out of every 10
525:22 packets is captured but this sampling
525:25 rate might be lower depending on the
525:27 vm's load and just as a note you cannot
525:30 adjust this rate this rate is locked by
525:33 google cloud and cannot be changed in
525:35 any way and because vpc flow logs do not
525:39 capture every packet they compensate for
525:42 missed packets by interpolating from the
525:45 captured packets
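as a rough illustration of what that interpolation means here is a simplified scaling sketch google cloud performs this estimation for you and the rate is not user adjustable so the function below is only my simplification

```python
# Simplified sketch of estimating totals from sampled flow logs:
# if roughly 1 in 10 packets is sampled, observed counts are scaled up
# by the inverse of the sampling rate. Illustrative only; Google Cloud
# does this interpolation itself.
SAMPLING_RATE = 0.1  # about one out of every 10 packets

def estimate_total(observed_packets, observed_bytes, rate=SAMPLING_RATE):
    scale = 1 / rate
    return observed_packets * scale, observed_bytes * scale

packets, total_bytes = estimate_total(observed_packets=42, observed_bytes=63_000)
print(packets, total_bytes)  # 420.0 630000.0
```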
525:47 now there are many different use cases for vpc flow logs
525:50 and i wanted to take a quick minute to
525:52 go over them the first one i wanted to
525:54 mention is network monitoring vpc flow
525:57 logs provide you with real-time
525:59 visibility into network throughput and
526:02 performance so you can monitor the vpc
526:04 network perform network diagnostics
526:08 understand traffic changes and help
526:10 forecast capacity
526:12 for capacity planning you can also
526:14 analyze network usage with vpc flow logs
526:18 and you can analyze the network flows
526:20 for traffic between regions and zones
526:23 traffic to specific countries on the
526:25 internet
526:26 and based on the analysis you can
526:28 optimize your network traffic expenses
526:31 now a great use case for vpc flow logs
526:34 is network forensics
526:36 so for example if an incident occurs you
526:39 can examine which ips talked with whom
526:43 and when and you can also look at any
526:45 compromised ips by analyzing all the
526:49 incoming and outgoing network flows and
526:52 lastly
526:53 vpc flow logs can be used for real-time
526:56 security analysis
526:57 you can leverage the real-time streaming
527:00 apis using pub/sub and integrate them
527:03 with a siem or security information and
527:06 event management system like splunk
527:09 rapid7 or logrhythm and this is a very
527:12 common way to add an extra layer of
527:15 security to your currently existing
527:18 environment as well as a great way to
527:20 meet any compliance standards that are
527:23 needed for your organization now vpc
527:26 flow logs are recorded in a specific
527:29 format log records contain base fields
527:32 which are the core fields of every log
527:34 record and metadata fields that add
527:38 additional information metadata fields
527:41 may be omitted to save storage costs but
527:44 base fields are always included and
527:46 cannot be omitted some log fields are in
527:50 a multi-field format with more than one
527:52 piece of data in a given field
527:55 for example
527:57 the connection field from
527:59 the base fields is of the ip details format
528:03 which contains the source and
528:05 destination ip address and port
528:08 plus the protocol in a single field
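as a sketch of that multi-field layout here is a trimmed-down stand-in for a flow log entry the connection subfield names src_ip src_port dest_ip dest_port and protocol follow the documented ip details format but the record as a whole is simplified for illustration and is not a complete log entry

```python
import json

# Simplified stand-in for a VPC flow log record; a real entry carries
# many more base and metadata fields than shown here.
record = json.loads("""
{
  "jsonPayload": {
    "connection": {
      "src_ip": "10.0.2.15",
      "src_port": 44321,
      "dest_ip": "10.10.4.1",
      "dest_port": 443,
      "protocol": 6
    },
    "bytes_sent": "1920",
    "packets_sent": "12"
  }
}
""")

# Pull the 5-tuple describing the connection out of the single
# multi-field "connection" field (protocol 6 is TCP).
conn = record["jsonPayload"]["connection"]
five_tuple = (conn["src_ip"], conn["src_port"],
              conn["dest_ip"], conn["dest_port"], conn["protocol"])
print(five_tuple)  # ('10.0.2.15', 44321, '10.10.4.1', 443, 6)
```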
528:11 flows that have an endpoint in a gke
528:14 cluster can be annotated with gke
528:17 annotations which can include details of
528:20 the cluster
528:21 pod and service of the endpoint gke
528:25 annotations are only available with a
528:28 custom configuration of metadata fields
528:31 now when you enable vpc flow logs you
528:34 can set a filter based on both base and
528:37 metadata fields that only preserves logs
528:40 that match the filter all other logs are
528:43 discarded before being written to
528:46 logging which saves you money and
528:48 reduces the time needed to find the
528:50 information you're looking for shown
528:53 here is a sample from the console in
528:55 both the classic logs viewer as well as
528:58 the logs viewer in preview and so in the
529:00 classic logs viewer you can simply
529:03 select the subnetwork from the first
529:05 pull down menu and from the second
529:07 pull down menu you can select
529:09 compute.googleapis.com
529:12 /vpc_flows and
529:15 this will give you the information that
529:17 you need to pull up all your vpc flow
529:20 logs in the logs viewer preview it is
529:23 done in a similar way but the query is
529:25 shown here in the query builder and can
529:28 be adjusted accordingly pulling up any
529:30 vpc flow logs must be done within the
529:33 console when viewing them in google
529:35 cloud and so the last thing i wanted to
529:38 show you before ending this lesson is a
529:41 sample of the log itself the log shown
529:43 here is a sample of what a vpc flow log
529:47 looks like
529:48 and as you can see here beside each
529:50 field you will see a small arrow
529:53 clicking on these arrows will expand the
529:56 field and reveal many of the subfields
529:59 that you saw on the last slide and will
530:01 give you the necessary information you
530:04 need to analyze your vpc flow logs in
530:07 this example of the connection field it
530:10 shows the 5-tuple that describes this
530:12 connection which you can clearly see up
530:15 here at the top and if i were to go
530:18 further down and expand more of these
530:20 fields i would find more information
530:23 that could help me better analyze more
530:26 logging info for my given problem that i
530:29 am trying to solve now i didn't want to
530:31 go too deep into logging as i will be
530:34 diving into a complete section of its
530:37 own later in the course but
530:40 i did want you to get a feel for what
530:42 type of data vpc flow logs can give you
530:45 and how it can help you in your specific
530:48 use case as well as on the exam and so
530:51 that's pretty much all i wanted to cover
530:53 with regards to vpc flow logs so you can
530:56 now mark this lesson as complete and
530:58 let's move on to the next one
531:00 [Music]
531:04 welcome back in this lesson i'm going to
531:07 cover a high-level overview of a basic
531:09 foundational service that supports the
531:12 backbone of the internet as we know it
531:14 today this foundation is called dns or
531:18 the domain name system dns is used
531:21 widely in google cloud mostly from an
531:24 infrastructure perspective and is used
531:26 in pretty much any other cloud
531:28 environment or computer network on the
531:30 planet now there is quite a bit to cover
531:32 in this lesson with regards to dns so
531:35 with that being said let's dive in
531:37 now dns or the domain name system is a
531:41 global decentralized distributed
531:43 database that lets you store ip
531:46 addresses and other data
531:48 and look them up by name this system
531:51 uses human readable names like
531:54 google.com and translates them into a
531:57 language that computers understand which
531:59 is numeric ip addresses for example
532:03 humans access information online through
532:06 a domain name like google.com computers
532:09 use ip addresses to access information
532:12 online like 172.217.
532:21 now whether you type google.com or the ip address into a web browser both will
532:24 connect to google.com dns translates the
532:27 domain name to an ip address so that the
532:30 web browser knows where to connect to
532:33 and we know what to enter into the web
532:36 browser through dns you can connect a
532:39 domain name to web hosting
532:42 mail
532:42 and other services
532:44 now getting a bit deeper into it as ip
532:47 addresses are at the core of
532:49 communicating between devices on the
532:51 internet they are hard to memorize and
532:54 can change often even for the same
532:57 service to get around these problems we
533:00 gave names to ip addresses for example
533:03 when it comes to our computer
533:05 communicating with
533:08 www.google.com it will use the dns
533:11 system to do this now the dns
533:13 database contains the information needed
533:16 to convert the www.google.com
533:20 domain name to the ip address and this
533:23 piece of information is stored in a
533:25 logical container called a zone the way
533:28 that the zone is stored is through
533:30 what's commonly known as a zone file now
533:33 within this zone file is a dns record
533:37 which links the name
533:38 www and the ip address that your laptop
533:42 needs to communicate
533:44 with the specific website and this zone
533:46 file is hosted by what's known as a name
533:49 server or ns server for short and i will
533:53 be going into further detail on this in
533:55 just a minute so in short if you can
533:58 query the zone for the record
534:00 www.google.com
534:03 then your computer can communicate with
534:05 the web server and dns is what makes it
534:08 all happen
534:09 now i wanted to go into a bit of history
534:12 of how dns came about so in early
534:15 computer networks a simple text file
534:18 called a hosts file was created that
534:21 mapped hostnames to ip addresses and
534:24 this enabled people to refer to other
534:26 computers by name and their
534:29 computers translated that name to an ip
534:31 address when they needed to communicate
534:34 with it the problem is that as network sizes
534:36 increased the hosts file approach became
534:40 impractical due to the fact that it
534:42 needed to be stored on each computer as
534:45 each computer would have to resolve the
534:47 same hostnames as well updates were
534:49 difficult to manage as all of the
534:51 computers would need to be given an
534:54 updated file all in all this system was
534:57 not scalable
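the hosts-file approach can be pictured as nothing more than a flat name-to-ip lookup table that every machine kept its own copy of the names and addresses below are documentation examples not real hosts

```python
# Toy illustration of the pre-DNS hosts-file approach: a flat name-to-IP
# mapping, of which every computer on the network kept a local copy.
hosts = {
    "server1": "192.0.2.10",
    "printer": "192.0.2.20",
}

def resolve(name, hosts_file):
    # Every machine resolves against its local copy; an out-of-date copy
    # simply fails (or returns stale data), which is why this didn't scale.
    return hosts_file.get(name)

print(resolve("server1", hosts))  # 192.0.2.10
print(resolve("mail", hosts))     # None: unknown until every copy is updated
```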
534:59 not scalable now to overcome these and other
535:01 now to overcome these and other limitations the dns system was developed
535:04 limitations the dns system was developed and the dns system essentially provided
535:07 and the dns system essentially provided for a way to organize the names using a
535:10 for a way to organize the names using a domain name structure it also provided a
535:13 domain name structure it also provided a dynamic system for protocols services
535:17 dynamic system for protocols services and methods
535:18 and methods for storing updating and retrieving ip
535:21 for storing updating and retrieving ip addresses for host computers
535:24 now that i've covered what dns is
535:26 and why we use it, i wanted to dive into
535:29 the structure of the dns system. now, the
535:32 structure all begins with a dot, the root
535:35 if you will, and this can be found after
535:38 every domain name that you type into
535:40 your browser. you will almost never see
535:42 it, and this is because your browser will
535:45 automatically put it in without your
535:47 knowing. you can try it with any domain
535:49 in any browser and you will almost
535:52 always come up with the same result. this
535:54 dot is put in for you and will provide
535:58 the root for you, and this is where we
536:00 start to break down the dns system.
536:03 now, the domain name space consists of a
536:06 hierarchical data structure, like the file
536:09 system on your computer. each node has
536:12 a label and zero or more resource
536:14 records, which hold information
536:17 associated with the domain name. the
536:19 domain name itself consists of the label
536:22 concatenated with the name of its parent
536:25 node on the right, separated by a dot. so
536:28 when it comes to dns, the domain name is
536:31 always assembled from right to left. this
536:34 hierarchy, or tree, is subdivided into
536:37 zones, beginning at the root zone. a dns
536:40 zone may consist of only one domain, or
536:43 may consist of many domains and subdomains,
536:46 depending on the administrative
536:49 choices of the zone manager.
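this right-to-left assembly of labels can be sketched in a few lines of python. this is a small illustration i'm adding, not part of the lesson; the helper name and example labels are my own:

```python
# Build a fully qualified domain name (FQDN) from a list of labels,
# ordered from most-specific (left) to least-specific (right).
# The trailing dot represents the DNS root zone.

def assemble_fqdn(labels):
    """Concatenate each label with its parent's name, separated by dots."""
    return ".".join(labels) + "."  # trailing dot = the root

# "www" is a node under "google", which is a node under the "com" TLD,
# which sits directly under the root.
print(assemble_fqdn(["www", "google", "com"]))  # -> www.google.com.
```

note how the root dot your browser silently appends shows up explicitly at the end of the assembled name.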
536:51 now, getting right into it, the root server is the
536:54 first step in translating human readable
536:57 hostnames into ip addresses. the root
537:00 domain consists of 13 dns systems
537:04 dispersed around the world, known
537:07 collectively as the dns root servers.
537:10 they are indicated by the letters a
537:12 through m,
537:14 operated by 12 organizations such as
537:17 verisign, cogent, and nasa. while there are
537:21 13 ip addresses that represent these
537:23 systems, there are actually more than 13
537:26 servers, as some of the ip addresses are
537:29 actually a cluster of dns servers. and so
537:32 each of these dns servers also holds
537:35 the root zone file, which contains the
537:38 address of the authoritative name server
537:41 for each top level domain. and because
537:44 this is such a big undertaking to keep
537:46 updated, iana, or the internet assigned
537:49 numbers authority, was appointed as the
537:52 authority that manages and administers
537:54 this file, and i will include a link in
537:57 the lesson text for those of you who are
537:59 looking to dive deeper into the contents
538:02 of this root zone file, as well as
538:04 getting to know a little bit more about
538:07 the iana organization.
538:10 now, while the dns root servers establish the hierarchy,
538:12 most of the name resolution process is
538:15 delegated to other dns servers. so just
538:18 below the dns root in the hierarchy are
538:21 the top level domain servers,
538:23 also known as tld servers for short. the
538:26 root server takes the tld provided in
538:29 the user's query, for example the dot-com in www.google.com,
538:34 and provides details for the dot-com tld
538:38 name server. the companies that
538:40 administer these domains are called
538:42 registries, and they operate the
538:44 authoritative name servers for these top
538:48 level domains. for example, verisign is
538:51 the registry for the dot-com top level
538:53 domain. over a hundred million domains
538:56 have been registered in the dot-com top
538:58 level domain, and these top level dns servers
539:02 handle top level domains such as dot com,
539:06 dot org, dot net, and dot io. these can also
539:09 be referred to as gtlds, which are the
539:13 generic top level domains, and cctlds,
539:17 which are the country code top level
539:18 domains, like dot ca for canada, dot uk for
539:22 the united kingdom, and dot it for italy.
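as a side note, country code tlds are the two-letter codes from iso 3166-1, so a rough way to tell them apart from generic tlds can be sketched like this. this is a simplification i'm adding for illustration (it ignores internationalized tlds), not something from the lesson:

```python
# Rough TLD classifier: two-letter TLDs are country-code TLDs
# (ISO 3166-1 alpha-2 codes), while longer TLDs such as .com or
# .info are generic TLDs. A simplification for illustration only.

def tld_kind(domain):
    # strip any trailing root dot, then take the rightmost label
    tld = domain.rstrip(".").rsplit(".", 1)[-1]
    return "ccTLD" if len(tld) == 2 else "gTLD"

print(tld_kind("bowtieinc.co"))    # -> ccTLD (.co is Colombia's code)
print(tld_kind("www.google.com"))  # -> gTLD
```

interestingly, this is why bowtieinc.co technically sits under a country code tld, even though .co is widely marketed as a generic-feeling domain.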
539:25 the top level dns servers delegate to
539:28 thousands of second level dns servers.
539:31 now, second level domain names are sold
539:34 to companies and other organizations, and
539:37 over 900 accredited registrars register
539:40 and manage the second level domains in
539:43 the dot-com domain for end users. the
539:46 second level of this structure is
539:49 comprised of millions of domain names.
539:52 second level dns servers can further
539:54 delegate the zone, but most commonly
539:57 store the individual host records for a
540:00 domain name. this is the server at the
540:02 bottom of the dns lookup chain, where you
540:05 would typically find resource records,
540:07 and it is these resource records that
540:10 map services and host names to ip
540:13 addresses. the server will respond with the
540:15 queried resource record, ultimately
540:17 allowing the web browser making the
540:20 request to reach the ip address needed
540:23 to access a website or other web
540:25 resources. now, there is one more concept
540:28 that i wanted to cover
540:30 before we move on, and this is the subdomain.
540:32 now, some of you may have noticed and
540:34 wondered, where does the subdomain come
540:37 into play with regards to the dns
540:39 structure? well, this is a resource record
540:42 that falls under the second level domain,
540:44 and in the dns hierarchy, a subdomain is a
540:47 domain that is a part of another main
540:50 domain, but i wanted to put it in here
540:52 just to give you an understanding of
540:54 where subdomains would fall.
540:57 so now that we understand how dns is structured, i
541:02 wanted to go through the breakdown of
541:04 the data flow of dns to give you some
541:07 better context. now, there are eight
541:10 steps in a dns lookup. first, we start off
541:13 with the dns client, which is shown here
541:16 as tony bowtie's laptop. this is a
541:18 client device, which could also be a
541:21 phone or a tablet, and is configured with
541:24 software to send name resolution queries
541:27 to a dns server. so when a client needs
541:30 to resolve a remote host name into its
541:33 ip address, in most cases it sends a
541:36 request to the dns recursive resolver,
541:39 which returns the ip address of the
541:42 remote host to the client. a recursive resolver
541:43 is a dns server that is configured to
541:46 query other dns servers until it finds
541:49 the answer to the question. it will
541:51 either return the answer or an error
541:53 message to the client if it cannot
541:55 answer the query, and the answer will
541:58 eventually be passed back to the dns
542:00 client. the recursive resolver in essence
542:03 acts as the middle man between a client
542:06 and a dns name server, and is usually run by
542:09 the internet service provider, a service
542:12 carrier, or a corporate network. now, to
542:14 make sure that a resolver is able to
542:16 properly run dns, a root hints file is
542:20 supplied with almost every operating
542:22 system, and this file holds the ip
542:25 addresses for the root name servers. this
542:28 file also comes with the dns resolver, so in
542:31 case the resolver is unable to answer the query
542:34 from its cache, it will still be able to make
542:36 the query to the dns root name servers.
542:39 now, after receiving a dns query from a client,
542:42 this recursive resolver will either
542:44 respond with cached data or send a
542:47 request to a root name server, and in
542:50 this case the resolver queries a dns
542:53 root name server. the root server then
542:55 responds to the resolver with the
542:57 address of a top level domain, or tld, dns
543:02 server, such as dot com or dot net, which
543:05 stores the information for its domains.
543:08 now, when searching for google.com, the
543:10 request is pointed towards the dot-com
543:13 tld, so naturally the resolver then makes
543:16 a request to the dot-com tld. the tld
543:20 name server then responds with the ip
543:23 address of the domain's name server,
543:26 google.com, and lastly, the resolver then
543:29 sends a query to the domain's name
543:31 server. the ip address for google.com is
543:35 then returned to the resolver from the
543:37 name server. this ip address is cached for
543:40 a period of time determined by the
543:43 google.com name server, and this is
543:46 so that a future request for this
543:48 hostname can be resolved from the
543:51 cache rather than performing the entire
543:54 process from beginning to end.
543:56 and so, for those of you who are unaware, a cache is a
544:02 component that stores data so that
544:04 future requests for that data can be
544:06 served faster. the purpose of this
544:10 caching is to temporarily store data in
544:13 a location that results in improvements
544:16 in performance and reliability for data
544:19 requests. dns caching involves storing
544:22 the data closer to the requesting client
544:25 so that the dns query can be resolved
544:28 earlier, and additional queries further
544:31 down the dns lookup chain can be avoided,
544:34 thus improving load times. dns data
544:37 can be cached in a variety of locations
544:39 down the chain, each of which will store
544:42 dns records for a set amount of time
544:45 determined by a time to live, also known
544:48 as ttl for short, and this value is the
544:51 time to live for that domain record. a
544:54 high ttl for a domain record means that
544:58 local dns resolvers will cache responses
545:01 for longer and give quicker responses.
545:04 however, making changes to dns records
545:07 can take longer, due to the need to wait
545:10 for all cached records to expire.
545:13 alternatively, domain records with low
545:16 ttls can change much more quickly, but
545:19 dns resolvers will need to refresh their
545:21 records more often.
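the ttl behaviour described here can be sketched as a tiny cache. this is my own illustration, not from the lesson; the class name and record values are made up, and the clock is injected so the expiry is easy to see:

```python
import time

# A minimal DNS-style cache: each entry is stored together with an
# expiry time computed from the record's TTL. Expired entries are
# treated as cache misses, forcing a fresh lookup upstream.

class TtlCache:
    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable for testing
        self._entries = {}           # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        self._entries[name] = (ip, self._clock() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None              # never cached
        ip, expires_at = entry
        if self._clock() >= expires_at:
            del self._entries[name]  # TTL elapsed: drop the stale record
            return None
        return ip

# Simulated clock so we can fast-forward time deterministically.
now = [0.0]
cache = TtlCache(clock=lambda: now[0])
cache.put("www.google.com", "142.250.1.1", ttl=300)  # hypothetical IP
print(cache.get("www.google.com"))  # hit while the TTL is live
now[0] += 301                        # more than 300 seconds pass
print(cache.get("www.google.com"))  # -> None, record expired
```

this is the trade-off the lesson describes: a ttl of 300 keeps answering locally for five minutes, but a record change during that window is invisible until the entry expires.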
545:21 and so, in this final
545:24 step, the dns resolver then responds to
545:27 the web browser with the ip address of
545:30 the domain requested initially, and once
545:32 these eight steps of the dns lookup have
545:35 returned the ip address for
545:37 www.google.com,
545:40 the browser is able to make the request
545:42 for the webpage. and so the browser will
545:44 reach out to the ip address of the
545:47 server and request the web page, which
545:49 will be loaded up in the browser.
545:49 now, i know this probably has been a review for
545:54 those who are a bit more advanced when
545:57 it comes to understanding dns, but for
545:58 others who are fairly new to the
546:01 underpinnings of dns, i hope this has
546:04 given you a basic understanding of what
546:07 it is, why we use it, and how it works.
546:10 moving forward in the course, i will be
546:12 discussing dns with regards to different
546:15 services and the needed resource records
546:18 within zones that are used by these
546:20 given services. and so that's pretty much
546:23 all i wanted to cover when it comes to
546:25 the fundamentals of dns. so you can now
546:28 mark this lesson as complete, and let's
546:29 move on to the next one.
546:29 [Music]
546:33 welcome back. in this lesson i'm going to
546:36 be diving into dns record types. now, dns
546:40 resource records are the basic
546:41 information elements of the domain name
546:44 system. they are entries in the dns
546:46 database which provide information about
546:49 hosts. these records are physically
546:52 stored in the zone files on the dns
546:55 server. this lesson will go through some
546:57 of the most commonly used dns records
547:00 that we will be coming across throughout
547:02 this course. so with that being said,
547:05 let's dive in. now, the first records that
547:07 i wanted to touch on are the name server
547:10 records, also known as ns records for
547:13 short. this record identifies which dns
547:16 server contains the current records for
547:19 a domain. these servers are usually found
547:22 at a registrar, internet service provider,
547:25 or hosting company. ns records are
547:28 created to identify the name server used
547:31 for each domain name within a given zone.
547:34 in this example, we have the dot co zone,
547:37 which will have multiple name server records for
547:41 bowtieinc.co. now, these name server
547:44 records are how the dot co delegation
547:47 happens for bowtieinc.co, and they point
547:50 at servers that host the
547:53 bowtieinc.co zone that is managed by bowtie inc.
547:56 and the flow shown here of the query
547:59 starts from the root zone, going to the
548:01 dot co zone, where the record lies for
548:04 the name servers for bowtieinc.co,
548:08 and flows down to the bowtieinc.co zone
548:11 that contains all the necessary records
548:13 for bowtieinc.co.
548:15 the next records that i wanted to touch
548:17 on are the a and aaaa records, and these are
548:21 short for address records, for ipv4 and
548:25 ipv6 ip addresses respectively. this
548:29 record points a domain name to an ip
548:31 address. for example, when you type www.bowtieinc.co
548:40 in a web browser, the dns system will
548:42 translate that domain name
548:44 to the ip address of 52.54.92.195,
548:52 using the a record information stored in
548:53 the bowtieinc.co
548:57 dns zone file. the a record links a
549:00 website's domain name to an ipv4 address
549:02 that points to the server where the
549:05 website's files live. now, when it comes
549:09 to an aaaa record, this links a website's
549:13 domain to an ipv6 address that points to
549:15 the same server where the website's
549:18 files live. a records are the simplest
549:20 type of dns records, and one of the
549:24 primary records used in dns servers. you
549:26 can do a lot with a records, including
549:29 using multiple a records for the same
549:32 domain in order to provide redundancy.
549:34 the same can be said
549:37 for aaaa records. additionally, multiple
549:40 domains could point to the same address,
549:43 in which case each would have its own a
549:47 or aaaa record pointing to that same ip address.
549:48 moving on to cname records: a cname
549:51 record, short for canonical name record,
549:54 is a type of resource record that maps
549:57 one domain name to another. this can be
549:59 really convenient when running multiple
550:02 services, like an ftp server and an
550:05 e-commerce server, each running on
550:07 different ports from a single ip address.
550:10 you can, for example,
550:12 point ftp.bowtieinc.co
550:15 and shop.bowtieinc.co
550:18 to the dns entry for bowtieinc.co,
550:22 which in turn has an a record which
550:25 points to the ip address. so if the ip
550:28 address ever changes,
550:29 you only have to change the record in
550:32 one place: in the dns a record for
550:36 bowtieinc.co. and just as a note, cname
550:39 records must always point to another
550:41 domain name, and never directly to an ip address.
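the aliasing described above can be mimicked with a small lookup that follows cname entries until it reaches an a record. this is a sketch using an invented in-memory zone, not real dns code:

```python
# In-memory zone for bowtieinc.co: CNAME records alias one name to
# another, and only the canonical name carries the A record.
ZONE = {
    "ftp.bowtieinc.co":  ("CNAME", "bowtieinc.co"),
    "shop.bowtieinc.co": ("CNAME", "bowtieinc.co"),
    "bowtieinc.co":      ("A", "52.54.92.195"),
}

def resolve(name, max_hops=10):
    """Follow CNAME records until an A record (an IP address) is found."""
    for _ in range(max_hops):           # guard against CNAME loops
        rtype, value = ZONE[name]
        if rtype == "A":
            return value
        name = value                    # CNAME: chase the target name
    raise RuntimeError("CNAME chain too long")

print(resolve("shop.bowtieinc.co"))  # -> 52.54.92.195
```

notice that changing the single a record for bowtieinc.co would update the answer for every alias at once, which is exactly the convenience the lesson points out.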
550:48 next up are txt records. a txt
550:52 record, or text record, is a type of
550:54 resource record that provides text
550:57 information to sources outside your
551:00 domain that can be used for a number of
551:03 arbitrary purposes. the record's value can
551:06 be either human or machine readable text.
551:09 in many cases, txt records are used to
551:12 verify domain ownership, or even to
551:15 provide human readable information about
551:18 a server, a network, or a data center. it
551:21 is also often used in a more structured
551:24 fashion to record small amounts of
551:27 machine readable data in the dns
551:29 system. a domain may have multiple txt
551:32 records associated with it,
551:34 provided the dns server implementation
551:37 supports this, and each record can in turn
551:40 have one or more character strings. in
551:41 this example,
551:42 google wants to verify the bowtieinc.co
551:44 domain so that g suite can be set up, and
551:48 ownership needs to be verified to
551:51 google by creating a txt record
551:53 and adding it to the zone. google will
551:56 supply a txt verification record
551:58 to add to the domain host's dns records,
552:02 and will start to scan for the txt record to
552:05 verify the domain.
552:07 the supplied txt record is then added
552:10 by the domain administrator, and behind
552:13 the scenes, google is doing a
552:15 verification check at timed intervals.
552:18 when google finally sees the record
552:20 exists, the domain ownership is confirmed,
552:23 and g suite can be enabled for the
552:25 domain. and this is a typical example of
552:28 how txt records are used.
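the check google performs can be imagined as a simple scan over the domain's txt strings for the expected token. this is an invented sketch, and the token value and record contents here are made up:

```python
# A domain may have several TXT records, each holding one or more
# character strings. Verification succeeds as soon as the expected
# token appears in any of them.
TXT_RECORDS = [
    "v=spf1 include:_spf.google.com ~all",    # unrelated TXT data
    "google-site-verification=abc123DEF456",  # made-up verification token
]

def is_verified(txt_records, expected_token):
    """Scan every TXT string for the verification token."""
    return any(expected_token in record for record in txt_records)

print(is_verified(TXT_RECORDS, "google-site-verification=abc123DEF456"))
# -> True once the administrator has added the supplied record
```

until the administrator publishes the record, the same scan simply returns false, which is why the timed re-checks described above are needed.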
552:31 how tax records are used now moving on to mx records a dns
552:34 to mx records a dns mx record also known as the mail
552:37 mx record also known as the mail exchange record is the resource record
552:40 exchange record is the resource record that directs email to a mail server the
552:43 that directs email to a mail server the mx record indicates how email messages
552:46 mx record indicates how email messages should be routed and to which server
552:49 should be routed and to which server mail should go to like cname records an
552:52 mail should go to like cname records an mx record must always point to another
552:55 mx record must always point to another domain now mx records consist of two
552:58 domain now mx records consist of two parts the priority
553:00 parts the priority and the domain name the priority are the
553:03 and the domain name the priority are the numbers before the domains for these mx
553:06 numbers before the domains for these mx records and indicate the preference of
553:09 records and indicate the preference of the order in which the mail server
553:11 the order in which the mail server should be used the lower the preference
553:13 should be used the lower the preference number the higher the priority so in
553:16 number the higher the priority so in
553:17 this example laura is emailing tony
553:21 bowtie at tony at bowtieinc.co
553:23 the mx records are part of this process
553:26 as dns needs to know where to send the
553:29 mail to and we'll look at the domain
553:31 attached to the email address which is
553:34 bowtieinc.co so the dns client will run
553:37 a regular dns query by first going to
553:40 the root then to the
553:42 co tld and finally to bowtieinc.co
553:46 it will then receive the mx records of
553:49 which in this example there are two the
553:52 first one being mail representing
553:54 mail.bowtieinc.co
553:56 and then the second one is a different
553:58 mail server outside the current domain
554:01 and in this case is a google mail server
554:04 of aspmx.l.google.com
554:10 and this is a fully qualified domain
554:12 name as the dot on the right of this
554:15 record suggests so here the server will
554:18 always try mail.bowtieinc.co
554:22 first because 5 is lower than 10 and
554:25 this will give mail.bowtieinc.co
554:28 the higher priority in the event of a
554:30 message send failure the server will
554:37 fall back to aspmx.l.google.com
554:39 if both values are the same then it
554:42 would be load balanced across both
554:45 servers whichever is used the server
554:47 gets the result of the query back and it
554:52 uses this to connect to the mail server
554:55 for bowtieinc.co via the smtp protocol
554:58 and it uses this protocol to deliver all
555:00 email and this is how mx records are
555:03 used for email
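as a companion to the walkthrough above, here is a minimal python sketch of the mx preference logic: the lowest preference value wins, ties are load balanced, and a failed host triggers fallback. the record values mirror the lesson's example, but the function name is hypothetical and this only models the selection step, not a real mail transfer.

```python
import random

def pick_mail_server(mx_records, failed=()):
    """Choose which MX host to try: lowest preference value wins,
    ties are load-balanced at random, and failed hosts are skipped."""
    candidates = [(pref, host) for pref, host in mx_records
                  if host not in failed]
    if not candidates:
        return None
    best = min(pref for pref, _ in candidates)
    return random.choice([host for pref, host in candidates if pref == best])

# Records modeled on the lesson's example: preference 5 beats 10
mx = [(5, "mail.bowtieinc.co."), (10, "aspmx.l.google.com.")]
print(pick_mail_server(mx))                                  # mail.bowtieinc.co.
print(pick_mail_server(mx, failed={"mail.bowtieinc.co."}))   # aspmx.l.google.com.
```

if both records carried the same preference, `random.choice` would spread connections across the two servers, matching the load-balancing behavior described above.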
555:00 the next record i wanted to cover are
555:03 the pointer records also known as ptr
555:06 records for short and these provide the
555:09 domain name associated with an ip
555:11 address so a dns pointer record is
555:14 exactly the opposite of the a record
555:17 which provides the ip address associated
555:20 with the domain name dns pointer records
555:23 are used in reverse dns lookups as we
555:26 discussed earlier when a user attempts
555:28 to reach a domain name in their browser
555:31 a dns lookup occurs matching the domain
555:33 name to the ip address a reverse dns
555:36 lookup is the opposite of this process
555:38 and it is a query that starts with the
555:41 ip address and looks up the domain name
555:44 while dns a records are stored under the
555:46 given domain name dns pointer records
555:49 are stored under the ip address reversed
555:52 and ending in
555:55 .in-addr.arpa so in this example the
555:59 pointer record for the ip address
556:01 52.54.90
556:13 .in-addr.arpa ipv6 addresses are
556:17 constructed differently from ipv4
556:19 addresses and ipv6 pointer records exist
556:23 in a different namespace
556:25 within .arpa ipv6 pointer records are
556:28 stored under the ipv6 address reversed
556:32 and converted into 4-bit sections as
556:35 opposed to 8-bit sections as in ipv4 and
556:39 as well the domain .ip6.arpa
556:43 is added at the end pointer records are
556:46 used most commonly in reverse dns
556:48 lookups for anti-spam troubleshooting
556:51 email delivery issues and logging
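the reversed naming scheme described above can be seen with python's standard ipaddress module, which exposes a `reverse_pointer` attribute. the full host address below is hypothetical, since the lesson's example address is truncated in the captions:

```python
import ipaddress

# Build the reverse-lookup name for an IPv4 address:
# the octets are reversed and .in-addr.arpa is appended
addr = ipaddress.ip_address("52.54.90.17")   # hypothetical full address
print(addr.reverse_pointer)                  # 17.90.54.52.in-addr.arpa

# IPv6 pointer names live under .ip6.arpa and are built from the
# address reversed one 4-bit hex nibble at a time, as noted above
addr6 = ipaddress.ip_address("2001:db8::1")
print(addr6.reverse_pointer)
```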
556:54 and so the last record that i wanted to
556:56 cover are the soa records also known as
556:59 the start of authority records and this
557:02 resource record is created for you when
557:04 you create your managed zone and
557:07 specifies the authoritative information
557:10 including global parameters about a dns
557:13 zone the soa record stores important
557:16 information about a domain or zone such
557:19 as the email address of the
557:20 administrator when the domain was last
557:23 updated and how long the server should
557:25 wait between refreshes every dns zone
557:28 registered must have an soa record as
557:31 per rfc 1035 and there is exactly
557:35 one soa record per zone the soa record
557:39 contains the core information about your
557:42 zone so it is not possible for your zone
557:45 to work without that information and i
557:48 will include a link in the lesson text
557:50 for those who are interested in diving
557:52 deeper and understanding all the
557:54 information that is covered under these
557:57 soa records a properly optimized and
558:00 updated soa record can reduce bandwidth
558:03 between name servers increase the speed
558:06 of website access and ensure the site is
558:08 alive even when the primary dns server
558:12 is down and so that about covers
558:14 everything that i wanted to discuss when
558:16 it comes to resource records within dns
558:20 so you can now mark this lesson as
558:21 complete
558:22 and let's move on to the next one
558:24 [Music]
558:28 welcome back
558:29 in this lesson i'm going to be covering
558:32 network address translation also known
558:35 as nat for short this is a common
558:38 process
558:39 used in home business and any cloud
558:42 networks that you will encounter knowing
558:45 and understanding nat will help you
558:47 understand why you would use it and what
558:50 makes it such a necessary process
558:53 now there's quite a bit to cover here so
558:55 with that being said let's dive in
558:58 now at a high level nat is a way to map
559:01 multiple local private ip addresses to a
559:05 public ip address before transferring
559:08 the information this is done by altering
559:11 the network address data in the ip
559:14 header of the data packet while
559:17 traveling through a network towards the
559:19 destination
559:20 as packets pass through a nat device
559:23 either the source or destination ip
559:25 address is changed
559:27 then packets returning in the other
559:30 direction are translated back to the
559:33 original addresses
559:34 and this is a process that is typically
559:37 used in most home routers that are
559:40 provided by your internet service
559:42 provider now originally nat was designed
559:45 to deal with the scarcity of free ipv4
559:49 addresses increasing the number of
559:51 computers that can operate off a single
559:54 publicly routable ip address and so
559:57 because devices in the private ip space
560:01 such as 192.168.0.0
560:08 cannot traverse the public internet nat
560:11 is needed for those devices to
560:14 communicate with the public internet now
560:18 ipv6 was designed to overcome the ipv4
560:21 shortage and has tons of available
560:23 addresses and therefore there is no real
560:27 need for nat when it comes to ipv6 now
560:30 nat has an additional benefit of adding
560:33 a layer of security and privacy by
560:36 hiding the ip address of your devices
560:39 from the outside world and only allowing
560:42 packets to be sent and received from the
560:45 originating private device and so this
560:45 is a high level of what nat is
560:48 is a high level of what nat is now there are multiple types of not that i will be
560:50 are multiple types of not that i will be covering
560:51 covering which at a high level do the same thing
560:54 which at a high level do the same thing which is translate private i p addresses
560:57 which is translate private i p addresses to public ip addresses yet different
561:00 to public ip addresses yet different types of nat handles the process
561:02 types of nat handles the process differently so first we have static nat
561:05 differently so first we have static nat which maps a single private ip address
561:09 which maps a single private ip address to a public ip address
561:11 to a public ip address so a one-to-one mapping that gives the
561:13 so a one-to-one mapping that gives the device with the private ip address
561:16 device with the private ip address access to the public internet in both
561:19 access to the public internet in both directions
561:20 directions this is commonly used where one specific
561:23 this is commonly used where one specific device with a private address needs
561:26 device with a private address needs access to the public internet the next
561:29 access to the public internet the next type of nat is dynamic nan and this is
561:31 type of nat is dynamic nan and this is similar to static nat but doesn't hold
561:34 similar to static nat but doesn't hold the same static allocation a private ip
561:37 the same static allocation a private ip address space
561:39 address space is mapped to a pool of public ip
561:41 is mapped to a pool of public ip addresses and are allocated randomly as
561:45 addresses and are allocated randomly as needed when the ip address is no longer
561:47 needed when the ip address is no longer needed the ip address is returned back
561:50 needed the ip address is returned back to the pool ready to be used by another
561:53 to the pool ready to be used by another device
561:54 device this method is commonly used where
561:57 this method is commonly used where multiple internal hosts with private i p
562:00 multiple internal hosts with private i p addresses
562:02 addresses are sharing an equal or fewer amount of
562:04 are sharing an equal or fewer amount of public i p addresses
562:06 public i p addresses and is designed to be an efficient use
562:09 and is designed to be an efficient use of public ips and finally there is port
562:12 of public ips and finally there is port address translation or pat
562:15 address translation or pat where multiple private ip addresses are
562:18 where multiple private ip addresses are translated using a single public ip
562:21 translated using a single public ip address and a specific port
562:24 address and a specific port and this is probably what your home
562:25 and this is probably what your home router is using and will cover all the
562:28 router is using and will cover all the devices you use in your home network
562:31 devices you use in your home network this method uses ports to help
562:34 this method uses ports to help distinguish individual devices
562:37 distinguish individual devices and is also the method that is used for
562:39 and is also the method that is used for cloudnat in google cloud which i will be
562:43 cloudnat in google cloud which i will be covering in a later lesson and so i
562:45 covering in a later lesson and so i wanted to get into a bit more detail on
562:48 wanted to get into a bit more detail on how these methods work
562:50 how these methods work starting with static not
562:53 starting with static not now to set the stage for static not i'm
562:55 now to set the stage for static not i'm going to start off with a private
562:57 going to start off with a private network here on the left
562:59 network here on the left and the public ip space here on the
563:02 and the public ip space here on the right
563:02 right and the router or not device in the
563:05 and the router or not device in the middle in this example there is a server
563:08 middle in this example there is a server on the left that needs access to
563:11 on the left that needs access to external services and for this example
563:14 external services and for this example the external service we are using is the
563:17 the external service we are using is the bowtress service an image sharing site
563:20 bowtress service an image sharing site for all sorts of awesome bow ties
563:23 for all sorts of awesome bow ties so the server on the left is private
563:26 so the server on the left is private with a private ip address of 192.168.0.5
563:35 and this means it has an address in the ip version 4 private address space
563:39 ip version 4 private address space meaning that it cannot route packets
563:41 meaning that it cannot route packets over the public internet because it only
563:44 over the public internet because it only has a private ip
563:46 has a private ip the beautress service on the other hand
563:48 the beautress service on the other hand has a public ip address which is
563:51 has a public ip address which is 54.5.4.9
563:57 so the issue we run into is that the private address can't be routed over the
564:00 private address can't be routed over the public internet because it's private and
564:02 public internet because it's private and the public address of the beau trust
564:04 the public address of the beau trust service
564:05 service can't directly communicate with any
564:08 can't directly communicate with any private address because public and
564:10 private address because public and private addresses can communicate over
564:13 private addresses can communicate over the public internet what we need is to
564:16 the public internet what we need is to translate the private address that the
564:18 translate the private address that the server on the left has
564:20 server on the left has to a public ip that can communicate with
564:23 to a public ip that can communicate with the service on the right and vice versa
564:26 the service on the right and vice versa now then that device will map the
564:28 now then that device will map the private ip to public ip
564:31 private ip to public ip using and maintaining a nat table and in
564:34 using and maintaining a nat table and in this case of static nat the nat device
564:37 this case of static nat the nat device will have a one-to-one mapping of the
564:40 will have a one-to-one mapping of the private ip address to a public ip
564:43 private ip address to a public ip address and can be allocated to the
564:46 address and can be allocated to the device specified which in this case is
564:48 device specified which in this case is the server marked as 192.168.0.15
564:55 and so in order for the server on the
564:56 left
564:58 to communicate with the bowtress
565:00 service the server will generate a
565:03 packet as normal with the source ip of
565:06 the packet being the server's private ip
565:09 address and the destination ip of the
565:12 packet being the ip of the bowtress
565:14 service now the router in the middle is
565:17 the default gateway for any destination
565:20 so any ip packets which are destined for
565:23 anything but the local network are sent
565:25 to the router so as you can see here
565:27 with the entry in the table it will
565:30 contain the private ip address of
565:37 192.168.0.15
565:40 and mapped to the public address which
565:44 in this case is 73.6.2.33
565:47 and these are statically mapped to one
565:49 another and so as the packet passes
565:52 through the nat device the source
565:55 address of the packet is translated
565:58 from the private address to the mapped
566:01 public address and this results in a new
566:03 packet so this new packet still has
566:07 bowtress as the destination
566:08 but now it has a valid public ip address
566:10 as the source
566:13 and so this is the translation that
566:15 happens through nat now this process
566:16 works in a similar way in the other
566:18 direction
566:22 so when the bowtress service receives
566:23 the packet it sees the source as this
566:26 public ip
566:29 so when it responds with data its packet
566:32 has its ip address as the source
566:34 and the server's mapped public ip
566:37 address as the destination
566:40 so it sends this packet back to this
566:43 public ip so when the packet arrives at
566:46 the nat device the table is checked
566:50 it recognizes then that the ip is for
566:50 the server and so this time for incoming
566:54 traffic
566:58 the destination ip address is updated to
567:00 the corresponding private ip address and
567:03 then the packet is forwarded through to
567:06 the private server and this is how
567:09 static nat works the source ip address
567:11 is translated from the mapped private ip
567:14 to public ip
567:17 and for incoming traffic the destination
567:20 ip address is translated from the
567:22 allocated public ip to the corresponding
567:25 private ip all without having to
567:27 configure a public ip
567:30 on any private device
567:30 as they always hold their private ip addresses
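the two translations just described can be sketched in a few lines of python. this is only a model of the one-to-one nat table from the example, assuming the lesson's addresses; a real nat device does this inside the router's ip stack:

```python
# Static NAT table from the lesson's example: one private ip
# permanently mapped to one public ip
STATIC_NAT = {"192.168.0.15": "73.6.2.33"}              # private -> public
REVERSE = {pub: priv for priv, pub in STATIC_NAT.items()}  # public -> private

def outbound(packet):
    """Outgoing traffic: translate the source ip from private to public."""
    packet = dict(packet)
    packet["src"] = STATIC_NAT[packet["src"]]
    return packet

def inbound(packet):
    """Incoming traffic: translate the destination ip back to private."""
    packet = dict(packet)
    packet["dst"] = REVERSE[packet["dst"]]
    return packet

pkt = {"src": "192.168.0.15", "dst": "54.5.4.9"}
print(outbound(pkt))   # {'src': '73.6.2.33', 'dst': '54.5.4.9'}
reply = {"src": "54.5.4.9", "dst": "73.6.2.33"}
print(inbound(reply))  # {'src': '54.5.4.9', 'dst': '192.168.0.15'}
```

note that the private server never needs to know its public ip, matching the point above that private devices always keep their private addresses.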
567:31 now i wanted to supply an analogy for
567:34 nat and so a very common analogy that is
567:37 used is that of a phone service so in
567:40 this example
567:41 laura is the new manager of bow tie
567:44 inc's new location in montreal and has
567:48 put in a new public phone number of
567:49 514-555-8437
567:56 although as you can see here laura also
567:58 has a private extension of 1337 now if george
568:01 called laura at that public phone number
568:03 he would reach laura without ever
568:06 knowing her private extension so the
568:08 private extension acts as that private
568:11 ip address
568:12 and the public phone number would act as
568:14 the public ip address and this would be
568:17 the telephone analogy for static nat and
568:20 so this is the end of part one of this
568:23 lesson it was getting a bit long so i
568:25 decided to break it up this would be a
568:28 great opportunity for you to get up and
568:30 have a stretch get yourself a coffee or
568:32 a tea and whenever you're ready you can
568:35 join me in part two where we will be
568:37 starting immediately from the end of
568:40 part one so you can go ahead and
568:42 complete this video and i will see you
568:44 in part two
568:45 [Music]
568:50 welcome back this is part two of the
568:52 network address translation lesson and
568:55 we will be starting exactly where we
568:57 left off from part one
569:00 so with that being said let's dive in
569:03 now moving on to dynamic nat
569:05 this method is similar to static nat
569:08 except that devices are not allocated a
569:12 permanent public ip
569:14 a public ip address is allocated from a
569:17 pool of ip addresses
569:20 as they are needed and the mapping of
569:22 public to private is allocation based in
569:25 this example there are two devices on
569:28 the left and according to the nat table
569:31 there are two public ip addresses
569:33 available for use
569:35 73.6.2.33 and 73.6.2.34
569:44 so when the laptop on the left is
569:46 looking to access the bowtress service
569:49 it will generate a packet where the
569:50 source ip
569:50 is the private address of 192.168.0.13
570:04 so it sends this packet and again the
570:06 router in the middle is the default
570:10 gateway for anything that isn't local as
570:13 the packet passes through the router or
570:14 the nat device
570:14 it checks if the private ip has a
570:17 current allocation of public addressing
570:20 from the pool and if it doesn't and one
570:23 is available it allocates one
570:25 dynamically and in this case
570:27 73.6.2.34
570:32 is allocated so the packet's source ip
570:35 address is translated to this address
570:37 and the packets are sent to the
570:39 bowtress service and so this process is
570:42 the same as static nat thus far
570:45 but because dynamic nat allocates these
570:48 ip addresses dynamically multiple
570:50 private devices can share a single
570:53 public ip
570:55 as long as the devices are not using the
570:58 same public ip at the same time and so
571:01 once the device has finished
571:03 communicating the ip is returned back to
571:06 the pool and is ready for use by another
571:09 device now just as a note if there are
571:12 no public ip addresses available
571:15 the router rejects any new connections
571:18 until you clear the nat mappings
571:21 but if you have as many public ip
571:23 addresses as hosts in your network
571:26 you won't encounter this problem and so
571:29 in this case since the lower server is
571:31 looking to access the fashion tube
571:33 service
571:34 there is an available public ip address
571:37 in the pool
571:38 of 73.6.2.33
571:45 thus giving it access to the public
571:48 internet and access to fashion tube so
571:48 in summary the nat device maps a private
571:51 ip with the public ip in a nat table and
571:55 public ips are allocated randomly and
571:58 dynamically from a pool now this type of
572:02 nat is used where multiple internal
572:04 hosts with private ip addresses
572:07 are sharing an equal or fewer amount of
572:10 public ip addresses when all of those
572:13 private devices at some time will need
572:16 public access
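the pool behavior above, including the reject-when-exhausted note, can be sketched like this. the class and addresses are illustrative, assuming the lesson's two-address pool, and allocation here is first-free rather than truly random to keep the sketch simple:

```python
# A minimal sketch of dynamic NAT: public ips are handed out from a
# pool on demand and returned when a device finishes communicating
class DynamicNat:
    def __init__(self, pool):
        self.free = list(pool)   # unallocated public ips
        self.table = {}          # private ip -> allocated public ip

    def allocate(self, private_ip):
        if private_ip in self.table:     # reuse an existing allocation
            return self.table[private_ip]
        if not self.free:                # pool exhausted: the router
            return None                  # rejects new connections
        public_ip = self.free.pop(0)
        self.table[private_ip] = public_ip
        return public_ip

    def release(self, private_ip):
        """Return the public ip to the pool for use by another device."""
        self.free.append(self.table.pop(private_ip))

nat = DynamicNat(["73.6.2.33", "73.6.2.34"])
print(nat.allocate("192.168.0.13"))  # 73.6.2.33
print(nat.allocate("192.168.0.14"))  # 73.6.2.34
print(nat.allocate("192.168.0.15"))  # None - no public ip available
nat.release("192.168.0.13")          # ip goes back to the pool
print(nat.allocate("192.168.0.15"))  # 73.6.2.33
```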
572:18 now an example of dynamic nat using the
572:21 telephone analogy
572:22 would be if laura and two other bow tie
572:25 inc employees
572:27 lisa and jane
572:28 had private phone numbers
572:30 and this would represent your private
572:33 ips
572:34 in this example bowtie inc has three
572:36 public phone numbers
572:38 now when any employee makes an outbound
572:41 call they are routed to whichever public
572:44 line is open at the time so the caller
572:47 id on the receiver's end
572:49 would show any one of the three public
572:52 phone numbers depending on which one was
572:55 given to the caller and this would
572:58 represent the public ips in the public
573:01 ip pool
573:03 now the last type of nat which i wanted
573:05 to talk about is the one which you're
573:07 probably most familiar with and this is
573:10 port address translation which is also
573:13 known as nat overload and this is the
573:16 type of nat you likely use on your home
573:19 network port address translation is what
573:22 allows a large number of private devices
573:25 to share one public ip address
573:28 giving it a many to one mapping
573:31 architecture now in this example we'll
573:33 be using three private devices on the
573:36 left
573:37 all wanting to access fashiontube on the
573:39 right
573:40 a popular video sharing website of the
573:43 latest men's fashions
573:45 shared by millions across the globe
573:48 and this site has a public ip of
573:51 62.88.44.88
573:59 and is accessed using tcp port 443 now
574:02 the way that port address translation or
574:02 pat works
574:05 is to use both the ip addresses and
574:06 ports
574:09 to allow for multiple devices to share
574:13 the same public ip every tcp connection
574:16 in addition to a source and destination
574:17 ip address
574:20 has a source and destination port the
574:23 source port is randomly assigned by the
574:26 client so as long as the source port is
574:29 always unique then many private clients
574:32 can use the same public ip address and
574:35 all this information is recorded in the
574:38 nat table on the nat device
574:38 nat table on the nat device in this example let's assume that the
574:40 in this example let's assume that the public ip address of this nat device is
574:43 public ip address of this nat device is 73.6.2.33
574:49 so when the laptop in the top left generates a packet and the packet is
574:52 generates a packet and the packet is going to fashion tube its destination ip
574:55 going to fashion tube its destination ip address is 62.80
575:04 and its destination port is 443. now the source ip of this packet is the laptop's
575:07 source ip of this packet is the laptop's private ip address of 192.168.6
575:17 and the source port is 35535 which is a randomly assigned ephemeral
575:20 which is a randomly assigned ephemeral port so the packet is routed through the
575:22 port so the packet is routed through the nat device
575:23 nat device and in transit the nat device records
575:26 and in transit the nat device records the source ip and the original source
575:29 the source ip and the original source private port
575:31 private port and it allocates a new public ip address
575:34 and it allocates a new public ip address and a new public source port which in
575:36 and a new public source port which in this case is 8844
575:39 this case is 8844 it records this information inside the
575:42 it records this information inside the not table as shown here and it adjusts
575:44 not table as shown here and it adjusts the pocket so that its source ip address
575:47 the pocket so that its source ip address is the public ip address that the nat
575:50 is the public ip address that the nat device is using and the source port is
575:53 device is using and the source port is this newly allocated source port and
575:56 this newly allocated source port and this newly adjusted packet is forwarded
575:58 this newly adjusted packet is forwarded on to fashiontube now the process is
576:01 on to fashiontube now the process is very similar with the return traffic
576:04 very similar with the return traffic where the packet will verify the
576:06 where the packet will verify the recorded ips and ports
576:08 recorded ips and ports in the nat table before forwarding the
576:11 in the nat table before forwarding the packet back to the originating source
576:14 packet back to the originating source now if the middle laptop with the ip of
576:16 now if the middle laptop with the ip of 192.168.0.14
576:23 did the same thing then the same process would be followed all of this
576:25 would be followed all of this information would be recorded in the nat
576:28 information would be recorded in the nat table a new public source port would be
576:30 table a new public source port would be allocated and would translate the packet
576:33 allocated and would translate the packet adjusting the packet's source ip address
576:36 adjusting the packet's source ip address and source port as well the same process
576:39 and source port as well the same process would happen for the laptop on the
576:41 would happen for the laptop on the bottom generating a packet with the
576:44 bottom generating a packet with the source and destination ip with the
576:46 source and destination ip with the addition of the source and destination
576:49 addition of the source and destination ports and when routed through the nat
576:51 ports and when routed through the nat device goes through its translation
576:54 device goes through its translation recording the information in the nat
576:56 recording the information in the nat table and reaching its destination again
576:59 table and reaching its destination again return traffic will be verified by the
577:02 return traffic will be verified by the recorded ips and ports in the nat table
577:06 recorded ips and ports in the nat table before forwarding the packet back to its
577:09 before forwarding the packet back to its originating source and so just as a
577:11 originating source and so just as a summary when it comes to port address
577:14 summary when it comes to port address translation the nat device records the
577:16 translation the nat device records the source ip and source port in a nat table
577:20 source ip and source port in a nat table the source ip is then replaced with a
577:23 the source ip is then replaced with a public ip and public source port
577:26 public ip and public source port and are allocated from a pool that
577:28 and are allocated from a pool that allows overloading and this is a
577:31 allows overloading and this is a many-to-one architecture
577:33 And so, for the telephone analogy for PAT, let's use a phone operator example. In this instance, George is trying to call Laura. Now, George only knows Lark, Laura's executive admin, and only has Lark's phone number; George does not have Laura's private line. Lark's public phone number is the equivalent of having a public IP address. George calls Lark, who then connects George to Laura. The caveat here is that Lark never gives out Laura's phone number; in fact, Laura doesn't have a public phone number and can only be called by Lark. And here's where NAT can add an extra layer of security, by only allowing needed ports to be accessed, without allowing anyone to connect to any port.
578:20 Now, I hope this has helped you understand the process of network address translation: how the translation happens, and the process of using a NAT table to achieve packet translation and delivery to its destination. This is so common in most environments that you will encounter it, and it's very important to fully understand the different types of NAT and how they can be used in these types of environments. And so that's pretty much all I wanted to cover in this lesson on network address translation, so you can now mark this lesson as complete, and let's move on to the next one.
579:06 Welcome back. So now that we've covered the fundamentals of DNS along with the different record types, I wanted to focus in on Google Cloud's DNS service, called Cloud DNS. Now, Cloud DNS is a fully managed service that manages DNS servers for your specific zones, and since Cloud DNS shows up on the exam only at a high level, I will be giving an overview of what this service can do. So with that being said, let's dive in.
579:34 Now, Cloud DNS acts as an authoritative DNS server for public zones that are visible to the internet, or for private zones that are visible only within your network, and it is commonly referred to as Google's DNS as a service. Cloud DNS has servers that span the globe, making it a globally resilient service. Now, while it is a global service, there is no way to select specific regions to deploy your zones and DNS server policies; you simply add your zones, records, and policies, and they are distributed among Google's DNS servers across the globe. Cloud DNS is also one of the few Google Cloud services that offers 100% availability, along with low-latency access, by leveraging Google's massive global network backbone.
580:30 Now, in order to use Cloud DNS with a specific publicly available domain, a domain name must be purchased through a domain name registrar, and you can register a domain name through Google Domains or another domain registrar of your choice; Cloud DNS does not provide this service. And just as a note, to create private zones, the purchasing of a domain name is not necessary.
580:56 Now, as stated earlier, Cloud DNS offers the flexibility of hosting both public zones and privately managed DNS zones. Public zones are zones that are visible to the public internet, and so when Cloud DNS is managing your public domain, it has public authoritative name servers that respond to public zone DNS queries for your specific domain. Now, when it comes to private zones, these enable you to manage custom domain names for your Google Cloud resources without exposing any DNS data to the public internet. A private zone can only be queried by resources in the same project where it is defined. And as we discussed earlier, a zone is a container of DNS records that are queried by DNS, so from a private zone perspective, these can only be queried by one or more VPC networks that you authorize to do so. And just as a note, the VPC networks that you authorize must be located in the same project as the private zone; to query records hosted in managed private zones in other projects, the use of DNS peering is needed. Now, I don't want to get too deep into DNS peering, but just know that VPC Network Peering is not required for a Cloud DNS peering zone to operate; peering zones do not depend on VPC Network Peering.
582:26 Now, each managed zone that you create is associated with a Google Cloud project, and once this zone is created, it is hosted by Google's managed name servers. These zones are always hosted on Google's managed name servers within Google Cloud, so you would create records and record sets, and these servers would then become allocated to that specific zone, hosting your records and record sets. And just as a quick reminder, a record set is the collection of DNS records in a zone that have the same name and are of the same type. Most record sets contain a single record, but it's not uncommon to see larger record sets; a great example of this are A records or NS records, which we discussed earlier, and these records can usually be found in pairs.
583:16 And so now, to give you a practical example of Cloud DNS, I wanted to bring the theory into practice through a short demo where I'll be creating a managed private zone. So whenever you're ready, join me in the console.
583:28 And so here we are, back in the console. I'm logged in as tonybowties@gmail.com, and I'm currently in project bowtie-inc. So now, to get to Cloud DNS, I'm going to go over to the navigation menu, scroll down to Network Services, and go over to Cloud DNS. And because I currently don't have any zones, I'm prompted with only one option, which is to create a zone, so I'm going to go ahead and create a zone. And so here I've been prompted with a bunch of different options in order to create my DNS zone. The first option that I have is zone type, and because I'm creating a private zone, I'm going to simply click on Private. I need to provide a zone name, which I'm going to call tony-bowtie. Next, I'm going to have to provide a DNS name, which I will call tonybowtie.private, and under the description I'm just going to type in "private zone for tony bowtie".
584:29 And so the next field I've been given is the options field, where it is currently marked as "default (private)". If I go over here to the right-hand side and open up the drop-down menu, I'm given the options to forward queries to another server, DNS peering, managed reverse lookup zones, and use a Service Directory namespace. And so, depending on your type of scenario, one of these five options in most cases will suffice, so I'm going to keep it under "default (private)". And under networks, it says your private zone will be visible to the selected networks, so I'm going to click on the drop-down, and I'm given only the option of the default network, because it's the only network that I have, and so I'm going to select it and click on the white space.
585:13 And if I feel so inclined, I can simply click on the shortcut for the command line, and here I'm given the specific commands I would use if I were to create this DNS zone from the command line. So I'm going to click on Close here, and I'm going to click on Create. And as you can see here, my zone has been created, along with a couple of DNS records: the first one being my name server records, as well as my start of authority record. And so, as a note to know for the exam, when creating a zone, these two records will always be created, both the SOA record and the NS record.
585:50 both the soa record and the ns record and moving on to some other options here
585:53 and moving on to some other options here i can add another record set if i choose
585:55 i can add another record set if i choose to again the dns name the record type
585:58 to again the dns name the record type which i have a whole slew of record
586:01 which i have a whole slew of record types to choose from it's ttl and the ip
586:04 types to choose from it's ttl and the ip address but i'm not going to add any
586:06 address but i'm not going to add any records so i'm just going to cancel and
586:08 records so i'm just going to cancel and by clicking in use by i can view which
586:11 by clicking in use by i can view which vpc network is using this zone and as
586:13 vpc network is using this zone and as expected the default network shows up
586:16 expected the default network shows up and i also have the choice of adding
586:17 and i also have the choice of adding another network but since i don't have
586:19 another network but since i don't have any other networks i can't add anything
586:22 any other networks i can't add anything so i'm going to simply cancel i also
586:23 so i'm going to simply cancel i also have the option of removing any networks
586:26 have the option of removing any networks so if i click on this i can remove the
586:28 so if i click on this i can remove the network or i can also remove the network
586:30 network or i can also remove the network by clicking on the hamburger menu and so
586:32 by clicking on the hamburger menu and so as you can see i have a slew of options
586:35 as you can see i have a slew of options to choose from when creating zones and
586:38 to choose from when creating zones and record sets and so that about covers
586:40 record sets and so that about covers everything that i wanted to show you
586:42 everything that i wanted to show you here in cloud dns
586:44 here in cloud dns but before i go i'm going to go ahead
586:45 but before i go i'm going to go ahead and clean up and i'm just going to click
586:47 and clean up and i'm just going to click on the garbage can here on the right
586:49 on the garbage can here on the right hand side of the zone and i'm going to
586:51 hand side of the zone and i'm going to be prompted if i want to delete the zone
586:53 be prompted if i want to delete the zone yes i do so i'm going to click on delete
586:56 yes i do so i'm going to click on delete and so that pretty much covers
586:57 and so that pretty much covers everything that i wanted to show you
586:59 everything that i wanted to show you with regards to cloud dns so you can now
587:01 with regards to cloud dns so you can now mark this lesson as complete and let's
587:04 mark this lesson as complete and let's move on to the next one
587:12 Welcome back. Now, before we step into the Compute Engine section of the course, I wanted to cover a basic foundation of what makes these VMs possible, and this is where a basic understanding of virtualization comes into play. Now, this is merely an introductory lesson to virtualization, and I won't be getting too deep into the underpinnings; it serves as just a basic foundation as to how Compute Engine gets its features under the hood and how they are possible through the use of virtualization. For a more in-depth understanding of virtualization, I will be including some links in the lesson text for those who are looking to learn more, but for now, this will provide just enough theory to help you understand how Compute Engine works. So with that being said, let's dive in.
588:02 So what exactly is virtualization? Well, virtualization is the process of running multiple operating systems on a server simultaneously. Now, before virtualization became popular, a standard model was used where an operating system would be installed on a server. So the server would consist of typical hardware like CPU, memory, network cards, and other devices such as video cards, USB devices, and storage, and then the operating system would run on top of the hardware. Now, there is a middle layer of the operating system, a supervisor if you will, that is responsible for interacting with the underlying hardware, and this is known as the kernel. The kernel manages the distribution of the hardware resources of the computer efficiently and fairly among all the various processes running on the computer. Now, the kernel operates under what is called kernel mode, or privileged mode, as it runs privileged instructions that interact with the hardware directly. Now, the operating system allows other software to run on top of it, like an application, but that software cannot interact directly with the hardware; it must interact with the operating system in user mode, or non-privileged mode. So when Lark decides to do something in an application that needs to use the system hardware, that application needs to go through the operating system: it needs to make what's known as a system call. And this is the model of running one operating system on a single server.
589:46 a single server now when passed servers would traditionally run one application
589:48 would traditionally run one application on one server with one operating system
589:52 on one server with one operating system in the old system the number of servers
589:54 in the old system the number of servers would continue to mount
589:56 would continue to mount since every new application required its
589:58 since every new application required its own server and its own operating system
590:01 own server and its own operating system as a result expensive hardware resources
590:04 as a result expensive hardware resources were purchased but not used and each
590:07 were purchased but not used and each server would use approximately under 20
590:11 server would use approximately under 20 of its resources on average server
590:14 of its resources on average server resources were then known as
590:17 resources were then known as underutilized now there came a time when
590:19 underutilized now there came a time when multiple operating systems were
590:21 multiple operating systems were installed on one computer
590:23 installed on one computer isolated from each other with each
590:26 isolated from each other with each operating system running their own
590:28 operating system running their own applications this was a perfect model to
590:31 applications this was a perfect model to consolidate hardware and keep
590:33 consolidate hardware and keep utilization high but there is a major
590:36 utilization high but there is a major issue that arose each cpu at this given
590:39 issue that arose each cpu at this given moment in time could only have one thing
590:42 moment in time could only have one thing running as privileged so having multiple
590:44 running as privileged so having multiple operating systems running on their own
590:47 operating systems running on their own in an unmodified state
590:49 in an unmodified state and expecting to be running on their own
590:51 and expecting to be running on their own in a privileged state running privileged
590:54 in a privileged state running privileged instructions
590:55 instructions was causing instability in systems
590:58 was causing instability in systems causing not just application crashes but
591:02 causing not just application crashes but system crashes now a hypervisor is what
591:06 system crashes now a hypervisor is what solved this problem it is a small
591:08 solved this problem it is a small software layer that enables multiple
591:11 software layer that enables multiple operating systems to run alongside each
591:14 operating systems to run alongside each other
591:15 other sharing the same physical computing
591:17 sharing the same physical computing resources these operating systems come
591:20 resources these operating systems come as virtual machines or vms and these are
591:24 as virtual machines or vms and these are files that mimic an entire computing
591:27 files that mimic an entire computing hardware environment in software the
591:29 hardware environment in software the hypervisor also known as a virtual
591:32 hypervisor also known as a virtual machine monitor or vmm
591:35 machine monitor or vmm manages these vms as they run alongside
591:38 manages these vms as they run alongside each other it separates virtual machines
591:41 each other it separates virtual machines from each other logically assigning each
591:44 from each other logically assigning each its own slice of the underlying
591:46 its own slice of the underlying computing cpu memory and other devices
591:50 computing cpu memory and other devices like graphics network and storage this
591:53 like graphics network and storage this prevents the vms from interfering with
591:56 prevents the vms from interfering with each other so if for example one
591:59 each other so if for example one operating system suffers a crash or a
592:01 operating system suffers a crash or a security compromise the others will
592:04 security compromise the others will survive and continue running now the
592:06 survive and continue running now the hypervisor was never as efficient as how
592:09 hypervisor was never as efficient as how you see it here it went through some
592:11 you see it here it went through some major iterations that gave its structure
592:14 major iterations that gave its structure as we know it today initially
592:17 as we know it today initially virtualization had to be done in
592:18 virtualization had to be done in software or what we now refer to as the
592:21 software or what we now refer to as the host machine
592:23 host machine and the operating system with its
592:24 and the operating system with its applications put in logical containers
592:27 applications put in logical containers known as virtual machines or guests the
592:30 known as virtual machines or guests the operating system would be installed on
592:32 operating system would be installed on the host which included additional
592:35 the host which included additional capabilities called a hypervisor and
592:38 capabilities called a hypervisor and allowed it to make the necessary
592:40 allowed it to make the necessary privileged calls to the hardware
592:43 privileged calls to the hardware having full access to the host the
592:45 having full access to the host the hypervisor exposed the interface of the
592:48 hypervisor exposed the interface of the hardware device that is available on the
592:51 hardware device that is available on the host
592:52 host and allowed it to be mapped to the
592:54 and allowed it to be mapped to the virtual machine and emulated the
592:56 virtual machine and emulated the behavior of this device and this allowed
592:59 behavior of this device and this allowed the virtual machine using the operating
593:01 the virtual machine using the operating system drivers that were designed to
593:04 system drivers that were designed to interact with the emulated device
593:07 interact with the emulated device without installing any special drivers
593:09 without installing any special drivers or tools
593:11 or tools as well as keeping the operating system
593:13 as well as keeping the operating system unmodified the problem here is that it
593:16 unmodified the problem here is that it was all emulated and so every time the
593:19 was all emulated and so every time the virtual machines made calls back to the
593:22 virtual machines made calls back to the host each instruction needed to be
593:25 host each instruction needed to be translated by the hypervisor
593:27 translated by the hypervisor using what's called a binary translation
593:30 using what's called a binary translation now without this translation the
593:32 now without this translation the emulation wouldn't work and would cause
593:35 emulation wouldn't work and would cause system crashes bringing down all virtual
593:38 system crashes bringing down all virtual machines in the process now the problem
593:41 machines in the process now the problem with this process is that it made the
593:44 with this process is that it made the system painfully slow and it was this
593:46 system painfully slow and it was this performance penalty that caused this
593:49 performance penalty that caused this process to not be so widely adopted but
593:52 process to not be so widely adopted but then another type of virtualization came
593:55 then another type of virtualization came on the scene called para virtualization
593:59 on the scene called para virtualization now in this model a modified guest
594:01 now in this model a modified guest operating system is able to speak
594:04 operating system is able to speak directly to the hypervisor and this
594:06 directly to the hypervisor and this involves having the operating system
594:08 involves having the operating system kernel to be modified and recompiled
594:12 kernel to be modified and recompiled before installation onto the virtual
594:15 before installation onto the virtual machine this would allow the operating
594:17 machine this would allow the operating system to talk directly with the
594:20 system to talk directly with the hypervisor without any performance hits
594:23 hypervisor without any performance hits as there is no translation going on like
594:26 as there is no translation going on like an emulation para virtualization
594:29 an emulation para virtualization replaces instructions that cannot be
594:31 replaces instructions that cannot be virtualized with hyper calls that
594:34 virtualized with hyper calls that communicate directly with the hypervisor
594:36 communicate directly with the hypervisor so a hypercall is based on the same
594:39 so a hypercall is based on the same concept as a system call privileged
594:41 concept as a system call privileged instructions
594:42 instructions that accept instead of calling the
594:44 that accept instead of calling the kernel directly it calls the hypervisor
594:47 kernel directly it calls the hypervisor and due to the modification in this
594:49 and due to the modification in this guest operating system performance is
594:52 guest operating system performance is enhanced as the modified guest operating
594:55 enhanced as the modified guest operating system communicates directly with the
594:58 system communicates directly with the hypervisor and emulation overhead is
595:01 hypervisor and emulation overhead is removed the guest operating system
595:03 removed the guest operating system becomes almost virtualization aware yet
595:07 becomes almost virtualization aware yet there is still a process whereby
595:09 there is still a process whereby software was used to speak to the
595:11 software was used to speak to the hardware the virtual machines could
595:14 hardware the virtual machines could still not access the hardware directly
595:17 although
595:18 things changed in the world of
595:19 virtualization when the physical
595:21 hardware on the host became
595:23 virtualization aware and this is where
595:26 hardware-assisted virtualization came
595:28 into play now hardware-assisted
595:31 virtualization is an approach that
595:34 enables efficient
595:36 full virtualization using help from
595:38 hardware capabilities
595:40 on the host cpu using this model the
595:44 operating system has direct access to
595:46 resources without any hypervisor
595:49 emulation or operating system
595:51 modification the hardware itself becomes
595:54 virtualization aware the cpu contains
595:58 specific instructions and capabilities
596:01 so that the hypervisor can directly
596:03 control and configure this support it
596:06 also provides improved performance
596:09 because the privileged instructions from
596:12 the virtual machines are now trapped and
596:15 emulated in the hardware directly this
596:17 means that the operating system kernels
596:20 no longer need to be modified and
596:22 recompiled like in paravirtualization
596:25 and can run as-is at the same time the
596:29 hypervisor also does not need to be
596:31 involved in the extremely slow process
596:35 of binary translation now there is one
596:37 more iteration that i wanted to discuss
596:40 when it comes to virtualization and that
596:42 is kernel level virtualization
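As a quick aside, both hardware-assisted and kernel-level virtualization rely on the CPU exposing virtualization extensions. On a Linux host you can check for them by reading /proc/cpuinfo; a minimal sketch (the flag names are the standard markers for Intel VT-x and AMD-V):

```shell
# Count logical CPUs advertising hardware virtualization extensions:
# "vmx" marks Intel VT-x, "svm" marks AMD-V. A count of 0 means no
# hardware assist is available (or it is disabled in firmware).
count=$(grep -Ec 'vmx|svm' /proc/cpuinfo || true)
echo "virtualization-capable logical cpus: ${count}"
```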
596:45 now instead of using a hypervisor
596:48 kernel level virtualization runs a
596:51 separate version of the linux kernel and
596:54 sees the associated virtual machine as a
596:56 user space process on the physical host
597:00 this makes it easy to run multiple
597:02 virtual machines on a single host a
597:05 device driver is used for communication
597:08 between the main linux kernel and the
597:11 virtual machine every vm is implemented
597:14 as a regular linux process
597:17 scheduled by the standard linux
597:19 scheduler
597:20 with dedicated virtual hardware like a
597:23 network card
597:24 graphics adapter
597:25 cpu memory and disk hardware support by
597:29 the cpu is required for virtualization a
597:33 slightly modified emulation process is
597:36 used as the display and execution
597:38 container for the virtual machines in
597:41 many ways kernel level virtualization is
597:44 a specialized form of server
597:47 virtualization and this is the type of
597:49 virtualization platform that is used in
597:52 all of google cloud now with this type
597:54 of virtualization because of the kernel
597:57 acting as the hypervisor it enables a
598:00 specific feature called nested
598:02 virtualization now with nested
598:05 virtualization it is possible to
598:08 install a hypervisor on top of an
598:10 already running virtual machine
598:13 and so this is what google cloud has
598:15 done now you're probably wondering after
598:17 going through all the complexities
598:19 involved with previous virtualization
598:22 models
598:23 what makes this scenario worthwhile well
598:26 using nested virtualization makes it
598:29 easier for users to move their
598:31 on-premises
598:33 virtualized workloads to the cloud
598:36 without having to import and convert vm
598:39 images so in essence
598:41 it eases migration to the cloud
598:45 a great use case for many but it wouldn't
598:47 be possible on google cloud without the
598:50 benefit of running kernel level
598:52 virtualization now this is an advanced
598:54 concept that does not show up on the
598:56 exam but i wanted you to understand
598:59 virtualization at a high level
599:01 so that you can understand nested
599:03 virtualization within google cloud
599:06 as it is a part of the feature set of
599:08 compute engine and so that's pretty much
599:10 all i wanted to cover when it comes to
599:13 virtualization
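As an aside, nested virtualization on Compute Engine is switched on per instance at create time. A hedged sketch with placeholder names, assuming a recent gcloud release (which exposes a dedicated flag) and an Intel Haswell or later CPU platform:

```shell
# Illustrative only: instance name and zone are placeholders.
gcloud compute instances create nested-demo \
  --zone=us-central1-a \
  --min-cpu-platform="Intel Haswell" \
  --enable-nested-virtualization
```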
599:14 so you can now mark this lesson as
599:16 complete and let's move on to the next
599:18 one [Music]
599:22 welcome back now earlier on in the
599:25 course i discussed compute engine at a
599:27 high level to understand what it is and
599:31 what it does the goal for this section
599:33 is to dive deeper into compute engine as
599:36 it comes up heavily on the exam and so i
599:39 want to make sure i expose all the
599:41 nuances
599:42 as well it is the go-to service offering
599:45 from google cloud when looking to solve
599:48 any general computing needs with this
599:50 lesson specifically i will be going into
599:53 what makes up an instance and the
599:55 different options that are available
599:57 when creating the instance so with that
599:59 being said let's dive in
600:02 now compute engine lets you create and
600:05 run virtual machines known as instances
600:09 and host them on google's infrastructure
600:11 compute engine is google's
600:13 infrastructure as a service virtual
600:16 machine offering so it being an iaas
600:19 service google takes care of the
600:21 virtualization platform the physical
600:23 servers the network and storage along
600:26 with managing the data center and these
600:28 instances are available in different
600:31 sizes depending on how much cpu and
600:34 memory you might need as well compute
600:36 engine offers different machine families for
600:39 the type of workload you need it for
600:41 each instance is charged by the second
600:44 after the first minute as this is a
600:47 consumption-based model and as well
600:49 these instances are launched in a vpc
600:52 network in a specific zone and these
600:55 instances will actually sit on hosts in
600:58 these zones and you will be given the
601:00 option of using a multi-tenant host
601:03 where the server that is hosting your
601:05 machine is shared with others
601:07 but please note that each instance is
601:10 completely isolated from the others so no
601:13 one can see anyone else's instances
601:15 now you're also given the option of
601:18 running your instance on a sole-tenant
601:20 node whereby your instance is on its own
601:23 dedicated host that is reserved just
601:26 for you and you alone you don't share it
601:28 with anyone else and this is strictly
601:31 for you only now although this option
601:33 may sound really great it does come at a
601:36 steep cost
601:37 so unless your use case requires you to
601:39 use a sole-tenant node for security or
601:43 compliance purposes i recommend that you
601:45 stick with a multi-tenant host when
601:48 launching your instances as this is
601:50 usually the most common selection for
601:52 most
601:53 now compute engine instances can be
601:55 configured in many different ways and
601:57 allow you the flexibility to fulfill the
602:00 requirements of your specific scenario and
602:02 as you can see here there are four
602:05 different base options when it comes to
602:07 configuration of the instance that you
602:09 are preparing to launch and so i wanted
602:12 to take time to go through them in just
602:14 a bit of detail for context starting
602:16 first with the machine type which covers
602:20 vcpu and memory now there are many
602:23 different predefined machine types that
602:25 i will be covering in great depth in a
602:28 different lesson but for now just know
602:30 that they are available in different
602:32 families depending on your needs and can
602:35 be chosen from the general-purpose
602:37 compute-optimized and memory-optimized
602:40 machine types they are available in
602:43 intel or amd flavors and if the
602:45 predefined options don't fit your
602:48 needs you have the option of creating a
602:50 custom machine that will suit your
602:52 specific workload now when creating a vm
602:55 instance on compute engine each virtual
602:58 cpu or vcpu is implemented as a single
603:02 hardware hyper-thread on one of the
603:05 available cpu processors that live on
603:08 the host now when choosing the number of
603:10 vcpus on an instance
603:12 you must take into consideration the
603:15 desired network throughput as the number
603:18 of vcpus will determine this throughput
603:21 as the bandwidth is determined per vm
603:24 instance not per network interface or
603:27 per ip address and so the network
603:30 throughput is determined by calculating
603:33 2 gigabits per second for every vcpu on
603:36 your instance so if you're looking for
603:39 greater network throughput then you may
603:41 want to select an instance with more
603:44 vcpus and so once you've determined a
603:46 machine type for your compute engine
603:48 instance you will need to provide it an
603:51 image with an operating system to boot
603:53 up with now when creating your vm
603:55 instances you must use an operating
603:58 system image to create boot disks for
604:01 your instances now compute engine offers
604:04 many preconfigured public images that
604:07 have compatible linux or windows
604:10 operating systems and these operating
604:12 system images can be used to create and
604:15 start instances compute engine uses your
604:18 selected image to create a persistent
604:21 boot disk for each instance by default
604:24 the boot disk for your instance is the
604:27 same size as the image that you selected
604:30 and you can use most public images at no
604:33 additional cost but please be aware that
604:36 there are some premium images that do
604:38 add additional cost to your instances
604:41 now moving on to custom images this is a
604:44 boot disk image that you own and control
604:47 access to a private image if you will
604:50 custom images are available only to your
604:53 cloud project unless you specifically
604:56 decide to share them with another
604:58 project or another organization you can
605:01 create a custom image from boot disks or
605:05 other images then use the custom image
605:08 to create an instance custom images that
605:10 you import to compute engine add no cost
605:14 to your instances but do incur an image
605:17 storage charge
605:18 while you keep your custom image in your
605:20 project now the third option that you
605:22 have is using a marketplace image now
605:26 google cloud marketplace lets you
605:28 quickly deploy
605:29 functional software packages that run on
605:32 google cloud you can start up a software
605:35 package without having to manually
605:37 configure the software the vm instances
605:41 the storage or even the network settings
605:43 this is an all-in-one instance template
605:46 that includes the operating system and
605:49 the software preconfigured and you can
605:51 deploy a software package whenever you
605:53 like and it is by far the easiest way to
605:56 launch a software package and i will be
605:59 giving you a run-through on these
606:00 marketplace images in a later demo now
606:04 once you've decided on your machine type
606:06 as well as the type of image that you
606:08 want to use moving into the type of
606:11 storage that you want would be your next
606:13 step now when configuring a new instance
606:15 you will need to create a new boot disk
606:18 for it and this is where performance
606:20 versus cost comes into play as you have
606:23 the option to pay less and have a slower
606:26 disk speed or lower iops or you can
606:29 choose to have faster disk speed with
606:32 higher iops but pay a higher cost and so
606:35 the slowest and most inexpensive of
606:38 these options is the standard persistent
606:40 disk which is backed by standard hard
606:43 disk drives the balanced persistent disks
606:46 are backed by solid state drives and are
606:49 faster and can provide higher iops than
606:52 the standard option and lastly ssd is
606:56 the fastest option which also brings
606:58 with it the highest iops available for
607:01 persistent disks now outside of these
607:04 three options for persistent disks you
607:06 also have the option of choosing a local
607:09 ssd and these are solid state drives
607:12 that are physically attached to the
607:14 server that hosts your vm instances and
607:17 this is why they have higher
607:19 throughput and lower latency than any
607:22 of the available persistent disks just
607:25 as a note the data that you store on a
607:27 local ssd persists only until the
607:31 instance is stopped or deleted which is
607:34 why local ssds are suited only for
607:36 temporary storage such as caches or swap
607:40 disk and so lastly moving into
607:43 disk and so lastly moving into networking
607:44 networking each network interface of a compute
607:46 each network interface of a compute engine instance is associated with a
607:49 engine instance is associated with a subnet of a unique vpc network as you've
607:53 subnet of a unique vpc network as you've seen in the last section you can do this
607:55 seen in the last section you can do this with an auto a default or a custom
607:58 with an auto a default or a custom network each network is available in
608:01 network each network is available in many different regions and zones within
608:04 many different regions and zones within that region we've also experienced
608:06 that region we've also experienced routing traffic for our instance both in
608:10 routing traffic for our instance both in and out of the vpc network
608:12 and out of the vpc network by use of firewall rules targeting ip
608:15 by use of firewall rules targeting ip ranges
608:16 ranges specific network tags or by instances
608:19 specific network tags or by instances within the network now load balancers
608:22 within the network now load balancers are responsible for helping distribute
608:25 are responsible for helping distribute user traffic
608:26 user traffic across multiple instances either within
608:29 across multiple instances either within the network or externally using a
608:32 the network or externally using a regional or global load balancer
608:34 regional or global load balancer and i will be getting into low balancing
608:37 and i will be getting into low balancing in another section of the course but i
608:39 in another section of the course but i wanted to stress that load balancers are
608:42 wanted to stress that load balancers are part of instance networking that help
608:45 part of instance networking that help route and manage traffic coming in and
608:48 route and manage traffic coming in and going out of the network
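All four configuration areas covered in this lesson (machine type, image, boot disk, and networking) end up as flags on a single create call. A hedged sketch where every name and value is a placeholder:

```shell
# Illustrative only: names, zone, and sizes are placeholders.
# e2-standard-4 has 4 vCPUs, so egress tops out around 8 Gbps
# under the 2 Gbps-per-vCPU rule mentioned earlier.
gcloud compute instances create bowtie-web-1 \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-type=pd-balanced \
  --boot-disk-size=50GB \
  --network=default \
  --tags=http-server
```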
608:50 going out of the network and so this is a high level overview of
608:52 and so this is a high level overview of the different configuration types that
608:55 the different configuration types that go into putting together an instance and
608:57 go into putting together an instance and i will be diving deeper into each
609:00 i will be diving deeper into each in this section as well i will be
609:02 in this section as well i will be putting a hands-on approach to this by
609:05 putting a hands-on approach to this by creating an instance in the next lesson
609:07 creating an instance in the next lesson and focusing on the different available
609:10 and focusing on the different available features that you can use for your
609:12 features that you can use for your specific use case and so this is all i
609:15 specific use case and so this is all i wanted to cover for this lesson so you
609:17 wanted to cover for this lesson so you can now mark this lesson as complete and
609:20 can now mark this lesson as complete and let's move on to the next one
609:21 let's move on to the next one [Music]
609:25 welcome back now i know in previous
609:28 demonstrations we've built quite a few
609:30 compute engine instances and have
609:33 configured them accordingly in this
609:34 demonstration we're going to go through
609:37 a build of another instance but i wanted
609:39 to dig deeper into the specific
609:42 configurations that are available for
609:44 compute engine so with that being said
609:47 let's dive in and so i am now logged in
609:50 under tony bowties gmail.com as well i
609:54 am logged in under the bowtie inc
609:56 project so in order to kick off this
609:58 demo i'm going to head on over to the
610:00 compute engine console so i'm going to
610:02 go over to the navigation menu and i'm
610:04 going to scroll down to compute engine
610:07 and so here i'm prompted
610:09 to either create or import a vm instance
610:12 as well as taking the quick start and so
610:14 i'm not going to import or take the
610:16 quick start so i'm going to simply click
610:18 on create
610:20 and so i want to take a moment here to
610:22 focus on the left hand menu where there
610:24 are a bunch of different options to
610:26 create any given instance so the first
610:29 and default option allows me to create
610:32 the instance from scratch choosing the
610:34 new vm instance from template option
610:37 allows me to create a new instance from
610:40 an instance template and because i don't
610:42 have any instance templates i am
610:44 prompted here with the option to create
610:46 one and so for those of you who are
610:48 unfamiliar with instance templates
610:51 templates are used in managed instance
610:53 groups and define instance properties
610:56 for when instances are launched within
610:58 that managed instance group but don't
611:00 worry i will be covering instance groups
611:03 and instance templates in a later lesson
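for reference, an instance template can also be created from the command line; this is a hedged sketch, and the template name, machine type, and image family below are placeholder assumptions rather than values from this course:

```shell
# sketch: create an instance template that a managed instance group
# could later use to launch instances (all names/sizes are placeholders)
gcloud compute instance-templates create example-template \
    --machine-type=e2-micro \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```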
611:06 the next option that's available is new
611:08 vm instance from machine image and an
611:11 image is a clone or a copy of an
611:13 instance and again i will be covering
611:16 this in a separate lesson and going
611:18 through all the details of machine
611:20 images but if i did have any machine
611:23 images i would be able to create my
611:25 instance from here but since i do not i
611:28 am prompted with the option to create a
611:30 new machine image now the last option
611:32 that i wanted to show you is the
611:34 marketplace
611:36 and so the marketplace has existing
611:38 machine images that are all
611:40 pre-configured with their proper operating
611:43 systems as well as the software to
611:45 accompany them so for instance if i'm
611:48 looking to create a vm with a wordpress
611:50 installation on it i can simply go up to
611:53 the top to the search bar type in
611:55 wordpress and i will be presented with
611:57 many different options and i'm just
611:59 going to choose the one here at the top
612:01 and i am presented with 49 results of
612:05 virtual machines with different types of
612:08 wordpress installations on them and
612:10 these are all different instances that
612:12 have been configured specifically for
612:14 wordpress by different companies like
612:17 lightspeed analog innovation and
612:20 cognosis inc and so for this
612:22 demonstration i'm going to choose
612:24 wordpress on centos 7
612:28 and here i'm given an overview about
612:30 the software itself i'm also given
612:32 information about the company that
612:34 configured this as well at the top i'm
612:36 given a monthly estimated cost for this
612:39 specific instance and if i scroll down
612:41 the page i can get a little bit more
612:44 information with regards to this image
612:46 and as shown here on the right i can see
612:48 my pricing the usage fee will cost me
612:51 $109 a month along with the vm instance
612:54 type that the software is configured for
612:57 the amount of disk space and the
612:59 sustained use discount i've also been
613:01 given some links here for tutorials and
613:03 documentation
613:05 and i've also been given instructions
613:07 for maintenance and support i've been
613:09 given both an email and a link to live
613:12 support and of course at the bottom we
613:14 have the terms of service and this is a
613:16 typical software package amongst many
613:19 others that's available in the google
613:21 cloud marketplace now i can go ahead and
613:23 launch this if i choose but i'm going to
613:25 choose not to launch this and i'm going
613:27 to back out and so just to give you some
613:29 context with regards to enterprise
613:31 software
613:32 software packages like f5 and jenkins
613:36 are also available in the google cloud
613:38 marketplace and again when i click on
613:40 the first option it'll give me a bunch
613:42 of available options on jenkins and its
613:45 availability from different companies on
613:48 different platforms now just as a note
613:51 to update your existing deployment of a
613:54 software package
613:55 you have to redeploy the software
613:58 package from marketplace in order to
614:01 update it but other than that caveat the
614:04 easiest way to deploy a software package
614:07 is definitely through the marketplace
614:09 and so now that we've gone through all
614:11 the different options on how to create
614:14 an instance i'm gonna go back and select
614:16 new vm instance so i can create a new vm
614:20 from scratch and so i am prompted here
614:22 at the top with a note telling me that
614:25 there was a draft that was saved from
614:27 when i started to create my new
614:29 instance but i navigated away from it
614:32 and i have the option to restore the
614:34 configuration i was working on and so
614:37 just know that when you are in the midst
614:39 of creating an instance
614:40 google cloud will automatically save a
614:43 draft of your build so that you are able
614:46 to continue working on it later now i
614:48 don't really need this draft but i will
614:51 just hit restore
614:53 and for the name i'm going to keep it as
614:55 instance 1 and for the sake of this demo
614:57 i'm going to add a label
614:59 the key is going to be environment and
615:01 the value will be testing i'm going to
615:03 go down to the bottom click save now
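as a side note, a label like this can also be attached from the command line once the instance exists; a hedged sketch using the key and value from this demo:

```shell
# sketch: add the environment=testing label to an existing instance
# (instance name and zone follow this demo; verify yours before running)
gcloud compute instances add-labels instance-1 \
    --zone=us-east1-b \
    --labels=environment=testing
```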
615:05 when it comes to the geographic location
615:08 of the instance using regions i can
615:10 simply click on the drop down and i will
615:13 have access to deploy this instance in
615:16 any currently available region as
615:19 regions are added they will be added
615:21 here as well and so i'm going to keep it
615:23 as us east one
615:26 and under zone i have the ability
615:29 to put it in any zone within that
615:31 region and so i'm going to keep it as us
615:33 east 1b and just as another note once
615:36 you've deployed the instance in a
615:38 specific region you will not be able to
615:41 move that instance to a different region
615:43 you will have to recreate it using a
615:45 snapshot in another region and i will be
615:47 going over this in a later lesson now
615:50 scrolling down to machine configuration
615:52 there are three different types of
615:54 families that you can choose from when
615:56 it comes to machine types the general
615:58 purpose the compute optimized and the
616:01 memory optimized the general purpose
616:03 machine family has a great available
616:06 selection of different series types that
616:08 you can choose from and is usually the
616:10 go to machine family if you're unsure
616:13 about which machine type to select so
616:16 for this demo i'm going to keep my
616:17 selection for series type as e2 and
616:20 under machine type i'm given a very
616:23 large selection of different sizes when
616:25 it comes to vcpu and memory and so i can
616:29 select from a shared core a standard
616:31 type a high memory type or a high cpu
616:35 type and i will be going over this in
616:38 greater detail in another lesson on
616:40 machine types now in case the predefined
616:43 machine types do not fit my needs
616:46 or the amount of vcpus and
616:49 memory that i need
616:50 falls in between those predefined machine
616:53 types i can simply select the custom
616:56 option and this will bring up a set of
616:58 sliders
616:59 where i am able to select both the
617:01 amount of vcpus and amount of memory
617:04 that i need for the instance that i am
617:07 creating now as i change the core
617:10 slider to either more vcpus or less my
617:14 core to memory ratio for this series
617:16 will stay the same and therefore my
617:18 memory will be adjusted automatically i
617:21 also have the option to change the
617:23 memory as i see fit to either add more
617:26 memory or to remove it and so this is
617:29 great for when you're in between sizes
617:32 and you're looking for something
617:33 specific that fits your workload and so
617:36 i'm going to change back the machine
617:37 type to an e2 micro
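the custom sliders in the console correspond to custom machine type flags in gcloud; this is a hedged sketch, and the instance name and the 4 vcpu / 8 gb shape are placeholder assumptions:

```shell
# sketch: request a custom machine shape instead of a predefined one
# (name, zone, and the cpu/memory values are placeholders)
gcloud compute instances create custom-example \
    --zone=us-east1-b \
    --custom-vm-type=e2 \
    --custom-cpu=4 \
    --custom-memory=8GB
```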
617:40 and as you can see in the top right
617:43 i will find a monthly estimate of how
617:46 much the instance will cost me
617:48 and i can click on this drop down and it
617:51 will give me a breakdown of the cost for
617:54 vcpu and memory the cost for my disks as
617:58 well as my sustained use discount and if
618:01 i had any other resources that i was
618:03 consuming like a static ip or an extra
618:06 attached disk those costs would show up
618:09 here as well and so if i switch to a
618:11 compute optimized machine you can see how the
618:14 price has changed but i'm given the
618:16 breakdown so that i know exactly what
618:18 i'm paying for so i'm going to switch it
618:20 back to general purpose
618:23 and i wanted to point out here the cpu
618:26 platform and gpu as you can add gpus to
618:29 your specific machine configuration and
618:32 so just as another note
618:34 gpus can only be added to an n1 machine
618:38 type as any other type will show the gpu
618:41 selection as grayed out and so here i
618:44 can add the gpu type as well as adding
618:47 the number of gpus that i need but for
618:49 the sake of this demonstration i'm not
618:51 going to add any gpus
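for completeness, attaching a gpu from the command line looks roughly like this; a hedged sketch where the zone, gpu model, and machine type are placeholder assumptions, and gpu availability varies by zone:

```shell
# sketch: attach a gpu at create time (n1 machine types only, per the lesson)
# gpu instances must use a TERMINATE maintenance policy
gcloud compute instances create gpu-example \
    --zone=us-east1-c \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE
```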
618:53 and i'm going to select the e2 series
618:56 and change it back to e2 micro scrolling
618:59 down a little bit here
619:01 when it comes to cpu platform depending
619:04 on the machine type you can choose
619:06 between intel or amd if you are looking
619:09 for a specific cpu but just know that
619:12 your configuration is permanent now
619:14 moving down a little bit more you will
619:17 see here display device now display
619:19 device is a feature on compute engine
619:22 that allows you to add a virtual display
619:25 to a vm for system management tools
619:28 remote desktop software and any
619:31 application that requires you to connect
619:34 to a display device on a remote server
619:37 this is an especially great feature to
619:39 have for when your server is stuck at
619:41 boot during patching or after a hardware failure and
619:44 you can't log in and the drivers are
619:46 already included for both windows and
619:49 linux vms this feature works with the
619:52 default vga driver right out of the box
619:55 and so i'm going to leave this unchecked
619:57 as i don't need it and i'm going to
619:59 move down to confidential vm service now
620:02 confidential computing is a security
620:04 feature to encrypt sensitive code and
620:07 data that's in memory so even when it's
620:11 being processed it is still encrypted
620:14 and is a great use case when you're
620:16 dealing with very sensitive information
620:18 that comes with strict security requirements now
620:21 compute engine also gives you the option
620:24 of deploying containers on it and this
620:26 is a great way to test your containers
620:29 instead of deploying a whole kubernetes
620:31 cluster and may even suffice for
620:34 specific use cases but just note that
620:36 you can only deploy one container per vm
620:40 instance and so now that we've covered
620:42 most of the general configuration
620:44 options for compute engine i wanted to
620:46 take a minute to dive into the options
620:48 that are available for boot disk so i'm
620:51 going to go ahead and click on change
620:53 and here i have the option of choosing
620:55 from a bunch of different public images
620:58 with different operating systems that i
621:00 can use for my boot disk so if i wanted
621:03 to load up ubuntu i can simply select
621:05 ubuntu and i can choose from each
621:08 different version that's available
621:10 as well i'm shown here the boot disk
621:13 type which is currently selected as the
621:15 standard persistent disk but i also have
621:18 the option of selecting either a
621:21 balanced persistent disk or ssd
621:23 persistent disk and i'm going to keep it
621:25 as standard persistent disk and if i
621:28 wanted to i can increase the boot disk
621:30 size so if i wanted 100 gigs i can
621:32 simply add it and if i select it and i
621:35 go back up to the top right hand corner
621:37 i can see that my price for the instance
621:40 has changed now i'm not charged for the
621:42 operating system due to it being an open
621:45 source image but i am charged more for
621:48 the standard persistent disk because i'm
621:50 no longer using 10 gigs but i'm using
621:52 100 gigabytes
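the image and boot disk choices above map to a few gcloud flags; a hedged sketch, noting that the exact image family name is an assumption and changes over time:

```shell
# sketch: pick the boot image, disk type, and disk size from the command line
# (ubuntu 20.04 lts is an assumed image family; check what's current)
gcloud compute instances create disk-example \
    --zone=us-east1-b \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-type=pd-standard \
    --boot-disk-size=100GB
```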
621:54 now let's say i wanted to go back and i
621:57 wanted to change this image to a windows
622:00 image i'm going to go down here to
622:01 windows server and i want to select
622:03 windows server 2016 i'm going to load up
622:06 the data center version and i'm going to
622:08 keep the standard persistent disk along
622:10 with 100 gigabytes i'm going to select
622:13 it if i scroll back up i can see that
622:15 i'm charged a licensing fee for windows
622:18 server and these images with these
622:20 licensing fees are known as premium
622:23 images so please make sure that you are
622:25 aware of these licensing fees when
622:28 launching your instances and because i
622:29 want to save money just for now i'm
622:32 going to scroll back down to my boot
622:33 disk and change it back to ubuntu
622:37 and i'm going to change the size back
622:39 down to 10 gigabytes as well before we
622:42 move on i wanted to touch on custom
622:44 images and so if i did have any custom
622:47 images i could see them here and i would
622:49 be able to create instances from my
622:52 custom images using this method i also
622:55 have the option of creating an instance
622:57 from a snapshot and because i don't have
622:59 any nothing shows up and lastly i have
623:02 the option of using existing disks so
623:05 let's say for instance i had a vm
623:07 instance and i had deleted it but i
623:10 decided to keep the attached boot disk
623:12 it would show up as unattached and i am
623:15 able to attach that to a new instance
623:18 and so now that i've shown you all the
623:20 available options when it comes to boot
623:22 disk i'm going to go ahead and select
623:24 the ubuntu operating system and move on
623:27 to the next option here we have identity
623:29 and api access which we've gone through
623:32 in great depth in a previous demo as
623:34 well i'm given an option to create a
623:37 firewall rule automatically for http and
623:41 https traffic and as for networking as
623:44 we covered it in great depth in the last
623:46 section
623:47 i will skip that part of the
623:49 configuration and simply launch it in
623:51 the default vpc and so just as a quick
623:54 note i wanted to remind you that down at
623:57 the bottom of the page you can find the
623:59 command line shortcut and when you click
624:01 on it it will give you the gcloud
624:03 command to run that you can use in order
624:06 to create your instance and so i want to
624:08 deploy this as is so i'm going to click
624:10 here on close and i'm going to click on
624:13 create
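the command that shortcut generates looks roughly like the following; this is a hedged sketch of this demo's build, not the exact generated output, and the image flags are assumptions:

```shell
# sketch: roughly what the command line shortcut produces for this build
# (the real generated command includes more flags; image family is assumed)
gcloud compute instances create instance-1 \
    --zone=us-east1-b \
    --machine-type=e2-micro \
    --labels=environment=testing \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```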
624:14 and so i'm just going to give it a
624:16 minute now so the instance can be
624:17 created and it took a few seconds but
624:20 the instance is created and this is
624:22 regarded as the inventory page to view
624:25 your instance inventory and to look up
624:27 any correlating information on any of
624:30 your instances and so this probably
624:32 looks familiar to you from the previous
624:35 instances that you've launched so here
624:37 we have the name of the instance the
624:39 zone
624:40 the internal ip along with the external
624:43 ip and a selection to connect to the
624:46 instance as well i'm also given the
624:48 option to connect to this instance in
624:50 different ways you also have the option
624:53 of adding more column information to
624:55 your inventory dashboard with regards to
624:58 your instance
625:00 and you can do this by simply clicking
625:01 on the columns button right here above
625:04 the list of instances and you can select
625:06 from creation time
625:08 machine type preserve state and even the
625:12 network and this may bring you more
625:14 insight on the information available for
625:16 that instance or even grouping of
625:19 instances with common configurations
625:22 this will also help you identify your
625:24 instances visually in the console and so
625:27 i'm just going to put the columns back
625:29 to exactly what it was
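the same details shown on the instance page can also be pulled from the command line; a hedged sketch using this demo's name and zone:

```shell
# sketch: print the full configuration of the instance we just created
gcloud compute instances describe instance-1 --zone=us-east1-b
```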
625:36 and so now i want to take a moment to dive right into the instance and have a
625:39 look at the instance details so as you
625:42 remember we selected the machine type of
625:45 e2 micro which has two vcpus and one
625:48 gigabyte of memory here we have the
625:50 instance id as well scrolling down we
625:54 have the cpu platform we have the
625:57 display device that i was mentioning
625:59 earlier along with the zone the labels
626:02 the creation time as well as the network
626:05 interface and scrolling down i can see
626:07 here the boot disk with the ubuntu image
626:10 as well as the name of the boot disk so
626:12 there are quite a few configurations
626:14 here and if i click on edit i can edit
626:17 some of these configurations on the fly
626:20 and with some configurations i need to
626:22 stop the instance before editing them
626:24 and there are some configurations like
626:26 the network interface where i would have
626:29 to delete the instance in order to
626:32 recreate it so for instance if i wanted
626:34 to change the machine type i need to
626:37 stop the instance in order to change it
626:39 stop the instance in order to change it and the same thing goes for my display
626:41 and the same thing goes for my display device as well the network interface in
626:45 device as well the network interface in order for me to change it from its
626:46 order for me to change it from its current network or subnetwork i'm going
626:49 current network or subnetwork i'm going to have to stop the instance in order to
626:52 to have to stop the instance in order to change it as well and so i hope this
626:54 change it as well and so i hope this general walkthrough of configuring an
626:56 general walkthrough of configuring an instance has given you a sense of what
626:58 instance has given you a sense of what can be configured on launch
627:01 can be configured on launch and allowed you to gain some insight on
627:03 and allowed you to gain some insight on editing features of an instance after
627:06 editing features of an instance after launch a lot of what you've seen here in
627:08 launch a lot of what you've seen here in this demo will come up in the exam and
627:11 this demo will come up in the exam and so i would recommend that before going
627:13 so i would recommend that before going into the exam to spend some time
627:16 into the exam to spend some time launching instances knowing exactly how
627:18 launching instances knowing exactly how they will behave and what can be edited
627:21 they will behave and what can be edited after creation that can be done on the
627:23 after creation that can be done on the fly edits that need the instance to be
627:26 fly edits that need the instance to be shut down and edits that need the
627:28 shut down and edits that need the instance to be recreated and so that's
627:30 instance to be recreated and so that's pretty much all i wanted to cover when
627:32 pretty much all i wanted to cover when it comes to creating an instance so you
627:35 it comes to creating an instance so you can now mark this as complete and let's
627:37 can now mark this as complete and let's move on to the next one
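The edit rules from this demo (what can be changed on the fly, what needs a stop, and what needs a recreate) can be summarized in a small lookup table. This is a study sketch only: the property names here are informal labels chosen for this illustration, not official Compute Engine API field names.

```python
# Which instance settings can be edited in which state, per the demo:
# some edits work on a running instance, changing the machine type or
# display device requires stopping the instance first, and changing the
# network interface requires deleting and recreating the instance.
# The keys are informal labels for this example, not API field names.
EDIT_RULES = {
    "labels": "editable while running",
    "machine_type": "requires instance stop",
    "display_device": "requires instance stop",
    "network_interface": "requires delete and recreate",
}

def edit_action(prop: str) -> str:
    """Return what must happen to the instance before `prop` can be changed."""
    return EDIT_RULES.get(prop, "unknown property")

print(edit_action("machine_type"))       # requires instance stop
print(edit_action("network_interface"))  # requires delete and recreate
```

A quick mnemonic for the exam: the closer a setting is to the underlying hardware and network placement, the more disruptive the edit.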
627:45 welcome back now in this lesson i'm going to be discussing compute engine
627:47 going to be discussing compute engine machine types now a machine type is a
627:50 machine types now a machine type is a set of virtualized hardware resources
627:53 set of virtualized hardware resources that's available to a vm instance
627:56 that's available to a vm instance including the system memory size
627:58 including the system memory size virtual cpu count
628:00 virtual cpu count and persistent disks in compute engine
628:03 and persistent disks in compute engine machine types are grouped and curated by
628:06 machine types are grouped and curated by families for different workloads you
628:09 families for different workloads you must always choose a machine type when
628:11 must always choose a machine type when you create an instance and you can
628:13 you create an instance and you can select from a number of pre-defined
628:16 select from a number of pre-defined machine types in each machine type
628:18 machine types in each machine type family if the pre-defined machine types
628:21 family if the pre-defined machine types don't meet your needs then you can
628:23 don't meet your needs then you can create your own custom machine types in
628:26 create your own custom machine types in this lesson i will be going through all
628:28 this lesson i will be going through all the different machine types their
628:30 the different machine types their families and their use cases so with
628:33 families and their use cases so with that being said let's dive in
628:36 that being said let's dive in now each machine type family displayed
628:39 now each machine type family displayed here includes different machine types
628:42 here includes different machine types each family is curated for specific
628:45 each family is curated for specific workload types the following primary
628:48 workload types the following primary machine types are offered on compute
628:50 machine types are offered on compute engine which is general purpose compute
628:53 engine which is general purpose compute optimized and memory optimized and so i
628:56 optimized and memory optimized and so i wanted to go through each one of these
628:58 wanted to go through each one of these families in a little bit of detail now
629:01 families in a little bit of detail now before diving right into it
629:03 before diving right into it defining what type of machine type you
629:06 defining what type of machine type you are running can be overwhelming for some
629:09 are running can be overwhelming for some but can be broken down to be understood
629:11 but can be broken down to be understood a bit better they are broken down into
629:14 a bit better they are broken down into three parts and separated by hyphens the
629:17 three parts and separated by hyphens the first part in this example shown here
629:20 first part in this example shown here is the series so for this example the
629:23 is the series so for this example the series is e2 and the number after the
629:26 series is e2 and the number after the letter is the generation type in this
629:28 letter is the generation type in this case it would be the second generation
629:31 case it would be the second generation now the series come in many different
629:33 now the series come in many different varieties and each are designed for
629:36 varieties and each are designed for specific workloads now moving on to the
629:39 specific workloads now moving on to the middle part of the machine type this is
629:41 middle part of the machine type this is the actual type and types as well can
629:44 the actual type and types as well can come in a slew of different flavors and
629:46 come in a slew of different flavors and are usually coupled with a specific
629:48 are usually coupled with a specific series so in this example the type here
629:51 series so in this example the type here is standard and so moving on to the
629:53 is standard and so moving on to the third part of the machine type this is
629:55 third part of the machine type this is the amount of vcp use
629:57 the amount of vcp use in the machine type and so with vcpus
630:00 in the machine type and so with vcpus they can be offered anywhere from one
630:03 they can be offered anywhere from one vcpu up to 416 vcpus and so for the
630:08 vcpu up to 416 vcpus and so for the example shown here this machine type has
630:11 example shown here this machine type has 32 vcpus and so there is one more aspect
630:15 32 vcpus and so there is one more aspect of a machine type
630:16 of a machine type which is the gpus
630:18 which is the gpus but please note that gpus are only
630:21 but please note that gpus are only available for the n1 series and so
630:24 available for the n1 series and so combining the series the type and the
630:27 combining the series the type and the vcpu
630:28 vcpu you will get your machine type and so
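The three-part naming scheme just described (series-type-vCPUs, e.g. e2-standard-32) can be sketched as a tiny parser. This is only an illustration of the convention: shared-core names like f1-micro omit the vCPU count, and custom machine types use a different pattern, so those aren't fully covered here.

```python
def parse_machine_type(name: str) -> dict:
    """Split a predefined machine type name like 'e2-standard-32' into
    its parts: series (letter plus generation number), type, and vCPU
    count. Shared-core names like 'f1-micro' have no vCPU part."""
    parts = name.split("-")
    series = parts[0]  # e.g. 'e2' -> family letter 'e', 2nd generation
    return {
        "series": series,
        "generation": int(series[1:]),
        "type": parts[1],  # e.g. 'standard', 'highmem', 'highcpu'
        "vcpus": int(parts[2]) if len(parts) > 2 else None,
    }

print(parse_machine_type("e2-standard-32"))
# {'series': 'e2', 'generation': 2, 'type': 'standard', 'vcpus': 32}
```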
630:31 you will get your machine type and so now that we've broken down the machine
630:32 now that we've broken down the machine types in order to properly define them
630:35 types in order to properly define them i wanted to get into the predefined
630:38 i wanted to get into the predefined machine type families
630:40 machine type families specifically starting off with the
630:42 specifically starting off with the general purpose predefined machine type
630:44 general purpose predefined machine type and all the general purpose machine
630:46 and all the general purpose machine types are available in the standard type
630:50 types are available in the standard type the high memory type and the high cpu
630:53 the high memory type and the high cpu type so the standard type
630:55 type so the standard type is the balance of cpu and memory and
630:58 is the balance of cpu and memory and this is the most common general purpose
631:00 this is the most common general purpose machine type general purpose also comes
631:03 machine type general purpose also comes in high memory and this is a high memory
631:06 in high memory and this is a high memory to cpu ratio so very high memory a lower
631:10 to cpu ratio so very high memory a lower cpu
631:11 cpu and lastly we have the high cpu machine
631:14 and lastly we have the high cpu machine type and this is a high cpu to memory
631:18 type and this is a high cpu to memory ratio so this would be the opposite of
631:20 ratio so this would be the opposite of the high memory so very high cpu to
631:23 the high memory so very high cpu to lower memory so now digging into the
631:26 lower memory so now digging into the general purpose machine family i wanted
631:28 general purpose machine family i wanted to start off with the e2 series and this
631:32 to start off with the e2 series and this is designed for day-to-day computing at
631:35 is designed for day-to-day computing at a low cost so if you're looking to do
631:37 a low cost so if you're looking to do things like web serving
631:39 things like web serving application serving
631:41 application serving back office applications
631:43 back office applications small to medium databases microservices
631:46 small to medium databases microservices virtual desktops or even development
631:49 virtual desktops or even development environments the e2 series would serve
631:52 environments the e2 series would serve the purpose perfectly
631:54 the purpose perfectly now the e2 machine types are cost
631:57 now the e2 machine types are cost optimized machine types that offer
631:59 optimized machine types that offer sizing between 2 to 32 vcpus and half a
632:04 sizing between 2 to 32 vcpus and half a gigabyte to 128 gigabytes of memory so
632:08 gigabyte to 128 gigabytes of memory so small to medium workloads that don't
632:10 small to medium workloads that don't require as many vcpus and applications
632:13 require as many vcpus and applications that don't require local ssds or gpus
632:17 that don't require local ssds or gpus are an ideal fit for e2 machines e2
632:21 are an ideal fit for e2 machines e2 machine types do not offer sustained use
632:23 machine types do not offer sustained use discounts however they do provide
632:26 discounts however they do provide consistently
632:28 consistently low on-demand and committed use pricing
632:31 low on-demand and committed use pricing in other words they offer the lowest
632:33 in other words they offer the lowest on-demand pricing across the general
632:36 on-demand pricing across the general purpose machine types as well the e2
632:39 purpose machine types as well the e2 series machines are available in both
632:43 series machines are available in both pre-defined and custom machine types
632:46 pre-defined and custom machine types moving on i wanted to touch on all the
632:48 moving on i wanted to touch on all the machine types available in the n-series
632:51 machine types available in the n-series and these are a balanced machine type
632:53 and these are a balanced machine type with price and performance across a wide
632:56 with price and performance across a wide range of vm flavors and these machines
632:59 range of vm flavors and these machines are designed for web servers application
633:02 are designed for web servers application servers back office applications medium
633:05 servers back office applications medium to large databases as well as caching
633:08 to large databases as well as caching and media streaming and they are offered
633:11 and media streaming and they are offered in the standard high memory and high cpu
633:15 in the standard high memory and high cpu types
633:16 types now the n1 machine types are compute
633:19 now the n1 machine types are compute engine's first generation general purpose
633:22 engine's first generation general purpose machine types now this machine type
633:25 machine types now this machine type offers up to 96 vcpus and 624 gigabytes
633:30 offers up to 96 vcpus and 624 gigabytes of memory and again as i mentioned
633:32 of memory and again as i mentioned earlier this is the only machine type
633:35 earlier this is the only machine type that offers both gpu support and tpu
633:39 that offers both gpu support and tpu support the n1 type is available as both
633:42 support the n1 type is available as both pre-defined machine types and custom
633:45 pre-defined machine types and custom machine types and the n1 series offers a
633:48 machine types and the n1 series offers a larger sustained use discount than n2
633:52 larger sustained use discount than n2 machine types speaking of which
633:54 machine types speaking of which the n2 machine types are the second
633:57 the n2 machine types are the second generation general purpose machine types
634:00 generation general purpose machine types and these offer flexible sizing between
634:03 and these offer flexible sizing between 2 to 80 vcpus and half a gigabyte of
634:07 2 to 80 vcpus and half a gigabyte of memory to 640 gigabytes of memory and
634:11 memory to 640 gigabytes of memory and these machine types also offer an
634:13 these machine types also offer an overall performance improvement over the
634:16 overall performance improvement over the n1 machine types workloads that can take
634:18 n1 machine types workloads that can take advantage of the higher clock frequency
634:21 advantage of the higher clock frequency of the cpu
634:23 of the cpu are a good choice for n2 machine types
634:25 are a good choice for n2 machine types and these workloads can get higher per
634:28 and these workloads can get higher per thread performance while benefiting from
634:30 thread performance while benefiting from all the flexibility that a general
634:33 all the flexibility that a general purpose machine type offers n2
634:35 purpose machine type offers n2 machine types also offer the extended
634:39 machine types also offer the extended memory feature and this helps control
634:41 memory feature and this helps control per cpu software licensing costs now
634:44 per cpu software licensing costs now getting into the last n series machine
634:46 getting into the last n series machine type the n2d machine type is the largest
634:50 type the n2d machine type is the largest general purpose machine type with up to
634:53 general purpose machine type with up to 224 vcpus and
634:56 224 vcpus and 896 gigabytes of memory this machine
634:59 896 gigabytes of memory this machine type is available in predefined and
635:02 type is available in predefined and custom machine types and this machine
635:04 custom machine types and this machine type as well has the extended memory
635:07 type as well has the extended memory feature which i discussed earlier that
635:09 feature which i discussed earlier that helps you avoid per cpu software
635:12 helps you avoid per cpu software licensing the n2d machine type supports
635:15 licensing the n2d machine type supports the committed use and sustained use
635:17 the committed use and sustained use discounts now moving on from the general
635:20 discounts now moving on from the general purpose machine type family i wanted to
635:23 purpose machine type family i wanted to move into the compute optimized machine
635:25 move into the compute optimized machine family now this series
635:27 family now this series offers ultra high performance for
635:29 offers ultra high performance for compute intensive workloads such as high
635:32 compute intensive workloads such as high performance computing
635:34 performance computing electronic design automation
635:36 electronic design automation gaming and single threaded applications
635:39 gaming and single threaded applications so anything that is designed for compute
635:42 so anything that is designed for compute intensive workloads this will definitely
635:44 intensive workloads this will definitely be your best choice
635:46 be your best choice now compute optimized machine
635:48 now compute optimized machine types are ideal for as i said earlier
635:51 types are ideal for as i said earlier compute intensive workloads and these
635:54 compute intensive workloads and these machine types offer the highest
635:56 machine types offer the highest performance per core
635:58 performance per core on compute engine compute optimized
636:00 on compute engine compute optimized types are only available as predefined
636:03 types are only available as predefined machine types and so they are not
636:06 machine types and so they are not available for any custom machine types
636:09 available for any custom machine types the c2 machine types offer a maximum of
636:12 the c2 machine types offer a maximum of 60 vcpus and a maximum of 240 gigabytes
636:17 60 vcpus and a maximum of 240 gigabytes of memory now although the c2 machine
636:19 of memory now although the c2 machine type works great for compute intensive
636:21 type works great for compute intensive workloads it does come with some caveats
636:25 workloads it does come with some caveats and so you cannot use regional
636:26 and so you cannot use regional persistent disks with compute optimized
636:29 persistent disks with compute optimized machine types and i will be getting into
636:31 machine types and i will be getting into the details of persistent disks in a
636:34 the details of persistent disks in a later lesson and they are only available
636:36 later lesson and they are only available in select zones and regions on select
636:40 in select zones and regions on select cpu platforms and so now moving into the
636:43 cpu platforms and so now moving into the last family is the memory optimized
636:45 last family is the memory optimized machine family and this is for ultra
636:48 machine family and this is for ultra high memory workloads this family is
636:51 high memory workloads this family is designed for large in memory databases
636:54 designed for large in memory databases like sap hana as well as in memory
636:57 like sap hana as well as in memory analytics
636:58 analytics now the m series comes in two separate
637:01 now the m series comes in two separate generations
637:03 generations m1 and m2 the m1 offering a maximum of
637:07 m1 and m2 the m1 offering a maximum of 160 vcpus and a maximum memory of
637:13 160 vcpus and a maximum memory of 3844 gigabytes whereas the m2 offering
637:17 3844 gigabytes whereas the m2 offering a maximum of 416 vcpus but
637:21 a maximum of 416 vcpus but offering a whopping 11,776
637:24 offering a whopping 11,776 gigabytes of maximum memory and as i
637:28 776 gigabytes of maximum memory and as i said before these machine types they're
637:30 said before these machine types they're ideal for tasks that require intensive
637:34 ideal for tasks that require intensive use of memory so they are suited for
637:36 use of memory so they are suited for in-memory databases and in memory
637:39 in-memory databases and in memory analytics data warehousing workloads
637:42 analytics data warehousing workloads genomics analysis and sql analysis
637:45 genomics analysis and sql analysis services memory optimized machine types
637:47 services memory optimized machine types are only available as predefined machine
637:50 are only available as predefined machine types and the caveats here is that you
637:53 types and the caveats here is that you cannot use regional persistent disks
637:55 cannot use regional persistent disks with memory optimized machine types as
637:58 with memory optimized machine types as well they're only available in specific
638:01 well they're only available in specific zones now i wanted to take a moment to
638:03 zones now i wanted to take a moment to go back
638:04 go back to the general purpose machine type so
638:07 to the general purpose machine type so that i can dig into the shared core
638:09 that i can dig into the shared core machine type and this is spread amongst
638:11 machine type and this is spread amongst the e2 and n1 series and these shared
638:15 the e2 and n1 series and these shared core machine types are used for
638:17 core machine types are used for burstable workloads and are very cost
638:19 burstable workloads and are very cost effective as well they're great for
638:22 effective as well they're great for non-resource intensive applications
638:25 non-resource intensive applications shared core machine types use context
638:27 shared core machine types use context switching to share a physical core
638:30 switching to share a physical core between vcpus for the purpose of
638:33 between vcpus for the purpose of multitasking different shared core
638:35 multitasking different shared core machine types sustain different amounts
638:38 machine types sustain different amounts of time on a physical core which allows
638:42 of time on a physical core which allows google cloud to cut the price in general
638:44 google cloud to cut the price in general shared core instances can be more cost
638:48 shared core instances can be more cost effective for running small
638:50 effective for running small non-resource intensive applications than
638:53 non-resource intensive applications than standard high memory or high cpu machine
638:57 standard high memory or high cpu machine types now when it comes to cpu bursting
639:00 types now when it comes to cpu bursting these shared core machine types offer
639:03 these shared core machine types offer bursting capabilities that allow
639:05 bursting capabilities that allow instances to use additional physical cpu
639:09 instances to use additional physical cpu for short periods of time bursting
639:11 for short periods of time bursting happens automatically when your instance
639:14 happens automatically when your instance requires more physical cpu than
639:16 requires more physical cpu than originally allocated during these spikes
639:20 originally allocated during these spikes your instance will take advantage of
639:22 your instance will take advantage of available physical cpu in bursts and the
639:26 available physical cpu in bursts and the e2 shared core machine type is offered
639:28 e2 shared core machine type is offered in micro small and medium while the n1
639:32 in micro small and medium while the n1 series is offered in the f1 micro and
639:35 series is offered in the f1 micro and the g1 small and both of these series
639:38 the g1 small and both of these series have a maximum of two vcpus with a
639:42 have a maximum of two vcpus with a maximum of four gigabytes of memory now
639:45 maximum of four gigabytes of memory now i wanted to take a moment to touch on
639:47 i wanted to take a moment to touch on custom machine types and these are
639:50 custom machine types and these are available for any general purpose
639:52 available for any general purpose machine and so this is customer defined
639:54 machine and so this is customer defined cpu and memory designed for custom
639:57 cpu and memory designed for custom workloads
639:59 workloads now if none of the general purpose
640:01 now if none of the general purpose predefined machine types cater to your
640:04 predefined machine types cater to your needs
640:05 needs you can create a custom machine type
640:07 you can create a custom machine type with a specific number of vcpus and
640:10 with a specific number of vcpus and amount of memory that you need for your
640:12 amount of memory that you need for your instance these machine types are ideal
640:15 instance these machine types are ideal for workloads that are not a good fit
640:18 for workloads that are not a good fit for the pre-defined machine types that
640:21 for the pre-defined machine types that are available they're also great for
640:23 are available they're also great for when you need more memory or more cpu
640:26 when you need more memory or more cpu but the predefined machine types don't
640:28 but the predefined machine types don't quite fit exactly what you need for your
640:31 quite fit exactly what you need for your workload just as a note it costs
640:33 workload just as a note it costs slightly more to use a custom machine
640:35 slightly more to use a custom machine type than a pre-defined machine type and
640:38 type than a pre-defined machine type and there are limitations in the amount of
640:40 there are limitations in the amount of memory and vcpu you can select and as i
640:43 memory and vcpu you can select and as i stated earlier when creating a custom
640:45 stated earlier when creating a custom machine type you can choose from the e2
640:49 machine type you can choose from the e2 n2
640:50 n2 n2d and n1 machine types and so the
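Custom machine types have their own naming pattern, documented by Google as series-custom-VCPUS-MEMORY_MB (first-generation N1 custom types drop the series prefix and are simply custom-VCPUS-MEMORY_MB). Assuming that pattern, a hypothetical helper might look like the sketch below; the multiple-of-256-MB memory rule and the 1-or-even vCPU rule reflect documented constraints, but verify against the current docs before relying on them.

```python
def custom_machine_type(series: str, vcpus: int, memory_mb: int) -> str:
    """Build a custom machine type name such as 'n2-custom-4-8192'.
    Documented constraints (hedged -- check current GCP docs): total
    memory must be a multiple of 256 MB, and the vCPU count must be
    1 or an even number."""
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    if vcpus != 1 and vcpus % 2 != 0:
        raise ValueError("vCPU count must be 1 or an even number")
    return f"{series}-custom-{vcpus}-{memory_mb}"

print(custom_machine_type("n2", 4, 8192))  # n2-custom-4-8192
```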
640:54 and 2d and n1 machine types and so the last part i wanted to touch on are the
640:56 last part i wanted to touch on are the gpus that are available and these are
640:58 gpus that are available and these are designed for the graphic intensive
641:01 designed for the graphic intensive workloads and again are only available
641:03 workloads and again are only available for the n1 machine type and gpus come in
641:07 for the n1 machine type and gpus come in five different flavors from nvidia
641:10 five different flavors from nvidia showing here as the tesla k80 the tesla
641:13 showing here as the tesla k80 the tesla p4 the tesla t4 the tesla v100 and the
641:18 p4 the tesla t4 the tesla v100 and the tesla p100 and so these are all the
641:21 tesla p100 and so these are all the families and machine types that are
641:23 families and machine types that are available for you in google cloud and
641:25 available for you in google cloud and will allow you to be a little bit more
641:27 will allow you to be a little bit more flexible with the type of workload that
641:30 flexible with the type of workload that you need them for and so for the exam
641:32 you need them for and so for the exam you won't have to memorize each machine
641:35 you won't have to memorize each machine type but you will need to know an
641:37 type but you will need to know an overview of what each machine type does
641:40 overview of what each machine type does now i know there's been a lot of theory
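As an exam-prep mnemonic, the workload-to-family pairings given in this lesson can be collected into one simple lookup. This reflects only the examples from the lesson, not real sizing guidance.

```python
# Workload examples from this lesson mapped to the machine family the
# lesson pairs them with -- a memorization aid, not sizing advice.
FAMILY_FOR_WORKLOAD = {
    "web serving": "general purpose",
    "small to medium databases": "general purpose",
    "development environments": "general purpose",
    "high performance computing": "compute optimized",
    "electronic design automation": "compute optimized",
    "gaming": "compute optimized",
    "sap hana": "memory optimized",
    "in-memory analytics": "memory optimized",
}

def family_for(workload: str) -> str:
    # General purpose is the default family when nothing else fits.
    return FAMILY_FOR_WORKLOAD.get(workload.lower(), "general purpose")

print(family_for("gaming"))    # compute optimized
print(family_for("SAP HANA"))  # memory optimized
```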
641:42 now i know there's been a lot of theory presented here in this lesson but i hope
641:44 presented here in this lesson but i hope this is giving you a better
641:45 this is giving you a better understanding of all the available
641:48 understanding of all the available pre-defined machine types in google
641:50 pre-defined machine types in google cloud and so that's pretty much all i
641:52 cloud and so that's pretty much all i wanted to cover in this lesson on
641:54 wanted to cover in this lesson on compute engine machine types so you can
641:56 compute engine machine types so you can now mark this lesson as complete and
641:58 now mark this lesson as complete and let's move on to the next one
642:00 let's move on to the next one [Music]
642:04 [Music] welcome back in this lesson i'm going to
642:07 welcome back in this lesson i'm going to be reviewing managing your instances now
642:10 be reviewing managing your instances now how you manage your instances is a big
642:12 how you manage your instances is a big topic in the exam
642:14 topic in the exam as well it's very useful to know for
642:17 as well it's very useful to know for your work as a cloud engineer in the
642:19 your work as a cloud engineer in the environments you are responsible for
642:22 environments you are responsible for knowing both the features that are
642:24 knowing both the features that are available as well as the best practices
642:26 available as well as the best practices will allow you to make better decisions
642:29 will allow you to make better decisions with regards to your instances and allow
642:32 with regards to your instances and allow you to keep your environment healthy
642:34 you to keep your environment healthy this lesson will dive into the many
642:37 this lesson will dive into the many features that are available in order to
642:39 features that are available in order to better manage your instances using the
642:42 better manage your instances using the specific features within google cloud so
642:45 specific features within google cloud so with that being said let's dive in
642:48 with that being said let's dive in now i wanted to start off this lesson
642:50 now i wanted to start off this lesson discussing the life cycle of an instance
642:53 discussing the life cycle of an instance within google cloud every instance has a
642:56 within google cloud every instance has a predefined life cycle from its starting
642:59 predefined life cycle from its starting provisioning state to its deletion an
643:02 provisioning state to its deletion an instance can transition through many
643:05 instance can transition through many instance states as part of its life cycle
643:07 instant states as part of its life cycle when you first create an instance
643:09 when you first create an instance compute engine provisions resources to
643:12 compute engine provisions resources to start your instance next the instance
643:15 start your instance next the instance moves into staging where it prepares the
643:18 moves into staging where it prepares the first boot and then it finally boots up
643:20 first boot and then it finally boots up and is considered running during its
643:22 and is considered running during its lifetime a running instance can be
643:25 lifetime a running instance can be repeatedly stopped and restarted or
643:28 repeatedly stopped and restarted or suspended and resumed so now i wanted to
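The life cycle just described can be sketched as a simple state graph. The state names below follow Compute Engine's documented instance statuses, but this is a simplified illustration of the lesson's flow (it omits statuses such as REPAIRING).

```python
# Instance life cycle as described in the lesson: PROVISIONING ->
# STAGING -> RUNNING, after which the instance can be stopped and
# restarted or suspended and resumed. Simplified; not every
# documented status is modeled here.
TRANSITIONS = {
    "PROVISIONING": {"STAGING"},
    "STAGING": {"RUNNING"},
    "RUNNING": {"STOPPING", "SUSPENDING"},
    "STOPPING": {"TERMINATED"},
    "TERMINATED": {"STAGING"},   # restarting a stopped instance
    "SUSPENDING": {"SUSPENDED"},
    "SUSPENDED": {"STAGING"},    # resuming a suspended instance
}

def can_transition(src: str, dst: str) -> bool:
    """True if the lesson's life cycle allows moving from src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_transition("PROVISIONING", "STAGING"))  # True
print(can_transition("PROVISIONING", "RUNNING"))  # False
```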
643:31 suspended and resumed so now i wanted to take a few minutes to go through the
643:33 take a few minutes to go through the instance life cycle in a bit of detail
643:36 instance life cycle in a bit of detail starting with the provisioning state
643:38 starting with the provisioning state now this is where resources are being
643:41 now this is where resources are being allocated for the instance the instance
643:44 allocated for the instance the instance is not yet running and the instance is
643:46 is not yet running and the instance is being allocated its requested amount of
643:49 being allocated its requested amount of cpu and memory along with its root disk
643:52 cpu and memory along with its root disk any additional disks that are attached
643:54 any additional disks that are attached to it and as well some additional
643:57 to it and as well some additional feature sets that are assigned to this
643:59 feature sets that are assigned to this instance and when it comes to the cost
644:01 instance and when it comes to the cost while in the provisioning state there
644:03 while in the provisioning state there are no costs that are being incurred
644:05 are no costs that are being incurred moving right along to the staging state
644:08 moving right along to the staging state after finishing the provisioning state
644:10 after finishing the provisioning state the life cycle continues with the
644:12 the life cycle continues with the staging state and this is where
644:14 staging state and this is where resources have been acquired and the
644:16 resources have been acquired and the instance is being prepared for first
644:18 instance is being prepared for first boot both internal and external ips are
644:22 boot both internal and external ips are allocated and can be either static or
644:25 allocated and can be either static or ephemeral in the system image that was
644:27 ephemeral in the system image that was originally chosen for this instance
644:30 originally chosen for this instance is used to boot up the instance and this
644:32 is used to boot up the instance and this can be either a public image or a custom
644:35 can be either a public image or a custom image costs in the state are still not
644:38 image costs in the state are still not incurred as the instance is still in the
644:41 incurred as the instance is still in the pre-boot state
644:43 pre-boot state now once the instance has left staging
644:46 now once the instance has left staging it will move on to the running state and
644:49 it will move on to the running state and this is where the instance is booting up
644:51 this is where the instance is booting up or running and should allow you to log
644:54 or running and should allow you to log into the instance either using ssh or
644:57 into the instance either using ssh or rdp within a short waiting period due to
645:00 rdp within a short waiting period due to any startup scripts or any boot
645:03 any startup scripts or any boot maintenance tasks for the operating
645:05 maintenance tasks for the operating system now during the running state you
645:08 system now during the running state you can reset your instance and this is
645:10 can reset your instance and this is where you would wipe the memory contents
645:13 where you would wipe the memory contents of the vm instance and reset the virtual
645:16 of the vm instance and reset the virtual machine to its initial state resetting
645:18 machine to its initial state resetting an instance
645:20 an instance causes an immediate hard reset of the vm
645:23 causes an immediate hard reset of the vm and therefore the vm does not do a
645:25 and therefore the vm does not do a graceful shutdown for the guest
645:27 graceful shutdown for the guest operating system however
645:29 operating system however the vm retains all persistent disk data
645:33 the vm retains all persistent disk data and none of the instance properties
645:35 and none of the instance properties change the instance remains in running
645:38 change the instance remains in running state through the reset now as well in
645:40 state through the reset now as well in the running state a repair can happen
645:43 the running state a repair can happen due to the instance encountering an
645:46 due to the instance encountering an internal error or the underlying machine
645:48 internal error or the underlying machine is unavailable due to maintenance during
645:51 is unavailable due to maintenance during this time the instance is unusable and
645:54 this time the instance is unusable and if the repair is successful the instance
645:56 if the repair is successful the instance returns back to the running state paying
645:59 returns back to the running state paying attention to costs
646:00 attention to costs this state is where the instance starts
646:02 this state is where the instance starts to occur them and is related to the
646:05 to occur them and is related to the resources assigned to the instance like
646:07 resources assigned to the instance like the cpu and memory any static ips and
646:11 the cpu and memory any static ips and any disks that are attached to the
646:13 any disks that are attached to the instance and i will be going into a bit
646:15 instance and i will be going into a bit of detail in just a bit with regards to
646:18 of detail in just a bit with regards to this state
646:20 this state and finally we end the life cycle with
646:22 and finally we end the life cycle with the stopping suspended and terminated
646:25 the stopping suspended and terminated states now when you are suspending an
646:28 states now when you are suspending an instance it is like closing the lid of
646:30 instance it is like closing the lid of your laptop suspending the instance will
646:33 your laptop suspending the instance will preserve the guest operating system
646:35 preserve the guest operating system memory and application state of the
646:37 memory and application state of the instance otherwise it'll be discarded
646:41 instance otherwise it'll be discarded and from this state you can choose
646:43 and from this state you can choose either to resume or to delete it when it
646:46 either to resume or to delete it when it comes to stopping either a user has made
646:48 comes to stopping either a user has made a request to stop the instance or there
646:51 a request to stop the instance or there was a failure and this is a temporary
646:54 was a failure and this is a temporary status and the instance will move to
646:56 status and the instance will move to terminated touching on costs for just a
646:58 terminated touching on costs for just a second when suspending or stopping an
647:01 second when suspending or stopping an instance you pay for resources that are
647:04 instance you pay for resources that are still attached to the vm instance
647:07 still attached to the vm instance such as static ips and persistent disk
647:10 such as static ips and persistent disk data you do not pay the cost of a
647:12 data you do not pay the cost of a running vm instance ephemeral external
647:15 running vm instance ephemeral external ip addresses are released from the
647:17 ip addresses are released from the instance and will be assigned a new one
647:20 instance and will be assigned a new one when the instance is started now when it
647:22 when the instance is started now when it comes to stopping suspending or
647:24 comes to stopping suspending or resetting an instance you can stop or
647:27 resetting an instance you can stop or suspend an instance if you no longer
647:29 suspend an instance if you no longer need it but want to keep the instance
647:31 need it but want to keep the instance around for future use compute engine
647:34 around for future use compute engine waits for the guest to finish shutting
647:36 waits for the guest to finish shutting down and then transitions the instance
647:39 down and then transitions the instance to the terminated state so touching on
647:41 to the terminated state so touching on the terminated state this is where a
647:44 the terminated state this is where a user either shuts down the instance or
647:47 user either shuts down the instance or the instance encounters a failure you
647:49 the instance encounters a failure you can choose to restart the instance or
647:51 can choose to restart the instance or delete it as well as holding some reset
647:54 delete it as well as holding some reset options within the availability policy
647:57 options within the availability policy in this state you still pay for static
648:00 in this state you still pay for static ips and disks
648:01 ips and disks but like the suspending or stopping
648:03 but like the suspending or stopping state you do not pay for the cpu and
648:06 state you do not pay for the cpu and memory resources allocated to the
648:09 memory resources allocated to the instance
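as a rough sketch of how these state transitions map onto the gcloud CLI — the instance name and zone below are placeholders, and suspend/resume may require the beta component depending on your SDK version:

```shell
# Stop an instance (moves to TERMINATED; attached disks and static IPs are still billed)
gcloud compute instances stop my-instance --zone us-central1-a

# Start it back up
gcloud compute instances start my-instance --zone us-central1-a

# Suspend and resume (like closing and opening a laptop lid)
gcloud beta compute instances suspend my-instance --zone us-central1-a
gcloud beta compute instances resume my-instance --zone us-central1-a

# Hard reset: wipes memory but keeps persistent disk data; stays in RUNNING
gcloud compute instances reset my-instance --zone us-central1-a
```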
648:10 and so this covers a high level overview
648:13 of the instance lifecycle in google
648:15 cloud and all of the states that make up
648:18 this lifecycle now to get into some
648:20 detail with regards to some feature sets
648:23 for compute engine i wanted to revisit
648:25 the states where those features apply
648:28 now when creating your instance you have
648:30 the option of using shielded vms for
648:34 added security and when using them
648:36 these protections are instantiated as the
648:39 instance boots and enters into the
648:41 running state
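for reference, the shielded vm options are chosen at instance creation time — a hedged sketch, where the instance name, zone, and image are placeholders and the image must be one that supports shielded vm features:

```shell
# Create an instance with all three shielded VM features enabled
gcloud compute instances create my-shielded-instance \
  --zone us-central1-a \
  --image-family ubuntu-2004-lts --image-project ubuntu-os-cloud \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring
```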
648:43 so what exactly is a shielded vm
648:46 well shielded vms offer verifiable
648:49 integrity of your compute engine vm
648:52 instances so you can be sure that your
648:54 instances haven't been compromised by
648:57 boot or kernel level malware or rootkits
649:00 and this is achieved through a four-step
649:02 process
649:03 which is covered by secure boot virtual
649:06 trusted platform module also known as
649:09 vtpm measured boot which runs on the
649:12 vtpm and integrity monitoring so i
649:15 wanted to dig into this for just a sec
649:17 to give you a bit more context
649:20 now the boot process for shielded vms
649:24 starts with secure boot and this helps
649:26 ensure that the system only runs
649:29 authentic software by verifying the
649:32 digital signature of all boot
649:34 components and stopping the boot process
649:37 if signature verification fails so
649:40 shielded vm instances run firmware
649:42 that's signed and verified using
649:45 google's certificate authority and on
649:47 each and every boot any boot component
649:50 that isn't properly signed or isn't
649:52 signed at all is not allowed to run and
649:55 so the first time you boot a vm instance
649:58 measured boot creates the integrity
650:00 policy baseline from the first set of
650:03 these measurements and then securely
650:05 stores this data each time the vm
650:07 instance boots after that these
650:09 measurements are taken again and stored
650:11 in secure memory until the next reboot
650:14 having these two sets of measurements
650:16 enables integrity monitoring which is
650:19 the next step and allows it to determine
650:22 if there have been changes to a vm
650:24 instance's boot sequence and this policy
650:27 is loaded onto a virtualized trusted
650:30 platform module again known as the vtpm
650:33 for short which is a specialized
650:35 computer chip that you can use to
650:37 protect objects like keys and
650:40 certificates that you use to
650:43 authenticate access to your system with
650:45 shielded vms the vtpm enables measured boot
650:49 by performing the measurements needed to
650:52 create a known good boot baseline and
650:55 this is called the integrity policy
650:57 baseline the integrity policy baseline
651:00 is used for comparison
651:02 with measurements from subsequent vm
651:04 boots to determine if anything has
651:07 changed integrity monitoring relies on
651:10 the measurements created by measured
651:12 boot for both the integrity policy
651:14 baseline and the most recent boot
651:16 sequence integrity monitoring compares
651:19 the most recent boot measurements
651:22 to the integrity policy baseline and
651:24 returns a pair of pass or fail results
651:28 depending on whether they match or not
651:30 one for the early boot sequence and one
651:33 for the late boot sequence and so in
651:35 summary this is how shielded vms help
651:39 prevent data exfiltration so touching
651:42 prevent data exfiltration so touching now on the running state when you start
651:45 now on the running state when you start a vm instance using google provided
651:48 a vm instance using google provided public images a guest environment is
651:51 public images a guest environment is automatically installed on the vm
651:54 automatically installed on the vm instance a guest environment is a set of
651:56 instance a guest environment is a set of scripts daemons and binaries that read
652:00 scripts daemons and binaries that read the content of the metadata server to
652:02 the content of the metadata server to make a virtual machine run properly on
652:06 make a virtual machine run properly on compute engine a metadata server is a
652:09 compute engine a metadata server is a communication channel for transferring
652:11 communication channel for transferring information from a client to the guest
652:14 information from a client to the guest operating system vm instances created
652:17 operating system vm instances created using google provided public images
652:19 using google provided public images include a guest environment that is
652:21 include a guest environment that is installed by default creating vm
652:24 installed by default creating vm instances
652:25 instances using a custom image will require you to
652:28 using a custom image will require you to manually install the guest environment
652:30 manually install the guest environment this guest environment is available for
652:32 this guest environment is available for both linux and windows systems and each
652:36 both linux and windows systems and each supported operating system that is
652:38 supported operating system that is available on compute engine requires
652:40 available on compute engine requires specific guest environment packages
652:42 specific guest environment packages either google or the owner of the
652:44 either google or the owner of the operating system builds these packages
652:47 operating system builds these packages now when it comes to the linux guest
652:49 now when it comes to the linux guest environment it is either built by google
652:52 environment it is either built by google or the owner of the operating system
652:54 or the owner of the operating system and there are some key components that
652:56 and there are some key components that are applicable to all builds which can
652:59 are applicable to all builds which can be found in the link that i have
653:00 be found in the link that i have included in the lesson text the base
653:02 included in the lesson text the base components of a linux guest environment
653:05 components of a linux guest environment is a python package that contains
653:08 is a python package that contains scripts daemons and packages for the
653:11 scripts daemons and packages for the supported linux distributions when it
653:13 supported linux distributions when it comes to windows a similar approach
653:16 comes to windows a similar approach applies where a package is available
653:18 applies where a package is available with main scripts and binaries as a part
653:21 with main scripts and binaries as a part of this guest environment
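as an illustrative sketch only — package names differ by distribution and image age, so check google's guest environment documentation for your OS — installing and verifying the guest environment on a debian-based custom image might look like:

```shell
# Install the guest environment package
# (assumes Google's apt repository is already configured on the image)
sudo apt-get update && sudo apt-get install -y google-guest-agent

# Confirm the guest agent daemon is running
sudo systemctl status google-guest-agent
```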
653:23 now touching back on the metadata server
653:26 compute engine provides a method for
653:28 storing and retrieving metadata in the
653:31 form of the metadata server this service
653:35 provides a central point to set metadata
653:38 in the form of key value pairs which is
653:41 then provided to virtual machines at
653:43 runtime and you can query this metadata
653:45 server programmatically from within the
653:48 instance and from the compute engine api
653:51 this is great for use with startup and
653:53 shutdown scripts or gaining more insight
653:56 into your instance metadata can be
653:58 assigned to projects as well as
654:00 instances and project metadata
654:02 propagates to all instances within the
654:05 project while instance metadata only
654:08 impacts that instance and you can access
654:10 the metadata using the following urls
654:14 with the curl commands you see here on
654:16 the screen so if you're looking for the
654:18 metadata for a project you would use the
654:21 first url that ends in project and for
654:24 any instance metadata you can use the
654:26 second url that ends in instance
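since the commands shown on screen aren't captured in the transcript, here is what such queries typically look like — they only work from inside a running VM, and the Metadata-Flavor header is required or the server will reject the request:

```shell
# Project-level metadata
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/"

# Instance-level metadata
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/"

# Query a single value, e.g. the instance hostname
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/hostname"
```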
654:29 now please note that when you make a request
654:31 to get information from the metadata
654:33 server your request and the subsequent
654:36 metadata response never leave the
654:39 physical host running the virtual
654:41 machine instance now once the instance
654:44 has booted and has gone through the
654:46 startup scripts you will then have the
654:48 ability to log in to your instance using
654:51 ssh or rdp now there are some different
654:54 methods that you can use to connect and
654:57 access both your linux instances and
655:00 your windows instances that i will be
655:02 going over
655:03 now when it comes to linux instances
655:06 we've already gone through accessing
655:08 these types of instances in previous
655:11 lessons and demos but just as a
655:13 refresher you would typically connect to
655:16 your vm instance via ssh access on port
655:19 22 please note that you will require a
655:23 firewall rule as we have done in
655:25 previous demos to allow this access and
655:28 you can connect to your linux instances
655:31 through the google cloud console or the
655:33 cloud shell using the cloud sdk now i
655:36 know that the use of ssh keys is the
655:39 de facto standard when it comes to logging into
655:42 linux instances but in most scenarios on
655:45 google cloud google recommends using os
655:48 login over using ssh keys the os login
655:52 feature lets you use compute engine iam
655:55 roles to manage ssh access to linux
655:58 instances and then if you'd like you can
656:01 add an extra layer of security by
656:03 setting up os login with two-step
656:06 verification and manage access at the
656:09 organization level by setting up
656:11 organizational policies os login
656:14 simplifies ssh access management by
656:17 linking your linux user account to your
656:19 google identity administrators can
656:22 easily manage access to instances
656:25 at either an instance or project level
656:28 by setting iam permissions now if you're
656:31 running your own directory service for
656:33 managing access or are unable to set up
656:36 os login you can manually manage ssh
656:40 keys and local user accounts in metadata
656:43 by manually creating ssh keys and
656:46 editing the public ssh key metadata
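to make both options concrete, here's a hedged sketch — the instance name, zone, and key file are placeholders:

```shell
# Enable OS Login for every instance in the project via project-wide metadata
gcloud compute project-info add-metadata \
  --metadata enable-oslogin=TRUE

# Or enable it for just one instance
gcloud compute instances add-metadata my-instance \
  --zone us-central1-a --metadata enable-oslogin=TRUE

# Managing keys manually instead: push a public key into instance metadata
# (ssh-keys.txt contains lines like "username:ssh-rsa AAAA... username")
gcloud compute instances add-metadata my-instance \
  --zone us-central1-a --metadata-from-file ssh-keys=ssh-keys.txt
```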
656:49 now when it comes to windows instances you
656:52 would typically connect to your vm
656:54 instance via rdp access on port 3389 and
656:59 please note that you will also require a
657:02 firewall rule as shown here to allow
657:05 this access you can connect to your
657:07 windows instances through the rdp
657:09 protocol or through a powershell
657:12 terminal now logging into windows
657:15 requires setting a windows password
657:18 which can be done either through the
657:19 console or the gcloud command line tool
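a sketch of the firewall rule and the password step — the rule name, instance name, and user are placeholders, and in practice you'd restrict --source-ranges rather than leave the rule open to the world:

```shell
# Allow RDP into the default network (SSH for Linux would be tcp:22)
gcloud compute firewall-rules create allow-rdp \
  --allow tcp:3389 --source-ranges 203.0.113.0/24

# Generate or reset the Windows password for a user on the instance
gcloud compute reset-windows-password my-windows-instance \
  --zone us-central1-a --user tony
```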
657:22 and then after setting your password you
657:25 can log in using the recommended rdp
657:28 chrome extension or using a third-party
657:30 rdp client and i will provide a link to
657:33 this rdp chrome extension in the lesson
657:36 text now once the instance has booted up
657:39 and your instance is ready to be logged
657:41 into you always have the option of
657:44 modifying your instance and you can do
657:46 it manually by either modifying it on
657:48 the fly or you can take the necessary
657:50 steps to edit your instance like i
657:53 showed you in a previous lesson by
657:55 stopping it editing it and then
657:57 restarting it but when it comes to
658:00 google having to do maintenance on a vm
658:02 or you merely wanting to move your instance
658:05 to a different zone in the same region
658:07 this has all become possible without
658:10 shutting down your instance
658:12 using a feature called live migration
658:15 now when it comes to live migration
658:17 compute engine migrates your running
658:20 instances to another host
658:22 in the same zone instead of requiring
658:25 your vms to be rebooted this allows
658:28 google to perform maintenance reliably
658:31 without interrupting any of your vms
658:34 when a vm is scheduled to be live
658:36 migrated google provides a notification
658:38 to the guest that a migration is coming
658:41 soon live migration keeps your instances
658:44 running when compute engine hosts are
658:47 in need of regular infrastructure
658:49 maintenance and upgrades replacement of
658:51 failed hardware and system configuration
658:54 changes when google migrates a running
658:56 vm instance from one host to another
658:59 it moves the complete instance state
659:02 from the source to the destination in a
659:05 way that is transparent to the guest os
659:08 and anyone communicating with it google
659:11 also gives you the option of doing live
659:13 migration manually from one zone to
659:16 another within the same region either
659:18 using the console or running the command
659:21 line you see here gcloud compute
659:23 instances move the name of the vm with
659:26 the zone flag and the zone that it's
659:28 currently in and then the destination
659:30 zone flag with the zone that you want it
659:33 to go to and just as a note on some
659:35 caveats instances with gpus attached
659:38 cannot be live migrated and you can't
659:41 configure a preemptible instance to live
659:44 migrate and so the instance lifecycle is
659:47 full of different options and
659:49 understanding them can help you better
659:51 coordinate moving editing and repairing
659:54 vm instances no matter where they may
659:57 lie in this life cycle now i hope this
660:00 lesson has given you the necessary
660:02 theory that will help you better use the
660:04 discussed feature sets and has given you some
660:07 ideas on how to better manage your
660:09 instances now there is a lot more to
660:12 know than what i've shown you here to
660:14 manage your instances but the topics shown
660:16 here are what show up on the exam as
660:19 well as being some really great starting
660:21 points to begin managing your instances
660:24 and so that's pretty much all i wanted
660:26 to cover when it comes to managing
660:28 instances so you can now mark this
660:30 lesson as complete and join me in the
660:32 next one where i will cement the theory
660:34 in this lesson with a hands-on demo
660:37 [Music]
660:41 welcome back in this demonstration i'm
660:44 going to be cementing some of the theory
660:46 that we learned in the last lesson with
660:49 regards to the different login methods
660:52 for windows and linux instances how to
660:55 implement these methods is extremely
660:57 useful to know both for the exam and for
661:00 managing multiple instances in different
661:03 environments now there's a lot to cover
661:05 here so with that being said let's dive
661:08 in so as you can see i am logged in here
661:11 under tony bowtie ace at
661:13 gmail.com as well i am in the project of
661:16 bowtie inc and so the first thing that i
661:19 bowtie inc and so the first thing that i want to do is create both a linux
661:22 want to do is create both a linux instance and a windows instance and this
661:25 instance and a windows instance and this is to demonstrate the different options
661:27 is to demonstrate the different options you have for logging into an instance
661:30 you have for logging into an instance and so in order for me to do that i need
661:32 and so in order for me to do that i need to head on over to compute engine so i'm
661:35 to head on over to compute engine so i'm going to go over to the navigation menu
661:37 going to go over to the navigation menu and i'm going to scroll down to compute
661:38 and i'm going to scroll down to compute engine and so just as a note before
661:41 engine and so just as a note before creating your instances please make sure
661:44 creating your instances please make sure that you have a default vpc created
661:47 that you have a default vpc created before going ahead and creating these
661:49 before going ahead and creating these instances if you've forgotten how to
661:51 instances if you've forgotten how to create a default vpc please go back to
661:54 create a default vpc please go back to the networking services section and
661:56 the networking services section and watch the vpc lesson for a refresher and
661:59 watch the vpc lesson for a refresher and so i'm going to go ahead and create my
662:01 so i'm going to go ahead and create my first instance and i'm going to start
662:03 first instance and i'm going to start with the windows instance so i'm going
662:06 with the windows instance so i'm going to simply click on create
662:08 to simply click on create and so for the name of this instance you
662:10 and so for the name of this instance you can simply call this windows dash
662:13 can simply call this windows dash instance
662:14 instance and i'm not going to add any labels and
662:17 and i'm not going to add any labels and for the region you should select us
662:19 for the region you should select us east1 and you can keep the zone as the
662:22 east1 and you can keep the zone as the default for us east 1b and scrolling
662:25 default for us east 1b and scrolling down to the machine configuration for
662:27 down to the machine configuration for the machine type i'm going to keep it as
662:30 the machine type i'm going to keep it as is as it is a windows instance and i'm
662:33 is as it is a windows instance and i'm going to need a little bit more power
662:35 going to need a little bit more power scrolling down to boot disk we need to
662:37 scrolling down to boot disk we need to change this from debian over to windows
662:40 change this from debian over to windows so i'm going to simply click on the
662:42 so i'm going to simply click on the change button and under operating system
662:44 change button and under operating system i'm going to click on the drop down and
662:46 i'm going to click on the drop down and select windows server for the version
662:49 select windows server for the version i'm going to select the latest version
662:51 i'm going to select the latest version of windows server which is the windows
662:53 of windows server which is the windows server 2019 data center and you can keep
662:57 server 2019 data center and you can keep the boot disk type and the size as its
663:00 the boot disk type and the size as its default and simply head on down and
663:02 default and simply head on down and click on select and we're going to leave
663:04 click on select and we're going to leave everything else as the default and
663:07 everything else as the default and simply click on create
663:10 simply click on create and success our windows instance has
663:12 and success our windows instance has been created and so the first thing that
663:14 been created and so the first thing that you want to do is you want to set a
663:16 you want to do is you want to set a windows password for this instance and
663:19 windows password for this instance and so i'm going to head on over to the rdp
663:21 so i'm going to head on over to the rdp button and i'm going to click on the
663:23 button and i'm going to click on the drop-down and here i'm going to select
663:25 drop-down and here i'm going to select set windows password and here i'm going
663:28 set windows password and here i'm going to get a pop-up to set a new windows
663:30 to get a pop-up to set a new windows password the username has been
663:32 password the username has been propagated for me as tony bowties i'm
663:35 propagated for me as tony bowties i'm going to leave it as is and i'm going to
663:37 going to leave it as is and i'm going to click on set
663:40 click on set and i'm going to be prompted with a new
663:42 and i'm going to be prompted with a new windows password that has been set for
663:44 windows password that has been set for me so i'm going to copy this and i'm
663:46 me so i'm going to copy this and i'm going to paste it into my notepad so be
663:49 going to paste it into my notepad so be sure to record it somewhere either write
663:51 sure to record it somewhere either write it down
663:52 it down or copy and paste it into a text editor
663:55 or copy and paste it into a text editor of your choice i'm going to click on
663:56 of your choice i'm going to click on close and so now for me to log into this
663:59 close and so now for me to log into this i need to make sure of a couple things
664:02 i need to make sure of a couple things the first thing is i need to make sure
664:04 the first thing is i need to make sure that i have a firewall rule open for
664:07 that i have a firewall rule open for port 3389 the second is i need to make
664:11 port 3389 the second is i need to make sure that i have an rdp client and so in
664:14 sure that i have an rdp client and so in order to satisfy my first constraint i'm
664:16 order to satisfy my first constraint i'm going to head on over to the navigation
664:19 going to head on over to the navigation menu and go down to vpc network
664:22 menu and go down to vpc network here i'm going to select firewall and as
664:25 here i'm going to select firewall and as expected the rdp firewall rule has been
664:29 expected the rdp firewall rule has been already created due to the fact that
664:31 already created due to the fact that upon creation of the default vpc network
664:34 upon creation of the default vpc network this default firewall rule is always
664:37 this default firewall rule is always created and so now that i've gotten that
664:39 created and so now that i've gotten that out of the way i'm going to head back on
664:41 out of the way i'm going to head back on over to compute engine
664:43 over to compute engine and what i'm going to do is i'm going to
664:44 and what i'm going to do is i'm going to record the external ip so that i'll be
664:47 record the external ip so that i'll be able to log into it now i'm going to be
664:50 able to log into it now i'm going to be logging into this instance
664:52 logging into this instance from both a windows client and a mac
664:55 from both a windows client and a mac client so starting with windows i'm
664:57 client so starting with windows i'm going to head on over to my windows
664:58 going to head on over to my windows virtual machine and because i know
665:01 virtual machine and because i know windows has a default rdp client already
665:04 windows has a default rdp client already built in i'm going to simply bring it up
665:06 built in i'm going to simply bring it up by hitting the windows key and typing
665:08 by hitting the windows key and typing remote desktop connection
665:11 remote desktop connection i'm going to click on that i'm going to
665:13 i'm going to click on that i'm going to paste in the public ip for the instance
665:15 paste in the public ip for the instance that i just recorded and i'm going to
665:17 that i just recorded and i'm going to click on connect you should get a pop-up
665:19 click on connect you should get a pop-up asking for your credentials i'm going to
665:21 asking for your credentials i'm going to type in my username as tony bowtie ace
665:25 type in my username as tony bowtie ace as well i'm going to paste in the
665:27 as well i'm going to paste in the password and i'm going to click on ok
665:29 password and i'm going to click on ok i'm prompted to accept the security
665:31 i'm prompted to accept the security certificate and i'm going to select yes
665:34 certificate and i'm going to select yes and success
665:36 and success i'm now connected to my windows server
665:38 i'm now connected to my windows server instance and it's going to run all its
665:41 instance and it's going to run all its necessary startup scripts you may get a
665:43 necessary startup scripts you may get a couple of prompts that come up
665:45 couple of prompts that come up asking you if you want to connect to
665:47 asking you if you want to connect to your network
665:48 your network absolutely i'm going to close down
665:50 absolutely i'm going to close down server manager just for now
665:53 server manager just for now and another thing that i wanted to note
665:55 and another thing that i wanted to note is that when you create a windows
665:56 is that when you create a windows instance there will automatically be
665:59 instance there will automatically be provisioned a google cloud shell with
666:02 provisioned a google cloud shell with the sdk pre-installed and so you'll be
666:05 the sdk pre-installed and so you'll be able to run all your regular commands
666:07 able to run all your regular commands right from this shell without having to
666:10 right from this shell without having to install it and this is due to the guest
666:12 install it and this is due to the guest environment that was automatically
666:14 environment that was automatically installed on the vm instance upon
666:17 installed on the vm instance upon creation and this is a perfect example
666:19 creation and this is a perfect example of some of the scripts that are
666:21 of some of the scripts that are installed with the guest environment i'm
666:23 installed with the guest environment i'm going to go ahead and close out of this
666:25 going to go ahead and close out of this and i'm going to go ahead and close out
666:26 and i'm going to go ahead and close out of my instance
666:29 of my instance hit ok and so being here in windows i
666:31 hit ok and so being here in windows i wanted to show you an alternate way of
666:34 wanted to show you an alternate way of logging into your instance through
666:36 logging into your instance through powershell so for those of you who are
666:38 powershell so for those of you who are quite versed in windows and use
666:41 quite versed in windows and use powershell in your day-to-day there is
666:43 powershell in your day-to-day there is an easy way to log into your instance
666:45 an easy way to log into your instance using powershell now in order for me to
666:48 using powershell now in order for me to do that i need to open another firewall
666:51 do that i need to open another firewall rule covering tcp port 5986 so i'm going
666:55 rule covering tcp port 5986 so i'm going to head on over back to the google cloud
666:57 to head on over back to the google cloud console i'm going to head over to the
666:59 console i'm going to head over to the navigation menu and i'm going to scroll
667:01 navigation menu and i'm going to scroll down to vpc network
667:04 down to vpc network i'm going to go into firewall and i'm
667:06 i'm going to go into firewall and i'm going to create a new firewall rule and
667:08 going to create a new firewall rule and under name i'm going to name this as
667:12 under name i'm going to name this as allow
667:13 allow powershell i'm going to use the same for
667:15 powershell i'm going to use the same for the description i'm going to scroll down
667:17 the description i'm going to scroll down to targets and i'm going to select all
667:20 to targets and i'm going to select all instances in the network and under
667:22 instances in the network and under source ip ranges for this demonstration
667:25 source ip ranges for this demonstration i'm going to use
667:27 i'm going to use 0.0.0.0 forward slash 0. and again this
667:30 0.0.0.0 forward slash 0. and again this should not be used in a production
667:32 should not be used in a production environment but is used merely for this
667:34 environment but is used merely for this demo i'm going to leave everything else
667:36 demo i'm going to leave everything else as is and i'm going to go down to
667:38 as is and i'm going to go down to protocols and ports i'm going to click
667:40 protocols and ports i'm going to click on tcp and i'm going to type in 5986 for
667:44 on tcp and i'm going to type in 5986 for the port and i'm going to click on
667:46 the port and i'm going to click on create i'm going to give it a second
667:48 create i'm going to give it a second just to create and it took a couple
667:50 just to create and it took a couple seconds but our firewall rule is now
667:53 seconds but our firewall rule is now created and so now i'm gonna head over
667:55 created and so now i'm gonna head over to my windows vm and i'm gonna open up a
667:58 to my windows vm and i'm gonna open up a powershell command prompt and hit the
668:00 powershell command prompt and hit the windows key and type in powershell
668:03 windows key and type in powershell and so in order for me to not get
668:05 and so in order for me to not get constantly asked about my username and
668:08 constantly asked about my username and password i'm going to use a variable
668:11 password i'm going to use a variable that will keep my password for me and so
668:13 that will keep my password for me and so every time i connect to my windows
668:15 every time i connect to my windows instance i won't need to type it in all
668:17 instance i won't need to type it in all the time and so the command for that is
668:19 the time and so the command for that is dollar sign credentials equals get dash
668:22 dollar sign credentials equals get dash credential i'm going to hit enter and
668:24 credential i'm going to hit enter and i'm going to get a prompt to type in my
668:26 i'm going to get a prompt to type in my username and password so i'm going to
668:28 username and password so i'm going to simply type that in now along with my
668:30 simply type that in now along with my password and hit ok and if you don't get
668:33 password and hit ok and if you don't get a prompt with any errors then chances
668:36 a prompt with any errors then chances are that you've been successful at
668:38 are that you've been successful at entering your credentials and so now in
668:40 entering your credentials and so now in order to connect to the instance you're
668:42 order to connect to the instance you're going to need the public ip address
668:44 going to need the public ip address again so i'm going to head on over back
668:46 again so i'm going to head on over back to the console i'm going to head on over
668:48 to the console i'm going to head on over to the navigation menu and back to
668:50 to the navigation menu and back to compute engine here i'm going to record
668:52 compute engine here i'm going to record the external ip and i'm going to head on
668:54 the external ip and i'm going to head on over back to my windows virtual machine
668:56 over back to my windows virtual machine and so you're going to enter this
668:58 and so you're going to enter this command which i will include in the
669:00 command which i will include in the lesson text and you'll also be able to
669:02 lesson text and you'll also be able to find it in the github repository beside
669:04 find it in the github repository beside computer name you're going to put in
669:06 computer name you're going to put in your public ip address of your windows
669:09 your public ip address of your windows instance and make sure at the end you
669:11 instance and make sure at the end you have your credentials variable i'm going
669:14 have your credentials variable i'm going to simply click enter and success i'm
669:17 to simply click enter and success i'm now connected to my windows instance in
669:20 now connected to my windows instance in google cloud so as you can see here on
669:22 google cloud so as you can see here on the left
669:23 the left is the public ip of my windows instance
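the exact command ships with the lesson text and the github repository but in general the sequence used above looks like this sketch where the ip address is a placeholder for your instance's external ip

```powershell
# Sketch of the powershell session commands used in this demo;
# <EXTERNAL_IP> is a placeholder for the instance's external ip,
# and -UseSSL defaults to the tcp 5986 port opened earlier
$credentials = Get-Credential
Enter-PSSession -ComputerName <EXTERNAL_IP> -UseSSL `
  -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck) `
  -Credential $credentials
```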
669:26 is the public ip of my windows instance and so these are the various ways that
669:28 and so these are the various ways that you can connect to your windows instance
669:31 you can connect to your windows instance from a windows machine and so now for me
669:33 from a windows machine and so now for me to connect to my windows instance on a
669:35 to connect to my windows instance on a mac i'm going to head on over there now
669:37 mac i'm going to head on over there now and like i said before i need to satisfy
669:40 and like i said before i need to satisfy the constraint of having an rdp client
669:43 the constraint of having an rdp client unfortunately mac does not come with an
669:45 unfortunately mac does not come with an rdp client and so the recommended tool
669:48 rdp client and so the recommended tool to use is the chrome extension but i
669:51 to use is the chrome extension but i personally like microsoft's rdp for mac
669:54 personally like microsoft's rdp for mac application and so i'm going to go ahead
669:56 application and so i'm going to go ahead and do a walkthrough of the installation
669:58 and do a walkthrough of the installation so i'm going to start off by opening up
670:00 so i'm going to start off by opening up safari and i'm going to paste in this
670:02 safari and i'm going to paste in this url which i will include in the lesson
670:04 url which i will include in the lesson text
670:06 text and microsoft has made available a
670:08 and microsoft has made available a microsoft remote desktop app available
670:11 microsoft remote desktop app available in the app store i'm going to go ahead
670:13 in the app store i'm going to go ahead and view it in the app store and i'm
670:15 and view it in the app store and i'm going to simply click on get and then
670:17 going to simply click on get and then install and once you've entered your
670:19 install and once you've entered your credentials and you've downloaded and
670:21 credentials and you've downloaded and installed it you can simply click on
670:23 installed it you can simply click on open i'm going to click on not now and
670:26 open i'm going to click on not now and continue and i'm going to close all
670:28 continue and i'm going to close all these other windows for better viewing
670:30 these other windows for better viewing i'm going to click on add pc i'm going
670:32 i'm going to click on add pc i'm going to paste in the public ip address of my
670:34 to paste in the public ip address of my windows instance and under user account
670:37 windows instance and under user account i'm going to add my user account type in
670:40 i'm going to add my user account type in my username paste in my password you can
670:42 my username paste in my password you can add a friendly name here i'm going to
670:44 add a friendly name here i'm going to type in windows dash gc for google cloud
670:48 type in windows dash gc for google cloud and i'm going to click on add and then
670:50 and i'm going to click on add and then once you've pasted in all the
670:51 once you've pasted in all the credentials and your information you can
670:54 credentials and your information you can then click on add and i should be able
670:56 then click on add and i should be able to connect to my windows instance by
670:58 to connect to my windows instance by double clicking on this window it's
671:00 double clicking on this window it's asking me for my certificates i'm going
671:02 asking me for my certificates i'm going to hit continue
671:04 to hit continue and success i'm connected to my windows
671:07 and success i'm connected to my windows instance and so this is how you would
671:09 instance and so this is how you would connect to a windows instance from a
671:11 connect to a windows instance from a windows machine as well as from a mac as
671:14 windows machine as well as from a mac as well there are a couple of other options
671:17 well there are a couple of other options that i wanted to show you over here on
671:19 that i wanted to show you over here on the drop down beside rdp i can download
671:22 the drop down beside rdp i can download an rdp file which will contain the
671:25 an rdp file which will contain the public ip address of the windows
671:27 public ip address of the windows instance along with your username if i
671:29 instance along with your username if i need to reset my password i can view the
671:32 need to reset my password i can view the gcloud command to do it or i can set a
671:34 gcloud command to do it or i can set a new windows password if i forgotten my
671:37 new windows password if i forgotten my old one and so that's everything i had
671:39 old one and so that's everything i had to show you with regards to connecting
671:41 to show you with regards to connecting to a windows instance and so since this
671:44 to a windows instance and so since this demo was getting kind of long i decided
671:46 demo was getting kind of long i decided to split it up into two parts
671:48 to split it up into two parts and so this is the end of part one of
671:50 and so this is the end of part one of this demo and this would be a great
671:52 this demo and this would be a great opportunity to get up and have a stretch
671:55 opportunity to get up and have a stretch grab yourself a tea or a coffee and
671:57 grab yourself a tea or a coffee and whenever you're ready you can join me in
671:59 whenever you're ready you can join me in part two where we will be starting
672:01 part two where we will be starting immediately from the end of part 1 so
672:04 immediately from the end of part 1 so you can complete this video and i'll see
672:06 you can complete this video and i'll see you in part 2.
672:08 [Music]
672:12 welcome back this is part two of the
672:15 connecting to your instances demo and we
672:18 will be starting exactly where we left
672:20 off in part one so with that being said
672:23 let's dive in and so now that we've
672:25 created our windows instance and gone
672:27 through all the methods of how to
672:29 connect to it let's go ahead and create
672:31 a linux instance i'm going to go up to
672:34 the top menu here and click on create
672:36 instance and i'm going to name this
672:38 instance
672:39 linux instance i'm not going to give it
672:41 any labels under region i'm going to
672:44 select the us east1 region and the
672:47 zone i'm going to leave at its
672:49 default of us east 1b the machine
672:52 configuration i'm going to leave it as
672:54 is under boot disk i'm going to leave
672:56 this as is with the debian distribution
672:59 and i'm going to go ahead and click on
673:01 create
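these console steps also map onto one gcloud command as a sketch the debian image family here is an assumption based on the console default used in the demo

```shell
# Sketch of the same linux instance creation as a gcloud command;
# the debian image family is an assumption (the console default)
gcloud compute instances create linux-instance \
  --zone=us-east1-b \
  --image-family=debian-10 \
  --image-project=debian-cloud
```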
673:06 okay and our linux instance has been created and in order for me to connect
673:08 to it
673:09 i am going to ssh into it but first i
673:12 need to satisfy the constraint of having
673:14 a firewall rule with tcp port 22 open so
673:18 i'm going to head on over to the
673:19 navigation menu
673:21 and i'm going to scroll down to vpc
673:23 network i'm going to head on over to
673:25 firewall and as expected the allow ssh
673:28 firewall rule has been created alongside
673:32 the default vpc network and so since
673:34 i've satisfied that constraint i can
673:36 head back on over to compute engine and
673:39 so here i have a few different options
673:41 that i can select from for logging into
673:43 my linux instance i can open in a
673:46 browser window if i decided i wanted to
673:48 put it on a custom port i can use this
673:51 option here if i provided a private ssh
673:54 key to connect to this linux instance i
673:57 can use this option here i have the
673:59 option of viewing the gcloud command in
674:02 order to connect to it
674:04 and i've been presented with a pop-up
674:06 with the command to use within the
674:09 gcloud command line in order to connect
674:11 to my instance i can run it now in cloud
674:14 shell but i'm going to simply close it
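the command shown in that pop-up is along these lines though the exact form may include a project flag depending on your configuration

```shell
# Connect to the linux instance over ssh from the gcloud CLI;
# instance name and zone are taken from this demo
gcloud compute ssh linux-instance --zone=us-east1-b
```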
674:16 shell but i'm going to simply close it and so whether you are on a mac a
674:19 and so whether you are on a mac a windows machine or a linux machine you
674:22 windows machine or a linux machine you can simply click on ssh and it will open
674:25 can simply click on ssh and it will open a new browser window connecting you to
674:27 a new browser window connecting you to your instance
674:33 now when you connect to your linux instance for the first time
674:35 instance for the first time compute engine generates an ssh key pair
674:38 compute engine generates an ssh key pair for you this key pair by default is
674:42 for you this key pair by default is added to your project or instance
674:44 added to your project or instance metadata and this will give you the
674:45 metadata and this will give you the freedom of not having to worry about
674:48 freedom of not having to worry about managing keys now if your account is
674:50 managing keys now if your account is configured to use os login compute
674:54 configured to use os login compute engine stores the generated key pair
674:56 engine stores the generated key pair with your user account
674:58 with your user account now when connecting to your linux
675:00 now when connecting to your linux instance in most scenarios google
675:03 instance in most scenarios google recommends using os login this feature
675:06 recommends using os login this feature lets you use iam roles to manage ssh
675:09 lets you use iam roles to manage ssh access to linux instances and this
675:12 access to linux instances and this relieves the complexity of having to
675:14 relieves the complexity of having to manage multiple key pairs and is the
675:17 manage multiple key pairs and is the recommended way to manage many users
675:20 recommended way to manage many users across multiple instances or projects
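os login is turned on through instance or project metadata the demo below uses the console but as a sketch the same thing can be done from the command line with the names used in this demo

```shell
# Sketch of enabling os login with gcloud instead of the console;
# at the instance level:
gcloud compute instances add-metadata linux-instance \
  --zone=us-east1-b \
  --metadata=enable-oslogin=TRUE

# or project-wide, for every instance in the project:
gcloud compute project-info add-metadata \
  --metadata=enable-oslogin=TRUE
```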
675:23 across multiple instances or projects and so i'm going to go ahead now and
675:25 and so i'm going to go ahead now and show you how to configure os login for
675:28 show you how to configure os login for your linux instance and the way to do
675:31 your linux instance and the way to do this will be very similar on all
675:33 this will be very similar on all platforms so i'm going to go ahead and
675:35 platforms so i'm going to go ahead and go back to my mac vm and i'm going to
675:38 go back to my mac vm and i'm going to open up my terminal
675:39 open up my terminal make this bigger for better viewing
675:42 make this bigger for better viewing and i'm going to start by running the
675:43 and i'm going to start by running the gcloud init command in order to make
675:46 gcloud init command in order to make sure i'm using the right user and for
675:48 sure i'm using the right user and for the sake of this demonstration i'm going
675:51 the sake of this demonstration i'm going to re-initialize this configuration so
675:54 to re-initialize this configuration so i'm going to click on one hit enter
675:56 i'm going to click on one hit enter number two for tony bowtie ace and i'm
675:59 number two for tony bowtie ace and i'm going to use project bow tie ink so 1
676:02 going to use project bow tie ink so 1 and i'm not going to configure a default
676:05 and i'm not going to configure a default compute region in zone and so if i run
676:08 compute region in zone and so if i run the gcloud config list command i can see
676:11 the gcloud config list command i can see that the account that i'm using is tony
676:13 that the account that i'm using is tony bowties gmail.com in project bowtie inc
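for reference these are the two commands run above

```shell
# Re-initialize the gcloud CLI and pick the account and project
# interactively, as done in this demo
gcloud init

# Confirm the active account and project afterwards
gcloud config list
```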
676:17 bowties gmail.com in project bowtie inc and so because os login requires a key
676:20 and so because os login requires a key pair i'm going to have to generate that
676:22 pair i'm going to have to generate that myself so i'm going to go ahead and
676:24 myself so i'm going to go ahead and clear the screen and i'm going to use
676:26 clear the screen and i'm going to use the command ssh keygen and this is the
676:29 the command ssh keygen and this is the command to create a public and private
676:32 command to create a public and private key pair i'm going to use the default
676:34 key pair i'm going to use the default path to save my key and i'm going to
676:36 path to save my key and i'm going to enter a passphrase
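the key generation steps here can be sketched as follows ssh-keygen will prompt for the save path and the passphrase interactively just as in the demo

```shell
# Generate an rsa key pair; accepting the default path writes the
# private key to ~/.ssh/id_rsa and the public key to ~/.ssh/id_rsa.pub
ssh-keygen

# Confirm both halves of the key pair exist
ls ~/.ssh
```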
676:38 enter a passphrase i'm going to enter it again and i
676:40 i'm going to enter it again and i recommend that you write down your
676:41 recommend that you write down your passphrase so that you don't forget it
676:43 passphrase so that you don't forget it as when you lose it you will be unable
676:46 as when you lose it you will be unable to use your key pair and so if i change
676:48 to use your key pair and so if i change directory to dot ssh and do an ls for
676:52 directory to dot ssh and do an ls for list i can see that i now have my public
676:55 list i can see that i now have my public and private key pair the private key
676:57 and private key pair the private key lying in id underscore rsa and the
677:00 lying in id underscore rsa and the public key lying in id underscore
677:03 public key lying in id underscore rsa.pub and so another constraint that i
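The key-generation step above can be sketched as follows; a hedged sketch assuming the OpenSSH client's defaults (RSA key type, ~/.ssh/id_rsa path):

```shell
# create a public/private RSA key pair; accept the default
# path and set (and write down!) a passphrase when prompted
ssh-keygen -t rsa

# confirm both halves of the pair were written
ls ~/.ssh
# id_rsa        <- private key (never share this)
# id_rsa.pub    <- public key (this is what OS Login stores)
```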
677:06 and so another constraint that i have is that i need to enable os login for my linux instance. so i'm going to go back to the console, go into my linux instance, click on edit, and if you scroll down, you will come to some fields marked as custom metadata. under key, you will type enable-oslogin, and under value, you will type, in all caps, TRUE.

677:30 now, i wanted to take a moment here to discuss this feature under ssh keys: block project-wide ssh keys. project-wide public ssh keys are meant to give users access to all of the linux instances in a project that allow project-wide public ssh keys. so if an instance blocks project-wide public ssh keys, as you see here, a user can't use their project-wide public ssh key to connect to the instance unless the same public ssh key is also added to the instance metadata. this allows only users whose public ssh key is stored in instance-level metadata to access the instance. and so this is an important feature to note for the exam. we're going to leave this feature unchecked for now, and then you can go to the bottom and click on save.
678:31 click on save now if i wanted to enable os login for all instances in my project
678:35 os login for all instances in my project i can simply go over to the menu on the
678:37 i can simply go over to the menu on the left and click on metadata and add the
678:40 left and click on metadata and add the metadata here with the same values so
678:43 metadata here with the same values so under key i type in enable dash os login
678:47 under key i type in enable dash os login and under value i type in in all caps
678:49 and under value i type in in all caps true but i don't want to enable it for
678:52 true but i don't want to enable it for all my instances
678:54 all my instances only for that one specific instance so
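The same console steps can also be done from the gcloud CLI. A sketch, where the instance name and zone are placeholders for your own:

```shell
# enable OS Login on a single instance (instance-level metadata)
gcloud compute instances add-metadata my-linux-instance \
    --zone=us-east1-b \
    --metadata=enable-oslogin=TRUE

# or enable it for every instance in the project
# (project-level metadata -- not what we want in this demo)
gcloud compute project-info add-metadata \
    --metadata=enable-oslogin=TRUE
```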
678:56 so with regards to project-wide public keys: these keys can be managed through metadata and should only be used as a last resort, if you cannot use the other tools such as ssh from the console or os login. this is where the keys are stored, and so you can always find them here when looking for them. as you can see, there are a couple of keys for tony bowtie ace that i have used for previous instances. and so i'm going to go back to metadata just to make sure that my key-value pair for os login has not been saved, and it has not, and i'm going to head back over to my instances. and so now that my constraint has been fulfilled, where i've enabled the os login feature by adding the necessary metadata, i'm going to head back over to my mac vm.
679:48 i'm going to go ahead and clear the screen. so now i'm going to log into my instance using os login, by first using the command gcloud compute os-login ssh-keys add with the flag --key-file and then the path to my public key, which is .ssh/id_rsa.pub. i'm going to hit enter, and so my key has been successfully stored with my user account. i'm going to go ahead and make this a little bigger for better viewing.

680:22 and so in order to log into my instance, i'm going to need my username, which is right up here under username. i'm going to copy that, and i'm just going to clear my screen for a second for better viewing. and so in order for me to ssh into my instance, i'm going to type in the command ssh -i, provide my private key, which is in .ssh/id_rsa, then the username that i had recorded earlier, then @, and then the public ip address of my linux instance. so i'm going to head back over to the console for just a sec, copy the ip address, head back over to my mac vm, paste it in, and hit enter. it's asking if i want to continue: yes i do. i enter the passphrase for my key, and success, i am connected.
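The whole login sequence above, as a sketch; USERNAME and EXTERNAL_IP stand in for the values you copy from your own output:

```shell
# register the public key with your Google account for OS Login;
# the command prints your OS Login profile, including your username
gcloud compute os-login ssh-keys add --key-file=.ssh/id_rsa.pub

# ssh in with the matching private key, the username from the
# profile output, and the instance's external IP from the console
ssh -i .ssh/id_rsa USERNAME@EXTERNAL_IP
```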
681:16 and so there is one caveat that i wanted to show you with regards to permissions for os login. so i'm going to head back over to the console, go up to the navigation menu, and head over to iam & admin. now, as you can see here, tonybowtieace@gmail.com has the role of owner, and therefore i don't need any granular, specific permissions; i have the access to do absolutely anything. now, in case i was a different user and i didn't hold the role of owner, i would be looking for specific permissions: that would be, under compute, the os login role, and this would give me permissions as a standard user. now, if i wanted super user or root access, i would need to be given the compute os admin login role, and as you can see, it would allow me administrator user privileges. so when using os login, and the member is not an owner, one of these two roles is needed. so i'm going to exit out of here, and i'm going to hit cancel.
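Granting those two roles from the command line might look like this; a sketch, with the project ID and member address as placeholders:

```shell
# standard (non-root) OS Login access
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:jane@example.com \
    --role=roles/compute.osLogin

# administrator (root) OS Login access
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:jane@example.com \
    --role=roles/compute.osAdminLogin
```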
682:20 exit out of here i'm going to hit cancel and so that about covers everything that
682:22 and so that about covers everything that i wanted to show you with regards to all
682:25 i wanted to show you with regards to all the different methods that you can use
682:28 the different methods that you can use for connecting to vm instances for both
682:31 for connecting to vm instances for both windows and linux instances now i know
682:34 windows and linux instances now i know this may have been a refresher for some
682:37 this may have been a refresher for some but for others
682:38 but for others knowing all the different methods of
682:41 knowing all the different methods of connecting to instances can come in very
682:43 connecting to instances can come in very useful especially when coordinating many
682:47 useful especially when coordinating many instances in bigger environments i want
682:49 instances in bigger environments i want to congratulate you on making it to the
682:51 to congratulate you on making it to the end of this demo and gaining a bit more
682:54 end of this demo and gaining a bit more knowledge on this crucial part of
682:56 knowledge on this crucial part of managing your instances so before you go
682:59 managing your instances so before you go be sure to delete any resources that
683:02 be sure to delete any resources that you've created and again congrats on the
683:05 you've created and again congrats on the great job so you can now mark this as
683:07 great job so you can now mark this as complete and i'll see you in the next
683:09 complete and i'll see you in the next one
683:16 welcome back. in this demonstration i'll be discussing metadata and how it can pertain to a project as well as an instance. as well, i'm going to touch on startup and shutdown scripts and their real-world use cases. in the last lesson we touched the tip of the iceberg when it came to metadata, and i wanted to go a bit deeper on this topic, as i personally feel that it holds so much value, and give you some ideas on how you can use it. i'm also going to combine the metadata, using variables, in a startup script, and i'm going to bring to life something that's dynamic in nature. so with that being said, let's dive in. i am currently logged in as tonybowtieace@gmail.com.
683:59 i'm under the project of bowtie inc. and so, in order to get right into the metadata, i'm going to head on over to my navigation menu and go straight to compute engine, and over here on the left-hand menu you will see metadata, and you can drill down into there. now, as i explained in a previous lesson, metadata can be assigned to both projects and instances: project metadata applies to every instance in the project, while instance metadata only impacts a specific instance. so here i can add and store metadata which will be used on a project-wide basis. as well, as mentioned earlier, metadata is stored in key-value pairs and can be added at any time. now, this is a way to add custom metadata, but there is a default set of metadata entries that every instance has access to, and again, this applies for both project and instance metadata. so here i have the option of setting my custom metadata for the entire project.
685:00 and so i'm going to dive into where to store custom metadata on an instance, and in order for me to show you this, i'm going to first head over to vm instances and create my instance. and just as a note: before creating your instance, make sure that you have the default vpc created. and so, because i like to double-check things, i'm going to head over to the navigation menu, scroll down to vpc network, and as expected, i have the default vpc already created, and so this means i can go ahead and create my instance. so i'm going to head back on over to compute engine, and i'm going to create my instance.

685:37 i'm going to name this instance bowtie-web-server. i'm not going to add any labels, and under the region i'm going to select us-east1, and you can keep the zone as the default, us-east1-b. under machine type, i want to keep things cost-effective, so i'm going to select the e2-micro. i'm going to scroll down, and under identity and api access i want to set access for each api, scroll down to compute engine, select it, and select read write, and i'm going to leave the rest as is. scrolling down to the bottom, i want to click on management, security, disks, networking, sole tenancy, and under here you will find the option to add any custom metadata, and you can provide it right here, under metadata, as a key-value pair. but we're not going to add any metadata right now, so i'm just going to scroll down to the bottom, leave everything else as is, and simply click on create.
686:40 and it should take a few moments for my instance to be created. okay, and now that my instance is up, i want to go ahead and start querying the metadata. now, just as a note: metadata must be queried from the instance itself, and can't be done from another instance or even from the cloud sdk on your computer. so i'm going to go ahead and log into the instance using ssh.

687:04 okay, and now that i'm logged into my instance, i want to start querying the metadata. now, normally you would use tools like wget or curl to make these queries; in this demo i will use curl. for those who don't know, curl is a command-line tool to transfer data to or from a server using supported protocols like http, ftp, scp, and many more. this tool is fantastic for automation, since it's designed to work without any user interaction.
687:34 designed to work without any user interaction and so i'm going to paste in
687:36 interaction and so i'm going to paste in the url that i am going to use to query
687:39 the url that i am going to use to query the instance metadata and this is the
687:42 the instance metadata and this is the default url that you would use to query
687:45 default url that you would use to query any metadata on any instance getting a
687:48 any metadata on any instance getting a little deeper into it a trailing slash
687:51 little deeper into it a trailing slash shown here shows that the instance value
687:54 shown here shows that the instance value is actually a directory and will have
687:56 is actually a directory and will have other values that append to this url
687:59 other values that append to this url whether they are other directories or
688:01 whether they are other directories or just endpoint values now when you query
688:04 just endpoint values now when you query for metadata you must provide the
688:06 for metadata you must provide the following header in all of your requests
688:09 following header in all of your requests metadata dash flavor colon google and
688:13 metadata dash flavor colon google and should be put in quotations if you don't
688:15 should be put in quotations if you don't provide this header the metadata server
688:18 provide this header the metadata server will deny your request so i'm going to
688:20 will deny your request so i'm going to go ahead and hit enter and as you can
688:22 go ahead and hit enter and as you can see i've been brought up a lot of
688:23 see i've been brought up a lot of different values that i can choose from
688:26 different values that i can choose from in order to retrieve different types of
688:28 in order to retrieve different types of metadata and as stated before anything
688:30 metadata and as stated before anything with a trailing slash is actually a
688:33 with a trailing slash is actually a directory and will have other values
688:36 directory and will have other values underneath it so if i wanted to query
688:38 underneath it so if i wanted to query the network interfaces
688:40 the network interfaces and because it's a directory i need to
688:42 and because it's a directory i need to make sure that i add the trailing slash
688:44 make sure that i add the trailing slash at the end and as you can see here i
688:47 at the end and as you can see here i have the network interface of 0 and i'm
688:49 have the network interface of 0 and i'm going to go ahead and query that
688:51 going to go ahead and query that and here i will have access to all the
688:54 and here i will have access to all the information about the network interface
688:57 information about the network interface on this instance so i'm going to go
688:59 on this instance so i'm going to go ahead and query the network on this
689:01 ahead and query the network on this interface and as expected the default
689:04 interface and as expected the default network is displayed i'm going to
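The queries so far can be sketched as follows, run from inside the instance; the header is required on every request:

```shell
# the trailing slash marks a directory: this lists everything
# available under the instance metadata tree
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/

# step into the network-interfaces directory (interface 0)...
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/

# ...and read a single endpoint value (no trailing slash)
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/network
```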
689:06 network is displayed i'm going to quickly go ahead and clear my screen and
689:08 quickly go ahead and clear my screen and i'm going to go ahead and query some
689:10 i'm going to go ahead and query some more metadata this time i'm going to do
689:12 more metadata this time i'm going to do the name of the server and as expected
689:15 the name of the server and as expected bowtie dash web server showed up and
689:18 bowtie dash web server showed up and because it's an endpoint i don't need
689:20 because it's an endpoint i don't need the trailing slash at the end i'm going
689:22 the trailing slash at the end i'm going to go ahead and do one more this time
689:24 to go ahead and do one more this time i'm going to choose machine type
689:27 i'm going to choose machine type and again as expected the e2 micro
689:30 and again as expected the e2 micro machine type is displayed and so just as
689:33 machine type is displayed and so just as a note for those who haven't noticed any
689:35 a note for those who haven't noticed any time that you query metadata it will
689:38 time that you query metadata it will show up to the left of your command
689:40 show up to the left of your command prompt now what i've shown you here is
689:42 prompt now what i've shown you here is what you can do with instance metadata
689:45 what you can do with instance metadata and so how about if you wanted to query
689:46 and so how about if you wanted to query any project metadata well instead of
689:49 any project metadata well instead of instance at the end you would use
689:51 instance at the end you would use project with the trailing slash i'm
689:53 project with the trailing slash i'm going to simply click on enter and as
689:55 going to simply click on enter and as you can see here project doesn't give me
689:57 you can see here project doesn't give me a whole lot of options but it does give
690:00 a whole lot of options but it does give me some important values like project id
690:03 me some important values like project id so i'm going to simply query that right
690:05 so i'm going to simply query that right now and as expected bowtie inc is
690:08 now and as expected bowtie inc is displayed and so this is a great example
690:10 displayed and so this is a great example of how to query any default metadata for
690:14 of how to query any default metadata for instances and for projects now you're
690:16 instances and for projects now you're probably wondering how do i query my
690:19 probably wondering how do i query my custom metadata well once custom
690:21 custom metadata well once custom metadata has been set you can then query
690:24 metadata has been set you can then query it from the attributes directory in the
690:27 it from the attributes directory in the attributes directory can be found in
690:30 attributes directory can be found in both the instance and project metadata
690:32 both the instance and project metadata so i'm going to go ahead and show you
690:34 so i'm going to go ahead and show you that now but first i wanted to add some
690:37 that now but first i wanted to add some custom metadata and this can be set in
690:39 custom metadata and this can be set in either the console the gcloud command
690:42 either the console the gcloud command line tool or using the api and so i'm
690:45 line tool or using the api and so i'm going to run the command here gcloud
690:47 going to run the command here gcloud compute instances add dash metadata the
690:50 compute instances add dash metadata the name of your instance and when you're
690:52 name of your instance and when you're adding custom metadata you would add the
690:54 adding custom metadata you would add the flag dash dash metadata with the key
690:57 flag dash dash metadata with the key value pair which in this example is
691:00 value pair which in this example is environment equals dev and then i'm also
691:03 environment equals dev and then i'm also going to add the zone of the instance
691:05 going to add the zone of the instance which is us east 1a and i'm going to hit
691:08 which is us east 1a and i'm going to hit enter
691:09 enter and because i had a typo there i'm going
691:11 and because i had a typo there i'm going to go ahead and try that again using us
691:14 to go ahead and try that again using us east 1b
691:15 east 1b i'm going to hit on enter
691:17 and success. and so, to verify that this command has worked, i'm going to go ahead and query the instance, and i'm going to go under attributes. i'm going to hit enter, and as you can see here, the environment endpoint has been populated, so i'm going to query that, and as expected, dev is displaying as the environment value. now, if i wanted to double-check that in the console, i can go over to the console and drill down into bowtie-web-server, and if i scroll down to the bottom, under custom metadata, you can see the key-value pair here has environment as the key and dev as the value. and so these are the many different ways that you can query metadata for any instances or projects.
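Putting the round trip together as a sketch: the add-metadata call runs from the Cloud SDK, while the curl query runs from inside the instance:

```shell
# from the Cloud SDK: attach a custom key-value pair to the instance
gcloud compute instances add-metadata bowtie-web-server \
    --metadata=environment=dev \
    --zone=us-east1-b

# from inside the instance: custom keys appear under attributes/
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/attributes/environment
# returns: dev
```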
692:02 now, i wanted to take a quick moment to switch gears and talk about startup and shutdown scripts. compute engine lets you create and run your own startup and shutdown scripts on your vm instances, and this allows you to perform automation: when starting up, actions such as installing software, performing updates, or any other tasks that are defined in the script; and when shutting down, you can allow instances time to clean up and perform tasks such as exporting logs to cloud storage or bigquery, or syncing with other systems. and so i wanted to go ahead and show you how this would work while combining metadata into the script. so i'm going to go ahead and drill down into bowtie-web-server.
692:48 server i'm going to click on edit and i'm going
692:50 i'm going to click on edit and i'm going to scroll down here
692:52 to scroll down here to custom metadata i'm going to click on
692:54 to custom metadata i'm going to click on add item and under key i'm going to type
692:57 add item and under key i'm going to type in
692:57 in startup dash script and under value i'm
693:02 startup dash script and under value i'm going to paste in my script i'm going to
693:04 going to paste in my script i'm going to just enlarge this here for a second and
693:06 just enlarge this here for a second and i will be providing the script in the
693:08 i will be providing the script in the github repository now just to break it
693:10 github repository now just to break it down this is a bash script i'm pulling
693:13 down this is a bash script i'm pulling in a variable called name which will
693:15 in a variable called name which will query the instance name as well i have a
693:18 query the instance name as well i have a variable called zone which will query
693:21 variable called zone which will query the instance zone i'm going to be
693:23 the instance zone i'm going to be installing an apache web server and it's
693:25 installing an apache web server and it's going to display on a web browser both
693:27 going to display on a web browser both the server name and the zone that it's
693:30 the server name and the zone that it's in and so in order for me to see this
693:32 in and so in order for me to see this web page i also need to open up some
693:34 web page i also need to open up some firewall rules and so an easy way to do
693:37 firewall rules and so an easy way to do this would be to scroll up to firewalls
693:39 this would be to scroll up to firewalls and simply click on allow http and allow
693:43 and simply click on allow http and allow https traffic this will tag the instance
693:46 https traffic this will tag the instance with some network tags as http server
693:50 with some network tags as http server and https server and create two separate
693:53 and https server and create two separate firewall rules that will allow traffic
693:56 firewall rules that will allow traffic for port 80 and port 443 so i'm going to
693:59 for port 80 and port 443 so i'm going to leave everything else as is i'm going to
694:01 leave everything else as is i'm going to scroll down to the bottom and click on
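Those two checkboxes are a convenience; behind the scenes you get rules roughly equivalent to these gcloud commands (the rule names mirror the console's defaults on the default network and are illustrative):

```shell
# Allow inbound HTTP to any instance carrying the http-server network tag.
gcloud compute firewall-rules create default-allow-http \
  --network=default --direction=INGRESS \
  --allow=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server

# Allow inbound HTTPS to any instance carrying the https-server network tag.
gcloud compute firewall-rules create default-allow-https \
  --network=default --direction=INGRESS \
  --allow=tcp:443 --source-ranges=0.0.0.0/0 --target-tags=https-server
```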
694:03 Okay, it took a few seconds there, but it did finish saving. I'm going to go up to the top and click on Reset; this will perform a hard reset on the instance and will allow the startup script to take effect.
694:17 So I'm going to click on Reset. It's going to ask me if I really want to do this, and for the purposes of this demonstration I'm going to click on Reset.
694:24 Please note: you should never do this in production, as it doesn't do a clean shutdown of the operating system. But as this is an instance with nothing on it, I'm going to simply click on Reset.
694:36 Now I'm going to head on back to the main console for my VM instances, and I'm going to record my external IP.
694:44 I'm going to open up a new browser, zoom in for better viewing, paste in my IP address, and hit Enter.
694:51 And as you can see here, I've used my startup script to display not only this web page, but I was also able to bring in metadata that I pulled using variables and display it here in the browser.
695:05 And so before I end this demonstration, I wanted to show you another way of using a startup script: pulling it in from Cloud Storage.
695:12 So I'm going to go back to the navigation menu and scroll down to Storage. Here I will create a new bucket.
695:23 Now, find a globally unique name for your bucket; I'm going to call mine bowtie-web-server-site. I'm going to leave the rest at its defaults and simply click on Create.
695:35 And if you have a globally unique name for your bucket, you will be presented with this page without any errors.
695:42 I'm going to go ahead and upload the script, and you can find this script in the GitHub repository. So I'm going to go into my repo, look for bowtie-startup-final.sh, and open it.
695:56 And now that I have the script uploaded, I'm going to drill into this file so I can get some more information that I need for the instance. What I need from here is to copy the URI, so I'm going to copy this to my clipboard.
696:09 Then I'm going to head back on over to Compute Engine, drill down into my instance, and click on Edit at the top.
696:17 I'm going to scroll down to where it says Custom metadata. Here I'm going to remove the startup-script metadata and add a new item: the key is startup-script-url, and the value is the URI that I just copied over.
696:36 This way, on startup, my instance will use the startup script that's in Cloud Storage. So I'm going to scroll down to the bottom and click on Save.
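The same metadata change can be made from the command line. A sketch, assuming the instance and bucket names from this demo and an illustrative zone:

```shell
# Remove the inline startup script from the instance's custom metadata...
gcloud compute instances remove-metadata bowtie-web-server \
  --zone=us-east1-b --keys=startup-script

# ...and point the instance at the script stored in Cloud Storage instead.
gcloud compute instances add-metadata bowtie-web-server \
  --zone=us-east1-b \
  --metadata=startup-script-url=gs://bowtie-web-server-site/bowtie-startup-final.sh
```

The new script still only runs at the next boot, which is why the demo resets the instance afterwards.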
696:46 And now I'm going to click on Reset and confirm the reset. I'm going to go back to the main page for my VM instances, and I can see that my external IP hasn't changed.
696:56 So I'm going to go back to my open web browser and click on Refresh. And success!
697:02 As you can see here, I've taken a whole bunch of different variables, including the machine name, the environment variable, the zone, as well as the project, and I've displayed them here in a simple website.
697:12 And although you may not find this website specifically useful in your production environment, this is just an idea to get creative using default and custom metadata along with a startup script.
697:27 I've seen environments where people have multiple web servers and create a web page to display all the specific web servers in their different environments, along with their IPs, their data, and their configurations.
697:43 And so just as a recap: we've gone through default and custom metadata and how to query it in an instance, and we also went through startup scripts and how to apply them both locally and from Cloud Storage.
697:55 I hope you have enjoyed having fun with metadata and using it in startup scripts such as this one, and I also hope you find some fascinating use cases in your current environments.
698:08 And so before you go, just a quick reminder to delete any resources that you've created so as to not incur any added costs.
698:16 That's pretty much all I wanted to cover with this demonstration, so you can now mark this as complete, and let's move on to the next one.
698:24 [Music]
698:28 Welcome back. In this lesson I'm going to be discussing Compute Engine billing.
698:32 Now, when it comes to pricing with regard to Compute Engine, I've only gone over the fact that instances are charged by the second after the first minute, but I never got into the depths of billing and the various ways to save money when using Compute Engine.
698:46 In this lesson I will be unveiling how both costs and discounts are broken down in Google Cloud under the resource-based billing model, and the various savings that can be had when using Compute Engine.
699:01 So with that being said, let's dive in.
699:07 Now, each vCPU and each gigabyte of memory on Compute Engine is billed separately, rather than as part of a single machine type. You are still creating instances using predefined machine types, but your bill shows them as individual vCPUs and memory used per hour, and this is what Google refers to as resource-based billing, which I will get into in just a bit.
699:31 This billing model applies to all vCPU, GPU, and memory resources, and they are charged a minimum of one minute. For example, if you run your virtual machine for 30 seconds, you will be billed for one minute of usage. After one minute, instances are charged in one-second increments.
699:53 Instance uptime is another determining factor for cost, and it is measured as the number of seconds between when you start an instance and when you stop an instance, in other words, when your instance enters the terminated state.
700:08 If an instance is idle but still has a state of running, it will be charged for instance uptime; but again, you will not be charged while your instance is in a terminated state.
700:21 Now, getting into reservations: these are designed to reserve the VM instances you need, so after you create a reservation, the reservation ensures that those resources are always available for you to use.
700:36 During the creation process you can choose how a reservation is to be used. For example, you can choose for a reservation to be automatically applied to any new or existing instances that match the reservation's properties, which is the default behavior, or you can specify that the reservation is to be consumed by one specific instance.
700:58 In all cases, a VM instance can only use a reservation if its properties exactly match the properties of the reservation.
701:07 After you create a reservation, you begin paying for the reserved resources immediately, and they remain available for your project to use indefinitely, until the reservation is deleted.
701:20 Reservations are great for ensuring that your project has resources for future increases in demand, including planned or unplanned spikes, backup and disaster recovery, or a buffer when you're planning growth. When you no longer need a reservation, you can simply delete it to stop incurring charges.
701:43 Each reservation, like a normal VM, is charged based on existing on-demand rates, which include sustained use discounts, and reservations are also eligible for committed use discounts, which I will be getting into in just a bit.
701:59 Now, purchasing reservations does come with some caveats. Reservations apply only to Compute Engine, Dataproc, and Google Kubernetes Engine. As well, reservations don't apply to shared-core machine types, preemptible VMs, sole-tenant nodes, Cloud SQL, or Dataflow.
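For reference, creating a reservation from the command line looks roughly like this; the name, zone, VM count, and machine type are all illustrative:

```shell
# Reserve zonal capacity for two n1-standard-4 VMs.
# By default, any matching VM in the project consumes this reservation
# automatically; matching is on properties like machine type and zone.
gcloud compute reservations create bowtie-reservation \
  --zone=us-east1-b \
  --vm-count=2 \
  --machine-type=n1-standard-4
```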
702:21 Now, as I explained before, each vCPU and each gigabyte of memory on Compute Engine is billed separately rather than as part of a single machine type, and shows up on your bill as individual vCPUs and memory used per hour.
702:36 Resource-based pricing allows Compute Engine to apply sustained use discounts to all of your predefined machine type usage in a region collectively, rather than to individual machine types.
702:51 This way, vCPU and memory usage for each machine type can receive any one of the following discounts: sustained use discounts, committed use discounts, and preemptible VM pricing.
703:03 And I'd like to take a moment to dive into a bit of detail on each of these discount types, starting with sustained use discounts.
703:15 Now, sustained use discounts are automatic discounts for running specific Compute Engine resources for a significant portion of the billing month.
703:26 For example, when you run one of these resources for more than 25 percent of a month, Compute Engine automatically gives you a discount for every incremental minute that you use that instance.
703:39 The following tables show the discounts applied for the specific resources described here. In the table on the left, for general-purpose N2 and N2D predefined and custom machine types, and for compute-optimized machine types, you can receive a discount of up to 20 percent.
704:00 The table on the right shows that for general-purpose N1 predefined and custom machine types, as well as sole-tenant nodes and GPUs, you can get a discount of up to 30 percent.
704:12 Sustained use discounts are applied automatically to usage within a project, separately for each region, so there is no action required on your part to enable these discounts.
704:26 Now, some notes that I wanted to cover here. Sustained use discounts automatically apply to VMs created by both Google Kubernetes Engine and Compute Engine; as well, they do not apply to VMs created using the App Engine flexible environment, to Dataflow, or to E2 machine types.
704:48 Sustained use discounts are applied on incremental use after you reach certain usage thresholds. This means that you pay only for the number of minutes that you use an instance, and Compute Engine automatically gives you the best price.
705:04 Google truly believes that there's no reason to run an instance for longer than you need it.
705:11 Now, consider a scenario where you have two instances or sole-tenant nodes in the same region that have different machine types and run at different times of the month.
705:38 Compute Engine breaks down the number of vCPUs and the amount of memory used across all instances that use predefined machine types, and combines those resources to qualify for the largest sustained use discount possible.
705:54 In this example, assume you run the following two instances in the us-east1 region during a month. For the first half, you run an n1-standard-4 instance with 4 vCPUs and 15 gigabytes of memory; for the second half of the month, you run a larger n1-standard-16 instance with 16 vCPUs and 60 gigabytes of memory.
706:22 In this scenario, Compute Engine reorganizes these machine types into individual vCPU and memory resources and combines their usage.
706:34 For the vCPUs: because 4 vCPUs were being used for the whole month, the discount on them would be 30 percent. The additional 12 vCPUs were added partway through the month, so those 12 vCPUs would receive a 10 percent discount.
706:52 And this is how discounts are applied when it comes to sustained use discounts.
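The arithmetic of that example can be sketched as follows. The hourly vCPU rate here is a placeholder for illustration, not a quoted Google price:

```shell
# 4 vCPUs run the whole month (~720 hours) at a 30% sustained-use discount;
# the extra 12 vCPUs run half the month (~360 hours) at a 10% discount.
RATE=0.0475   # assumed $/vCPU-hour, for illustration only

TOTAL=$(awk -v r="$RATE" 'BEGIN {
  full  = 4  * 720 * r * (1 - 0.30)   # vCPUs used all month
  extra = 12 * 360 * r * (1 - 0.10)   # vCPUs added partway through
  printf "%.2f", full + extra
}')
echo "$TOTAL"   # prints 280.44
```

Memory usage is blended the same way, separately from vCPUs, per region.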
706:59 Now, moving on to the next discount type: committed use discounts. Compute Engine lets you purchase committed use contracts in return for deeply discounted prices for VM usage.
707:11 When you purchase a committed use contract, you purchase compute resources, comprised of vCPUs, memory, GPUs, and local SSDs, and you purchase these resources at a discounted price in return for committing to paying for those resources for one year or three years.
707:32 Committed use discounts are ideal for workloads with predictable resource needs, so if you know exactly what you're going to use, committed use discounts would be a great option.
707:46 The discount is up to 57 percent for most resources, like machine types or GPUs; when it comes to memory-optimized machine types, the discount is up to 70 percent.
707:58 Now, when you purchase a committed use contract, you can purchase it for a single project, and it applies to a single project by default, or you can purchase multiple contracts, which you can share across many projects by enabling shared discounts.
708:16 Once purchased, you're billed monthly for the resources you purchased for the duration of the term you selected, whether you use the services or not.
708:24 If you have multiple projects that share the same Cloud Billing account, you can enable committed use discount sharing so that all of the projects within that Cloud Billing account share all of your committed use discount contracts; your sustained use discounts are also pooled at the same time.
708:44 Now, some caveats when it comes to committed use discounts. Shared-core machine types are excluded here as well. You can purchase commitments only on a per-region basis.
708:54 If a reservation is attached to a committed use discount, the reservation can't be deleted for the duration of the commitment, so please be aware.
709:05 To purchase a commitment for GPUs or local SSDs, you must also purchase a general-purpose N1 commitment.
709:14 And lastly, after you create a commitment, you cannot cancel it; you must pay the agreed-upon monthly amount for the duration of the commitment.
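For reference, purchasing a commitment from the command line looks roughly like this (the region, name, and resource sizes are illustrative; remember you cannot cancel it once created):

```shell
# Commit to 4 vCPUs and 15 GB of memory in a region for one year.
gcloud compute commitments create bowtie-commitment \
  --region=us-east1 \
  --resources=vcpu=4,memory=15GB \
  --plan=12-month
# Use --plan=36-month for the three-year term.
```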
709:25 Now, committed use discount recommendations give you opportunities to optimize your compute costs by analyzing your VM spending trends with and without a committed use discount contract.
709:37 By comparing these numbers, you can see how much you could save each month with a committed use contract, and this can be found under the Recommendations tab on the home page in the console.
709:49 And so I wanted to move on to the last discount type, which is preemptible VMs. Now, preemptible VMs are up to 80 percent cheaper than regular instances; pricing is fixed, so you never have to worry about variable pricing. These prices can be found at the link to instance pricing that I have included in the lesson text.
710:11 A preemptible VM is an instance that you can create and run at a much lower price than normal instances; however, Compute Engine might stop, or preempt, these instances if it requires access to those resources for other tasks.
710:27 As preemptible instances are excess Compute Engine capacity, their availability varies with usage.
710:34 Now, generally Compute Engine avoids preempting instances, but Compute Engine does not use an instance's CPU usage or other behavior to determine whether or not to preempt it.
710:48 Now, a crucial characteristic to know about preemptible VMs is that Compute Engine always stops them after they run for 24 hours, and this is something to be aware of for the exam.
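Creating a preemptible VM is just a flag at instance creation time; a sketch with illustrative names:

```shell
# Preemptible VMs cost up to ~80% less, but can be reclaimed at any time
# and are always stopped after 24 hours.
gcloud compute instances create batch-worker-1 \
  --zone=us-east1-b \
  --machine-type=n1-standard-4 \
  --preemptible
```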
711:01 Preemptible instances are finite Compute Engine resources, so they might not always be available. And if you happen to accidentally spin up a preemptible VM and you want to shut it down, there is no charge if it's been running for less than 10 minutes.
711:16 Now, another thing to note is that preemptible instances can't live migrate to a regular VM instance or be set to automatically restart when there is a maintenance event.
711:27 Due to these limitations, preemptible instances are not covered by any service level agreement, and when it comes to the Google Cloud Free Tier, the free credits for Compute Engine do not apply to preemptible instances.
711:41 So you're probably asking: when is a great time to use preemptible VMs? Well, if your apps are fault tolerant and can withstand possible instance preemptions, then preemptible instances can reduce your Compute Engine costs significantly.
711:59 For example, batch processing jobs can run on preemptible instances; if some of those instances stop during processing, the job
712:07 instances stop during processing the job slows down but does not completely stop
712:10 slows down but does not completely stop preemptable instances create your batch
712:13 preemptable instances create your batch processing tasks without placing any
712:16 processing tasks without placing any additional workload on your existing
712:18 additional workload on your existing instances and without requiring for you
712:21 instances and without requiring for you to pay full price for additional normal
712:24 to pay full price for additional normal instances and since containers are
712:26 instances and since containers are naturally stateless and fault tolerant
712:29 naturally stateless and fault tolerant this makes containers an amazing fit for
712:32 this makes containers an amazing fit for preemptable vms so running preemptable
712:34 preemptable vms so running preemptable vms for google kubernetes engine is
712:38 vms for google kubernetes engine is another fantastic use case now it's
712:40 another fantastic use case now it's really critical that you have an
712:42 really critical that you have an understanding for each different
712:44 understanding for each different discount type and when is a good time to
712:46 discount type and when is a good time to use each as you may be presented
712:49 use each as you may be presented different cost-effective solutions in
712:50 different cost-effective solutions in the exam and understanding these
712:52 the exam and understanding these discount types will prepare you to
712:55 discount types will prepare you to answer them understanding the theory
712:57 answer them understanding the theory behind this resource-based pricing model
713:00 behind this resource-based pricing model all the available discount types along
713:02 all the available discount types along with the types of workloads that are
713:04 with the types of workloads that are good for each will guarantee that you
713:07 good for each will guarantee that you will become familiar with what types of
713:09 will become familiar with what types of questions are being asked in the exam
713:12 questions are being asked in the exam and will also make you a better cloud
713:14 and will also make you a better cloud engineer as you will be able to spot
713:17 engineer as you will be able to spot where you can save money and be able to
713:19 where you can save money and be able to make the appropriate changes and so
713:22 make the appropriate changes and so that's pretty much all i wanted to cover
713:24 that's pretty much all i wanted to cover when it comes to compute engine billing
713:26 when it comes to compute engine billing and its discount types so you can now
713:28 and its discount types so you can now mark this lesson as complete and let's
713:30 mark this lesson as complete and let's move on to the next one
713:39 welcome back in this lesson i'm going to
713:41 be covering the fundamentals as they
713:44 pertain to storage you need to know these
713:46 concepts in order to fully
713:48 understand the different google cloud
713:50 storage options that i will be diving
713:54 into later as well the exam expects that
713:55 you know the different types of storage
713:58 that are available for all the various
713:59 services
714:01 and so before i get into the different
714:03 types of storage i wanted to cover the
714:06 underlying theory behind it so with that
714:09 being said let's dive in
714:10 so i wanted to start off by going
714:13 through the three types of storage and
714:16 how data is presented to a user or to
714:19 the server there is block storage file
714:22 storage and object storage these types
714:24 of storage tie into the
714:27 services that are available in google
714:29 cloud and they offer different options
714:31 for different types of workloads and i
714:33 will be going over each of these in a
714:36 bit of depth and so the first one i
714:38 wanted to touch on is block storage
714:41 now block storage is sometimes referred
714:44 to as block level storage and is a
714:46 technology that is used to store data
714:49 files on storage systems or cloud-based
714:52 storage environments block storage is
714:55 the fastest available storage type and
714:58 it is also efficient and reliable with
715:01 block storage files are split into
715:03 evenly sized blocks of data each with
715:06 its own unique identifier it is
715:08 presented to the operating system as
715:11 structureless raw data in the form of a
715:14 logical volume or a hard drive and the
715:17 operating system structures it with a
715:22 file system like ext3 or ext4 on linux
715:25 and ntfs on windows it would then mount
715:28 this volume or drive as the root volume
715:32 in linux or a c or d drive in windows
715:34 block storage is usually delivered on
715:36 physical media in the case of google
715:39 cloud it is delivered as either spinning
715:43 hard drives or solid state drives so in
715:45 google cloud you're presented with block
715:47 storage that consists of either
715:51 persistent disks or local ssds
715:54 which can both be mountable and bootable
715:56 block storage volumes can then be used
715:59 as your boot volumes for compute
716:01 instances in google cloud
716:04 installed with your operating system of
716:06 choice and structured so that your
716:10 operating system database or application
716:10 will then be able to consume it
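to make that structuring step concrete, here is a rough sketch of how a blank attached disk would be given a file system and mounted on a linux instance so an application can consume it; the device name /dev/sdb and the mount point are assumptions and will differ on your machine:

```shell
# format the raw block device with the ext4 file system
# (this is the operating system imposing structure on structureless raw data)
sudo mkfs.ext4 -m 0 /dev/sdb

# mount the now-structured volume so applications can read and write files
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data
```

these commands require root and a real attached device, so treat them as an illustration of the theory rather than something to run as-is.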
716:12 moving on to the second type of storage
716:15 is file storage
716:17 now file storage is also referred to as
716:20 file level or file based storage and is
716:23 normally storage that is presented to
716:26 users and applications as a traditional
716:29 network file system in other words the
716:32 user or application receives data
716:35 through directory trees folders and
716:38 files and file storage allows you to work
716:41 with data in the same way it functions
716:44 similarly to a local hard drive however a structure has
716:47 already been applied and cannot be
716:50 adjusted after the fact this type of
716:52 storage only has the capability of
716:55 being mountable but not bootable you
716:58 cannot install an operating system on
717:00 file storage as i said before the
717:03 structure has already been put in place
717:05 for you and is ready for you or your
717:08 application to consume due to this
717:10 structure the service that is serving
717:13 the file system has some underlying
717:15 software that can handle access rights
717:18 file sharing file locking and other
717:21 controls related to file storage in
717:24 google cloud the service that serves
717:26 this type of storage is known as cloud
717:29 filestore and is usually presented over
717:31 the network to users in your vpc network
717:35 using the nfs protocol or in this case
717:39 nfs version 3 but i'll be diving into
717:42 that a little bit later and the last
717:44 storage type that i wanted to cover is
717:46 object storage
717:48 now object storage also referred to as
717:51 object-based storage
717:53 is a general term that refers to the way
717:55 in which we organize and work with units
717:58 of storage called objects this is a
718:01 storage type that is a flat collection
718:04 of unstructured data this type of
718:06 storage holds no structure unlike the
718:09 other two types of storage and is made
718:11 up of three characteristics the first
718:14 one is the data itself and this could be
718:17 anything from movies songs and even
718:20 photos of men in fancy bow ties the data
718:23 could also be binary data as well the
718:26 second characteristic is the metadata
718:29 and this is usually any
718:31 contextual information about what the
718:34 data is or anything that is relevant to
718:37 the data and the third characteristic is
718:40 a globally unique identifier this
718:43 way it's possible to find the data
718:45 without having to know the physical
718:47 location of the data and this is what
718:49 allows object storage to be infinitely
718:52 scalable as it doesn't matter where the
718:55 object is stored this type of storage
718:58 can be found in google cloud and is
719:00 known as cloud storage cloud storage is
719:03 flat storage with a logical container
719:06 called a bucket that you put objects
719:08 into now although this type of storage
719:11 is not bootable using an open source
719:13 tool called fuse this storage type can
719:17 be mounted in google cloud and i will be
719:19 covering that a little bit later in the
719:22 cloud storage lesson but in most cases
719:25 object storage is designed as a type of
719:27 storage that is not bootable or
719:30 mountable and because of the
719:32 characteristics of this storage
719:34 it allows object storage again to be
719:37 infinitely scalable and so these are the
719:40 three main types of storage that you
719:42 will need to know and understand as each
719:46 has its use cases so if you're looking
719:48 for high performance storage you will
719:50 always look to block storage to satisfy
719:53 your needs if you're looking to share
719:55 files across multiple systems or have
719:58 multiple applications
720:00 that need access to the same files and
720:02 directories then file storage might be
720:05 your best bet if you're looking to store
720:07 terabytes of pictures for a web
720:09 application and you don't want to worry
720:12 about scaling object storage will allow
720:14 you to read and write an infinite amount
720:17 of pictures that will meet your
720:19 requirements so now that we've covered
720:21 these storage types let's take a few
720:24 moments to discuss storage performance
720:27 terms now when discussing storage
720:29 performance there are some key terms to
720:31 understand that when used together
720:34 define the performance of your storage
720:37 first there is io which stands for input
720:40 output
720:41 and is a single read or write request which
720:44 can be measured by its block size and this
720:46 block size can vary anywhere from one
720:49 kilobyte to four megabytes and beyond
720:52 depending on your workload now queue depth
720:55 when it comes to storage is the number
720:58 of pending input output requests
721:01 waiting to be performed on a disk io
721:03 requests become queued when reads or
721:07 writes are requested faster than they
721:09 can be processed by the disk when io
721:12 requests are queued the total amount of
721:14 time it takes to read or write data to
721:17 disk becomes significantly higher this
721:20 is where performance degradation can
721:22 occur and queue depth must be adjusted
721:25 accordingly now the next term is a
721:27 common touch point when it comes to
721:29 discussing storage performance on gcp
721:32 and on the exam which is iops and this
721:35 is a metric that stands for input output
721:38 operations per second this value
721:41 indicates how many different input or
721:44 output operations a device or group of
721:47 devices can perform in one second a higher
721:50 iops value signifies the
721:53 capability of executing more operations
721:57 per second and again this is a common
721:59 touch point that i will be diving into a
722:01 little bit later now next up is
722:04 throughput and this is the speed at
722:06 which data is transferred in a
722:08 second and is most commonly measured in
722:11 megabytes per second this is going to be
722:14 another common topic that comes up
722:16 frequently when discussing storage on
722:18 gcp as well
722:20 latency is the measurement of delay
722:23 between the time data is requested and when
722:27 the data starts being returned and is
722:29 measured in milliseconds so the time
722:32 each io request takes to complete
722:35 results in your average latency
722:37 and the last two terms i wanted to bring
722:39 up are sequential and random access
722:42 sequential access would be a large single file
722:46 like a video and random access would be
722:48 loading an application or an operating
722:51 system so lots of little files that are
722:53 all over the place it's obvious that
722:55 accessing data randomly is much slower
722:59 and less efficient than accessing it
723:01 sequentially and this can also affect
723:03 performance now why i bring up all these
723:06 terms is not about calculating the
723:08 average throughput but to give you a
723:10 holistic view of storage performance
723:13 as all these characteristics play a part
723:16 in defining the performance of your
723:19 storage
723:20 there is not one specific characteristic
723:22 that is responsible for disk performance
723:25 but all have a role in achieving the
723:28 highest performance possible
723:30 for your selected storage now i know
723:32 this is a lot of theory to take in but
723:34 this will all start to make more sense
723:37 when we dive into other parts of the
723:39 course where we will discuss disk
723:41 performance with all these
723:43 characteristics as it relates to compute
723:46 engine and other services that use
723:49 storage it is crucial to know the
723:51 storage types as well as the performance
723:53 characteristics as it will bring clarity
723:56 to questions in the exam and also give
723:59 you a better sense of how to increase
724:02 your storage performance in your work
724:04 environment and so that's pretty much
724:06 all i wanted to cover when it comes to
724:08 storage types and storage performance as
724:11 it pertains to storage as a whole so you
724:14 can now mark this lesson as complete and
724:16 let's move on to the next one
724:18 [Music]
724:22 welcome back and in this lesson i'm
724:24 going to be covering persistent disks
724:27 and local ssds i'm going to be getting
724:29 into the details of the most commonly
724:32 used storage types for instances which
724:35 are both persistent disks and local ssds
724:39 this lesson will sift through all the
724:41 different types of persistent disks and
724:44 local ssds along with the performance of
724:47 each knowing what type of disk to use
724:50 for your instance and how to increase
724:53 disk performance shows up on the exam
724:56 and so i want to make sure to cover it
724:58 in detail and leave no stone unturned so
725:01 with that being said let's dive in now
725:04 persistent disks and local ssds are the
725:08 two types of block storage
725:10 devices
725:11 available in google cloud and the
725:14 determining factor of what you will use
725:16 for your particular scenario will depend
725:19 on your use case and the specific
725:22 characteristics that you require from
725:24 each storage medium now by default each
725:27 compute engine instance has a single
725:30 boot persistent disk that contains the
725:33 operating system when you require
725:35 additional storage space you can add one
725:38 or more additional persistent disks or
725:41 local ssds to your instance and i will
725:44 be going through these storage options
725:47 along with their characteristics now as
725:49 you can see here persistent disks and
725:52 local ssds come in a slew of different
725:55 types as well persistent disks
725:58 are available in both zonal and regional
726:01 options so starting off with persistent
726:04 disks you have three different types you
726:06 can choose from as well you have the
726:09 flexibility of choosing from two
726:11 different geographic options when it
726:13 comes to the redundancy of your
726:15 persistent disks and i will be covering
726:17 the zonal and regional options in detail
726:20 in just a bit now persistent disks are
726:23 durable network storage devices
726:26 that your instances can access like
726:28 physical disks in a computer so these
726:31 are not physically attached disks but
726:34 network disks that are connected over
726:36 google's internal network persistent
726:39 disks are independent of your instance
726:42 and can persist after your instance has
726:44 been terminated and this can be done by
726:46 turning on this flag upon creation you
726:49 can even detach your disk and move it to
726:52 other instances when you need to scaling
726:54 persistent disks can be done
726:56 automatically and on the fly by using
727:00 the disk resize feature and this gives
727:02 you the flexibility to resize your
727:05 current persistent disks with no
727:07 downtime and even add additional disks
727:10 to your instance for additional
727:12 performance and storage persistent disks
727:15 are also encrypted by default and google
727:18 also gives you the option of using your
727:21 own custom keys each persistent disk can
727:24 be up to 64 terabytes in size and most
727:28 instances can have up to 128 persistent
727:32 disks and up to 257 terabytes of total
727:36 persistent disk space attached and just
727:39 as a note shared core machine types are
727:41 limited to 16 persistent disks and 3
727:45 terabytes of total persistent disk space
727:48 terabytes of total persistent disk space and so now that i've gone through the
727:49 and so now that i've gone through the details of persistent disks i wanted to
727:52 details of persistent disks i wanted to dive into the two geographic options
727:55 dive into the two geographic options that's available for persistent disks
727:58 that's available for persistent disks first starting with zonal now zonal
728:01 first starting with zonal now zonal persistent disks are disks that are
728:03 persistent disks are disks that are available in one zone in one region
728:06 available in one zone in one region these disks are the most commonly used
728:08 these disks are the most commonly used persistent disks for general day-to-day
728:11 persistent disks for general day-to-day usage and used for those whose workloads
728:15 usage and used for those whose workloads are not sensitive to specific zone
728:17 are not sensitive to specific zone outages they are redundant within the
728:19 outages they are redundant within the zone you've created them in but cannot
728:22 zone you've created them in but cannot survive an outage of that zone and may
728:24 survive an outage of that zone and may be subjected to data loss if that
728:27 be subjected to data loss if that specific zone is affected and this is
728:29 specific zone is affected and this is where snapshots should be a part of your
728:32 where snapshots should be a part of your high availability strategy when using
728:35 high availability strategy when using zonal persistent disks snapshots are
728:38 zonal persistent disks snapshots are incremental and can be taken even if you
728:41 incremental and can be taken even if you snapshot disks that are attached to
728:43 snapshot disks that are attached to running instances and i'll be going into
728:45 running instances and i'll be going into detail about snapshots in a later lesson
728:49 detail about snapshots in a later lesson zonal persistent disks can also be used
728:52 zonal persistent disks can also be used with any machine type including
728:54 with any machine type including pre-defined shared core and custom
728:57 pre-defined shared core and custom machine types now when it comes to
728:59 machine types now when it comes to regional persistent disks they have
729:02 regional persistent disks they have storage qualities that are similar to
729:04 storage qualities that are similar to zonal persistent disks however regional
729:07 zonal persistent disks however regional persistent disks provide durable storage
729:10 persistent disks provide durable storage and replication of data between two
729:14 and replication of data between two zones in the same region if you are
729:16 zones in the same region if you are designing systems that require high
729:18 designing systems that require high availability on compute engine you
729:21 availability on compute engine you should use regional persistent disks
729:24 should use regional persistent disks combined with snapshots for durability
729:27 combined with snapshots for durability regional persistent disks are also
729:29 regional persistent disks are also designed to work with regional managed
729:31 designed to work with regional managed instance groups in the unlikely event of
729:34 instance groups in the unlikely event of a zonal outage you can usually fail over
729:37 a zonal outage you can usually fail over your workload running on regional
729:39 your workload running on regional persistent disks to another zone by
729:42 persistent disks to another zone by simply using the force attached flag
729:45 simply using the force attached flag regional persistent disks are slower
729:48 regional persistent disks are slower than zonal persistent disks and should
729:50 than zonal persistent disks and should be taken into consideration when write
729:53 be taken into consideration when write performance is less critical than data
729:56 performance is less critical than data redundancy across multiple zones now
729:59 redundancy across multiple zones now noting a couple of caveats here when it
730:01 noting a couple of caveats here when it comes to disk limits regional persistent
730:04 comes to disk limits regional persistent disks are similar to zonal persistent
730:07 disks are similar to zonal persistent disks however regional standard
730:09 disks however regional standard persistent disks have a 200 gigabyte
730:12 persistent disks have a 200 gigabyte size minimum and may be a major factor
730:15 size minimum and may be a major factor when it comes to cost so please be aware
730:18 when it comes to cost so please be aware as well you can't use regional
730:20 as well you can't use regional persistent disks with memory optimized
730:23 persistent disks with memory optimized machine types or compute optimized
730:26 machine types or compute optimized machine types now these two geographic
730:29 machine types now these two geographic options are available for all three
730:32 options are available for all three persistent disk types whose
730:34 persistent disk types whose characteristics i will dive into now
730:37 characteristics i will dive into now starting off with the standard
730:38 starting off with the standard persistent disk type also known in
730:41 persistent disk type also known in google cloud as pd standard now these
730:44 google cloud as pd standard now these persistent disks are backed by standard
730:47 persistent disks are backed by standard hard disk drives and these are your
730:49 hard disk drives and these are your standard spinning hard disk drives and
730:52 standard spinning hard disk drives and allows google cloud to give a cost
730:54 allows google cloud to give a cost effective solution for your specific
730:56 effective solution for your specific needs standard persistent disks are
730:59 needs standard persistent disks are great for large data processing
731:01 great for large data processing workloads that primarily use sequential
731:04 workloads that primarily use sequential ios now as explained earlier sequential
731:08 ios now as explained earlier sequential access would be accessing larger files
731:11 access would be accessing larger files and would require less work by the hard
731:13 and would require less work by the hard drive thus decreasing latency as there
731:16 drive thus decreasing latency as there are physical moving parts in this hard
731:18 are physical moving parts in this hard drive this would allow the disc to do
731:21 drive this would allow the disc to do the least amount of work as possible and
731:24 the least amount of work as possible and therefore making it the most efficient
731:26 therefore making it the most efficient as possible and therefore sequential ios
731:29 as possible and therefore sequential ios are best suited for this type of
731:31 are best suited for this type of persistent disk and again this is the
731:34 persistent disk and again this is the lowest price persistent disks out of all
731:37 lowest price persistent disks out of all the persistent disk types now stepping
731:39 the persistent disk types now stepping into the performance of standard
731:41 into the performance of standard persistent disks for just a second
731:44 persistent disks for just a second please remember that iops and throughput
731:46 please remember that iops and throughput performance depends on disk size
731:49 performance depends on disk size instance vcpu count and i o block size
731:52 instance vcpu count and i o block size among other factors and so this table
731:55 among other factors and so this table here along with the subsequent tables
731:57 here along with the subsequent tables you will see later are average speeds
732:00 you will see later are average speeds that google has deemed optimum for these
732:03 that google has deemed optimum for these specific disk types they cover the
732:05 specific disk types they cover the maximum sustained iops as well as the
732:08 maximum sustained iops as well as the maximum sustained throughput along with
732:10 maximum sustained throughput along with the granular breakdown of each here you
732:13 the granular breakdown of each here you can see the differences between both the
732:16 can see the differences between both the zonal and regional standard pd and as
732:19 zonal and regional standard pd and as you can see here in the table the zonal
732:22 you can see here in the table the zonal standard pd and the regional standard pd
732:25 standard pd and the regional standard pd are pretty much the same when it comes
732:27 are pretty much the same when it comes to most of these metrics but when you
732:29 to most of these metrics but when you look closely at the read iops per
732:31 look closely at the read iops per instance this is where they differ where
732:34 instance this is where they differ where the zonal standard pd has a higher read
732:37 the zonal standard pd has a higher read iops per instance than the regional
732:40 iops per instance than the regional standard pd and this is because the
732:42 standard pd and this is because the regional standard pd is accessing two
732:45 regional standard pd is accessing two different disks in two separate zones
732:48 different disks in two separate zones and so the latency will be higher the
732:50 and so the latency will be higher the same thing goes for right throughput per
732:52 same thing goes for right throughput per instance and so this would be a decision
732:55 instance and so this would be a decision between high availability versus speed
732:58 between high availability versus speed moving on to the next type of persistent
733:00 moving on to the next type of persistent disk is the balanced persistent disk in
733:03 disk is the balanced persistent disk in google cloud known as pd balance this
733:06 google cloud known as pd balance this disk type is the alternative to the ssd
733:09 disk type is the alternative to the ssd persistent disks that balance both
733:12 persistent disks that balance both performance and cost as this disk type
733:15 performance and cost as this disk type has the same maximum iops as the ssd
733:18 has the same maximum iops as the ssd persistent disk type but holds a lower
733:21 persistent disk type but holds a lower iops per gigabyte and so this disk is
733:24 iops per gigabyte and so this disk is designed for general purpose use the
733:26 designed for general purpose use the price for this disk also falls in
733:29 price for this disk also falls in between the standard and the ssd
733:32 between the standard and the ssd persistent disks so this is basically
733:34 persistent disks so this is basically your middle of the road disk when you're
733:36 your middle of the road disk when you're trying to decide between price and speed
733:40 trying to decide between price and speed moving straight into performance i put
733:42 moving straight into performance i put the standard pd metric here so that you
733:45 the standard pd metric here so that you can see a side-by-side comparison
733:48 can see a side-by-side comparison between the balance pd and the standard
733:50 between the balance pd and the standard pd and as you can see here when it comes
733:53 pd and as you can see here when it comes to the metrics under the maximum
733:55 to the metrics under the maximum sustained iops the balance pd is
733:58 sustained iops the balance pd is significantly higher than the standard
734:01 significantly higher than the standard pd in both the zonal and regional
734:04 pd in both the zonal and regional options as well looking at the maximum
734:06 options as well looking at the maximum sustained throughput the read write
734:08 sustained throughput the read write throughput per gigabyte is a little over
734:11 throughput per gigabyte is a little over two times faster and the right
734:13 two times faster and the right throughput per instance is three times
734:16 throughput per instance is three times faster so quite a bit of jump from the
734:18 faster so quite a bit of jump from the standard pd to the balance pd and moving
734:21 Moving on to the last persistent disk type: the SSD persistent disk, known in Google Cloud as pd-ssd. These are the fastest persistent disks available, and they're great for enterprise applications and high-performance databases that demand lower latency and more IOPS, so this would be great for transactional databases or applications that require demanding, near-real-time performance. PD-SSDs have single-digit-millisecond latency, and because of this they come at a higher cost, making this the highest-priced persistent disk. Moving on to the performance of this persistent disk: this disk type is five times faster when it comes to read IOPS per gigabyte than the balanced PD, as well as five times faster for write IOPS per gigabyte. The table here on the left shows the performance for the pd-ssd, and the table on the right shows the performance of both the standard PD and the balanced PD, so here you can see the difference moving from the standard PD over to the SSD PD. The read/write throughput per instance stays the same from the standard PD all the way up to the SSD PD, but where the SSD outperforms all the others is the read/write throughput per gigabyte: it's one and a half times faster than the balanced PD and four times faster than the standard PD. And again, you will also notice a drop in performance from the zonal option to the regional option.
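The three disk types map directly to the --type flag when creating a disk from the command line. A hedged sketch, with placeholder names, sizes, and zone:

```shell
# Lowest cost; suits large sequential-I/O workloads (spinning HDD).
gcloud compute disks create data-disk-std \
    --type=pd-standard --size=500GB --zone=us-east1-b

# Middle of the road between price and speed.
gcloud compute disks create data-disk-bal \
    --type=pd-balanced --size=100GB --zone=us-east1-b

# Fastest PD: low latency, high IOPS (e.g. transactional databases).
gcloud compute disks create data-disk-ssd \
    --type=pd-ssd --size=100GB --zone=us-east1-b
```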
736:00 And so this is the end of part one of this lesson, as it started to get a little bit long. Whenever you're ready, you can join me in part two, where I will be starting immediately from the end of part one so you can complete this video, and I will see you in the next.
[Music]
736:19 Welcome back. This is part two of the persistent disks and local SSDs lesson, and we will be starting exactly where we left off in part one. So with that being said, let's dive in.
736:32 said let's dive in and so now that i've covered all the persistent disk types i
736:34 covered all the persistent disk types i wanted to move into discussing the
736:36 wanted to move into discussing the characteristics of the local ssd local
736:39 characteristics of the local ssd local ssds are physically attached to the
736:42 ssds are physically attached to the server that hosts your vm instance local
736:45 server that hosts your vm instance local ssds have higher throughput and lower
736:48 ssds have higher throughput and lower latency than any of the available
736:51 latency than any of the available persistent disk options and again this
736:53 persistent disk options and again this is because it's physically attached and
736:56 is because it's physically attached and the data doesn't have to travel over the
736:58 the data doesn't have to travel over the network now the crucial thing to know
737:00 network now the crucial thing to know about local ssds is that the data you
737:03 about local ssds is that the data you store on a local ssd persists only until
737:07 store on a local ssd persists only until the instance is stopped or deleted once
737:10 the instance is stopped or deleted once the instance is stopped or deleted your
737:13 the instance is stopped or deleted your data will be gone and there is no chance
737:15 data will be gone and there is no chance of getting it back now each local ssd is
737:19 of getting it back now each local ssd is 375 gigabytes in size but you can attach
737:22 375 gigabytes in size but you can attach a maximum of 24 local ssd partitions for
737:27 a maximum of 24 local ssd partitions for a total of 9 terabytes per instance
737:30 a total of 9 terabytes per instance local ssds are designed to offer very
737:34 local ssds are designed to offer very high iops and very low latency and this
737:37 high iops and very low latency and this is great for when you need a fast
737:39 is great for when you need a fast scratch disk or a cache and you don't
737:43 scratch disk or a cache and you don't want to use instance memory local ssds
737:46 want to use instance memory local ssds are also available in two flavors scuzzy
737:49 are also available in two flavors scuzzy and mvme now for those of you who are
737:52 and mvme now for those of you who are unaware
737:53 unaware scuzzy is an older protocol and made
737:56 scuzzy is an older protocol and made specifically for hard drives it also
737:58 specifically for hard drives it also holds the limitation of having one queue
738:02 holds the limitation of having one queue for commands nvme on the other hand also
738:05 for commands nvme on the other hand also known as non-volatile memory express is
738:09 known as non-volatile memory express is a newer protocol and is designed for the
738:12 a newer protocol and is designed for the specific use of flash memory and
738:15 specific use of flash memory and designed to have up to 64 000 qs as well
738:19 designed to have up to 64 000 qs as well each of those queues in turn can have up
738:22 each of those queues in turn can have up to 64 000 commands running at the same
738:26 to 64 000 commands running at the same time and thus making nvme infinitely
738:29 time and thus making nvme infinitely faster now although nvme comes with
738:32 faster now although nvme comes with these incredible speeds it does come at
738:35 these incredible speeds it does come at a cost and so when it comes to the
738:37 a cost and so when it comes to the caveats of local ssd although compute
738:40 caveats of local ssd although compute engine automatically encrypts your data
738:42 engine automatically encrypts your data when it's written to local ssd storage
738:45 when it's written to local ssd storage space you can't use customer supplied
738:47 space you can't use customer supplied encryption keys with local ssds as well
738:51 encryption keys with local ssds as well local ssds are only available for the n1
738:55 local ssds are only available for the n1 n2 and compute optimized machine types
738:58 n2 and compute optimized machine types now moving on to the performance of
739:01 now moving on to the performance of local ssds throughput is the same
739:03 local ssds throughput is the same between scuzzy and nvme but the read
739:07 between scuzzy and nvme but the read write iops per instance is where nvme
739:10 write iops per instance is where nvme comes out on top and as you can see here
739:13 comes out on top and as you can see here the read iops per instance is a whopping
739:16 the read iops per instance is a whopping two million four hundred thousand read
739:18 two million four hundred thousand read iops per instance as well the right iops
739:21 iops per instance as well the right iops per instance is 1.2 million over the 800
739:25 per instance is 1.2 million over the 800 000 for local ssd now before i end this
739:28 000 for local ssd now before i end this lesson i wanted to cover a few points on
739:31 lesson i wanted to cover a few points on performance scaling
739:32 performance scaling as it pertains to block storage on
739:35 as it pertains to block storage on compute engine now persistent disk
739:38 compute engine now persistent disk performance scales with the size of the
739:40 performance scales with the size of the disk and with the number of vcpus on
739:43 disk and with the number of vcpus on your vm instance persistent disk
739:46 your vm instance persistent disk performance scales linearly until it
739:49 performance scales linearly until it reaches either the limits of the volume
739:52 reaches either the limits of the volume or the limits of each compute engine
739:54 or the limits of each compute engine instance whichever is lower now this may
739:57 instance whichever is lower now this may seem odd that the performance of your
739:59 seem odd that the performance of your disk scales with cpu count but you have
740:02 disk scales with cpu count but you have to remember persistent disks aren't
740:04 to remember persistent disks aren't physically attached to your vm they are
740:07 physically attached to your vm they are independently located as such i o on a
740:10 independently located as such i o on a pd is a network operation and thus it
740:14 pd is a network operation and thus it takes cpu to do i o which means that
740:17 takes cpu to do i o which means that smaller instances run out of cpu to
740:21 smaller instances run out of cpu to perform disk io at higher rates so in
740:24 perform disk io at higher rates so in order for you to get better performance
740:27 order for you to get better performance you can increase the iops for your disk
740:29 you can increase the iops for your disk by resizing them to their maximum
740:32 by resizing them to their maximum capacity but once that size has been
740:34 capacity but once that size has been reached you will have to increase the
740:36 reached you will have to increase the number of cpus on your instance in order
740:39 number of cpus on your instance in order to increase your disk performance a
740:42 to increase your disk performance a recommendation by google is that you
740:44 recommendation by google is that you have one available vcpu for every 2000
740:50 have one available vcpu for every 2000 to
740:51 to iops of expected traffic so to sum it up
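Since performance scales with disk size, resizing is often the first lever to pull. A hedged sketch; the disk name, zone, and size are placeholders:

```shell
# Grow a persistent disk to raise its IOPS/throughput ceiling.
# Disks can only be resized upward, never shrunk.
gcloud compute disks resize my-zonal-disk \
    --zone=us-east1-b \
    --size=500GB
```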
740:55 So to sum it up: performance scales until it reaches either the limits of the disk or the limits of the VM instance to which the disk is attached, and the VM instance limits are determined by the machine type and the number of vCPUs of the instance. Now, if you want to get more granular with regards to disk performance, I've included a few links in the lesson text that will give you some more insight, but for most general purposes, and for the exam, remember that persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. And so that's pretty much all I wanted to cover when it comes to persistent disks and local SSDs, so you can now mark this lesson as complete, and let's move on to the next one.
741:53 Welcome back. In this demo, I'm going to be covering how to manage and interact with your disks on Compute Engine. This demo is designed to give you both experience and understanding of working with persistent disks and how you would interact with them. We're going to start the demo off by creating an instance; we're then going to create a separate persistent disk and attach it to the instance; we're going to then interact with the disk and resize it, and afterwards we will delete it. And we're going to do all of this using both the console and the command line. So with that being said, let's dive in.
742:30 So here I am in the console. I'm logged in as tonybowties@gmail.com, and I am in project bowtie inc. The first thing we need to do to kick off this demo is to create an instance that we can attach our disk to, but first, I always like to make sure that I have a VPC to deploy my instance into, with its corresponding default firewall rules. So I'm going to head on over to the navigation menu and go down to VPC network, and as expected, my default VPC has been created. Just to make sure that I have all my necessary firewall rules, I'm going to drill down into the VPC and head on over to firewall rules. I'm going to click on firewall rules, and the necessary firewall rule that I need for SSH is created, so I can go ahead and create my instance.
743:23 create my instance so i'm going to go back up to the navigation menu and i'm
743:25 back up to the navigation menu and i'm going to go over to compute engine so
743:27 going to go over to compute engine so i'm going to go ahead and click on
743:28 i'm going to go ahead and click on create and i'm going to name this
743:30 create and i'm going to name this instance bowtie dash instance and for
743:34 instance bowtie dash instance and for the sake of this demo i'll add in a
743:36 the sake of this demo i'll add in a label here the key is going to be
743:38 label here the key is going to be environment and the value will be
743:40 environment and the value will be testing i'm going to go down to the
743:42 testing i'm going to go down to the bottom click on save with regards to the
743:44 bottom click on save with regards to the region i'm going to select us east 1 and
743:47 region i'm going to select us east 1 and i'm going to keep the zone as the
743:49 i'm going to keep the zone as the default for us east 1b and under machine
743:52 default for us east 1b and under machine type to keep things cost effective i'm
743:55 type to keep things cost effective i'm going to use an e2 micro shared core
743:57 going to use an e2 micro shared core machine and i'm going to scroll down to
744:00 machine and i'm going to scroll down to service account and under service
744:02 service account and under service account you want to select the set
744:03 account you want to select the set access for each api
744:05 access for each api you want to scroll down to compute
744:07 you want to scroll down to compute engine and here you want to select read
744:10 engine and here you want to select read write and this will give us the
744:11 write and this will give us the necessary permissions in order to
744:13 necessary permissions in order to interact with our disk that we will be
744:16 interact with our disk that we will be creating later so i'm going to scroll
744:17 creating later so i'm going to scroll down to the bottom here and i'm going to
744:19 down to the bottom here and i'm going to leave everything else set at its default
744:22 leave everything else set at its default and just before creating the instance
744:24 and just before creating the instance please do remember you can always click
744:26 please do remember you can always click on the command line link where you can
744:28 on the command line link where you can get the gcloud command to create this
744:30 get the gcloud command to create this instance through the command line i'm
744:33 instance through the command line i'm going to close this up and i'm going to
744:34 going to close this up and i'm going to simply click on create i'm just going to
744:37 simply click on create i'm just going to wait a few seconds here for my instance
744:39 wait a few seconds here for my instance to come up okay and my instance is up
744:42 to come up okay and my instance is up and so now what we want to do is we want
744:44 and so now what we want to do is we want to create our new disk so i'm going to
744:46 to create our new disk so i'm going to go over here to the left hand menu and
744:48 go over here to the left hand menu and i'm going to click on disks and as you
744:50 i'm going to click on disks and as you can see here the disk for the instance
744:53 can see here the disk for the instance that i had just created has 10 gigabytes
744:56 that i had just created has 10 gigabytes in us east 1b and we want to leave that
744:58 in us east 1b and we want to leave that alone and we want to create our new disk
745:01 alone and we want to create our new disk so i'm going to go up to the top here
745:02 so i'm going to go up to the top here and simply click on create disk
745:05 and simply click on create disk and so for the name of the disk i'm
745:06 and so for the name of the disk i'm going to call this disk new pd for
745:09 going to call this disk new pd for persistent disk and i'm going to give it
745:11 persistent disk and i'm going to give it the same description i'm going to keep
745:13 the same description i'm going to keep the type as standard persistent disk and
745:16 the type as standard persistent disk and for the region i want to select us east
745:19 for the region i want to select us east one i'm going to keep the zone as its
745:21 one i'm going to keep the zone as its default in us east 1b and as the disk is
745:24 default in us east 1b and as the disk is in us east 1b i'll be able to attach it
745:27 in us east 1b i'll be able to attach it to my instance and so just as a note
745:29 to my instance and so just as a note here there is a selection where you can
745:32 here there is a selection where you can replicate this disk within the region if
745:35 replicate this disk within the region if i click that off i've now changed this
745:37 i click that off i've now changed this from a zonal persistent disk to a
745:39 from a zonal persistent disk to a regional persistent disk and over here
745:42 regional persistent disk and over here in zones it'll give me the option to
745:44 in zones it'll give me the option to select any two zones that i prefer and
745:47 select any two zones that i prefer and so if you're looking at creating some
745:48 so if you're looking at creating some regional persistent disks these are the
745:51 regional persistent disks these are the steps you would need to take in order to
745:53 steps you would need to take in order to get it done in the console now in order
745:55 get it done in the console now in order to save on costs i'm going to keep this
745:57 to save on costs i'm going to keep this as a zonal persistent disk so i'm going
745:59 as a zonal persistent disk so i'm going to click on cancel i'm going to uncheck
746:01 to click on cancel i'm going to uncheck the option and make sure your region is
746:03 the option and make sure your region is still set at us east 1 and your zone is
746:06 still set at us east 1 and your zone is selected as us east 1b we're going to
746:09 selected as us east 1b we're going to leave the snapshot schedule alone and
746:11 leave the snapshot schedule alone and i'll be diving into snapshot schedules
746:14 i'll be diving into snapshot schedules in a later lesson i'm going to scroll
746:16 in a later lesson i'm going to scroll down here to source type i'm going to
746:17 down here to source type i'm going to keep it as blank disk and the size here
746:20 keep it as blank disk and the size here is set at 500 gigabytes and we want to
746:23 is set at 500 gigabytes and we want to set it to 100 gigabytes but before we do
746:26 set it to 100 gigabytes but before we do that i wanted to bring your attention to
746:28 that i wanted to bring your attention to the estimated performance here you can
746:30 the estimated performance here you can see the sustain random iops limits as
746:34 see the sustain random iops limits as well as the throughput limit and so
746:36 well as the throughput limit and so depending on the size of the disk that
746:37 depending on the size of the disk that you want to add these limits will change
746:40 you want to add these limits will change accordingly so if i change this to 100
746:43 accordingly so if i change this to 100 my sustained random iops limit on read
746:46 my sustained random iops limit on read went from 375 iops to 75 iops and so
746:51 went from 375 iops to 75 iops and so this is a great demonstration that the
746:53 this is a great demonstration that the larger your disc the better your
746:55 larger your disc the better your performance and so this is a great way
746:58 performance and so this is a great way to figure out on what your performance
747:00 to figure out on what your performance will be before you create your disk and
747:03 will be before you create your disk and i've also been prompted with a note here
747:05 i've also been prompted with a note here saying that because my disk is under 200
747:07 saying that because my disk is under 200 gigabytes that i will have reduced
747:10 gigabytes that i will have reduced performance and so for this demo that's
747:12 performance and so for this demo that's okay i'm going to keep my encryption as
747:14 okay i'm going to keep my encryption as the google manage key and under labels i
747:17 the google manage key and under labels i will add environment as the key and
747:20 will add environment as the key and value is testing
747:22 value is testing and so now that i've entered all my
747:23 and so now that i've entered all my options i'm going to simply click on
747:25 options i'm going to simply click on create
747:27 create and i'm going to give it a few seconds
747:29 and i'm going to give it a few seconds and my new disk should be created okay
747:32 and my new disk should be created okay and my new disk has been created and you
747:34 and my new disk has been created and you can easily create this disk through the
747:36 can easily create this disk through the command line and i will be supplying
747:38 command line and i will be supplying that in the lesson text i merely want to
747:41 that in the lesson text i merely want to go through the console setup so that you
747:43 go through the console setup so that you are aware of all the different options
747:45 are aware of all the different options and so now that i've created my disk and
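The console steps above map to a single gcloud invocation. This is a sketch using the demo's values (new-pd, 100 GB standard persistent disk, us-east1-b, the environment=testing label); it is built as a string and echoed so it can be reviewed, and you would run the gcloud line itself on a machine with the SDK installed and a project configured.

```shell
# Sketch: CLI equivalent of the console disk creation above.
# Values mirror the demo; echoed for review rather than executed here.
CREATE_CMD="gcloud compute disks create new-pd \
  --type=pd-standard --size=100GB --zone=us-east1-b \
  --labels=environment=testing"
echo "$CREATE_CMD"
```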
747:48 and so now that i've created my disk and
747:50 i've created my instance i want to now
747:53 log into my instance and attach this new
747:55 disk so i'm going to go back to vm
747:58 instances and here i want to ssh into
748:00 the bowtie-instance and i'm going to
748:02 give it a few seconds here to connect
748:04 and i'm going to zoom in for better
748:07 viewing i'm going to clear my screen
748:08 and so the first thing i want to do is i
748:11 want to list all my block devices that
748:13 are available to me on this instance and
748:17 the linux command for that is lsblk
748:19 and as you can see my boot disk has been
748:20 mounted
748:22 and is available to me and so now i want
748:24 to attach the new disk that we just
748:27 created and just as a note i could just as
748:29 easily have done this in the console but
748:31 i wanted to give you an idea of what it
748:33 would look like doing it from the
748:35 command line and so i'm going to paste
748:38 in the command to attach the disk which
748:41 is gcloud compute instances attach-disk
748:43 followed by the name of the instance which is
748:46 bowtie-instance along with the
748:48 --disk flag and the disk name which
748:51 is new-pd and the zone of the disk using
748:54 the --zone flag with us-east1-b so i'm
748:56 going to go ahead and hit enter
748:58 and no errors came up so i'm assuming
749:00 that it worked and so just to
749:03 double check i'm going to run the lsblk
749:06 command again and success as you can see
749:08 here my block device sdb has been
749:11 attached to my instance and is available
749:14 to me with the size of 100 gigabytes
749:14 to me with the size of 100 gigabytes and so now i want to look at the state that
749:16 so now i want to look at the state that this roblox device is in and so the
749:18 this roblox device is in and so the command for that will be sudo
749:21 command for that will be sudo file dash s followed by the path of the
749:23 file dash s followed by the path of the block device which is forward slash dev
749:26 block device which is forward slash dev forward slash sdb i'm going to hit on
749:29 forward slash sdb i'm going to hit on enter and as you can see it is showing
749:31 enter and as you can see it is showing data which means that it is just a raw
749:34 data which means that it is just a raw data device and so in order for me to
749:36 data device and so in order for me to interact with it i need to format the
749:38 interact with it i need to format the drive with a file system that the
749:40 drive with a file system that the operating system will be able to
749:42 operating system will be able to interact with and so the command to
749:44 interact with and so the command to format the drive would be sudo mkfs
749:48 format the drive would be sudo mkfs which is make file system i'm going to
749:50 which is make file system i'm going to use ext4 as the file system minus
749:53 use ext4 as the file system minus capital f along with the path of the new
749:55 capital f along with the path of the new disk so i'm going to hit on enter and no
749:58 disk so i'm going to hit on enter and no errors so i'm assuming that it was
750:00 errors so i'm assuming that it was successful so just to verify i'm going
750:02 successful so just to verify i'm going to run the sudo file minus s command and
750:06 to run the sudo file minus s command and as you can see here because the disk now
750:08 as you can see here because the disk now has a file system i've been given the
750:10 has a file system i've been given the information with regards to this disk
750:12 information with regards to this disk whereas before it was simply raw data
750:15 whereas before it was simply raw data and so now that we've created our disk
750:17 and so now that we've created our disk and we've formatted our disk to a file
750:20 and we've formatted our disk to a file system that the operating system is able
750:22 system that the operating system is able to read we need to now mount the disk
750:25 to read we need to now mount the disk and so in order to do that we need to
750:27 and so in order to do that we need to create a mount point so i'm going to
750:28 create a mount point so i'm going to first clear the screen and i'm going to
750:31 first clear the screen and i'm going to run the command sudo mkdir and the new
750:34 run the command sudo mkdir and the new mount point i'm going to call it slash
750:37 mount point i'm going to call it slash new pd i'm going to hit enter and now
750:39 new pd i'm going to hit enter and now i'm going to mount the disk and the
750:41 i'm going to mount the disk and the command for that is sudo mount the path
750:43 command for that is sudo mount the path for the block device which is forward
750:45 for the block device which is forward slash dev forward slash sdb and then the
750:48 slash dev forward slash sdb and then the mount point which is forward slash new
750:50 mount point which is forward slash new pd i'm going to hit enter no errors so
750:53 pd i'm going to hit enter no errors so i'm assuming that it had worked but just
750:55 i'm assuming that it had worked but just to verify i'm going to run the command
750:58 to verify i'm going to run the command lsblk
751:00 lsblk and success as you can see sdb has now
751:03 and success as you can see sdb has now been mounted as new pd and so now i can
751:07 been mounted as new pd and so now i can interact with this disk so the first
751:09 interact with this disk so the first thing i want to do is i want to change
751:11 thing i want to do is i want to change directories to this mount point i'm in
751:13 directories to this mount point i'm in now new pd i'm going to do an ls and so
751:16 now new pd i'm going to do an ls and so just as a note for those of you who are
751:18 just as a note for those of you who are wondering the lost and found directory
751:21 wondering the lost and found directory is found on each linux file system and
751:24 is found on each linux file system and this is designed to place orphaned or
751:26 this is designed to place orphaned or corrupted files or any corrupted bits of
751:29 corrupted files or any corrupted bits of data from the file system to be placed
751:32 data from the file system to be placed here and so it's not something that you
751:34 here and so it's not something that you would interact with but always a good to
751:36 would interact with but always a good to know so i'm going to now create a file
751:38 know so i'm going to now create a file in new pd so i'm going to run the
751:40 in new pd so i'm going to run the command sudo nano file a bow ties dot
751:43 command sudo nano file a bow ties dot text so file a bow ties is the file that
751:46 text so file a bow ties is the file that i'm going to create nano is my text
751:49 i'm going to create nano is my text editor and so i'm going to hit on enter
751:51 editor and so i'm going to hit on enter and so in this file i'm going to type in
751:54 and so in this file i'm going to type in bow ties are so classy
751:57 bow ties are so classy because after all they are i'm going to
751:59 because after all they are i'm going to hit ctrl o to save i'm going to hit
752:01 hit ctrl o to save i'm going to hit enter to verify it and ctrl x to exit so
752:05 enter to verify it and ctrl x to exit so if i do another ls i can see the file of
752:08 if i do another ls i can see the file of bow ties has been created also by
752:10 bow ties has been created also by running the command df minus k i'll be
752:13 running the command df minus k i'll be able to see the file system here as well
752:15 able to see the file system here as well and so this is the end of part one of
752:17 and so this is the end of part one of this demo it was getting a bit long so i
752:19 this demo it was getting a bit long so i decided to break it up this would be a
752:21 decided to break it up this would be a great opportunity for you to get up have
752:24 great opportunity for you to get up have a stretch get yourself a coffee or tea
752:27 a stretch get yourself a coffee or tea and whenever you're ready you can join
752:28 and whenever you're ready you can join me in the next one where part two will
752:31 me in the next one where part two will be starting immediately from the end of
752:33 be starting immediately from the end of part one
752:34 [Music]
752:38 welcome back this is part two of this
752:40 demo and we're gonna continue
752:42 immediately from the end of part one so
752:45 with that being said let's dive in and
752:47 so what i want to do now is i want to
752:50 reboot the instance in order to
752:52 demonstrate the mounting of this device
752:54 and i'm going to do that by using the
752:56 command sudo reboot it's going to
752:58 disconnect me i'm going to click on
753:00 close and i'm going to wait about a
753:01 minute for it to reboot okay and it's
753:03 been about a minute so i'm going to now
753:05 ssh into my instance
753:11 okay and here i am back again logged
753:13 into my instance i'm going to quickly
753:15 clear the screen and i'm going to run
753:18 the lsblk command now what i wanted to
753:20 demonstrate here is that although i
753:23 mounted the new device it did not stay
753:25 mounted through the reboot and this is
753:27 because there is a configuration file in
753:30 linux that points to which partitions
753:32 get mounted automatically upon startup
753:34 that i need to edit in order to make
753:36 sure that this device is mounted every
753:38 time the instance reboots and so in
753:41 order to do that i need to edit a file
753:44 called fstab and i'm going to have to
753:46 add the unique identifier for this
753:49 partition also known as the device sdb
753:50 and this will mount the partition
753:53 automatically every time there happens
753:55 to be a reboot so in order to do that
753:59 i'm going to run the command sudo blkid
754:01 and the path of the block device
754:04 /dev/sdb i'm going to
754:06 hit enter and here is the identifier
754:09 also known as the uuid that i need to
754:12 append to the fstab file so i'm going to
754:14 copy the uuid
754:16 and i'm going to use the command
754:18 sudo nano
754:19 /etc/fstab
754:21 and i'm going to hit enter and
754:24 here you will find the uuid for your
754:26 other partitions and so you're going to
754:28 be appending a line here right at the
754:29 end so i'm going to move my cursor down
754:33 here i'm going to type in uuid equals
754:35 and then the uuid that i had copied
754:37 earlier the mount point which is going
754:40 to be /newpd the type of
754:43 file system which is ext4 along with
754:45 defaults
754:47 comma nofail i'm going to hit ctrl o
754:50 to save hit enter to verify and ctrl
754:52 x to exit and so now i'm going to mount
754:55 this device by running the command sudo
754:57 mount -a and hit enter and this
754:59 command will mount all the partitions
755:02 that are available in the fstab file and
755:05 so when i run an lsblk
755:09 i can see here that my block device sdb is now mounted on /newpd
755:12 is now mounted on forward slash new pd now i know this may be a refresher for
755:14 now i know this may be a refresher for some but this is a perfect demonstration
755:16 some but this is a perfect demonstration of the tasks that need to be done when
755:19 of the tasks that need to be done when creating and attaching a new disk to an
755:22 creating and attaching a new disk to an instance and is a common task for many
755:25 instance and is a common task for many working on linux instances and working
755:27 working on linux instances and working in cloud this can definitely be scripted
755:30 in cloud this can definitely be scripted but i wanted to show you the steps that
755:32 but i wanted to show you the steps that need to be taken in order to get a new
755:34 need to be taken in order to get a new disk in a usable state okay so great we
755:38 disk in a usable state okay so great we have created a new disk we had attached
755:41 have created a new disk we had attached the disk created a file system and had
755:44 the disk created a file system and had mounted the disk along with editing the
755:46 mounted the disk along with editing the configuration file to make sure that the
755:48 configuration file to make sure that the device mounts whenever the instance
755:51 device mounts whenever the instance starts up so now that we've done all
755:53 starts up so now that we've done all that i wanted to demonstrate resizing
755:55 that i wanted to demonstrate resizing this disk from 100 gigabytes to 150
755:59 this disk from 100 gigabytes to 150 gigabytes and so just to show you where
756:01 gigabytes and so just to show you where it is in the console i'm going to
756:03 it is in the console i'm going to quickly go back to my console tab and so
756:05 quickly go back to my console tab and so here i'm going to go to the left hand
756:07 here i'm going to go to the left hand menu i'm going to click on disks i'm
756:09 menu i'm going to click on disks i'm going to drill down into new pd and at
756:11 going to drill down into new pd and at the top i'm going to click on edit and
756:13 the top i'm going to click on edit and so here i'm able to adjust the disk
756:15 so here i'm able to adjust the disk space size and simply click on save not
756:18 space size and simply click on save not much that i really need to do here but i
756:20 much that i really need to do here but i did want to show you how to do this in
756:22 did want to show you how to do this in the command line so i'm going to go back
756:24 the command line so i'm going to go back to the tab of my instance and i'm going
756:26 to the tab of my instance and i'm going to quickly clear the screen and i'm
756:29 to quickly clear the screen and i'm going to paste in the command gcloud
756:31 going to paste in the command gcloud compute disks
756:32 compute disks resize the name of the disk which is new
756:35 resize the name of the disk which is new pd and the new size in gigabytes using
756:38 pd and the new size in gigabytes using the dash dash size flag 150 which is the
756:42 the dash dash size flag 150 which is the new size of the disc along with the dash
756:45 new size of the disc along with the dash dash zone flag of us east 1b i'm going
756:48 dash zone flag of us east 1b i'm going to hit enter it's going to ask me if i
756:50 to hit enter it's going to ask me if i want to do this as this is not
756:52 want to do this as this is not reversible and please remember when you
756:54 reversible and please remember when you resize a disk you can only make it
756:56 resize a disk you can only make it bigger and never smaller so i'm going to
756:59 bigger and never smaller so i'm going to hit y to continue
757:01 hit y to continue and it took a few seconds there but it
757:03 and it took a few seconds there but it was successful so if i run a df minus k
757:06 was successful so if i run a df minus k you can see here that i only have 100
757:09 you can see here that i only have 100 gigabytes available to me and this is
757:11 gigabytes available to me and this is because i have to extend the file system
757:14 because i have to extend the file system on the disk so i've made the disk larger
757:16 on the disk so i've made the disk larger but i haven't allocated those raw blocks
757:18 but i haven't allocated those raw blocks to the file system so in order for the
757:20 to the file system so in order for the file system to see those unallocated
757:23 file system to see those unallocated blocks that's available to it i need to
757:25 blocks that's available to it i need to run another command so i'm going to
757:27 run another command so i'm going to quickly clear my screen again
757:29 quickly clear my screen again and i'm going to run the command sudo
757:31 and i'm going to run the command sudo resize to fs along with the block device
757:34 resize to fs along with the block device i'm going to hit enter and as you can
757:36 i'm going to hit enter and as you can see it was successful showing the old
757:38 see it was successful showing the old blocks as 13 and the new blocks as 19.
757:42 blocks as 13 and the new blocks as 19. so if i run a df minus k i can now see
757:45 so if i run a df minus k i can now see my 150 gigabytes that's available to me
757:48 my 150 gigabytes that's available to me and so just to demonstrate after
757:50 and so just to demonstrate after resizing the disk along with mounting
757:53 resizing the disk along with mounting and then remounting the disk that the
757:55 and then remounting the disk that the file that i've created still exists i'm
757:57 file that i've created still exists i'm going to run an ls minus al but first i
758:00 going to run an ls minus al but first i will need to
758:01 will need to change directories into new pd clear my
758:04 change directories into new pd clear my screen and run an ls and phyla bow ties
758:07 screen and run an ls and phyla bow ties is still there and so this is a great
758:09 is still there and so this is a great example demonstrating how the data on
758:12 example demonstrating how the data on persistent disks persist through the
758:15 persistent disks persist through the lifetime of a disk even when mounting
758:18 lifetime of a disk even when mounting unmounting rebooting and resizing and so
758:22 unmounting rebooting and resizing and so as you can see we've done a lot of work
758:24 as you can see we've done a lot of work here and so just as a recap where we've
758:26 here and so just as a recap where we've created a new disk we attached this disk
758:29 created a new disk we attached this disk to an instance we formatted the disk
758:31 to an instance we formatted the disk into an ext4 file system we've mounted
758:35 into an ext4 file system we've mounted this disk we've written a file to it
758:37 this disk we've written a file to it added its unique identifier to the
758:39 added its unique identifier to the configuration file so that it mounts on
758:42 configuration file so that it mounts on startup and then we've resized the disk
758:44 startup and then we've resized the disk along with extending the file system on
758:46 along with extending the file system on the disk and so this is the end of the
758:48 the disk and so this is the end of the demo and i wanted to congratulate you on
758:51 demo and i wanted to congratulate you on making it to the end and i hope this
758:53 making it to the end and i hope this demo has been extremely useful and again
758:56 demo has been extremely useful and again fantastic job on your part now before
758:58 fantastic job on your part now before you go i wanted to quickly walk through
759:01 you go i wanted to quickly walk through the steps of deleting all the resources
759:03 the steps of deleting all the resources you've created and so the first thing
759:05 you've created and so the first thing that i want to do is delete the disk
759:07 that i want to do is delete the disk that was created for this demo and so
759:10 that was created for this demo and so before i can delete the disk i'm going
759:11 before i can delete the disk i'm going to first detach the disk from the
759:14 to first detach the disk from the instance and the easiest way to do that
759:16 instance and the easiest way to do that is through the command line so i'm going
759:17 is through the command line so i'm going to quickly clear my screen and so i'm
759:19 to quickly clear my screen and so i'm going to show you how to detach the disk
759:22 going to show you how to detach the disk from the instance and so i'm going to
759:24 from the instance and so i'm going to paste in this command gcloud compute
759:26 paste in this command gcloud compute instances detach disk the instance name
759:30 instances detach disk the instance name which is bow tie dash instance along
759:32 which is bow tie dash instance along with the disc with the flag dash dash
759:34 with the disc with the flag dash dash disc the name of the disc which is new
759:36 disc the name of the disc which is new pd along with the zone i'm going to hit
759:38 pd along with the zone i'm going to hit enter
759:39 and it's been successfully detached and
759:42 so now that it's detached i can actually
759:44 delete the disk and so i'm going to head
759:46 on over back to the console and i'm
759:48 going to go ahead and delete the new-pd
759:50 disk i'm going to click on delete i'm
759:52 going to get a prompt asking me if i'm
759:54 sure yes i am if i go back to the main
759:57 menu for my disks this should just
759:59 take a moment and once it's deleted you
760:01 will no longer see it here and i'm going
760:03 to go back over to vm instances and i'm
760:06 going to delete this as well
760:11 and so there's no need to delete your
760:13 default vpc unless you'd like to
760:16 recreate it again but don't worry for
760:18 those who decide to keep it you will not
760:20 be charged for your vpc as we will be
760:23 using it in the next demo and so that's
760:25 pretty much all i wanted to cover when
760:27 it comes to managing disks with compute
760:29 engine so you can now mark this as
760:32 complete and let's move on to the next
760:33 one
760:37 [Music]
760:39 welcome back in this lesson i'll be
760:42 discussing persistent disk snapshots now
760:45 snapshots are a great way to back up data
760:48 from any running or stopped instances
760:50 and protect it from unexpected data loss
760:53 snapshots are also a great strategy for
760:56 use in a backup plan for any and all
760:58 instances no matter where they are
761:01 located and so as cloud engineers and
761:04 architects this is a great tool for
761:06 achieving the greatest uptime for your
761:09 instances so diving right into it
761:12 snapshots as i mentioned before are a
761:14 great way for both backing up and
761:17 restoring the data of your persistent
761:19 disks you can create snapshots from
761:21 disks even while they are attached to
761:24 running instances snapshots are global
761:27 resources so any snapshot is accessible
761:30 by any resource within the same project
761:33 you can also share snapshots across
761:36 projects as well snapshots also support
761:40 both zonal and regional persistent disks
761:42 snapshots are incremental and
761:44 automatically compressed so you can
761:48 create regular snapshots on a persistent disk
761:50 faster and at a much lower cost than if
761:53 you regularly created a full image of a
761:56 disk now when you create a snapshot you
761:58 have the option of choosing a storage
762:01 location snapshots are stored in cloud storage
762:04 and can be stored in either a
762:07 multi-regional location or a regional
762:09 cloud storage bucket a multi-regional
762:12 storage location provides higher
762:15 availability but will drive up costs
762:17 please be aware that the location of a
762:20 snapshot affects its availability and
762:24 can incur networking costs when creating the
762:26 snapshot or restoring it to a new disk
762:29 if you do not specify storage location for a snapshot google cloud uses the
762:31 for a snapshot google cloud uses the default location which stores your
762:34 default location which stores your snapshot in a cloud storage
762:36 snapshot in a cloud storage multi-regional location closest to the
762:39 multi-regional location closest to the region of your source disk if you store
762:41 region of your source disk if you store your snapshot in the same region as your
762:43 your snapshot in the same region as your source disk there is no network charge
762:46 source disk there is no network charge when you access that snapshot from the
762:48 when you access that snapshot from the same region if you access the snapshot
762:50 same region if you access the snapshot from a different region you will incur a
762:53 from a different region you will incur a network cost compute engine stores
762:55 network cost compute engine stores multiple copies of each snapshot across
762:58 multiple copies of each snapshot across multiple locations as well you cannot
763:01 multiple locations as well you cannot change the storage location of an
763:03 change the storage location of an existing snapshot once a snapshot has
763:06 existing snapshot once a snapshot has been taken it can be used to create a
763:08 been taken it can be used to create a new disk in any region and zone
763:11 new disk in any region and zone regardless of the storage location of
763:14 regardless of the storage location of the snapshot now as i explained earlier
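As a quick illustration of that storage-location choice, here is a hedged gcloud sketch; the disk and snapshot names are placeholders, not resources from the course:

```shell
# Snapshot a zonal disk and pin the snapshot to a regional
# Cloud Storage location (same region as the source disk, so
# restores accessed from that region incur no network charge).
gcloud compute disks snapshot my-disk \
  --zone=us-east1-b \
  --snapshot-names=my-snapshot \
  --storage-location=us-east1

# Omit --storage-location and Google Cloud falls back to the
# default: the multi-regional location closest to the source
# disk's region.
```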
763:17 Now, as I explained earlier, snapshots are incremental, and I wanted to take a moment to dive into that. When creating snapshots, the first successful snapshot of a persistent disk is a full snapshot that contains all the data on the persistent disk. The second snapshot only contains any new or modified data since the first snapshot; data that hasn't changed since snapshot 1 isn't included. Instead, snapshot 2 contains references to snapshot 1 for any unchanged data. As shown here, snapshot 3 contains any new or changed data since snapshot 2, but won't contain any unchanged data from snapshot 1 or 2; instead, snapshot 3 contains references to blocks in snapshot 1 and snapshot 2 for any unchanged data. This repeats for all subsequent snapshots of the persistent disk. Snapshots are always created based on the last successful snapshot taken.
764:24 And so now you're probably wondering what happens when you decide to delete a snapshot. Are they dependent on each other? Well, when you delete a snapshot, Compute Engine immediately marks the snapshot as deleted in the system. If the snapshot has no dependent snapshots, it is deleted outright. However, if the snapshot does have dependent snapshots, then there are some steps that happen behind the scenes. As shown here in this diagram, snapshot 2 is deleted. The next snapshot after the full snapshot no longer references the snapshot marked for deletion; in this example, snapshot 1 then becomes the reference for snapshot 3. Any data that is required for restoring other snapshots is moved into the next snapshot, increasing its size: shown here, blocks that were unique to snapshot 2 are moved to snapshot 3, and the size of snapshot 3 increases. Any data that is not required for restoring other snapshots is deleted; so in this case, blocks that are already in snapshot 3 are deleted from snapshot 2, and the total size of all snapshots is now lower. Because subsequent snapshots might require information stored in a previous snapshot, please be aware that deleting a snapshot does not necessarily delete all the data on the snapshot. If you're looking to make sure that your data has indeed been deleted from your snapshots, you should delete all snapshots. If your disk has a snapshot schedule, you must detach the snapshot schedule from the disk before you can delete the schedule; removing the snapshot schedule from the disk prevents further snapshot activity from occurring.
766:11 Now, touching on the topic of scheduled snapshots: by far the best way to back up your data on Compute Engine is to use scheduled snapshots. This way you will never have to worry about manually creating snapshots, or about using other tools to kick off those snapshots; you can simply use this built-in tool from Google, which is why snapshot schedules are considered best practice for backing up any Compute Engine persistent disks. Now, in order to create any snapshot schedule, you must create it in the same region where your persistent disk resides. There are two ways to create a snapshot schedule: the first is to create a snapshot schedule and then attach it to an existing persistent disk; the other is to create a new persistent disk with a snapshot schedule. You also have the option of setting up a snapshot retention policy that defines how long you want to keep your snapshots. So the options when creating snapshot schedules include both retention policies and source disk deletion rules. If you choose to set up a snapshot retention policy, you must do it as part of your snapshot schedule, and creating a snapshot schedule is also when you can set a source disk deletion rule; the source disk deletion rule controls what happens to your snapshots if the source disk is deleted. Now, a few caveats here on scheduled snapshots: a persistent disk can only have one snapshot schedule attached to it at a time. Also, you cannot delete a snapshot schedule while it is attached to a disk; you must detach the schedule from all disks first, then delete the schedule. And after you create a snapshot schedule, you cannot edit it; to update a snapshot schedule, you must delete it and create a new one.
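The detach-before-delete rule above can be sketched in gcloud like this; the disk and schedule names here are placeholders, not resources from the course:

```shell
# A schedule attached to a disk cannot be deleted directly:
# first detach it from every disk it is attached to...
gcloud compute disks remove-resource-policies my-disk \
  --zone=us-east1-b \
  --resource-policies=my-schedule

# ...then delete the schedule itself. Remember schedules are
# not editable: to change one, you delete it and recreate it.
gcloud compute resource-policies delete my-schedule \
  --region=us-east1
```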
768:10 Now, before I end this lesson, I wanted to touch on managing snapshots for just a minute. When managing snapshots, there are a few things to remember in order to use snapshots to manage your data efficiently. You can snapshot your disks at most once every 10 minutes; you are unable to snapshot your disks at intervals of less than 10 minutes, so please keep that in mind when creating your schedules. Also, you should create snapshots on a regular schedule to minimize data loss from an unexpected failure. If you have existing snapshots of a persistent disk, the system automatically uses them as the baseline for any subsequent snapshots that you create from that same disk. And in order to improve performance, you can eliminate excessive snapshots by creating an image and reusing it; this method is not only ideal for the storage and management of snapshots, but also helps to reduce costs.
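If you go the image-reuse route just mentioned, one way to sketch it in gcloud is to bake a snapshot into a reusable custom image; all names here are hypothetical:

```shell
# Create a reusable image from an existing snapshot; new boot
# disks can then be stamped out from the image instead of
# repeatedly restoring from a chain of snapshots.
gcloud compute images create my-baseline-image \
  --source-snapshot=my-snapshot

# New instances can then boot straight from that image:
gcloud compute instances create my-new-instance \
  --zone=us-east1-b \
  --image=my-baseline-image
```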
769:09 And if you schedule regular snapshots for your persistent disks, you can reduce the time that it takes to complete each snapshot by creating them during off-peak hours when possible. Lastly, for those of you who use Windows: for most situations, you can use the Volume Shadow Copy Service to take snapshots of persistent disks that are attached to Windows instances. You can create VSS snapshots without having to stop the instance or detach the persistent disk. And so that's pretty much all I wanted to cover when it comes to the theory of persistent disk snapshots, their schedules, and how to manage them. In the next lesson I'll be doing a hands-on demo, demonstrating snapshots and putting this theory into practice, so you can get a feel for how snapshots work and how they can be applied to persistent disks. So you can now mark this lesson as complete, and whenever you're ready, join me in the console.
770:11 Welcome back. In this demonstration we're going to dive into snapshots and snapshot schedules. This demo will give you the hands-on knowledge you need to create and delete snapshots, along with how to manage snapshot schedules. We're going to start the demo off by creating an instance; we're going to interact with it and then take a snapshot of its disk. We're then going to create another instance from the snapshot, and then create some snapshot schedules for both of these instances, using both the console and the command line. So there's a lot to do here; with that being said, let's dive in.

770:49 I'm currently logged in as tonybowties@gmail.com, and I'm in project Bowtie Inc. So the first thing that we need to do to kick off this demo is to create an instance. But first, as always, I like to make sure that I have a VPC to deploy my instance into, with its corresponding default firewall rules. So I'm going to head on over to the navigation menu and scroll down to VPC network.
771:19 And because I didn't delete my default VPC from the last demo, I still have it here. I'm just going to drill down and make sure that I have my firewall rules: I'm going to go over to Firewall rules, and as expected, the SSH firewall rule that I need has already been created. So now that I have everything in order, I'm going to go back over to the navigation menu and head on over to Compute Engine to create my instance.

771:40 Now, I figured for this demo I'd switch it up a little bit and create the instance from the command line, so I'm going to head on over to Cloud Shell and open it up. It took a minute to provision, so what I'm going to do now is open it up in a new tab, zoom in for better viewing, and paste in my command to create my instance. This gcloud command to create these instances will be available in the GitHub repository, where you will find all the instructions and commands under Managing Snapshots in Compute Engine. So I'm going to hit Enter. You may get a prompt to authorize this API call; I'm going to click on Authorize.
772:23 And success, our instance has been created and is up and running. So now what I want to do is SSH into the instance, and I'm just going to run the command from here, which is gcloud compute ssh --zone us-east1-b bowtie-instance, using the zone that I'm in, us-east1-b, and the instance name, bowtie-instance. I'm going to hit Enter; it's going to prompt me if I want to continue, and I'm going to say yes. I'm going to enter my passphrase and enter it again; it's going to update my metadata and ask me once more for my passphrase, and I'm in.
772:57 passphrase and i'm in so i'm going to just quickly clear my screen and so the
772:59 just quickly clear my screen and so the first thing i want to do is i want to
773:01 first thing i want to do is i want to verify the name of my instance so i'm
773:04 verify the name of my instance so i'm going to type in the command hostname
773:06 going to type in the command hostname and as expected bowtie dash instance
773:09 and as expected bowtie dash instance shows up and so now i want to create a
773:11 shows up and so now i want to create a text file and so i'm going to run the
773:13 text file and so i'm going to run the command
773:14 command sudo nano file a
773:17 sudo nano file a text i'm going to hit enter and it's
773:19 text i'm going to hit enter and it's going to open up my nano text editor and
773:21 going to open up my nano text editor and you can enter a message of any kind that
773:23 you can enter a message of any kind that you'd like for me i'm going to enter
773:26 you'd like for me i'm going to enter more bow tie needed because you can
773:28 more bow tie needed because you can never get enough bow ties i'm going to
773:30 never get enough bow ties i'm going to hit ctrl o to save press enter to verify
773:33 hit ctrl o to save press enter to verify the file name to write and then ctrl x
773:35 the file name to write and then ctrl x to exit i'm going to run the command ls
773:38 to exit i'm going to run the command ls space minus al to list my files so i can
773:42 space minus al to list my files so i can verify that my file has been created and
773:44 verify that my file has been created and as you can see here file a bowties.txt
773:47 as you can see here file a bowties.txt has been created and so now that i've
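If you'd rather skip the interactive nano step, the same marker file can be written non-interactively; the message text follows the narration, and since the file lands in your own home directory, sudo isn't needed:

```shell
# Write the marker file in one shot instead of using nano,
# then list and print it to verify the contents.
echo "more bow ties needed" > file_of_bowties.txt
ls -al file_of_bowties.txt
cat file_of_bowties.txt
```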
773:49 has been created and so now that i've created my instance and i've written a
773:51 created my instance and i've written a file to disk i'm going to now head on
773:53 file to disk i'm going to now head on over to the console and take a snapshot
773:55 over to the console and take a snapshot of this disk and because my session was
773:58 of this disk and because my session was transferred to another tab i can now
774:00 transferred to another tab i can now close the terminal and you want to head
774:02 close the terminal and you want to head over to the left-hand menu and go to
774:04 over to the left-hand menu and go to disks and so now i want to show you two
774:07 disks and so now i want to show you two ways on how you can create this snapshot
774:09 ways on how you can create this snapshot the first one is going to disks and
774:12 the first one is going to disks and choosing the disk that you want for me
774:14 choosing the disk that you want for me it's bowtie instance and under actions
774:17 it's bowtie instance and under actions i'm going to click on the hamburger menu
774:19 i'm going to click on the hamburger menu and here i can create snapshot and this
774:21 and here i can create snapshot and this will bring me straight to my snapshot
774:23 will bring me straight to my snapshot menu but for this demo i'm going to go
774:25 menu but for this demo i'm going to go over to the left hand menu and i'm going
774:27 over to the left hand menu and i'm going to click on snapshots and here i'm going
774:30 to click on snapshots and here i'm going to click on create snapshot and so for
774:32 to click on create snapshot and so for the name of the snapshot i'm going to
774:34 the name of the snapshot i'm going to type in bowtie snapshot and i'm going to
774:37 type in bowtie snapshot and i'm going to use the same for the description moving
774:39 use the same for the description moving down on the source disk the only one
774:41 down on the source disk the only one that i can select is bow tie instance
774:43 that i can select is bow tie instance and that's the one that i want anyways
774:46 and that's the one that i want anyways so i'm going to click on that the
774:47 so i'm going to click on that the location in order to cut down on costs
774:49 location in order to cut down on costs we don't need multi-regional we're going
774:51 we don't need multi-regional we're going to just select regional and if you
774:53 to just select regional and if you select on the location i'm able to
774:56 select on the location i'm able to select any other locations like tokyo
774:59 select any other locations like tokyo and i can create my snapshot in tokyo
775:01 and i can create my snapshot in tokyo but i want to keep my snapshot in the
775:03 but i want to keep my snapshot in the same region so i'm going to go back and
775:05 same region so i'm going to go back and select us east one where it is based on
775:08 select us east one where it is based on the source disk location and i'm going
775:10 the source disk location and i'm going to add a label here with the key
775:12 to add a label here with the key environment and the value of testing i'm
775:15 environment and the value of testing i'm going to leave my encryption type as
775:16 going to leave my encryption type as google managed and i'm going to simply
775:18 google managed and i'm going to simply click on create and this will create a
775:21 click on create and this will create a snapshot of the boot disk on bow tie
775:23 snapshot of the boot disk on bow tie instance and that took about a minute
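Those console steps have a command-line equivalent, and either way you can spot-check the result from Cloud Shell. A hedged sketch, mirroring the names used in the demo (the label is applied in a second step here, since snapshot labels are set via `snapshots update`):

```shell
# CLI equivalent of the console snapshot creation above.
gcloud compute disks snapshot bowtie-instance \
  --zone=us-east1-b \
  --snapshot-names=bowtie-snapshot \
  --storage-location=us-east1
gcloud compute snapshots update bowtie-snapshot \
  --update-labels=environment=testing

# Verify the snapshot's status and storage location.
gcloud compute snapshots describe bowtie-snapshot \
  --format="value(status,storageLocations)"
```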
775:25 And just as a note: if you have bigger disks, they will take a little bit longer to snapshot. Okay, now that I've created my snapshot, I'm going to go back up to VM instances and create a new instance from that snapshot. I'm going to name this instance bowtie-instance-2, and I'm going to add a label here with the key environment and the value testing, and hit Save. The region is going to be us-east1, and you can leave the zone at its default of us-east1-b. Under machine type you can select e2-micro, and then you want to go down to Boot disk and select the Change button. Here I'm going to select Snapshots instead of using a public image, so I'm going to click on Snapshots, and if I open the snapshot drop-down menu I will see my bowtie-snapshot. I'm going to select this, leave the rest as default, and go down to Select. I'm going to leave everything else at its defaults and click on Create. I'm going to give it a minute here so bowtie-instance-2 can be created. Okay, it took a minute there.
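For the command-line minded, one way to sketch the same instance-from-snapshot flow is to restore the snapshot to a new disk and boot from it; the intermediate disk name is my placeholder, not part of the demo:

```shell
# Restore the snapshot to a new disk, then boot an instance
# from that disk; the label mirrors the console demo.
gcloud compute disks create bowtie-disk-2 \
  --zone=us-east1-b \
  --source-snapshot=bowtie-snapshot
gcloud compute instances create bowtie-instance-2 \
  --zone=us-east1-b \
  --machine-type=e2-micro \
  --labels=environment=testing \
  --disk=name=bowtie-disk-2,boot=yes
```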
776:33 So now I'm going to SSH into this instance, and I'm going to zoom in for better viewing. Even though I know the instance is named bowtie-instance-2, I'm still going to run the hostname command, and as expected, the same name pops up. But what I was really curious about is this: if I run the command ls -al, I can see here my file file_of_bowties.txt, and if I cat the file, I'll be able to see the text that I put into it. And so although it was only one file, and a text file at that, I was able to verify that my snapshot had worked. There will be times when a snapshot gets corrupted, so doing various spot checks on your snapshots is good common practice.
777:21 And so now I want to create a snapshot schedule for both of these instances. I'm going to go back to the console, and on the left-hand menu I'm going to head down to Snapshots. If I go over to Snapshot schedules, you can see that I have no snapshot schedules, so let's go ahead and create a new one by clicking on Create snapshot schedule. As mentioned in the last lesson, we need to create this schedule first before we can attach it to a disk. I'm going to name this snapshot schedule bowtie-disk-schedule and use the same for the description. For the region, I'm going to select us-east1, and I'm going to keep the snapshot location as Regional under us-east1. Scrolling down, under the schedule options you can leave the schedule frequency as Daily. Just as a note on the start time: this time is measured in UTC, so please remember that when you're creating your schedule in your specific time zone. I'm going to put the start time as 06:00, which is 1 AM Eastern Standard Time, as backups are always best done when there is the least amount of activity. And I'm going to keep the auto-delete snapshots setting at 14 days.
778:35 delete snapshots after 14 days i'm going to keep the deletion rule as keep
778:36 to keep the deletion rule as keep snapshots as well i can enable the
778:39 snapshots as well i can enable the volume shadow copy service for windows
778:42 volume shadow copy service for windows but since we're running linux i don't
778:44 but since we're running linux i don't need to enable this and since we labeled
778:46 need to enable this and since we labeled everything else i might as well give
778:48 everything else i might as well give this a label i'm going to use the key as
778:50 this a label i'm going to use the key as environment and the value of testing and
778:52 environment and the value of testing and once you've filled everything out then
778:54 once you've filled everything out then you can simply click on create and it
778:56 you can simply click on create and it took a minute there but the schedule was
778:58 took a minute there but the schedule was created and so now that i have my
779:00 created and so now that i have my snapshot schedule i need to attach it to
779:03 snapshot schedule i need to attach it to a disk so i'm going to head on over to
779:05 a disk so i'm going to head on over to the left hand menu and click on disks
779:07 the left hand menu and click on disks and here i'm going to drill down into
779:09 and here i'm going to drill down into bow tie instance i'm going to go up to
779:11 bow tie instance i'm going to go up to the top and click on edit and under
779:13 the top and click on edit and under snapshot schedule i'm going to click on
779:15 snapshot schedule i'm going to click on the drop down and here i will find bow
779:18 the drop down and here i will find bow tie disk schedule i'm going to select
779:20 tie disk schedule i'm going to select that i'm going to click on save and so
779:22 that i'm going to click on save and so now that i have my snapshot schedule
779:25 now that i have my snapshot schedule attached to my disk for the bowtie
779:28 attached to my disk for the bowtie instance instance i now want to create a
779:30 instance instance i now want to create a snapshot schedule for my other instance
779:33 snapshot schedule for my other instance and so instead of using the console i'm
779:35 and so instead of using the console i'm going to go ahead and do it through the
779:37 going to go ahead and do it through the command line so i'm going to go up to
779:38 command line so i'm going to go up to the top to my open shell and i'm going
779:41 the top to my open shell and i'm going to quickly clear the screen and so in
779:43 to quickly clear the screen and so in order to create my schedule i'm going to
779:45 order to create my schedule i'm going to run this command gcloud compute resource
779:48 run this command gcloud compute resource policies create snapshot schedule the
779:51 policies create snapshot schedule the name of the snapshot schedule which is
779:53 name of the snapshot schedule which is bow tie disk schedule 2 the region the
779:56 bow tie disk schedule 2 the region the maximum retention days the retention
779:59 maximum retention days the retention policy and the schedule followed by the
780:01 policy and the schedule followed by the storage location and like i said before
780:04 storage location and like i said before these commands you will find in the
780:06 these commands you will find in the github repository so i'm going to go
780:08 github repository so i'm going to go ahead and hit enter
780:10 and so i wanted to leave this error in here
780:12 to show you that i needed the proper permissions in order to create this snapshot schedule
780:16 a great reminder to always check if you have the right role for the task at hand
780:21 and so i have two options
780:24 i can either change users from my service account user to tony bowtie
780:29 or i can simply head on over to my instance and edit the service account permissions
780:34 and so the easiest way to do it would be to just switch users
780:39 and so i'm going to go ahead and do that
780:41 so i'm going to go ahead and run the command gcloud auth login
780:46 and remember that this is something that you don't have to do
780:48 i merely wanted to show you that you require the proper permissions
780:50 on creation of specific resources
780:53 okay and i quickly went through the authentication process
780:57 i'm gonna just clear my screen and i'm going to go ahead and run the command again
781:03 and as expected the snapshot schedule was created with no errors
781:06 and so now that my schedule has been created i can now attach it to the disk
781:10 so i'm going to run the command gcloud compute disks add-resource-policies
781:15 with the instance name which is bowtie-instance-2
781:18 and the resource policy which is the snapshot schedule named bowtie-disk-schedule-2
781:25 in the zone of us-east1-b
781:28 i'm going to hit enter
781:29 and success
781:32 and so just to verify that the snapshot schedule has been attached to my disk
781:34 i'm going to go back to the console
781:38 i'm going to head back on over to the main page of disks
781:41 i'm going to drill down into bowtie-instance-2
781:43 and here it is the snapshot schedule has been attached
781:46 and so i want to congratulate you on making it to the end of this demo
781:50 and i hope this demo has been useful
781:55 as snapshots in the role of an engineer are a common task
781:58 that can save you from any data loss once set into place
782:01 and so just as a recap
782:03 you've created an instance you created a file on that instance
782:06 and then you've created a snapshot of the disk of that instance
782:11 and used it to create another instance
782:13 you then verified the snapshot
782:15 and then created a snapshot schedule for both boot disks of the instances
782:18 using the console and the command line
782:21 well done on another great job
782:23 now before you go i wanted to take a moment to clean up any resources we've used
782:28 so we don't accumulate any costs
782:30 and so the first thing we want to do is we want to detach the snapshot schedules from the disks
782:38 and so since we're in bowtie-instance-2 i'm going to go ahead and click on edit
782:42 under snapshot schedule i'm going to select no schedule and hit save
782:47 and i'm going to do the same thing with my other disk
782:55 now i'm going to head back on over to snapshots and i'm going to delete this snapshot
782:59 and i'm going to head back on over to snapshot schedules
783:02 i'm going to select all the snapshot schedules and i'm going to click on delete
783:07 and now that everything's cleaned up with regards to snapshots and snapshot schedules
783:11 i can now go over to vm instances and delete the instances
783:17 i'm going to select them all and simply click on delete
783:24 and so that's pretty much all i wanted to cover in this demo
783:26 when it comes to snapshots and snapshot schedules
783:29 so you can now mark this as complete and let's move on to the next one
783:40 welcome back in this lesson we're going to switch gears
783:42 and take an automated approach to deployment
783:45 by diving into google's tool for infrastructure as code called deployment manager
783:50 now deployment manager allows you to deploy update and tear down resources
783:54 from within google cloud using yaml jinja and python code templates
783:59 it allows you to automate the deployment of all the resources that are available in google cloud
784:05 and deploy them in a fast easy and repeatable way for consistency and efficiency
784:12 in this lesson we're going to explore the architecture of deployment manager
784:16 and dive into all the different components that give it its flexibility
784:22 and the features that make this tool an easy solution for deploying complex environments
784:27 so with that being said let's dive in
784:31 now breaking down the components that i mentioned earlier
784:36 i wanted to start off with the first component being the configuration
784:38 now a configuration defines the structure of your deployment
784:43 as you must specify a configuration to create a deployment
784:46 a configuration describes all the resources you want for a single deployment
784:51 and is written in yaml syntax that lists each of the resources you want to create
784:56 and their respective resource properties
784:58 a configuration must contain a resources section followed by the list of resources to create
785:05 and so each resource must contain these three components
785:08 the name the type and the properties
785:12 without these three components a deployment will not instantiate
785:17 and so i wanted to take a moment to go over these three components in a bit of depth
785:24 so the first component of the configuration is the name
785:26 and the name is a user defined string to identify this resource
785:29 and can be anything you choose from names like instance-1 my-vm bowtie-instance
785:38 and you can even go as far as to use larks-instance-dont-touch
785:45 the syntax can be found here and must not contain any spaces or invalid characters
785:53 the next component in a configuration is the type
785:56 and there are a couple of different types that you can choose from
785:58 a type can represent a single api resource known as a base type
786:05 or a set of resources known as a composite type
786:07 and either one of these can be used to create part of your deployment
786:12 the type of the resource being deployed here in this diagram is shown as a base type of
786:18 compute.v1.instance
786:21 compute.v1.instance and there are many other api resources
786:23 and there are many other api resources that can be used such as compute.v1.disk
786:27 that can be used such as compute.v1.disk app engine dot v1 as well as
786:31 app engine dot v1 as well as bigquery.v2 and the syntax is shown here
786:34 bigquery.v2 and the syntax is shown here as api dot version dot resource now a
786:38 as api dot version dot resource now a composite type contains one or more
786:40 composite type contains one or more templates that are pre-configured to
786:43 templates that are pre-configured to work together these templates expand to
786:45 work together these templates expand to a set of base types when deployed in a
786:48 a set of base types when deployed in a deployment composite types are
786:50 deployment composite types are essentially hosted templates that you
786:53 essentially hosted templates that you can add to deployment manager the syntax
786:55 can add to deployment manager the syntax is shown here as gcp dash types forward
786:59 is shown here as gcp dash types forward slash provider colon resource and to
787:02 slash provider colon resource and to give you an example of what a composite
787:04 give you an example of what a composite type looks like
787:05 here is shown the creation of a reserved ip address using the compute engine v1 api
787:11 and you could also use composite types with other apis in the same way
787:17 such as gcp-types/appengine-v1:apps or gcp-types/bigquery-v2:datasets
787:29 and the last component in a configuration is the properties
787:31 and these are the parameters for the resource type
787:34 this includes all the parameters you see here in this example
787:38 including the zone the machine type and the type of disk along with its parameters
787:46 pretty much everything that gives detail on the resource type
787:48 now just as a note they must match the properties for this type
787:51 so what do i mean by this
787:53 so let's say you entered a zone but that particular zone doesn't exist
787:58 or that compute engine machine type doesn't exist in that zone
788:00 you will end up getting an error
788:02 as deployment manager will not be able to parse this configuration
788:07 thus failing the deployment
788:09 so make sure when you add your properties that they match those of the resource
788:14 now a configuration can contain templates
788:17 which are essentially parts of the configuration file
788:19 that have been abstracted into individual building blocks
788:24 a template is a separate file that is imported and used as a type in a configuration
788:30 you can use as many templates as you want in a configuration
788:34 and they allow you to separate your configuration out into different pieces
788:39 that you can use and reuse across different deployments
788:42 templates can be as generalized or specific as you need
788:48 and they also allow you to take advantage of features like template properties
788:51 environment variables and modules to create dynamic configurations
788:57 as shown here templates can be written in a couple of different ways
789:02 they can be written in either jinja 2.10 or python 3
789:08 the example shown on the left has been written in jinja
789:11 and is very similar to the yaml syntax
789:13 so if you're familiar with yaml this might be better for you
789:16 the example on the right has been written in python
789:20 and is pretty amazing as you can take advantage of
789:23 programmatically generating parts of your templates
789:27 if you are familiar with python this might be a better format for you
789:30 now one of the advantages of using templates is the ability to create and define custom template properties
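a hedged sketch of a jinja template that uses a custom template property — the property name `machineType` and the naming scheme are illustrative assumptions, not the lesson's exact template:

```jinja
{# zone and machineType are template properties supplied by the caller;
   env["deployment"] is one of deployment manager's predefined variables #}
resources:
- name: {{ env["deployment"] }}-vm
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/{{ properties["machineType"] }}
```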
789:38 template properties are arbitrary variables that you define in template files
789:43 any configuration file or template file that uses the template in question
789:48 can provide a value for the template property without changing the template directly
789:53 this lets you abstract the property
789:56 so that you can change the property's value for each unique configuration
790:01 without updating the underlying template
790:03 and just as a note deployment manager creates predefined environment variables
790:08 that you can use in your deployment
790:11 in this example the project variable will use the project id for this specific project
790:17 and so combining all these components together will give you a deployment
790:24 and so a deployment is a collection of resources that are deployed and managed together
790:29 using a configuration
790:31 you can then deploy update or delete this deployment
790:34 by merely changing some code or at the click of a button
790:37 now when you deploy
790:40 you provide a valid configuration in the request to create the deployment
790:45 a deployment can contain a number of resources across a number of google cloud services
790:50 when you create a deployment
790:52 deployment manager creates all of the described resources
790:57 deploying a configuration must be done through the command line
791:00 and cannot be done through the console
791:02 you can simply use the syntax shown here
791:06 and a deployment will be instantiated from the configuration file that you have entered
791:11 where bowtie-deploy is the name of the deployment
791:13 and the file after the --config flag is your configuration file
791:19 google cloud also offers pre-defined templates
791:22 that you can use to deploy from the gcp marketplace
791:27 and these can be found right in the console of deployment manager
791:30 this way all the configuration and template creation is handled for you
791:35 and you just deploy the solution through the console
791:38 now after you've created a deployment you can update it whenever you need to
791:43 you can update a deployment by adding or removing resources from a deployment
791:47 or updating the properties of existing resources in a deployment
791:52 a single update can contain any combination of these changes
791:55 so you can make changes to the properties of existing resources
791:59 and add new resources in the same request
792:02 you update your deployment by first making changes to your configuration file
792:09 or you can create a configuration file with the changes you want
792:11 you will then have the option to pick the policies to use for your updates
792:15 or you can use the default policies
792:18 and finally you then make the update request to deployment manager
792:23 and so once you've launched your deployment
792:26 each deployment has a corresponding manifest as the example shown here
792:31 a manifest is a read-only property that describes all the resources in your deployment
792:36 and is automatically created with each new deployment
792:41 manifests cannot be modified after they have been created
792:43 as well a manifest is not the same as a configuration file
792:46 but is created based on the configuration file
792:49 and so when you delete a deployment
792:52 all resources that are part of the deployment are also deleted
792:57 if you want to delete specific resources from your deployment and keep the rest
792:59 delete those resources from your configuration file and update the deployment instead
793:08 deployment instead and so as you can see here deployment
793:10 and so as you can see here deployment manager gives you a slew of different
793:12 manager gives you a slew of different options to deploy update or delete
793:16 options to deploy update or delete resources simultaneously in google cloud
793:20 resources simultaneously in google cloud now like most services in gcp there are
793:23 now like most services in gcp there are always some best practices to follow
793:25 always some best practices to follow note that there are many more best
793:27 note that there are many more best practices to add to this and can be
793:29 practices to add to this and can be found in the documentation which i will
793:32 found in the documentation which i will be providing the link to in the lesson
793:34 be providing the link to in the lesson text but i did want to point out some
793:36 text but i did want to point out some important ones to remember so the first
793:38 The first one I wanted to bring up is to break your configurations up into logical units. For example, you should create separate configurations for networking services, security services, and compute services. This way, each team can easily take care of its own domain without having to sift through a massive template containing the code for the entire environment.
794:04 Another best practice to follow is to use references. References should be used for values that are not defined until a resource is created, such as a resource's selfLink, IP address, or system-generated ID. Without references, Deployment Manager creates all resources in parallel, so there is no guarantee that dependent resources are created in the correct order; using references enforces the order in which resources are created.
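A minimal illustration of a reference in a configuration (resource names here are hypothetical):

```yaml
resources:
- name: my-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: false
- name: my-vm
  type: compute.v1.instance
  properties:
    networkInterfaces:
    # $(ref....) makes my-vm depend on my-network,
    # so the network is guaranteed to be created first
    - network: $(ref.my-network.selfLink)
```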
794:35 The next one is to preview your deployments using the preview flag. You should always preview your deployments to assess how making an update will affect them. Deployment Manager does not actually deploy resources when you preview a configuration; it runs a mock deployment of those resources instead. This gives you the opportunity to see the changes to your deployment before committing to them.
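For example, a create run as a preview rather than a real deployment (the deployment and file names are illustrative):

```shell
gcloud deployment-manager deployments create my-deployment \
    --config config.yaml \
    --preview
```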
795:01 You also want to consider automating the creation of projects, as well as the creation of the resources contained within them. This enables you to adopt an infrastructure-as-code approach to project provisioning: it allows you to provide a series of predefined project environments that can be quickly and easily provisioned, to use version control to manage your base project configuration, and to deploy reproducible and consistent project configurations. And lastly, using a version control system as part of the development process for your deployments is a great best practice to follow: it allows you to fall back to a previous known-good configuration, it provides an audit trail for changes, and it lets you use the configuration as part of a continuous deployment system.
795:57 continuous deployment system now as you've seen here in this lesson
795:59 you've seen here in this lesson deployment manager can be a powerful
796:01 deployment manager can be a powerful tool in your tool belt when it comes to
796:03 tool in your tool belt when it comes to implementing infrastructure as code and
796:06 implementing infrastructure as code and it has endless possibilities that you
796:08 it has endless possibilities that you can explore on your own it can also
796:11 can explore on your own it can also provide a massive push
796:13 provide a massive push towards devops practices and head down
796:16 towards devops practices and head down the path of continuous automation
796:19 the path of continuous automation through continuous integration
796:21 through continuous integration continuous delivery and continuous
796:23 continuous delivery and continuous deployment and so that's pretty much all
796:25 deployment and so that's pretty much all i wanted to cover when it comes to
796:27 i wanted to cover when it comes to deployment manager and so whenever
796:29 deployment manager and so whenever you're ready join me in the next one
796:31 you're ready join me in the next one where we will go hands-on in a
796:33 where we will go hands-on in a demonstration to deploy a configuration
796:36 demonstration to deploy a configuration in deployment manager so you can now
796:38 in deployment manager so you can now mark this lesson as complete and
796:39 mark this lesson as complete and whenever you're ready join me in the
796:41 whenever you're ready join me in the console
796:42 console [Music]
796:46 Welcome back. In this demonstration we're going to go hands-on with Deployment Manager and deploy a small web server. We're first going to use the Cloud Shell Editor to copy in our code, then do a dry run, and finally deploy our code. We're then going to do a walkthrough of Deployment Manager in the console and go through the manifest as well as some of the other features. We're then going to verify all the deployed resources, and at the end we get an easy cleanup by hitting the delete button, which takes care of removing any resources that were created. There's quite a bit to go through here, so with that being said, let's dive in.
797:25 As you can see here, I am logged in as tonybowties@gmail.com in the project Bowtie Inc. Since we're going to be doing most of our work in code, the first thing we want to do is go to the Cloud Shell Editor. I'm going to go up to the top, open up Cloud Shell, and then click on the Open Editor button. I'm going to make this full screen for better viewing. In order to get the terminal in the same viewing pane as the editor, I'm going to go up to the top menu, click on Terminal, and select New Terminal. Now, for better viewing (and this is totally optional for you), I'm going to change the color theme to a dark mode: go up to the menu, click on File, go down to Settings, go over to Color Theme, and select Dark (Visual Studio). For those of you who work in Visual Studio Code, this may look very familiar. I'm also going to increase the font size: again go back up to File, over to Settings, and then to Open Preferences. Here, under Workspace, scroll down to Terminal, and under Integrated Font Size I'm going to adjust the font size to 20 for better viewing, so my Cloud Shell font is a little easier to see. Once you've done that, you can close the Preferences tab.
798:45 We're now ready to create files in our editor. Okay, so next up I want to create a folder for all my files to live in. I'm going to go up to the menu, select File, select New Folder, name this folder templates, and hit OK. Now that we have the folder all of our files are going to live in, the next step is to open up the GitHub repository in your text editor and have your files ready to copy over. Just as a note, for those who are fluent with Git: you can use the new feature in the Cloud Shell Editor to clone the course repo without having to recreate the files. So I'm going to go over to my text editor (make sure you've recently done a git pull), and we're going to open up the files under Compute Engine, Deployment Manager, where you'll see templates with a set of three files; I've already conveniently opened them up. I'm going to go to bowtie-deploy.yaml, which is the configuration file I'm going to be copying over. Once I finish copying all these files over, I'll go through it in a little bit of detail, just so you can understand the format of this configuration. So I'm going to select all of this, copy it, head back over to the editor, and here select File, New File. I'm going to name this bowtie-deploy.yaml, hit OK, and paste in my code.
800:08 dash deploy dot yaml hit okay and i'm going to paste in my code and so this
800:10 going to paste in my code and so this configuration file is showing that i'm
800:12 configuration file is showing that i'm going to be importing two templates by
800:15 going to be importing two templates by the name of bowtie.webserver.jinja
800:18 the name of bowtie.webserver.jinja as well as
800:20 as well as bowtie.network.jinja so i'm going to
800:22 bowtie.network.jinja so i'm going to have a template for my web server and a
800:24 have a template for my web server and a template for the network and under
800:26 template for the network and under resources as you can see this code here
800:29 resources as you can see this code here will create my bow tie dash web server
800:31 will create my bow tie dash web server the type is going to be the template the
800:33 the type is going to be the template the properties will have the zone the
800:36 properties will have the zone the machine type as well as a reference for
800:38 machine type as well as a reference for the network as well underneath the
800:41 the network as well underneath the bowtie web server is the bowtie network
800:44 bowtie web server is the bowtie network and again this is pulling from type
800:46 and again this is pulling from type bowtie.network.jinja
800:49 bowtie.network.jinja so this is a another template file and
800:51 so this is a another template file and under the properties we have the region
800:54 under the properties we have the region of us east one and so we're going to
800:56 of us east one and so we're going to copy over these two templates bowtie web
800:59 copy over these two templates bowtie web server and bowtie network as we need
801:01 server and bowtie network as we need both of these templates in order to
801:03 both of these templates in order to complete this deployment and so i'm
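Based on the walkthrough above, the configuration file would look roughly like this. This is a sketch reconstructed from the description, not the exact course file; the machine type value in particular is an assumption:

```yaml
imports:
- path: bowtie-webserver.jinja
- path: bowtie-network.jinja

resources:
- name: bowtie-web-server
  type: bowtie-webserver.jinja
  properties:
    zone: us-east1-b
    machineType: f1-micro           # assumed value for illustration
    network: $(ref.bowtie-network.selfLink)
- name: bowtie-network
  type: bowtie-network.jinja
  properties:
    region: us-east1
```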
801:05 complete this deployment and so i'm going to go ahead and do that now head
801:07 going to go ahead and do that now head back on over to my code editor i'm going
801:09 back on over to my code editor i'm going to go to bowtie web server i'm going to
801:11 to go to bowtie web server i'm going to copy everything here back to my editor
801:13 copy everything here back to my editor and i'm going to create the new file
801:15 and i'm going to create the new file called bowtie
801:17 called bowtie web server it's going to be dot jinja
801:19 web server it's going to be dot jinja hit enter i'm going to paste the code in
801:22 hit enter i'm going to paste the code in and just to do a quick run through of
801:23 and just to do a quick run through of the template the instance name is going
801:25 the template the instance name is going to be bow tie dash website the type is
801:28 to be bow tie dash website the type is compute.v1.instance
801:30 compute.v1.instance and as you can see here we are using a
801:32 and as you can see here we are using a bunch of different properties here under
801:35 bunch of different properties here under zone we have property zone which is
801:37 zone we have property zone which is going to reference back to the yaml
801:39 going to reference back to the yaml template here under zone you will see us
801:42 template here under zone you will see us east 1b and so this way if i have to
801:44 east 1b and so this way if i have to create another web server
801:46 create another web server i can enter whatever zone i like here in
801:48 i can enter whatever zone i like here in the configuration file and leave the bow
801:50 the configuration file and leave the bow tie dash web server template just the
801:53 tie dash web server template just the way it is under machine type i have
801:55 way it is under machine type i have variables set for both the zone and
801:57 variables set for both the zone and machine type under disks i'm going to
802:00 machine type under disks i'm going to have the device name as an environment
802:02 have the device name as an environment variable and it's going to be a
802:04 variable and it's going to be a persistent disk and the source image is
802:07 persistent disk and the source image is going to be debian9 i also put in some
802:09 going to be debian9 i also put in some metadata here that will bring up the web
802:12 metadata here that will bring up the web server and lastly i have a network tag
802:14 server and lastly i have a network tag of http server as well as the
802:17 of http server as well as the configuration for the network interface
802:19 configuration for the network interface the network referring to bowtie dash
802:22 the network referring to bowtie dash network and a sub network called public
802:25 network and a sub network called public which i will be showing to you in just a
802:27 which i will be showing to you in just a moment and as well the access configs of
802:30 moment and as well the access configs of the type one to one nat and this will
802:32 the type one to one nat and this will give the instance a public ip address
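As a rough sketch reconstructed from that description (not the exact course file; the disk and machine-type URLs are assumptions), the web server template might look like:

```yaml
resources:
- name: bowtie-website
  type: compute.v1.instance
  properties:
    # zone and machineType are filled in from the configuration file
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/{{ properties["machineType"] }}
    disks:
    - deviceName: {{ env["name"] }}   # device name from an environment variable
      type: PERSISTENT
      boot: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    tags:
      items: ["http-server"]
    networkInterfaces:
    - network: $(ref.bowtie-network.selfLink)
      subnetwork: $(ref.public.selfLink)
      accessConfigs:
      - type: ONE_TO_ONE_NAT          # assigns a public IP address
```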
802:35 Now that we've gone through that template, we need to create one last template, which is bowtie-network. So I'm going to head back over to my code editor, open up bowtie network, select the code, copy it back over to the Cloud Editor, create a new file called bowtie-network.jinja, hit Enter, and paste in my code. To quickly walk you through this: we're going to be creating a new custom network called bowtie-network; the type is compute.v1.network, as the VPC uses the Compute Engine API. It's going to be a custom network, so the value of autoCreateSubnetworks is going to be false. The subnetwork's name is going to be public, and here we have the custom ipCidrRange; you could also make this a variable, but for this demo I decided to just leave it. Under network I have a reference to the bowtie network, the value for private Google access is false, and the region variable is filled in through the configuration file. Moving right along, I have two firewall rules here, one for SSH access and the other for web server access, opening up port 22 to the world as well as port 80. As well, the web-server-access firewall rule has a target tag of http-server, referencing back to the network tag on the bowtie web server instance.
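Again as a hedged reconstruction from the description (the CIDR range is an assumed value, not necessarily the one used in the course), the network template might look like:

```yaml
resources:
- name: bowtie-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: false      # custom-mode VPC
- name: public
  type: compute.v1.subnetwork
  properties:
    ipCidrRange: 10.0.0.0/24          # assumed range for illustration
    network: $(ref.bowtie-network.selfLink)
    privateIpGoogleAccess: false
    region: {{ properties["region"] }}  # filled in from the configuration file
- name: ssh-access
  type: compute.v1.firewall
  properties:
    network: $(ref.bowtie-network.selfLink)
    sourceRanges: ["0.0.0.0/0"]       # port 22 open to the world
    allowed:
    - IPProtocol: tcp
      ports: ["22"]
- name: web-server-access
  type: compute.v1.firewall
  properties:
    network: $(ref.bowtie-network.selfLink)
    sourceRanges: ["0.0.0.0/0"]
    targetTags: ["http-server"]       # matches the instance's network tag
    allowed:
    - IPProtocol: tcp
      ports: ["80"]
```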
803:58 Okay, and so now we've finished creating the configuration file along with the templates. I'm going to head back up to the menu, click on File, and select Save All. Since we've finished creating all of our files, the next thing to do is execute a mock deploy using the bowtie-deploy configuration. But first, I know that we haven't used Deployment Manager before, so I need to go in and turn on the API. I'm just going to go up to the search bar at the top and type in "deployment", and you should see Deployment Manager as the first result. And, as expected, the Deployment Manager API has not been enabled yet, so I'm going to click on Enable, and after a few moments we should be good to go.
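As an aside, the same API can be enabled from the command line instead of the console (this step isn't shown in the video):

```shell
gcloud services enable deploymentmanager.googleapis.com
```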
804:44 Okay, and as you can see here, Deployment Manager is pretty empty, as most of it is done through the command line. But if you're looking to deploy a Marketplace solution, you can do that right here at the top; this will bring you to the Marketplace and allow you to deploy from a large selection of pre-configured templates. I don't want to do that, though, so I'm just going to bring this up a little bit and head over to the terminal. I'm going to run the command ls, and you should be able to see the templates folder. I'm going to change my directory into the templates folder and do another ls, and here are all my files.
805:21 and here are all my files and so before we do a mock deploy of this
805:22 we do a mock deploy of this configuration we want to make sure that
805:25 configuration we want to make sure that we're deploying to the correct project i
805:27 we're deploying to the correct project i can see here that i am currently in bow
805:29 can see here that i am currently in bow tie inc but if you are ever unsure about
805:32 tie inc but if you are ever unsure about the project that you're in you can
805:34 the project that you're in you can always run the gcloud config list
805:35 always run the gcloud config list command in order to confirm so i'm going
805:38 command in order to confirm so i'm going to quickly clear my screen and i'm going
805:40 to quickly clear my screen and i'm going to run the command gcloud config list
805:43 to run the command gcloud config list it's going to prompt me to authorize
805:45 it's going to prompt me to authorize this api call and i'm going to authorize
805:48 this api call and i'm going to authorize and as expected my project is set to
805:50 and as expected my project is set to deploy in project bowtie inc and so now
805:53 deploy in project bowtie inc and so now that i've verified it i'm going to
805:54 that i've verified it i'm going to quickly clear my screen again and so i'm
805:57 quickly clear my screen again and so i'm going to paste in my command gcloud
805:59 going to paste in my command gcloud deployment dash manager deployments
806:02 deployment dash manager deployments create bowtie deploy which is the name
806:06 create bowtie deploy which is the name of the deployment along with the
806:07 of the deployment along with the configuration file flag dash dash config
806:11 configuration file flag dash dash config and then the name of the configuration
806:12 and then the name of the configuration file which is bowtie
806:15 file which is bowtie deploy.yaml and the preview flag as
806:17 deploy.yaml and the preview flag as we're only doing a mock deploy and so if
806:20 we're only doing a mock deploy and so if there are any errors i'll be able to see
806:22 there are any errors i'll be able to see this before i actually deploy all the
806:24 this before i actually deploy all the resources so i'm going to go ahead and
806:26 resources so i'm going to go ahead and hit enter and in just a minute we'll
806:29 hit enter and in just a minute we'll find out exactly what happens
806:31 find out exactly what happens and as you can see here the mock
806:33 and as you can see here the mock deployment was a success and there are
806:35 deployment was a success and there are no errors and if i do a quick refresh up
806:38 no errors and if i do a quick refresh up here in the console i'll be able to see
806:40 here in the console i'll be able to see my deployment which i can drill down
806:42 my deployment which i can drill down into and here i will see my manifest
806:45 into and here i will see my manifest file with my manifest name and i can
806:47 file with my manifest name and i can view the config as well as my templates
806:50 view the config as well as my templates that it imported the layout as well as
806:53 that it imported the layout as well as the expanded config so if i click on
806:55 the expanded config so if i click on view of the config it'll show me here in
806:58 view of the config it'll show me here in the right hand panel exactly what this
807:00 the right hand panel exactly what this deployment has used for the config and i
807:03 deployment has used for the config and i can do the same thing with my template
807:05 can do the same thing with my template files so i'm going to open up my network
807:08 files so i'm going to open up my network template and i can quickly go through
807:10 template and i can quickly go through that if i'd like as well i also have the
807:12 that if i'd like as well i also have the option to download it and if i really
807:14 option to download it and if i really want to get granular i can go over here
807:16 want to get granular i can go over here to the left hand pane i can select on vm
807:19 to the left hand pane i can select on vm instance and it'll show me all the
807:21 instance and it'll show me all the resource properties everything from the
807:23 resource properties everything from the disks to the machine type to the
807:25 disks to the machine type to the metadata the network interfaces the zone
807:28 metadata the network interfaces the zone that it's in and the network tag same
807:31 that it's in and the network tag same thing if i go over here to the network
807:33 thing if i go over here to the network and again because this is a custom
807:35 and again because this is a custom network the value for the autocreate
807:37 network the value for the autocreate subnetworks is false i can check on the
807:40 subnetworks is false i can check on the public sub network as well as the
807:42 public sub network as well as the firewall rules and so because this is a
807:44 firewall rules and so because this is a preview it has not actually deployed
807:47 preview it has not actually deployed anything now taking a look at compute
807:49 anything now taking a look at compute engine instances in a new tab you can
807:52 engine instances in a new tab you can see here that i have no instances
807:53 see here that i have no instances deployed and so the same goes for any of
807:56 And so what we want to do now is deploy this deployment, and we can do that one of two ways: we can simply click on the Deploy button here, or we can run the command in the command line. And I'm looking to show you how to do it on the command line, so I'm going to move down to it, quickly clear my screen, and paste in the command: gcloud deployment-manager deployments update bowtie-deploy. Now, you're probably wondering, why update? This is because the configuration has been deployed: even though it's a preview, Deployment Manager still sees it as a deployment and has created what Google Cloud calls a shell. By using update, you can fully deploy the configuration, using your last preview to perform that update, and this will deploy your resources exactly as you see them in the manifest. And so, any time I make an adjustment to either the configuration or the templates, I can simply run the update command instead of doing the whole deployment again. So I want to get this deployed now.
809:00 whole deployment again so i want to get this deployed now and so i'm going to
809:02 this deployed now and so i'm going to hit enter
809:04 hit enter and i'll be back in a minute once it's
809:06 and i'll be back in a minute once it's deployed all the resources and success
809:09 deployed all the resources and success my deployment is successful and as you
809:11 my deployment is successful and as you can see here there are no errors and all
809:14 can see here there are no errors and all the resources are in a completed state
809:16 the resources are in a completed state so i'm going to select my bow tie
809:18 so i'm going to select my bow tie website in my manifest and i'll have
809:20 website in my manifest and i'll have access to the resource with a link up
809:23 access to the resource with a link up here at the top that will bring me to
809:25 here at the top that will bring me to the instance as well i can ssh into the
809:28 the instance as well i can ssh into the instance and i have all the same options
809:31 instance and i have all the same options that i have in the compute engine
809:33 that i have in the compute engine console and so in order to verify that
809:35 console and so in order to verify that all my resources have been deployed i'm
809:38 all my resources have been deployed i'm going to go back over to the tab that i
809:40 going to go back over to the tab that i already have open and as you can see my
809:42 already have open and as you can see my instance has been deployed and i want to
809:44 instance has been deployed and i want to check to see if my network has been
809:46 check to see if my network has been deployed so i'm going to go up to the
809:48 deployed so i'm going to go up to the navigation menu and i'm going to head on
809:50 navigation menu and i'm going to head on down to vpc network and as you can see
809:53 down to vpc network and as you can see here bowtie network has been deployed
809:55 here bowtie network has been deployed with its two corresponding firewall
809:57 with its two corresponding firewall rules i'm going to drill down into
809:59 rules i'm going to drill down into bowtie network and check out the
810:01 bowtie network and check out the firewall rules and as you can see here
810:03 firewall rules and as you can see here ssh access and web server access have
810:07 ssh access and web server access have been created with its corresponding
810:09 been created with its corresponding protocols and ports and so now that i
810:11 protocols and ports and so now that i know that all my resources have been
810:13 know that all my resources have been deployed i want to head back on over to
810:15 deployed i want to head back on over to compute engine to see if my instance has
810:18 compute engine to see if my instance has been configured properly so i'm going to
810:20 been configured properly so i'm going to click on ssh to see if i can ssh into
810:23 click on ssh to see if i can ssh into the instance and success with ssh so i
810:26 the instance and success with ssh so i know that this is working properly and
810:28 know that this is working properly and so i'm going to close this tab down and
810:30 so i'm going to close this tab down and i also want to see whether or not my web
810:32 i also want to see whether or not my web server has been configured properly with
810:35 server has been configured properly with the metadata that i provided it and so i
810:37 the metadata that i provided it and so i can directly open up the webpage by
810:40 can directly open up the webpage by simply clicking on this link and success
810:43 simply clicking on this link and success my you look dapper today why thank you
810:45 my you look dapper today why thank you tony bowtie and so as you can see the
810:48 tony bowtie and so as you can see the web server has been configured properly
810:50 web server has been configured properly using the metadata that i provided so i
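The same checks done here in the console can also be run from Cloud Shell. A minimal sketch, assuming the resource names from this demo (bowtie-network and its firewall rules); the helper just wraps the relevant gcloud list commands:

```shell
# Verify the deployed resources from the command line instead of the
# console. The network name follows this demo's naming (bowtie-network).
verify_deployment() {
  local network="$1"
  # Instances created by the deployment:
  gcloud compute instances list
  # The custom VPC network:
  gcloud compute networks list --filter="name=${network}"
  # Its firewall rules (SSH and web server access in this demo):
  gcloud compute firewall-rules list --filter="network=${network}"
}
```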
810:52 using the metadata that i provided so i wanted to congratulate you on making it
810:54 wanted to congratulate you on making it to the end of this demo and hope it has
810:57 to the end of this demo and hope it has been extremely useful and gave you an
810:59 been extremely useful and gave you an understanding of how infrastructure is
811:01 understanding of how infrastructure is code is used in google cloud using their
811:04 code is used in google cloud using their native tools i hope this also triggered
811:06 native tools i hope this also triggered some possible use cases for you that
811:09 some possible use cases for you that will allow you to automate more
811:11 will allow you to automate more resources and configurations in your
811:14 resources and configurations in your environment and allow you to start
811:16 environment and allow you to start innovating on fantastic new ways for
811:19 innovating on fantastic new ways for cicd for those of you who are familiar
811:22 cicd for those of you who are familiar with infrastructure as code this may
811:24 with infrastructure as code this may have been a refresher but will give you
811:26 have been a refresher but will give you some insight for questions on the exam
811:29 some insight for questions on the exam that cover deployment manager and just
811:31 that cover deployment manager and just as a quick note for those of you who are
811:33 as a quick note for those of you who are looking to learn more about
811:35 looking to learn more about infrastructure as code i have put a few
811:37 infrastructure as code i have put a few links in the lesson text going into
811:40 links in the lesson text going into depth on deployment manager and another
811:42 depth on deployment manager and another tool that google recommends called
811:44 tool that google recommends called terraform and so now before you go we
811:47 terraform and so now before you go we want to clean up all the resources that
811:49 want to clean up all the resources that we've deployed to reduce any incurred
811:52 we've deployed to reduce any incurred costs and because deployment manager
811:54 costs and because deployment manager makes it easy we can do it in one simple
811:56 makes it easy we can do it in one simple step so i'm going to head back on over
811:58 step so i'm going to head back on over to my open tab where i have my console
812:01 to my open tab where i have my console open to deployment manager and i'm going
812:03 open to deployment manager and i'm going to head on over to the delete button and
812:05 to head on over to the delete button and simply click on delete now deployment
812:08 simply click on delete now deployment manager gives me the option of deleting
812:10 manager gives me the option of deleting all the resources it created or simply
812:13 all the resources it created or simply deleting the manifest but keeping the
812:15 deleting the manifest but keeping the resources untouched and so you want to
812:17 resources untouched and so you want to select delete bowtie deploy with all of
812:20 select delete bowtie deploy with all of its resources and simply click on delete
812:23 its resources and simply click on delete all and this will initiate the teardown
812:25 all and this will initiate the teardown of all the resources that have been
812:27 of all the resources that have been deployed from the bowtie deploy
812:30 deployed from the bowtie deploy configuration and this will take a few
812:32 configuration and this will take a few minutes to tear down but if you ever
812:34 minutes to tear down but if you ever have a larger configuration to deploy
812:37 have a larger configuration to deploy just as a note it may take a little bit
812:39 just as a note it may take a little bit longer to both deploy and to tear down
812:42 longer to both deploy and to tear down and so just as a recap you've created a
812:45 and so just as a recap you've created a configuration file and two templates in
812:47 configuration file and two templates in the cloud shell editor you then deployed
812:50 the cloud shell editor you then deployed your configuration using deployment
812:52 your configuration using deployment manager through the command line in
812:54 manager through the command line in cloud shell you then verified each
812:57 cloud shell you then verified each individual resource that was deployed
812:59 individual resource that was deployed and verified the configuration of each
813:02 and verified the configuration of each resource congratulations again on a job
813:05 resource congratulations again on a job well done and so that's pretty much all
813:07 well done and so that's pretty much all i wanted to cover in this demo when it
813:09 i wanted to cover in this demo when it comes to deploying resources using
813:12 comes to deploying resources using deployment manager so you can now mark
813:14 deployment manager so you can now mark this as complete and let's move on to
813:16 this as complete and let's move on to the next one
813:17 the next one [Music]
813:21 Welcome back, and in this lesson we're going to learn about Google Cloud load balancing and how it's used to distribute traffic within the Google Cloud platform. Google Cloud load balancing is essential when used with instance groups and Kubernetes clusters, and is pretty much the de facto choice when it comes to balancing traffic coming into, as well as within, your GCP environment. Knowing the differences between the types of load balancers, and which one to use for specific scenarios, is crucial for the exam, as you will be tested on it. There's a lot to cover here, so with that being said, let's dive in.
814:00 with that being said let's dive in now i wanted to start off with some basics
814:02 wanted to start off with some basics with regards to what is low balancing
814:05 with regards to what is low balancing and so when it comes to the low balancer
814:07 and so when it comes to the low balancer itself a low balancer distributes user
814:10 itself a low balancer distributes user traffic across multiple instances of
814:13 traffic across multiple instances of your application so by spreading the
814:15 your application so by spreading the load you reduce the risk of your
814:17 load you reduce the risk of your applications experiencing performance
814:20 applications experiencing performance issues a load balancer is a single point
814:22 issues a load balancer is a single point of entry with either one or multiple
814:25 of entry with either one or multiple back ends and within gcp these back ends
814:28 back ends and within gcp these back ends could consist of either instance groups
814:31 could consist of either instance groups or negs and i'll be getting into any g's
814:34 or negs and i'll be getting into any g's in just a little bit low balancers on
814:36 in just a little bit low balancers on gcp are fully distributed and software
814:40 gcp are fully distributed and software defined so there is no actual hardware
814:43 defined so there is no actual hardware load balancer involved in low balancing
814:46 load balancer involved in low balancing on gcp it is completely software defined
814:49 on gcp it is completely software defined and so there's no need to worry about
814:51 and so there's no need to worry about any hardware any pre-warming time as
814:54 any hardware any pre-warming time as this is all done through software now
814:56 this is all done through software now depending on which low balancer you
814:58 depending on which low balancer you choose google cloud gives you the option
815:01 choose google cloud gives you the option of having either a global load balancer
815:04 of having either a global load balancer or a regional load balancer the load
815:06 or a regional load balancer the load balancers are meant to serve content as
815:09 balancers are meant to serve content as close as possible to the users so that
815:12 close as possible to the users so that they don't experience increased latency
815:14 they don't experience increased latency and gives the users a better experience
815:17 and gives the users a better experience as well as reducing latency on your
815:19 as well as reducing latency on your applications when dealing with low
815:22 applications when dealing with low balancers in between services google
815:25 balancers in between services google cloud also offers auto scaling with
815:27 cloud also offers auto scaling with health checks in their load balancers to
815:30 health checks in their load balancers to make sure that your traffic is always
815:32 make sure that your traffic is always routed to healthy instances and by using
815:35 routed to healthy instances and by using auto scaling able to scale up the amount
815:37 auto scaling able to scale up the amount of instances you need in order to handle
815:40 of instances you need in order to handle the load automatically now as there are
815:43 the load automatically now as there are many different low balancers to choose
815:45 many different low balancers to choose from it helps to know what specific
815:47 from it helps to know what specific aspects you're looking for and how you
815:50 aspects you're looking for and how you want your traffic distributed and so
815:52 want your traffic distributed and so google has broken them down for us into
815:55 google has broken them down for us into these three categories the first
815:57 these three categories the first category is global versus regional
816:00 category is global versus regional global load balancing is great for when
816:02 global load balancing is great for when your back ends are distributed across
816:05 your back ends are distributed across multiple regions
816:06 multiple regions and your users need access to the same
816:09 and your users need access to the same applications and content
816:11 applications and content using a single anycast ip address
816:14 using a single anycast ip address as well when you're looking for ipv6
816:16 as well when you're looking for ipv6 termination global load balancing will
816:19 termination global load balancing will take care of that now when it comes to
816:21 take care of that now when it comes to regional load balancing this is if
816:23 regional load balancing this is if you're looking at serving your back ends
816:25 you're looking at serving your back ends in a single region and handling only
816:28 in a single region and handling only ipv4 traffic now once you've determined
816:31 ipv4 traffic now once you've determined whether or not you need global versus
816:33 whether or not you need global versus regional low balancing the second
816:35 regional low balancing the second category to dive into is external versus
816:39 category to dive into is external versus internal external load balancers are
816:41 internal external load balancers are designed to distribute traffic coming
816:44 designed to distribute traffic coming into your network from the internet
816:46 into your network from the internet and internal load balancers are designed
816:49 and internal load balancers are designed to distribute traffic within your
816:51 to distribute traffic within your network and finally the last category
816:54 network and finally the last category that will help you decide on what type
816:56 that will help you decide on what type of load balancer you need is the traffic
816:59 of load balancer you need is the traffic type and shown here are all the traffic
817:02 type and shown here are all the traffic types that cover http https tcp and udp
817:07 types that cover http https tcp and udp and so now that we've covered the
817:09 and so now that we've covered the different types of load balancing that's
817:11 different types of load balancing that's available on google cloud i wanted to
817:13 available on google cloud i wanted to dive into some more depth on the low
817:16 dive into some more depth on the low balancers themselves here you can see
817:18 balancers themselves here you can see that there are five load balancers
817:20 that there are five load balancers available and i will be going through
817:22 available and i will be going through each one of these in detail now before
817:25 each one of these in detail now before diving into the low balancers themselves
817:28 diving into the low balancers themselves i wanted to introduce you to a concept
817:30 i wanted to introduce you to a concept using gcp
817:32 using gcp for all load balancers called back end
817:35 for all load balancers called back end services how a low balancer knows
817:37 services how a low balancer knows exactly what to do is defined by a
817:40 exactly what to do is defined by a backend service and this is how cloud
817:42 backend service and this is how cloud load balancing knows how to distribute
817:45 load balancing knows how to distribute the traffic the backend service
817:47 the traffic the backend service configuration contains a set of values
817:50 configuration contains a set of values such as the protocol used to connect to
817:52 such as the protocol used to connect to back ends various distribution in
817:54 back ends various distribution in session settings health checks and
817:57 session settings health checks and timeouts these settings provide fine
818:00 timeouts these settings provide fine grain control over how your load
818:02 grain control over how your load balancer behaves an external http or
818:05 balancer behaves an external http or https load balancer must have at least
818:08 https load balancer must have at least one backend service and can have
818:11 one backend service and can have multiple backend services the back ends
818:13 multiple backend services the back ends of a backend service can be either
818:16 of a backend service can be either instance groups or network endpoint
818:18 instance groups or network endpoint groups also known as negs but not a
818:21 groups also known as negs but not a combination of both and so just as a
818:24 combination of both and so just as a note you'll hear me refer to negs over
818:27 note you'll hear me refer to negs over the course of this lesson and so a
818:29 the course of this lesson and so a network endpoint group also known as neg
818:32 network endpoint group also known as neg is a configuration object that specifies
818:35 is a configuration object that specifies a group of back-end endpoints or
818:38 a group of back-end endpoints or services and a common use case for this
818:41 services and a common use case for this configuration is deploying services into
818:44 configuration is deploying services into containers now moving on to the values
818:46 containers now moving on to the values themselves i wanted to first start with
818:48 themselves i wanted to first start with health checks and google cloud uses the
818:51 health checks and google cloud uses the overall health state of each back end to
818:54 overall health state of each back end to determine its eligibility for receiving
818:57 determine its eligibility for receiving new requests or connections back ends
819:00 new requests or connections back ends that respond successfully for the
819:02 that respond successfully for the configured number of times are
819:04 configured number of times are considered healthy back-ends that fail
819:06 considered healthy back-ends that fail to respond successfully for a separate
819:08 to respond successfully for a separate number of times are considered unhealthy
819:11 number of times are considered unhealthy and when a back-end is considered
819:13 and when a back-end is considered unhealthy traffic will not be routed to
819:15 unhealthy traffic will not be routed to it next up is session affinity and
819:18 it next up is session affinity and session affinity sends all requests from
819:21 session affinity sends all requests from the same client to the same back end if
819:24 the same client to the same back end if the back end is healthy and it has
819:26 the back end is healthy and it has capacity service timeout is the next
819:29 capacity service timeout is the next value and this is the amount of time
819:31 value and this is the amount of time that the load balancer waits for a
819:33 that the load balancer waits for a backend to return a full response to a
819:36 backend to return a full response to a request next up is traffic distribution
819:39 request next up is traffic distribution and this comprises of three different
819:41 and this comprises of three different values the first one is a balancing mode
819:44 values the first one is a balancing mode and this defines how the load balancer
819:46 and this defines how the load balancer measures back-end readiness for the new
819:48 measures back-end readiness for the new requests or connections the second one
819:51 requests or connections the second one is target capacity and this defines a
819:53 is target capacity and this defines a target maximum number of connections a
819:55 target maximum number of connections a target maximum rate or target maximum
819:58 target maximum rate or target maximum cpu utilization and the third value for
820:01 cpu utilization and the third value for traffic distribution is capacity scalar
820:04 traffic distribution is capacity scalar and this adjusts overall available
820:06 and this adjusts overall available capacity without modifying the target
820:09 capacity without modifying the target capacity and the last value for back-end
820:12 capacity and the last value for back-end services are back-ends and a back-end is
820:15 services are back-ends and a back-end is a group of endpoints that receive
820:17 a group of endpoints that receive traffic from a google cloud load
820:19 traffic from a google cloud load balancer and there are several types of
820:21 balancer and there are several types of back-ends but the one that we are
820:23 back-ends but the one that we are concentrating on for this section and
820:25 concentrating on for this section and for the exam is the instance group now
820:28 for the exam is the instance group now backend services are not critical to
820:30 backend services are not critical to know for the exam but i wanted to
820:32 know for the exam but i wanted to introduce you to this concept to add a
820:35 introduce you to this concept to add a bit more context for when you are
820:37 bit more context for when you are creating low balancers in any
820:39 creating low balancers in any environment
820:40 environment and will help you understand other
820:42 and will help you understand other concepts in this lesson and so this is
820:44 concepts in this lesson and so this is the end of part one of this lesson it
820:46 the end of part one of this lesson it was getting a bit long so i decided to
820:48 was getting a bit long so i decided to break it up this would be a great
820:50 break it up this would be a great opportunity for you to get up and have a
820:53 opportunity for you to get up and have a stretch get yourself a coffee or tea and
820:56 stretch get yourself a coffee or tea and whenever you're ready join me in part
820:58 whenever you're ready join me in part two where we will be starting
821:00 two where we will be starting immediately from the end of part one so
821:02 immediately from the end of part one so you can now complete this video and i
821:04 you can now complete this video and i will see you in part two
821:12 This is part two of the cloud load balancers lesson, and we'll be starting exactly where we left off in part one. So with that being said, let's dive in.
821:20 Now, before jumping right into the first load balancer that I wanted to introduce, which is the HTTP and HTTPS load balancer, there are a couple of different concepts to cover, and these are the methods of how an HTTP and HTTPS load balancer distributes traffic using forwarding rules: cross-region load balancing and content-based load balancing.
821:43 Touching on cross-region load balancing: when you configure an external HTTP or HTTPS load balancer in the Premium tier, it uses a global external IP address and can intelligently route requests from users to the closest backend instance group or NEG, based on proximity. For example, if you set up instance groups in North America and Europe and attach them to a load balancer's backend service, user requests around the world are automatically sent to the VMs closest to the users, assuming that the VMs pass health checks and have enough capacity. If the closest VMs are all unhealthy, or if the closest instance group is at capacity and another instance group is not at capacity, the load balancer automatically sends requests to the next closest region that has available capacity.
822:39 And so here in this diagram, a user in Switzerland hits the load balancer by going to bowtieinc.co, and because there are VMs that are able to serve that traffic in europe-west6, traffic is routed to that region.
822:54 traffic is routed to that region and so now getting into content based load
822:56 now getting into content based load balancing http and https low balancing
823:00 balancing http and https low balancing supports content based load balancing
823:03 supports content based load balancing using url maps to select a backend
823:06 using url maps to select a backend service based on the requested host name
823:09 service based on the requested host name request path or both for example you can
823:12 request path or both for example you can use a set of instance groups or negs to
823:16 use a set of instance groups or negs to handle your video content and another
823:18 handle your video content and another set to handle static as well as another
823:21 set to handle static as well as another set to handle any images you can also
823:24 set to handle any images you can also use http or https low balancing with
823:28 use http or https low balancing with cloud storage buckets and then after you
823:30 cloud storage buckets and then after you have your load balancer set up you can
823:32 have your load balancer set up you can add cloud storage buckets to it now
823:34 add cloud storage buckets to it now moving right along when it comes to http
823:38 moving right along when it comes to http and https load balancer this is a global
823:42 and https load balancer this is a global proxy based layer 7 low balancer which
823:45 proxy based layer 7 low balancer which is at the application layer and so just
823:47 is at the application layer and so just as a note here with all the other low
823:50 as a note here with all the other low balancers that are available in gcp the
823:52 balancers that are available in gcp the http and https low balancer is the only
823:57 http and https low balancer is the only layer 7 load balancer all the other low
823:59 layer 7 load balancer all the other low balancers in gcp are layer 4 and will
824:02 balancers in gcp are layer 4 and will work at the network layer and so this
824:04 work at the network layer and so this low balancer enables you to serve your
824:07 low balancer enables you to serve your applications worldwide behind a single
824:10 applications worldwide behind a single external unicast ip address external
824:13 external unicast ip address external http and https load balancing
824:17 http and https load balancing distributes http and https traffic to
824:21 distributes http and https traffic to back ends hosted on compute engine and
824:24 back ends hosted on compute engine and gke external http and https load
824:28 gke external http and https load balancing is implemented on google front
824:31 balancing is implemented on google front ends or gfes as shown here in the
824:34 ends or gfes as shown here in the diagram gfes are distributed globally
824:37 diagram gfes are distributed globally and operate together using google's
824:39 and operate together using google's global network and control plane in the
824:42 global network and control plane. In the premium tier, GFEs offer cross-regional load balancing, directing traffic to the closest healthy backend that has capacity and terminating HTTP and HTTPS traffic as close as possible to your users. With the standard tier, load balancing is handled regionally.
825:02 This load balancer is available to be used both externally and internally, which makes it a global external and internal load balancer. It also gives support for HTTPS and SSL, which covers TLS for encryption in transit as well.
825:20 This load balancer accepts all traffic, whether it is IPv4 or IPv6, and just know that IPv6 traffic will terminate at the load balancer, which then forwards it as IPv4. So it doesn't really matter which type of traffic you're sending: the load balancer will still send the traffic to the backend using IPv4.
825:42 This traffic is distributed by location or by content, as shown in the previous diagram. Forwarding rules are in place to distribute defined targets to each target pool for the instance groups. Again, defined targets can be content-based, and therefore, as shown in the previous diagram, video content could go to one target whereas static content could go to another target.
826:08 URL maps direct your requests based on rules, so you can create a set of rules depending on what type of traffic you want to direct and put them in maps for requests. SSL certificates are needed for HTTPS, and these can be either Google-managed or self-managed.
826:26 And just as a quick note here: the ports used for HTTP are 80 and 8080, and on HTTPS the port that is used is port 443.
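To make the URL map idea concrete, here is a rough gcloud sketch of content-based routing. All resource names here (the backend services, the map, the host) are hypothetical, and the backend services are assumed to already exist:

```shell
# Assumes two backend services already exist:
#   web-backend (default) and video-backend (for video content).
gcloud compute url-maps create example-url-map \
    --default-service=web-backend

# Send /video/* requests to the video backend; everything else
# falls through to the default service.
gcloud compute url-maps add-path-matcher example-url-map \
    --path-matcher-name=video-matcher \
    --new-hosts=example.com \
    --default-service=web-backend \
    --path-rules="/video/*=video-backend"
```

These commands only sketch the routing layer; a full HTTP(S) load balancer also needs a target proxy and a global forwarding rule in front of the URL map.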
826:40 Now moving on to the next load balancer: the SSL proxy. SSL proxy load balancing is a reverse proxy load balancer that distributes SSL traffic coming from the internet to your VM instances. When using SSL proxy load balancing for your SSL traffic, user SSL connections are terminated at the load balancing layer and then proxied to the closest available backend instances using either SSL or TCP.
827:08 With the premium tier, SSL proxy load balancing can be configured as a global load balancing service; with the standard tier, the SSL proxy load balancer handles load balancing regionally. This load balancer also distributes traffic by location only.
827:28 SSL proxy load balancing lets you use a single IP address for all users worldwide, and it is a layer 4 load balancer, operating at the transport layer. This load balancer supports TCP with SSL offload, and this is something specific to remember for the exam: it is not like the HTTP or HTTPS load balancer, where we can use specific rules or specific configurations in order to direct traffic.
827:56 The SSL proxy load balancer supports both IPv4 and IPv6, but again, IPv6 traffic terminates at the load balancer, which forwards the traffic to the backend as IPv4. Forwarding rules are in place to distribute each defined target to its proper target pool, and encryption is supported by configuring backend services to accept all the traffic over SSL.
828:23 Now, just as a note, it can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL, and carries a number of open ports to support them.
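As a hedged sketch of the pieces involved, the commands below create an SSL proxy and a global forwarding rule for it. The backend service and certificate names are hypothetical and assumed to already exist:

```shell
# Hypothetical: ssl-backend (backend service) and example-cert
# (a Google-managed or self-managed SSL certificate).
gcloud compute target-ssl-proxies create example-ssl-proxy \
    --backend-service=ssl-backend \
    --ssl-certificates=example-cert

# Global forwarding rule terminating SSL on port 443.
gcloud compute forwarding-rules create example-ssl-rule \
    --global \
    --target-ssl-proxy=example-ssl-proxy \
    --ports=443
```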
828:38 Moving on, the next load balancer is the TCP proxy. The TCP proxy load balancer is a reverse proxy load balancer that distributes TCP traffic coming from the internet to your VM instances. When using TCP proxy load balancing, traffic coming over a TCP connection is terminated at the load balancing layer and then forwarded to the closest available backend using TCP or SSL. So this is where the load balancer will determine which instances are at capacity and send traffic to those instances that are not.
829:11 Like SSL proxy load balancing, TCP proxy load balancing lets you use a single IP address for all users worldwide, and the TCP proxy load balancer automatically routes traffic to the backends that are closest to the user. This is a layer 4 load balancer, and again it can serve traffic both globally and externally.
829:33 TCP proxy distributes traffic by location only and is intended specifically for non-HTTP traffic, although you can decide if you want to use SSL between the proxy and your backend, and you can do this by selecting a certificate on the backend.
829:51 Again, this type of load balancer supports IPv4 and IPv6 traffic, and IPv6 traffic will terminate at the load balancer, which forwards that traffic to the backend as IPv4 traffic. Now, TCP proxy load balancing is intended for TCP traffic and supports many well-known ports, such as port 25 for Simple Mail Transfer Protocol, or SMTP.
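The setup mirrors the SSL proxy. As a rough sketch with a hypothetical backend service name, a TCP proxy fronting SMTP traffic on port 25 might look like:

```shell
# Hypothetical backend service: tcp-backend.
gcloud compute target-tcp-proxies create example-tcp-proxy \
    --backend-service=tcp-backend

# Global forwarding rule passing TCP traffic on port 25 (SMTP).
gcloud compute forwarding-rules create example-tcp-rule \
    --global \
    --target-tcp-proxy=example-tcp-proxy \
    --ports=25
```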
830:19 Next up we have the network load balancer. Now, the TCP/UDP network load balancer is a regional pass-through load balancer: a network load balancer distributes TCP or UDP traffic among instances in the same region. Network load balancers are not proxies, and therefore responses from the backend VMs go directly to the clients, not back through the load balancer. The term for this is direct server return, as shown here in the diagram.
830:48 This is a layer 4 regional load balancer, and an external load balancer as well, that serves regional locations. It supports either TCP or UDP, but not both, although it can load balance UDP, TCP, and SSL traffic on the ports that are not supported by the TCP proxy and SSL proxy. SSL traffic can still be decrypted, but by your backend instead of the load balancer itself.
831:17 Traffic is also distributed by incoming protocol data, this being protocol, scheme, and scope. There is no TLS offloading or proxying, and forwarding rules are in place to distribute defined targets to their target pools, and this is for TCP and UDP only; other protocols use target instances as opposed to instance groups.
831:42 Lastly, a network load balancer can also only support self-managed SSL certificates, as opposed to Google-managed certificates.
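As a rough sketch of the pass-through setup, with hypothetical names throughout, a network load balancer is typically built from a regional target pool plus a forwarding rule:

```shell
# Hypothetical regional target pool for the network load balancer.
gcloud compute target-pools create example-pool \
    --region=us-central1

# Add existing (hypothetical) instances to the pool.
gcloud compute target-pools add-instances example-pool \
    --instances=web-1,web-2 \
    --instances-zone=us-central1-a

# Regional forwarding rule passing TCP traffic on port 80 through
# to the pool; responses return directly to clients.
gcloud compute forwarding-rules create example-nlb-rule \
    --region=us-central1 \
    --target-pool=example-pool \
    --ports=80
```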
831:54 And so the last load balancer to introduce is the internal load balancer. Now, an internal TCP or UDP load balancer is a layer 4 regional load balancer that enables you to distribute traffic behind an internal load balancing IP address that is accessible only to your internal VM instances. Internal TCP and UDP load balancing distributes traffic among VM instances in the same region.
832:22 This load balancer supports TCP or UDP traffic, but not both, and as I said before, this type of load balancer is used to balance traffic within GCP across instances. This load balancer cannot be used for balancing internet traffic, as it is internal only.
832:39 Traffic is automatically sent to the backend, as it does not terminate client connections, and for forwarding rules this load balancer follows specific requirements: you need to specify at least one and up to five ports by number, or you must specify ALL to forward traffic to all ports. Now again, like the network load balancer, you can use either TCP or UDP.
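A minimal sketch of an internal forwarding rule, assuming a regional internal backend service named internal-backend already exists (all names hypothetical); note the one-to-five port rule:

```shell
# Internal forwarding rule: between one and five ports by number
# (here 80 and 443), or --ports=ALL to forward all ports.
gcloud compute forwarding-rules create example-internal-rule \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --backend-service=internal-backend \
    --ports=80,443 \
    --network=default \
    --subnet=default
```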
833:08 And so that's pretty much all I had to cover with this lesson on load balancing. Please remember that for the exam you will need to know the differences between them all. In my experience, there are a few questions that come up on the exam where you will need to know which load balancer to use, and so a good idea might be to dive into the console and have a look at the options, as well as going back through this lesson as a refresher to understand each use case.
833:35 This is also a crucial component in any environment, especially when serving applications to the internet for any three-tier web application or Kubernetes cluster. And so that pretty much sums up this lesson on load balancing. You can now mark this lesson as complete, and let's move on to the next one.
834:02 Welcome back. In this lesson I will be going in depth on instance groups, along with instance templates. Instance groups are a great way to set up a group of identical servers, and used in conjunction with instance groups, instance templates handle the instance properties needed to deploy the instance groups into your environment. This lesson will dive into the details of the features and use cases, and how instance groups and instance templates work together to create a highly scalable and performant environment. Now, there's a lot to cover here, so with that being said, let's dive in.
834:40 Now, an instance group is a collection of VM instances that you can manage as a single entity. Compute Engine offers two kinds of VM instance groups: managed and unmanaged. Managed instance groups, or MIGs, let you operate applications on multiple identical VMs. You can make your workload scalable and highly available by taking advantage of automated MIG services like autoscaling, autohealing, regional and zonal deployments, and automatic updating, and I'll be getting into these services in just a sec.
835:12 Now, when it comes to unmanaged instance groups, they also let you load balance across a fleet of VMs, but this is something that you need to manage yourself, and I'll be going deeper into unmanaged instance groups a bit later. Right now I wanted to take some time to go through the features and use cases of MIGs in a bit more detail for some more context, starting off with the use cases.
835:36 Now, MIGs are great for stateless serving workloads, such as website front ends, web servers, and web applications, as the application does not preserve its state and saves no data to persistent storage; all user and session data stays with the client, which makes scaling up and down quick and easy.
835:56 MIGs are also great for stateless batch workloads, and these are high-performance or high-throughput compute workloads, such as image processing from a queue.
836:06 And lastly, you can build highly available stateful workloads using stateful managed instance groups, or stateful MIGs. Stateful workloads include applications with stateful data or configuration, such as databases, legacy monolith-type applications, and long-running batch computations with checkpointing. You can improve the uptime and resiliency of these types of applications with autohealing, controlled updates, and multi-zone deployments, while preserving each instance's unique state, including instance names, persistent disks, and metadata.
836:47 Now that I've covered the types of workloads that are used with MIGs, I wanted to dive into the features, starting with autohealing. When it comes to autohealing, managed instance groups maintain high availability of your applications by proactively keeping your instances in a running state: a MIG automatically recreates an instance that is not running.
837:09 Managed instance groups also take care of application-based autohealing, and this improves application availability by relying on a health check that detects things like freezing, crashing, or overloading. If a health check determines that an application has failed on a VM, the MIG autohealer automatically recreates that VM instance.
837:32 The health checks used to monitor MIGs are similar to the health checks used for load balancing, with a few small differences. Load balancing health checks help direct traffic away from unresponsive instances and toward healthy ones, but these health checks cannot recreate instances, whereas MIG health checks proactively signal to delete and recreate instances that become unhealthy.
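As a hedged gcloud sketch of wiring up autohealing: the health check and MIG names below are hypothetical, and the regional MIG is assumed to already exist:

```shell
# HTTP health check that marks an instance unhealthy after
# three failed checks at 10-second intervals.
gcloud compute health-checks create http example-hc \
    --port=80 \
    --check-interval=10s \
    --unhealthy-threshold=3

# Attach it to the MIG for autohealing; --initial-delay gives new
# instances time to boot before the autohealer starts checking.
gcloud compute instance-groups managed update example-mig \
    --region=us-central1 \
    --health-check=example-hc \
    --initial-delay=300
```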
837:57 Moving on to the managed instance group's regional, or multi-zone, feature: you have the option of creating regional MIGs or zonal MIGs. Regional MIGs provide higher availability compared to zonal MIGs, because the instances in a regional MIG are spread across multiple zones in a single region.
838:16 Google recommends regional MIGs over zonal MIGs, as a regional MIG can manage twice as many instances as a zonal MIG: 2,000 instances instead of 1,000. You can also spread your application load across multiple zones, instead of a single zone or managing multiple zonal MIGs across different zones, and this protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions.
838:44 In the case of a zonal failure, or if a group of instances in a zone stops responding, a regional MIG keeps supporting your instances by continuing to serve traffic to the instances in the remaining zones.
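As a rough sketch with hypothetical names, a regional MIG is created from an instance template (templates are covered later in this lesson) and spreads its instances across the region's zones:

```shell
# Hypothetical instance template capturing the VM configuration.
gcloud compute instance-templates create example-template \
    --machine-type=e2-medium \
    --image-family=debian-11 \
    --image-project=debian-cloud

# Regional MIG: three instances spread across zones in us-central1.
gcloud compute instance-groups managed create example-mig \
    --region=us-central1 \
    --template=example-template \
    --size=3
```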
839:00 Now, Cloud Load Balancing can use instance groups to serve traffic, so you can add instance groups to a target pool or to a backend. An instance group is a type of backend, and the instances in the instance group respond to traffic from the load balancer. The backend service, in turn, knows which instances it can use, how much traffic they can handle, and how much traffic they are currently handling. In addition, the backend service monitors health checking and does not send new connections to unhealthy instances.
839:34 Now, when your applications require additional compute resources, MIGs support autoscaling, which dynamically adds or removes instances from the MIG in response to an increase or decrease in load. You can turn on autoscaling and configure an autoscaling policy to specify how you want the group to scale.
839:53 Not only will autoscaling scale up to meet the load demands, it will also shrink and remove instances as the load decreases, to reduce your costs. Autoscaling policies include scaling based on CPU utilization, load balancing capacity, and Cloud Monitoring metrics.
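A hedged sketch of a CPU-based autoscaling policy on a hypothetical, already-existing regional MIG:

```shell
# Scale between 2 and 10 instances, targeting 60% average CPU;
# the cool-down period gives new instances time to warm up
# before their metrics count toward scaling decisions.
gcloud compute instance-groups managed set-autoscaling example-mig \
    --region=us-central1 \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --target-cpu-utilization=0.6 \
    --cool-down-period=90
```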
840:15 And so, when it comes to auto updating, you can easily and safely deploy new versions of software to instances in a MIG. The rollout of an update happens automatically based on your specifications, and you can control the speed and scope of the deployment in order to minimize disruptions to your application. You can optionally perform rolling updates, as well as partial rollouts for canary testing.
840:40 For those who don't know, rolling updates allow updates to take place with zero downtime, by incrementally replacing instances with new ones. As well, canary testing is a way to reduce risk and validate new software by releasing it to a small percentage of users. With canary testing you can deliver to certain groups of users at a time; this is also referred to as staged rollouts, and it is a best practice in DevOps and software development.
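These two rollout styles can be sketched with gcloud as follows (all template and MIG names are hypothetical):

```shell
# Rolling update: incrementally replace every instance with the
# new (hypothetical) template, with zero downtime.
gcloud compute instance-groups managed rolling-action start-update example-mig \
    --region=us-central1 \
    --version=template=example-template-v2

# Canary: keep v1 as the main version but roll v2 out to only
# 10% of the group first.
gcloud compute instance-groups managed rolling-action start-update example-mig \
    --region=us-central1 \
    --version=template=example-template-v1 \
    --canary-version=template=example-template-v2,target-size=10%
```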
841:11 Now, there are a few more things that I wanted to point out that relate to MIGs. You can reduce the cost of your workload by using preemptible VM instances in your instance group; when they are deleted, autohealing will bring the instances back once preemptible capacity becomes available again.
841:29 You can also deploy containers to the instances in managed instance groups. When you specify a container image in an instance template and use it to create a MIG, each VM is created with a Container-Optimized OS image that includes Docker, and your container starts automatically on each VM in the group.
841:47 And finally, when creating MIGs, you must define the VPC network that they will reside in, although if you don't define the network, Google Cloud will attempt to use the default network.
842:01 Now, moving on to unmanaged instance groups for just a minute. Unmanaged instance groups can contain heterogeneous instances, and these are instances of mixed sizes of CPU and RAM, as well as mixed instance types, and you can add and remove these instances from the group whenever you choose.
842:18 There's a major downside to this, though: unmanaged instance groups do not offer autoscaling, autohealing, rolling update support, multi-zone support, or the use of instance templates, and they are not a good fit for deploying highly available and scalable workloads. You should only use unmanaged instance groups if you need to apply load balancing to groups of these mixed types of instances, or if you need to manage the instances yourself.
842:47 So unmanaged instance groups are designed for very special use cases where you will need to mix instance types. In almost all cases you will be using managed instance groups, as they were intended to capture the benefits of all the features they have to offer.
843:04 now in order to launch an instance group
843:06 into any environment you will need
843:08 another resource to do this and this is
843:11 where instance templates come into play
843:14 an instance template is a resource that
843:16 you can use to create vm instances and
843:19 managed instance groups instance
843:22 templates define the machine type boot
843:24 disk image or container image as well as
843:28 labels and other instance properties you
843:30 can then use an instance template to
843:32 create a mig or vm instance instance
843:35 templates are an easy way to save a vm
843:38 instance's configuration so you can use
843:41 it later to recreate vms or groups of
843:44 vms an instance template
843:46 is a global resource that is not bound
843:48 to a zone or region although you can
843:51 restrict a template to a zone by calling
843:54 out specific zonal resources now there
843:56 is something to note for when you are
843:58 ever using migs if you want to create a
844:01 group of identical instances you must
844:04 use an instance template to create a mig
844:07 and this is something you should always keep
844:08 at the front of your mind when using migs
844:11 these two resources both instance
844:13 templates and managed instance groups go
844:16 hand in hand now some other things to
844:18 note are that instance templates are
844:20 designed to create instances with
844:23 identical configurations so you cannot
844:26 update an existing instance template or
844:28 change an instance template after you
844:30 create it if you need to make changes to
844:32 the configuration
844:34 create a new instance template you can
844:36 create a template based on an existing
844:38 instance template or based on an
844:40 existing instance to use an existing vm
844:43 to make a template you can save the
844:45 configuration using the gcloud command
844:48 gcloud compute instance-templates create or
844:52 to use the console you can simply go to
844:54 the instance templates page click on the
844:56 template that you want to update and
844:58 click on create similar the last thing
845:00 that i wanted to point out is that you
845:02 can use custom or public images in your
845:06 instance templates and so that's pretty
845:08 much all i had to cover when it comes to
845:11 instance groups and instance templates
845:13 managed instance groups are great for
845:15 when you're looking at high availability
845:17 as a priority and letting migs do all
845:20 the work of keeping your environment up
845:22 and running and so you can now mark this
845:24 lesson as complete and whenever you're
845:26 ready join me in the next one where we
845:28 go hands-on with instance groups
845:31 instance templates and load balancers in
845:33 a demo
845:41 welcome back in this demo we're going to put everything that we've learned
845:43 together in a hands-on demo called
845:45 managing bow ties we're going to create
845:47 an instance template and next we're
845:49 going to use it to create an instance
845:51 group we're then going to create a load
845:53 balancer with a new back end and create
845:56 some health checks along the way we're
845:58 then going to verify that all instances
846:00 are working by browsing to the load
846:02 balancer ip and verifying the website
846:05 application we're then going to stress
846:07 test one of the instances to simulate a
846:10 scale out using auto scaling and then
846:12 we're going to simulate scaling the
846:14 instance group back in now there's quite
846:16 a bit to do here so with that being said
846:19 let's dive in so here i am logged in as
846:22 tony bowties at gmail.com under project
846:26 bowtie inc and so the first thing that
846:28 you want to do is you want to make sure
846:29 that you have a default vpc network
846:32 already created and so just to double
846:34 check i'm going to go over to the
846:35 navigation menu i'm going to scroll down
846:38 to vpc network
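[editor's note: the same check can be done from cloud shell or any terminal with the gcloud CLI authenticated against the project — a hedged aside, not a step in the demo]

```shell
# list the vpc networks in the current project; a row named "default"
# in the output means the default vpc network exists
gcloud compute networks list
```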
846:41 and yes i do have a default vpc network
846:44 so i'm going to go ahead and start
846:45 creating my resources and so now what i
846:48 want to do is i want to create my
846:49 instance template and so in order to do
846:51 that i'm going to go back up to the
846:53 navigation menu i'm going to go down to
846:55 compute engine and go up to instance
846:58 templates as you can see i currently
847:00 have no instance templates and yours
847:02 should look the same and so you can go
847:04 ahead and click on create instance
847:06 template and so just as a note there are
847:08 no monthly costs associated with
847:11 instance templates but this estimate
847:13 here on the right is to show you the
847:15 cost of each instance you will be
847:17 creating with this template okay so
847:19 getting right into it i'm going to name
847:21 this instance template
847:23 bowtie template and since we're spinning
847:26 up a lot of vms you want to be conscious
847:28 of costs and so under series you're
847:30 going to click on the drop down and
847:32 you're going to select n1 and under
847:34 machine type you're going to select f1
847:37 micro and this is the smallest instance
847:39 type as well as the cheapest within
847:41 google cloud you can go ahead and scroll
847:43 down right to the bottom here under
847:45 firewall you want to check off allow
847:47 http traffic next you want to select
847:50 management security disks networking and
847:52 sole tenancy you scroll down a little
847:55 bit and under startup script you're
847:57 going to paste in the script that's
847:59 available in the repo and you will find
848:02 a link to this script and the repo in
848:04 the lesson text and so you can leave all
848:06 the other options at their defaults and
848:09 simply click on create it's going to
848:10 take a couple minutes here okay and the
848:12 instance template is ready and so the
848:14 next step that you want to do is create
848:17 an instance group and as i said in a
848:19 previous lesson in order to create an
848:21 instance group you need an instance
848:23 template hence why we made the instance
848:25 template first okay and our instance
848:27 template has been created and so now
848:29 that you've created your instance
848:31 template you can head on over to
848:33 instance groups here in the left hand
848:35 menu and as expected there are no
848:37 instance groups and so you can go ahead
848:39 and click on the big blue button and
848:41 create an instance group you're going to
848:43 make sure that new managed instance
848:45 group stateless is selected and here you
848:47 have the option of choosing a stateful
848:49 instance group as well as an unmanaged
848:52 instance group and so we're going to
848:53 keep things stateless and so for the
848:55 name of the instance group you can
848:57 simply call this bowtie group i'm going
849:00 to use the same name in the description
849:01 and under location you want to check off
849:04 multiple zones and under region you want
849:06 to select us east one and if you click
849:09 on configure zones you can see here that
849:11 you can select all the different zones
849:13 that are available in that region that you
849:16 choose to have your instances in and so
849:18 i'm going to keep it under all three
849:19 zones i'm going to scroll down here a
849:21 little bit and under instance template
849:24 you should see bowtie template you can
849:26 select that you can scroll down a little
849:28 bit more and here under minimum number
849:30 of instances you want to set the minimum
849:32 number of instances to 3 and under
849:35 maximum number of instances you want to
849:37 set that to 6 and so this is going to be
849:39 double the amount of the minimum number
849:42 of instances so when you're scaled out
849:44 you should have a maximum of 6 instances
849:46 and when you're scaled in or you have
849:49 very low traffic you should only have
849:51 three instances so you can scroll down
849:53 some more and under auto healing you
849:55 want to select the health check and
849:57 you're going to go ahead and create a
849:59 new health check under name you can call
850:01 this healthy bow ties i'm going to use
850:04 the same for the description and i'm
850:06 going to leave the rest as its default
850:08 and go down and click on save and
850:10 continue i'm going to scroll down some
850:11 more and i'm going to leave the rest as
850:13 is and simply click on create and it's
850:16 going to take a couple minutes here and
850:18 so i'm going to pause the video and i'll
850:19 be back in a flash okay and my instance
850:22 group has been created and so to get a
850:24 better look at it i'm going to click on
850:26 bowtie group and i can see here that
850:28 three instances have been created if i
850:30 go up to vm instances you can see here
850:33 that i have three instances but under
850:35 instance groups because i have health
850:38 check enabled it shows that my instances
850:40 are unhealthy and this is because i
850:42 still need to create a firewall rule
850:44 that will allow google's health check
850:46 probes to reach my vm instances and so
850:49 you're going to go ahead and create that
850:51 firewall rule so you can bring the
850:53 health check status up to healthy so i'm
850:55 going to go over to the navigation menu
850:57 and scroll down to vpc network and go
851:00 over to firewall here under firewall as
851:03 expected you have the default firewall
851:05 rules from the default created vpc
851:07 network and so i'm going to go up to
851:09 create firewall and you can name this
851:11 firewall rule allow health check i'm
851:13 going to use the same for the
851:14 description i'm going to scroll down
851:16 here a little bit and under targets i'm
851:18 going to select all instances in the
851:20 network source filter i'm going to leave
851:22 as ip ranges and so here under source
851:25 ip ranges i want to enter in the ip
851:28 addresses for the google cloud health
851:30 check probes and you can find these in
851:32 the documentation and i will also be
851:34 supplying them in the instructions and
851:36 there are two sets of ip addresses that
851:38 need to be entered and just as a note
851:40 you don't need to know this for the exam
851:43 but it's always good to know if you're
851:45 ever adding health checks to any of your
851:47 instances i'm going to scroll down a
851:48 little bit to protocols and ports and
851:50 under tcp i'm going to check it off and
851:53 put in port 80. that's pretty much all
851:55 you have to do here so once you've
851:57 entered all that information in you can
851:59 simply click on create and so now i have
852:01 a firewall rule that will allow health
852:04 checks to be done and so it may take a
852:06 minute or two but if i head back on over
852:09 to my compute engine instances and go
852:11 over to my instance groups
852:14 i'll be able to see that all my
852:16 instances are now healthy and so
852:18 whenever you're creating instance groups
852:19 and you're applying health checks this
852:22 firewall rule is necessary so please be
852:24 aware okay so now that we've created our
852:26 instance templates we've created our
852:29 instance groups and we created a
852:31 firewall rule in order to satisfy health
852:34 checks we can now move on to the next
852:36 step which is creating the load balancer
852:38 so i'm going to go back up to the
852:39 navigation menu and i'm going to scroll
852:42 down to network services and over to
852:44 load balancing and as expected there are
852:47 no load balancers created and so
852:49 whenever you're ready you can click on
852:50 the big blue button and create a new load
852:52 balancer here you have the option of
852:55 creating an http or https load balancer
852:58 along with a tcp load balancer or a udp
853:02 load balancer and because we're serving
853:04 external traffic on port 80 we're going
853:07 to use the http load balancer so you can
853:09 click on start configuration and i'm
853:11 being prompted to decide between
853:13 internet facing or internal only and
853:16 you're going to be accepting traffic
853:17 from the internet to your load balancer
853:19 so make sure that from internet to my
853:21 vms is checked off and simply click
853:23 continue and so next you will be
853:25 prompted with a page with a bunch of
853:28 configurations that you can enter and so
853:30 we'll get to that in just a second but
853:32 first we need to name our load balancer
853:34 and so i'm going to call this
853:36 bowtie dash lb for load balancer and so
853:39 next step for your load balancer is you
853:41 need to configure a back end so you can
853:43 click on back end configuration and here
853:45 you have the option of selecting from
853:48 back-end services or back-end buckets so
853:50 you're going to go ahead and click on
853:52 back-end services and create a back-end
853:54 service and here you will be prompted
853:56 with a bunch of fields to fill out in
853:59 order to create your back-end service
854:01 and you can go ahead and name the
854:02 backend service as bowtie backend
854:05 service back-end type is going to be
854:07 instance group and you can leave the
854:09 protocol named port and timeout as is as
854:12 we're going to be using http under
854:14 instance group and new back-end if you
854:16 select the drop-down you should see your
854:18 available bowtie group instance group
854:21 select that
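[editor's note: for reference, the health check and backend service being configured in the console here can be sketched with gcloud — a hedged example reusing the demo's names, with flags chosen for illustration]

```shell
# an http health check like the "healthy bow ties" one (name adapted to
# valid resource naming)
gcloud compute health-checks create http healthy-bowties --port=80

# a global backend service that uses the health check, then attach the
# regional managed instance group as its backend
gcloud compute backend-services create bowtie-backend-service \
    --protocol=HTTP --health-checks=healthy-bowties --global
gcloud compute backend-services add-backend bowtie-backend-service \
    --instance-group=bowtie-group \
    --instance-group-region=us-east1 \
    --global
```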
854:22 scroll down a little bit and under port
854:24 numbers you can enter in port 80 and you
854:27 can leave all the other options as
854:28 default and simply click on done and so
854:31 if you're ever interested you can always
854:33 add a cache using cloud cdn now i know
854:36 we haven't gone through cloud cdn in
854:38 this course but just know that this is
854:40 google's content delivery network and it
854:42 uses google's global edge network to
854:45 serve content closer to users and this
854:48 accelerates your websites and your
854:50 applications and delivers a better user
854:52 experience for your users okay and moving
854:55 on here under health check if i click on
854:57 the drop down you should see healthy
854:59 bowties you can select that for your health
855:02 check and so just as a note here under
855:04 advanced configurations you can set your
855:07 session affinity your connection
855:09 draining timeout as well as request and
855:11 response headers and so we don't need
855:13 any of that for this demo and so i'm
855:15 going to go ahead and collapse this and
855:17 once you've finished filling in all the
855:19 fields you can simply click on create
855:21 okay and so you should now have your
855:23 back end configuration and your host and
855:26 path rules configured and so the only
855:28 thing that's left to configure is the
855:29 front end so you can go up and click on
855:31 front-end configuration and you can name
855:33 your front-end bowtie front-end service
855:36 going to keep the protocol as http and here
855:39 is where you would select the network
855:41 service tier choosing either premium or
855:43 standard and if you remember in the load
855:45 balancing lesson in order to use this as
855:48 a global load balancer i need to use the
855:50 premium tier okay and we're going to
855:52 keep this as ipv4 with an ephemeral ip
855:55 address on port 80 so once you've
855:57 finished configuring the front end you
855:59 can simply click on done and you can go
856:01 and click on review and finalize and
856:03 this will give you a summary of your
856:05 configuration and so i'm happy with the
856:07 way everything's configured and if you
856:09 are as well you can simply click on
856:11 create and this may take a minute or two
856:14 but it will create your load balancer
856:16 along with your back end and your front
856:18 end so again i'm going to pause the
856:19 video here for just a minute and i'll be
856:21 back before you can say cat in the hat
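[editor's note: while the load balancer is being created — the console's "frontend configuration" roughly maps to three separate resources on the gcloud side; a hedged sketch reusing the demo's names, not the demo's actual commands]

```shell
# the url map routes incoming requests to the backend service
gcloud compute url-maps create bowtie-lb \
    --default-service=bowtie-backend-service

# the target proxy terminates http and consults the url map
gcloud compute target-http-proxies create bowtie-lb-proxy \
    --url-map=bowtie-lb

# a global forwarding rule on port 80 (premium tier, ephemeral ipv4)
# makes this a global external http load balancer
gcloud compute forwarding-rules create bowtie-frontend-service \
    --global --target-http-proxy=bowtie-lb-proxy --ports=80
```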
856:23 okay and my load balancer has been
856:25 created and to get a little bit more
856:27 details i'm going to drill down into it
856:29 and i can see here the details of my
856:31 load balancer along with my monitoring
856:34 and any caching but i don't have any
856:36 caching enabled and therefore nothing is
856:38 showing so going back to the details i
856:40 can see here that i have a new ip
856:42 address for my load balancer and i'll be
856:45 getting into that in just a minute i'm
856:47 going to go back here and i'm going to
856:48 check out my back ends click on bowtie
856:50 backend service and here i can see the
856:53 requests per second as well as my
856:55 configuration and if you do see this
856:57 caution symbol here showing that some of
857:00 your instances are unhealthy it's only
857:02 because the load balancer needs time to
857:04 do a full health check on all the
857:06 instances in the instance group and so
857:09 this will take some time okay and so i'm
857:11 going to go back over and check out my
857:13 front end and there's nothing to drill
857:15 down into with the front end service but
857:17 it does show me my scope the address the
857:20 protocol
857:21 network tier and the load balancer itself
857:24 so this is the end of part one of this
857:26 demo it was getting a bit long so i
857:28 decided to break it up this would be a
857:30 great opportunity for you to get up have
857:33 a stretch get yourself a coffee or tea
857:36 and whenever you're ready part two will
857:38 be starting immediately from the end of
857:40 part one so you can now mark this as
857:43 complete and i'll see you in part two
857:51 this is part two of the managing bow
857:53 ties demo and we will be starting
857:56 exactly where we left off in part one so
857:58 with that being said let's dive in and
858:00 so before you move forward you want to
858:02 make sure that all your instances are
858:05 considered healthy by your load balancer
858:08 and as i can see here all my instances
858:09 in my instance group are considered
858:12 healthy by the load balancer and so just
858:14 to verify this i'm going to go ahead and
858:17 copy the ip address and you can open up
858:19 a new tab in your browser and simply
858:20 paste it in
858:23 and success as you can see here managing
858:26 the production of many bow ties can be
858:28 automated but managing the wearer of
858:31 them definitely cannot another fine
858:33 message from the people at bow tie inc
858:36 now although this is a simple web page i
858:38 used a couple variables just to show you
858:41 the load balancing that happens in the
858:42 background and traffic will be load
858:45 balanced between all of the instances
858:47 in the instance group so if you click on
858:49 refresh then you should see the machine
858:52 name and the data center change so every
858:54 time i click refresh the traffic will be
858:56 routed to a different instance in a
858:58 different zone and so this is a simple
859:01 simulation of how traffic is load balanced
859:03 between the different instances in their different zones
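clicking refresh in the browser is the manual way to watch this rotation; the same behavior can be observed from a terminal with a short loop (a sketch — the address below is a placeholder, not the demo's real ip, and the grep pattern assumes the page mentions the machine name and zone):

```
# LB_IP is a placeholder; substitute your load balancer's address
LB_IP=203.0.113.10
# each request may land on a different backend instance, so the
# machine name / zone shown in the page should vary between fetches
for i in 1 2 3 4 5; do
  curl -s "http://$LB_IP/" | grep -iE 'machine|zone'
done
```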
859:05 okay so now that we've verified the
859:07 website application i'm going to close
859:09 down this tab and so now that we've
859:12 created our instance template our
859:14 instance group and our load balancer
859:16 with its back end and front end service
859:18 and it looks like everything seems to be
859:20 working together nicely we're going to
859:22 go ahead and simulate a scale out using
859:25 auto scaling and so in order to simulate
859:27 this we're going to do a stress test on
859:29 one of the instances so i'm going to
859:31 head back on over to the navigation menu
859:34 scroll down to compute engine and here
859:36 you can ssh into any one of these
859:38 instances and run the stress test from
859:40 there so i'm going to pick here the one
859:42 at the top and whenever you're logged
859:44 in you can simply paste in the command
859:46 that i've included in the instructions
859:48 that will run the stress test and so
859:50 this is a stress test application called
859:52 stress that was included in the startup
859:55 script and this will put stress on
859:57 the server itself and trigger a scale
860:00 out to handle the load and it'll do this
860:02 for 30 seconds
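the command itself is short; a typical invocation of the stress tool looks something like this (a sketch — the exact flags in the demo's instructions may differ):

```
# 'stress' was installed by the instance's startup script;
# load every cpu core for 30 seconds so utilization rises above
# the autoscaler's target and a scale out is triggered
stress --cpu "$(nproc)" --timeout 30
```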
860:05 for 30 seconds so you can go ahead and hit enter and head back over to the
860:07 hit enter and head back over to the console and in about a minute or two you
860:09 console and in about a minute or two you should see some new instances that will
860:11 should see some new instances that will be created by your instance group in
860:14 be created by your instance group in order to handle the load okay and after
860:16 order to handle the load okay and after about a couple minutes it's showing here
860:18 about a couple minutes it's showing here that instances are being created and it
860:20 that instances are being created and it will be scaling out to the maximum
860:22 will be scaling out to the maximum amount of instances that i've set it to
860:25 amount of instances that i've set it to which is six i'm going to drill down
860:27 which is six i'm going to drill down into this
860:28 into this and yes a scale out is happening and
860:30 and yes a scale out is happening and some new instances are being created to
860:32 some new instances are being created to handle the load so i'm going to give it
860:34 handle the load so i'm going to give it just a minute here okay and as you can
860:36 just a minute here okay and as you can see here all the instances have been
860:38 see here all the instances have been created they've been added to the
860:40 created they've been added to the instance group and all of them are
860:42 instance group and all of them are marked as healthy and so just to verify
860:44 marked as healthy and so just to verify that all the instances are working i'm
860:46 that all the instances are working i'm going to go ahead and open up a new tab
860:48 going to go ahead and open up a new tab i'm going to plug in the ip address on
860:50 i'm going to plug in the ip address on my load balancer and i'm going to simply
860:52 my load balancer and i'm going to simply cycle through all these instances to
860:55 cycle through all these instances to make sure that all them are working and
860:57 make sure that all them are working and it looks like i have no issues and so
861:00 it looks like i have no issues and so now that you've simulated a scale out i
861:02 now that you've simulated a scale out i wanted to go ahead and run a scale in
861:05 wanted to go ahead and run a scale in and so i'm first going to close up these
861:06 and so i'm first going to close up these tabs now with regards to scaling there
861:10 tabs now with regards to scaling there is a 10 minute stabilization period that
861:12 is a 10 minute stabilization period that cannot be adjusted for scaling and this
861:15 cannot be adjusted for scaling and this is a built-in feature into google cloud
861:17 is a built-in feature into google cloud now because i respect your time as a
861:19 now because i respect your time as a student i'm going to show you a work
861:21 student i'm going to show you a work around to trigger a scale in sooner
861:24 around to trigger a scale in sooner strictly for this demo and i also wanted
861:26 strictly for this demo and i also wanted to caution that this should never be
861:29 to caution that this should never be done in a production or production-like
861:31 done in a production or production-like environment you should always wait for
861:34 environment you should always wait for the scaling to happen on its own and
861:36 the scaling to happen on its own and never force it this method is being used
861:39 never force it this method is being used strictly for learning purposes to save
861:42 strictly for learning purposes to save you some time and so i'm going to go
861:44 you some time and so i'm going to go ahead to the top menu and click on
861:46 ahead to the top menu and click on rolling restart and replace and this
861:48 rolling restart and replace and this will bring up a new page where you will
861:50 will bring up a new page where you will have the option to either restart or
861:52 have the option to either restart or replace any instances in your instance
861:55 replace any instances in your instance group and so for your purposes under
861:58 group and so for your purposes under operation make sure that you have
862:00 operation make sure that you have restart checked off and this will
862:02 restart checked off and this will restart all of your instances and only
862:04 restart all of your instances and only bring up the ones that are needed so i'm
862:06 bring up the ones that are needed so i'm going to go ahead and click on restart
862:08 going to go ahead and click on restart i'm going to go back to my instance
862:09 i'm going to go back to my instance group console and i'm just going to give
862:11 group console and i'm just going to give this a few minutes to cook and i'll be
862:13 this a few minutes to cook and i'll be right back in a flash okay so it looks
862:15 right back in a flash okay so it looks like the instance group has scaled in
862:17 like the instance group has scaled in and we are now down left to three
862:19 and we are now down left to three instances the minimum that we configured
862:22 instances the minimum that we configured for our instance group and so that
862:24 for our instance group and so that pretty much covers the managing bow ties
862:27 pretty much covers the managing bow ties demo so i wanted to congratulate you on
862:29 demo so i wanted to congratulate you on making it through this demo and i hope
862:32 making it through this demo and i hope that this has been extremely useful in
862:34 that this has been extremely useful in excelling your knowledge on managing
862:36 excelling your knowledge on managing instance templates managed instance
862:38 instance templates managed instance groups and creating load balancers with
862:41 groups and creating load balancers with back-end and front-end services now this
862:44 back-end and front-end services now this was a jam-packed demo and there was a
862:46 was a jam-packed demo and there was a lot to pack in with everything you've
862:48 lot to pack in with everything you've learned from the last few lessons and so
862:51 learned from the last few lessons and so just as a recap you created an instance
862:54 just as a recap you created an instance template with your startup script you
862:55 template with your startup script you then created a new instance group with a
862:58 then created a new instance group with a health check to go with it configuring
863:00 health check to go with it configuring auto scaling for a minimum of three
863:02 auto scaling for a minimum of three instances you then created a firewall
863:05 instances you then created a firewall rule so that the health check probes
863:07 rule so that the health check probes were able to connect to the application
863:09 were able to connect to the application and you then created a load balancer
863:12 and you then created a load balancer with its back end and front-end service
863:14 with its back end and front-end service and verified that the website
863:16 and verified that the website application was indeed up and running
863:19 application was indeed up and running you then ran a stress test to allow a
863:21 you then ran a stress test to allow a simulation of a scale out of your
863:23 simulation of a scale out of your instance group and then simulated a
863:25 instance group and then simulated a scale in of your instance group great
863:28 scale in of your instance group great job and so now that we've completed this
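for reference, the console steps recapped above roughly correspond to the following gcloud commands (a sketch only — resource names, the region, and the flag values here are illustrative, and the demo itself was done entirely in the console):

```
# create a template with the startup script, then a regional managed
# instance group of 3 instances with autoscaling from 3 up to 6
gcloud compute instance-templates create bowtie-template \
    --metadata-from-file=startup-script=startup.sh
gcloud compute instance-groups managed create bowtie-group \
    --template=bowtie-template --size=3 --region=us-east1
gcloud compute instance-groups managed set-autoscaling bowtie-group \
    --region=us-east1 \
    --min-num-replicas=3 --max-num-replicas=6 \
    --target-cpu-utilization=0.6
```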
863:30 job and so now that we've completed this demo you want to make sure that you're
863:32 demo you want to make sure that you're not accumulating any unnecessary costs
863:35 not accumulating any unnecessary costs and so i'm going to go ahead and walk
863:37 and so i'm going to go ahead and walk you through the breakdown of deleting
863:39 you through the breakdown of deleting all these resources so first you're
863:41 all these resources so first you're going to go ahead and delete the load
863:43 going to go ahead and delete the load balancer go back up to the navigation
863:45 balancer go back up to the navigation menu and scroll down to network services
863:48 menu and scroll down to network services and go over to load balancing so i'm
863:50 and go over to load balancing so i'm going to go ahead and check off bow tie
863:52 going to go ahead and check off bow tie lb and simply go up to the top and click
863:54 lb and simply go up to the top and click on delete it's going to ask me if i'm
863:56 on delete it's going to ask me if i'm sure i want to do this i'm also going to
863:58 sure i want to do this i'm also going to select bow tie back end service and i
864:01 select bow tie back end service and i can delete my load balancer and my back
864:03 can delete my load balancer and my back end service all at once i'm going to go
864:05 end service all at once i'm going to go ahead and delete load balancer and the
864:07 ahead and delete load balancer and the selected resources
864:10 selected resources and this should clear up within a few
864:12 and this should clear up within a few seconds okay and our load balancer has
864:14 seconds okay and our load balancer has been deleted i'm going to just go up
864:16 been deleted i'm going to just go up here to the back end make sure
864:18 here to the back end make sure everything's good yeah we're all clean
864:20 everything's good yeah we're all clean same thing with front end and so now you
864:22 same thing with front end and so now you can move on to instance groups so i'm
864:24 can move on to instance groups so i'm going to head back up to the navigation
864:26 going to head back up to the navigation menu go down a compute engine and go up
864:29 menu go down a compute engine and go up to instance groups and here you can just
864:31 to instance groups and here you can just simply check off bow tie group and
864:33 simply check off bow tie group and simply click on delete
864:35 simply click on delete you're going to be prompted with a
864:36 you're going to be prompted with a notification to make sure you want to
864:37 notification to make sure you want to delete bow tie group yes i want to
864:40 delete bow tie group yes i want to delete and again this should take about
864:42 delete and again this should take about a minute okay it actually took a couple
864:44 a minute okay it actually took a couple minutes but my instance group has been
864:46 minutes but my instance group has been deleted and so now i'm going to go over
864:48 deleted and so now i'm going to go over to instance templates and i'm going to
864:50 to instance templates and i'm going to delete my template and check off bow tie
864:52 delete my template and check off bow tie template and simply click delete you're
864:55 template and simply click delete you're going to get a prompt to make sure you
864:56 going to get a prompt to make sure you want to delete your instance template
864:58 want to delete your instance template yes you want to delete
865:00 yes you want to delete and success you've now deleted all your
865:03 and success you've now deleted all your resources although there is one more
865:05 resources although there is one more resource that you will not be billed for
865:07 resource that you will not be billed for but since we're cleaning everything up
865:09 but since we're cleaning everything up we might as well clean that up as well
865:11 we might as well clean that up as well and this is the firewall rule that we
865:12 and this is the firewall rule that we created and go over to the navigation
865:14 created and go over to the navigation menu and scroll down to vpc network
865:18 menu and scroll down to vpc network i'm going to go to firewall here on the
865:20 i'm going to go to firewall here on the left hand menu and here i'm going to
865:22 left hand menu and here i'm going to check off the allow health check
865:24 check off the allow health check firewall rule and simply click on delete
865:27 firewall rule and simply click on delete i'm going to get a prompt to make sure
865:28 i'm going to get a prompt to make sure that i want to delete it yes you want to
865:31 that i want to delete it yes you want to delete i'm going to quickly hit refresh
865:33 delete i'm going to quickly hit refresh and yes we've deleted it and so this
865:36 and yes we've deleted it and so this concludes the end of this demo so you
865:38 concludes the end of this demo so you can now mark this as complete and i'll
865:40 can now mark this as complete and i'll see you in the next one
865:48 welcome back in this next section we
865:50 will be focusing on google cloud's
865:53 premier container orchestration service
865:56 called kubernetes but before we can dive
865:58 right into kubernetes and the benefits
866:00 that it gives to containers you'll need
866:03 an understanding of what containers are
866:06 and what value they provide in this
866:08 lesson i will be covering the difference
866:10 between virtual machines and containers
866:13 what containers are how they work and
866:16 the value proposition they bring so with
866:18 that being said let's dive in
866:19 in now for those of you who didn't know
866:22 now for those of you who didn't know container technology gets its name from
866:24 container technology gets its name from the shipping industry products get
866:26 the shipping industry products get placed into standardized shipping
866:29 placed into standardized shipping containers which are designed to fit
866:31 containers which are designed to fit into the ship that accommodates the
866:33 into the ship that accommodates the container's standard size instead of
866:36 container's standard size instead of having various sizes of packaging now by
866:40 having various sizes of packaging now by standardizing this process and keeping
866:42 standardizing this process and keeping the items together the container can be
866:45 the items together the container can be moved as a unit and it costs less to do
866:48 moved as a unit and it costs less to do it this way as well the standardization
866:51 it this way as well the standardization allows for consistency when packing and
866:54 allows for consistency when packing and moving the containers placing them on
866:57 moving the containers placing them on ships and docks as well as storage no
867:00 ships and docks as well as storage no matter where the container is it always
867:03 matter where the container is it always stays the same size and the contents
867:06 stays the same size and the contents stay isolated from all the other
867:08 stay isolated from all the other containers that they are stacked with
867:10 containers that they are stacked with and so now before we get into the
867:12 and so now before we get into the details of containers i wanted to cover
867:15 details of containers i wanted to cover how we got here and why
867:18 how we got here and why so a great way to discuss containers is
867:21 so a great way to discuss containers is through their comparison to virtual
867:23 through their comparison to virtual machines now as we discussed in a
867:25 machines now as we discussed in a previous lesson when it comes to vms the
867:29 previous lesson when it comes to vms the systems are virtualized through a
867:31 systems are virtualized through a hypervisor that sits on top of the
867:33 hypervisor that sits on top of the underlying host infrastructure the
867:36 underlying host infrastructure the underlying hardware is virtualized so
867:39 underlying hardware is virtualized so that multiple operating system instances
867:42 that multiple operating system instances can run on the hardware each vm runs its
867:45 can run on the hardware each vm runs its own operating system and has access to
867:49 own operating system and has access to virtualized resources representing the
867:52 virtualized resources representing the underlying hardware due to this process
867:55 underlying hardware due to this process vms come with the cost of large overhead
867:59 vms come with the cost of large overhead in cpu memory and disk as well can be
868:03 in cpu memory and disk as well can be very large due to the fact that each vm
868:06 very large due to the fact that each vm needs its own individual operating
868:08 needs its own individual operating system there also lacks standardization
868:11 system there also lacks standardization between each vm making them unique due
868:15 between each vm making them unique due to the os configuration the software
868:18 to the os configuration the software installed and the software libraries
868:20 installed and the software libraries thus not making it very portable to be
868:24 thus not making it very portable to be able to run in any environment now when
868:27 able to run in any environment now when dealing with containers things are run
868:29 dealing with containers things are run very differently the underlying host
868:31 very differently the underlying host infrastructure is still there but
868:33 infrastructure is still there but instead of just using a hypervisor and
868:36 instead of just using a hypervisor and abstracting the underlying hardware
868:38 abstracting the underlying hardware containerization takes it one step
868:40 containerization takes it one step further and abstracts the operating
868:43 further and abstracts the operating system
868:44 system thus
868:45 thus leaving the application with all of its
868:47 leaving the application with all of its dependencies in a neatly packaged
868:51 dependencies in a neatly packaged standardized container this is done by
868:53 standardized container this is done by installing the operating system on top
868:56 installing the operating system on top of the host infrastructure
868:58 of the host infrastructure and then a separate layer on top of the
869:00 and then a separate layer on top of the host operating system called the
869:02 host operating system called the container engine now instead of having
869:05 container engine now instead of having their own operating system the
869:07 their own operating system the containers share the operating system
869:09 containers share the operating system kernel with other containers
869:12 kernel with other containers while operating independently
869:14 while operating independently running just the application code and
869:17 running just the application code and the dependencies needed to run that
869:19 the dependencies needed to run that application this allows each container
869:22 application this allows each container to consume very little memory or disk
869:25 to consume very little memory or disk making containers very lightweight
869:28 making containers very lightweight efficient and portable containerized
869:31 efficient and portable containerized applications can start in seconds and
869:33 applications can start in seconds and many more instances of the application
869:36 many more instances of the application can fit onto the machine compared to a
869:39 can fit onto the machine compared to a vm environment this container can now be
869:42 vm environment this container can now be brought over to other environments
869:44 brought over to other environments running docker and able to run without
869:47 running docker and able to run without having the worries of running into
869:50 having the worries of running into issues of compatibility now although
869:52 issues of compatibility now although there are a few different container
869:54 there are a few different container engines out there the one that has
869:56 engines out there the one that has received the most popularity is docker
869:59 received the most popularity is docker and this is the engine that we will be
870:01 and this is the engine that we will be referring to for the remainder of this
870:03 referring to for the remainder of this course now a docker image is a
870:06 course now a docker image is a collection or stack of layers that are
870:09 collection or stack of layers that are created from sequential instructions on
870:11 created from sequential instructions on a docker file so each line in the
870:14 a docker file so each line in the dockerfile is run line by line and a
870:17 dockerfile is run line by line and a unique read-only layer is written to the
870:20 unique read-only layer is written to the image what makes docker images unique is
870:23 image what makes docker images unique is that each time you add another
870:25 that each time you add another instruction in the docker file a new
870:28 instruction in the docker file a new layer is created now going through a
870:30 layer is created now going through a practical example here shown on the
870:33 practical example here shown on the right is a docker file and we will be
870:35 right is a docker file and we will be able to map each line of code to a layer
870:38 able to map each line of code to a layer shown on the docker image on the left
870:41 shown on the docker image on the left the line marked from
870:43 the line marked from shows the base image that the image will
870:45 shows the base image that the image will be using the example shown here shows
870:48 be using the example shown here shows that the ubuntu image version 12.04
870:52 that the ubuntu image version 12.04 will be used next the run instruction is
870:55 will be used next the run instruction is used which will perform a general update
870:59 used which will perform a general update install apache 2 and output a message to
871:02 install apache 2 and output a message to be displayed that is written to the
871:04 be displayed that is written to the index.html file next up is the working
871:07 index.html file next up is the working directories and these are the
871:09 directories and these are the environment variables set by using an
871:12 environment variables set by using an env instruction and this will help run
871:14 env instruction and this will help run the apache runtime next layer is the
871:17 the apache runtime next layer is the expose instruction and this is used to
871:19 expose instruction and this is used to expose the container's port on 8080 and
871:23 expose the container's port on 8080 and lastly the command layer is an
871:26 lastly the command layer is an instruction that is executing the apache
871:29 instruction that is executing the apache web server from its executable path and
871:32 web server from its executable path and so this is a great example of how a
871:34 so this is a great example of how a docker file is broken down from each
871:36 docker file is broken down from each line to create the layers of this image
871:39 line to create the layers of this image and so just as a note here each docker
871:42 and so just as a note here each docker image starts with a base image as well
871:45 image starts with a base image as well each line in a docker file creates a new
871:48 each line in a docker file creates a new layer that is added to the image and
871:51 layer that is added to the image and finally all the layers in a docker image
871:54 finally all the layers in a docker image are read only and cannot be changed
871:56 are read only and cannot be changed unless the docker file is adjusted to
871:59 unless the docker file is adjusted to reflect that change
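put together, the dockerfile described above looks something like this (a sketch — the message text and apache environment details are illustrative, not the exact file from the slide):

```
FROM ubuntu:12.04
RUN apt-get update && \
    apt-get install -y apache2 && \
    echo "another fine message from bow tie inc" > /var/www/index.html
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
EXPOSE 8080
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```

each instruction here becomes its own read-only layer in the resulting image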
872:00 reflect that change so now how do we get from a docker image
872:03 so now how do we get from a docker image to a container well a running docker
872:06 to a container well a running docker container is actually an instantiation
872:09 container is actually an instantiation of an image so containers using the same
872:12 of an image so containers using the same image are identical to each other in
872:15 image are identical to each other in terms of their application code and
872:17 terms of their application code and runtime dependencies so i could use the
872:20 runtime dependencies so i could use the same image for multiple copies of the
872:23 same image for multiple copies of the same container that have different tasks
872:26 same container that have different tasks what makes each individual container
872:28 what makes each individual container different
872:29 different is that running containers include a
872:32 is that running containers include a writable layer on top of the read-only
872:35 writable layer on top of the read-only content runtime changes including any
872:38 content runtime changes including any rights and updates to data and files are
872:41 rights and updates to data and files are saved in this read write layer so in
872:43 saved in this read write layer so in this example when using the command
872:46 this example when using the command docker run fashionista a docker
872:49 docker run fashionista a docker container will be instantiated from the
872:52 container will be instantiated from the docker image and a read write layer is
872:54 docker image and a read write layer is always added on top of the read-only
872:57 always added on top of the read-only layers when a container is created
873:00 layers when a container is created writing any necessary files that's
873:02 writing any necessary files that's needed for the application and so just
873:05 needed for the application and so just as a note here docker containers are
873:08 as a note here docker containers are always created from docker images and
873:10 always created from docker images and containers can use the same image yet
873:13 containers can use the same image yet will always have a different read write
873:15 will always have a different read write layer no matter the amount of containers
873:18 layer no matter the amount of containers running on a given host so now when your
873:21 running on a given host so now when your containers have been created you need a
873:23 containers have been created you need a place to store them and so this is where
873:25 place to store them and so this is where a container registry comes into play now
873:28 a container registry comes into play now a container registry is a single place
873:31 a container registry is a single place for you to store and manage docker
873:33 for you to store and manage docker images now when you create your docker
873:35 images now when you create your docker file and then build your image
873:38 file and then build your image you want to store that image in a
873:40 you want to store that image in a central image repository whether it be a
873:43 central image repository whether it be a private one or a public one a popular
873:46 private one or a public one a popular public container registry is docker hub
873:49 public container registry is docker hub and this is a common registry where many
873:52 and this is a common registry where many open source images can be found
873:54 open source images can be found including those used for the base layer
873:57 including those used for the base layer images like the ubuntu example that i
874:00 images like the ubuntu example that i showed you earlier and so once you have
874:02 showed you earlier and so once you have your containers in a container registry
874:05 your containers in a container registry you need to be able to run these
874:06 you need to be able to run these containers so in order to run these
874:08 containers so in order to run these containers you need docker hosts and
874:11 containers you need docker hosts and these can consist of any machine running
874:14 these can consist of any machine running the docker engine and this could be your
874:16 the docker engine and this could be your laptop
874:17 laptop server or you can run them in provided
874:20 server or you can run them in provided hosted cloud environments now this may
874:22 hosted cloud environments now this may have been a refresher for some but for
874:25 have been a refresher for some but for those of you who are new to containers i
874:27 those of you who are new to containers i hope this has given you a lot more
874:29 hope this has given you a lot more clarity on what containers are what they
874:32 clarity on what containers are what they do and the value that they bring to any
874:35 do and the value that they bring to any environment and so that's pretty much
874:37 environment and so that's pretty much all i wanted to cover on this short
874:39 all i wanted to cover on this short lesson of an introduction to containers
874:42 lesson of an introduction to containers so you can now mark this lesson as
874:43 so you can now mark this lesson as complete and let's move on to the next
874:45 complete and let's move on to the next one
874:52 welcome back so now that you've gotten familiar with what containers are and
874:55 how they work i wanted to dive into
874:57 google cloud's platform as a service
874:59 offering for containers called google
875:02 kubernetes engine also known for short
875:05 as gke now although the exam goes into a
875:08 more operational perspective with
875:10 regards to gke knowing the foundation of
875:13 kubernetes and the different topics of
875:16 kubernetes is a must in order to
875:18 understand the abstractions that gke
875:20 makes on top of regular kubernetes
875:24 in this lesson i will be getting into
875:26 key topics with regards to kubernetes
875:29 and we'll be touching on the
875:30 architecture
875:31 components and how they all work
875:34 together to achieve the desired state
875:36 for your containerized workloads now
875:39 there's a lot to get into so with that
875:41 being said let's dive in now before i
875:45 can get into gke i need to set the stage
875:48 by explaining what kubernetes is put
875:51 simply kubernetes is an orchestration
875:54 platform for containers which was
875:57 invented by google and eventually open
876:00 sourced it is now maintained by the cncf
876:04 short for the cloud native computing
876:06 foundation and has achieved incredible
876:09 widespread adoption kubernetes provides
876:11 a platform to automate schedule and run
876:15 containers on clusters of physical or
876:18 virtual machines
876:19 thus eliminating many of the manual
876:22 processes involved in deploying and
876:25 scaling containerized applications
876:27 kubernetes manages the containers that
876:29 run the applications and ensures that
876:32 there is no downtime in a way that you
876:35 the user can define
876:37 for example if you define that when a
876:40 container goes down another
876:42 container needs to start kubernetes
876:44 would take care of that for you
876:46 automatically and seamlessly kubernetes
876:49 provides you with the framework to run
876:51 distributed systems resiliently it takes
876:54 care of scaling and failover for your
876:56 application provides deployment patterns
876:59 and allows you to manage your
877:01 applications with tons of flexibility
877:04 reliability and power it works with a
877:07 range of container tools including
877:09 docker now although this adoption was
877:11 widespread it did come with its various
877:14 challenges these included scaling ci/cd
877:17 load balancing availability auto scaling
877:21 networking
877:22 rollback on faulty deployments and so
877:24 much more
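many of the concerns just listed (scaling, failover, rollback) are expressed declaratively in kubernetes as a hedged sketch a deployment manifest like this one (all names illustrative) declares a replica count and a rolling-update strategy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # illustrative name
spec:
  replicas: 3             # kubernetes keeps three pods running, replacing failed ones
  strategy:
    type: RollingUpdate   # updates roll out gradually and can be rolled back
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

you declare the desired state here and kubernetes continuously works to keep the cluster matching it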
877:26 so now google cloud has since developed
877:29 a managed offering for kubernetes
877:31 providing a managed environment for
877:34 deploying managing and scaling your
877:37 containerized applications using google
877:40 infrastructure the gke environment
877:42 consists of compute engine instances
877:45 grouped together to form a cluster and
877:48 it provides all the same benefits as
877:50 on-premises kubernetes yet has
877:52 abstracted away the complexity of having to
877:55 worry about the hardware and to top it
877:57 off it has the benefits of advanced
878:00 cluster management features that google
878:03 cloud provides
878:04 with things like cloud load balancing
878:07 and being able to spread traffic amongst
878:09 clusters and nodes node pools to
878:12 designate subsets of nodes within a
878:14 cluster for additional flexibility
878:16 automatic scaling of your cluster's node
878:18 instance count and automatic upgrades
878:21 for your cluster's node software it also
878:23 allows you to maintain node health and
878:25 availability with node auto repair and
878:28 takes care of logging and monitoring
878:31 with google cloud's operations suite for
878:33 visibility into your cluster so as you
878:36 can see here gke holds a lot of benefits
878:39 when it comes to running kubernetes in
878:42 google cloud so i wanted to take a
878:44 moment now to dive into the cluster
878:47 architecture and help familiarize you
878:50 with all the components involved in a
878:52 cluster so a cluster is the foundation
878:55 of google kubernetes engine and
878:58 kubernetes as a whole the kubernetes
879:00 objects that represent your
879:02 containerized applications all run on
879:05 top of the cluster in gke a cluster
879:08 consists of at least one control plane
879:11 and multiple worker machines called
879:13 nodes the control plane and node
879:15 machines run the kubernetes cluster the
879:18 control plane is responsible for
879:21 coordinating the entire cluster and this
879:23 can include scheduling workloads like
879:26 containerized applications and managing
879:28 the workload's life cycle scaling and
879:31 upgrades the control plane also manages
879:34 network and storage resources for those
879:36 workloads and most importantly it
879:39 manages the state of the cluster and
879:41 makes sure it is at the desired state now
879:44 the nodes are the worker machines that
879:47 run your containerized applications and
879:49 other workloads the nodes are compute
879:51 engine vm instances that gke creates on
879:55 your behalf when you create a cluster
879:57 each node is managed from the control
880:00 plane which receives updates on each
880:03 node's self-reported status a node also
880:06 runs the services necessary to support
880:09 the docker containers that make up your
880:11 cluster's workloads these include the
880:13 docker runtime and the kubernetes node
880:16 agent known as the kubelet which
880:18 communicates with the control plane and
880:21 is responsible for starting and running
880:23 docker containers scheduled on that node
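as a sketch the workloads a kubelet is told to run are described by pod specs like this minimal one (all names here are illustrative) the kubelet ensures the containers it lists are running and healthy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
  - name: hello
    image: nginx:1.25      # pulled from a container registry
    ports:
    - containerPort: 80
```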
880:27 now diving deeper into the architecture
880:30 there are components within the control
880:32 plane and nodes that you should
880:34 familiarize yourself with as these
880:36 components are what tie the cluster
880:39 together and help manage the
880:41 orchestration as well as the state now
880:44 the control plane is the unified
880:46 endpoint for your cluster the control
880:48 plane's components make global decisions
880:51 about the cluster for example scheduling
880:54 as well as detecting and responding to
880:56 cluster events all interactions with the
880:59 cluster are done via kubernetes api
881:02 calls and the control plane runs the
881:05 kubernetes api server process to handle
881:08 those requests you can make kubernetes
881:10 api calls directly via http or grpc or
881:16 indirectly by running
881:18 commands from the kubernetes command
881:20 line client called kubectl and of course
881:24 you can interact with the ui in the
881:26 cloud console the api server process is
881:30 the hub for all communications for the
881:33 cluster moving on to the next component
881:35 is the kube-scheduler the kube-scheduler is
881:38 a component that discovers and assigns
881:41 newly created pods to a node for them to
881:44 run on so any new pods that are created
881:47 will automatically be assigned to an
881:49 appropriate node by the kube-scheduler
881:52 taking into consideration any
881:54 constraints that are in place next up is
881:57 the kube-controller-manager and this is
882:00 the component that runs controller
882:02 processes and is responsible for things
882:05 like noticing and responding when nodes
882:07 go down
882:08 maintaining the correct number of pods
882:11 populating the endpoints that join services and pods as well
882:13 as creating default accounts and api
882:16 access tokens for new namespaces it is
882:20 these controllers that will basically
882:22 look to make changes to the cluster when
882:25 the current state does not meet the
882:27 desired state now when it comes to the
882:29 cloud controller manager this is what
882:32 embeds cloud-specific control logic the
882:35 cloud controller manager lets you link
882:37 your cluster into any cloud provider's
882:40 api
882:41 and separates out the components that
882:43 interact with that cloud platform from
882:47 components that just interact with your
882:48 cluster the cloud controller manager
882:51 only runs controllers that are specific
882:54 to your cloud provider in this case
882:56 google cloud and lastly we have etcd and
883:00 this component is responsible for storing
883:02 the state of the cluster etcd is a
883:05 consistent and highly available key
883:07 value store that only interacts with the
883:11 api server it saves all the
883:13 configuration data along with what nodes
883:16 are part of the cluster and what pods
883:18 they are running so now the control
883:20 plane needs a way to interact with the
883:22 nodes of the cluster thus the nodes
883:25 have components themselves for this
883:27 communication to occur this component is
883:31 called the kubelet and this is an agent
883:33 that runs on each node in the cluster
883:35 and communicates with the control plane
883:38 it is responsible for starting and
883:40 running docker containers scheduled on
883:43 that node it takes a set of pod specs
883:46 that are provided to it and ensures that
883:49 the containers described in those pod
883:51 specs are running and healthy and i will
883:53 be diving into pod specs in a later
883:55 lesson next up is kube-proxy and this is
883:58 the component that maintains network
884:00 connectivity to the pods in a cluster
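once you are connected to a cluster these node components surface through kubectl a hedged sketch of commands (output shapes vary by cluster):

```shell
# list the worker nodes registered with the control plane
kubectl get nodes

# inspect a single node: shows kubelet version, container runtime, and conditions
# (replace NODE_NAME with a real node from the list above)
kubectl describe node NODE_NAME
```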
884:03 and lastly the container runtime is the
884:06 software that is responsible for running
884:09 containers kubernetes supports container
884:12 runtimes like docker and containerd and
884:15 so these are the main components in a
884:17 cluster covering the control plane and
884:20 nodes with regards to communication
884:23 within the cluster now before i end this
884:26 lesson there is one more topic i wanted
884:28 to touch on with regards to the
884:30 architecture of a gke cluster and that
884:33 is the abstraction that happens and what
884:36 exactly gke manages with regards to
884:39 kubernetes well gke manages all the
884:42 control plane components the endpoint
884:45 exposes the kubernetes api server that
884:49 kubectl uses to communicate with your
884:52 cluster control plane the endpoint ip is
885:02 displayed in cloud console and this ip
885:06 will allow you to interact with the
885:08 cluster when you run the command gcloud
885:10 container clusters get-credentials
885:14 you see that the command gets the
885:16 cluster endpoint as part of updating
885:18 kubeconfig an ip address for the cluster
885:22 is then exposed for you to interact with
885:25 gke is responsible for provisioning and
885:27 managing all the infrastructure that is
885:29 needed for the control plane gke also
885:32 automates the kubernetes nodes by
885:35 launching them as compute engine vms
885:37 under the hood but still allows the user
885:40 to change the machine type and access
885:42 upgrade options by default google
885:45 kubernetes engine clusters and node
885:47 pools are upgraded automatically by
885:50 google but you can also control when
885:52 auto upgrades can and cannot occur by
885:55 configuring maintenance windows and
885:57 exclusions and just as a note a cluster's
886:00 control plane and nodes do not
886:02 necessarily run the same version at all
886:05 times and i will be digging more into
886:07 that in a later lesson and so i know
886:09 this is a lot of theory to take in but
886:12 it is as i said before a necessity for
886:15 understanding kubernetes and gke and as
886:18 we go further along into kubernetes and
886:20 get into demos i promise that this will
886:23 start to make a lot more sense and you
886:25 will start becoming more comfortable
886:27 with gke and the underlying components
886:30 of kubernetes knowing kubernetes is a
886:33 must when working in any cloud
886:35 environment as it is a popular and
886:38 growing technology that is not slowing
886:40 down so knowing gke will put you in a
886:43 really good position for your career as
886:46 an engineer in google cloud and will also
886:49 give you a leg up on diving into other
886:52 cloud vendors' implementations of
886:54 kubernetes and so that's pretty much all
886:57 i wanted to cover when it comes to
886:59 google kubernetes engine and kubernetes
887:02 so you can now mark this lesson as
887:03 complete and let's move on to the next
887:05 one
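the get-credentials flow described in this lesson can be sketched as follows (the cluster, zone, and project names here are placeholders, not values from the course):

```shell
# fetch the cluster endpoint and credentials and merge them into ~/.kube/config
gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a \
    --project my-project

# kubectl now talks to the kubernetes api server exposed by the gke control plane
kubectl get nodes
```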
887:13 welcome back in this lesson i will be covering cluster and node management in
887:15 covering cluster and node management in gke as it refers to choosing different
887:18 gke as it refers to choosing different cluster types for your workloads cluster
887:21 cluster types for your workloads cluster versions
887:22 versions node pools as well as upgrades and the
887:25 node pools as well as upgrades and the many different options to choose from it
887:28 many different options to choose from it is good to familiarize yourself with
887:30 is good to familiarize yourself with these options as they may be the
887:31 these options as they may be the deciding factor of having to keep your
887:34 deciding factor of having to keep your workloads highly available and your
887:36 workloads highly available and your tolerance to risk within your
887:38 tolerance to risk within your environment so with that being said
887:40 environment so with that being said let's dive in now in the last lesson we
887:43 let's dive in now in the last lesson we touched on nodes and how they are the
887:46 touched on nodes and how they are the workers for the kubernetes cluster so
887:48 workers for the kubernetes cluster so now that you are familiar with nodes i
887:51 now that you are familiar with nodes i wanted to touch on a concept that builds
887:53 wanted to touch on a concept that builds on it called node pools now a node pool
887:56 on it called node pools now a node pool is a group of nodes within a cluster
887:58 is a group of nodes within a cluster that all have the same configuration and
888:01 that all have the same configuration and using node config specification to
888:04 using node config specification to achieve this a node pool can also
888:06 achieve this a node pool can also contain one or multiple nodes when you
888:09 contain one or multiple nodes when you first create a cluster the number and
888:11 first create a cluster the number and type of nodes that you specify becomes
888:14 type of nodes that you specify becomes the default node pool as shown here in
888:17 the default node pool as shown here in the diagram then you can add additional
888:20 the diagram then you can add additional custom node pools of different sizes and
888:23 custom node pools of different sizes and types to your cluster all nodes in any
888:25 types to your cluster all nodes in any given node pool are identical to one
888:28 given node pool are identical to one another now custom node pools are really
888:31 another now custom node pools are really useful when you need to schedule pods
888:33 useful when you need to schedule pods that require more resources than others
888:36 that require more resources than others such as more memory more disk space or
888:39 such as more memory more disk space or even different machine types you can
888:41 even different machine types you can create upgrade and delete node pools
888:44 create upgrade and delete node pools individually without affecting the whole
888:47 individually without affecting the whole cluster and just as a note you cannot
888:50 cluster and just as a note you cannot configure a single node in any node pool
888:53 configure a single node in any node pool any configuration changes affect all
888:56 any configuration changes affect all nodes in the node pool and by default
888:59 nodes in the node pool and by default all new node pools run the latest stable
889:01 all new node pools run the latest stable version of kubernetes existing node
889:04 version of kubernetes existing node pools can be manually upgraded or
889:07 pools can be manually upgraded or automatically upgraded you can also run
889:10 automatically upgraded you can also run multiple kubernetes node versions on
889:12 multiple kubernetes node versions on each node pool in your cluster update
889:14 each node pool in your cluster update each node pool independently and target
889:17 each node pool independently and target different node pools for specific
889:19 different node pools for specific deployments in that node now with gke
889:22 deployments in that node now with gke you can create a cluster tailored to
889:25 you can create a cluster tailored to your availability requirements and your
889:27 your availability requirements and your budget. The types of available clusters include zonal (both single-zone and multi-zonal) and regional. Zonal clusters have a single control plane in a single zone. Depending on what kind of availability you want, you can distribute the nodes for your zonal cluster in a single zone or in multiple zones.
889:49 Now, when you decide to deploy a single-zone cluster, it again has a single control plane running in one zone, and this control plane manages workloads on nodes running in the same zone. A multi-zonal cluster, on the other hand, has a single replica of the control plane running in a single zone and has nodes running in multiple zones. During an upgrade of the cluster, or an outage of the zone where the control plane runs, workloads still run; however, the cluster, its nodes, and its workloads cannot be configured until the control plane is available. Multi-zonal clusters are designed to balance availability and cost for consistent workloads. And just as a note, the same number of nodes will be deployed to each selected zone and may cost you more than budgeted, so please be aware.
890:46 And of course, when you're looking to achieve high availability for your cluster, regional clusters are always the way to go. A regional cluster has multiple replicas of the control plane running in multiple zones within a given region. Nodes also run in each zone where a replica of the control plane runs. Because a regional cluster replicates the control plane and nodes, it consumes more Compute Engine resources than a similar single-zone or multi-zonal cluster. The same number of nodes will be deployed to each selected zone, and the default when selecting regional clusters is three zones.
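To make this concrete, here is a sketch of how each cluster type might be created with the gcloud CLI; the cluster names and zones below are placeholders chosen for illustration, not values from the lecture:

```sh
# Zonal cluster: one control plane, nodes in a single zone
gcloud container clusters create my-zonal-cluster \
    --zone us-central1-a

# Multi-zonal cluster: control plane in one zone, nodes deployed
# into each listed zone (the node count applies per zone)
gcloud container clusters create my-multizonal-cluster \
    --zone us-central1-a \
    --node-locations us-central1-a,us-central1-b,us-central1-c

# Regional cluster: control plane replicas and nodes spread across
# three zones of the region by default
gcloud container clusters create my-regional-cluster \
    --region us-central1
```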
891:25 Now, if you're dealing with more sensitive workloads that require stricter guidelines, private clusters give you the ability to isolate nodes from having inbound and outbound connectivity to the public internet. This isolation is achieved as the nodes have internal IP addresses only. If you want to provide outbound internet access for certain private nodes, you can use Cloud NAT or manage your own NAT gateway. By default, Private Google Access is enabled in private clusters, which provides their workloads with limited outbound access to Google Cloud APIs and services over Google's private network.
892:03 In private clusters, the control plane's VPC network is connected to your cluster's VPC network with VPC Network Peering. Your VPC network contains the cluster nodes, and a separate Google Cloud VPC network contains your cluster's control plane; the control plane's VPC network is located in a project controlled by Google. Traffic between nodes and the control plane is routed entirely using internal IP addresses. The control plane for a private cluster has a private endpoint in addition to a public endpoint, whereas the control plane for a non-private cluster only has a public endpoint. The private endpoint is an internal IP address in the control plane's VPC network. The public endpoint is the external IP address of the control plane, and you can control access to this endpoint using authorized networks, or you can disable access to the public endpoint. As shown here in the diagram, you can disable the public endpoint and connect to your network using an internal IP address over Cloud Interconnect or Cloud VPN, and you always have the option of enabling or disabling this public endpoint.
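As a sketch, a private cluster along these lines might be created as follows; the names and CIDR range are illustrative placeholders:

```sh
# Private cluster sketch: nodes get internal IPs only, the public
# endpoint is disabled, and VPC-native (alias IP) networking is on
gcloud container clusters create my-private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.0/28
```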
893:22 Now, when you create a cluster, you can choose the cluster-specific Kubernetes version, or you can mix the versions for flexibility on features. Either way, it is always recommended that you enable auto-upgrade for the cluster and its nodes. Now, when you have auto-upgrade enabled, you are given the choice to choose from what are called release channels. When you enroll a new cluster in a release channel, Google automatically manages the version and upgrade cadence for the cluster and its node pools. All channels offer supported releases of GKE and are considered in general availability.
893:59 You can choose from three different release channels for automatic management of your cluster's version and upgrade cadence. As shown here, the available release channels are the rapid, regular, and stable release channels. The rapid release channel gets the latest Kubernetes release as early as possible, letting you use new GKE features the moment that they go into general availability. With the regular release channel, you have access to GKE and Kubernetes features reasonably soon after they are released, but on a version that has been qualified two to three months after releasing in the rapid release channel. And finally, we have the stable release channel, where stability is prioritized over new functionality; changes and new versions in this channel are rolled out last, after being validated for two to three months in the regular release channel. And so, if you're looking for more direct management of your cluster's version, choose a static version. When you enroll a cluster in a release channel, that cluster is upgraded automatically when a new version is available in that channel.
895:07 Now, if you do not use a release channel or choose a cluster version, the current default version is used. The default version is selected based on usage and real-world performance and is changed regularly. While the default version is the most mature one, the other versions being made available are generally available versions that have passed internal testing and qualification. Changes to the default version are announced in a release note. Now, if you know that you need to use a specific supported version of Kubernetes for a given workload, you can specify it when creating the cluster. If you do not need to control the specific patch version you use, consider enrolling your cluster in a release channel instead of managing its version directly.
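As an illustration, enrolling in a release channel versus pinning a static version looks roughly like this with the gcloud CLI; the cluster names and the version string are placeholders:

```sh
# Enroll a new cluster in a release channel (rapid, regular, or stable)
gcloud container clusters create my-channel-cluster \
    --zone us-central1-a \
    --release-channel regular

# Or pin a specific static version instead (version shown is a
# placeholder; list real ones with
# `gcloud container get-server-config`)
gcloud container clusters create my-static-cluster \
    --zone us-central1-a \
    --cluster-version 1.21.6-gke.1500
```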
895:57 Now, when it comes to upgrading the cluster, please be aware that the control plane and nodes do not always run the same version at all times. As well, a control plane is always upgraded before its nodes. When it comes to zonal clusters, you cannot launch or edit workloads during that upgrade, and with regional clusters, each control plane is upgraded one by one. As well, with control planes, auto-upgrade is enabled by default, and this is Google Cloud's best practice. Now again, if you choose, you can do a manual upgrade, but you cannot upgrade the control plane more than one minor version at a time, so please be aware. As well, with any cluster upgrades, maintenance windows and exclusions are available, and this way you can choose the best times for your upgrades. And so, like cluster upgrades, by default a cluster's nodes have auto-upgrade enabled, and it is recommended that you do not disable it; again, this is best practice by Google Cloud. And again, like the cluster upgrades, a manual upgrade is available, and maintenance windows and exclusions are available for all of these upgrades.
897:10 windows and exclusions are available for all of these upgrades now when a no pool
897:13 all of these upgrades now when a no pool is upgraded gke upgrades one node at a
897:15 is upgraded gke upgrades one node at a time
897:16 time while a node is being upgraded gke stops
897:19 while a node is being upgraded gke stops scheduling new pods onto it and attempts
897:22 scheduling new pods onto it and attempts to schedule its running pods onto other
897:25 to schedule its running pods onto other nodes the node is then recreated at the
897:27 nodes the node is then recreated at the new version but using the same name as
897:30 new version but using the same name as before this is similar to other events
897:33 before this is similar to other events that recreate the node such as enabling
897:35 that recreate the node such as enabling or disabling a feature on the node pool
897:38 or disabling a feature on the node pool and the upgrade is only complete when
897:41 and the upgrade is only complete when all nodes have been recreated and the
897:43 all nodes have been recreated and the cluster is in the desired state when a
897:46 cluster is in the desired state when a newly upgraded node registers with the
897:48 newly upgraded node registers with the control plane gke marks the node as
897:51 control plane gke marks the node as schedulable upgrading a no pool may
897:54 schedulable upgrading a no pool may disrupt workloads running in that pool
897:56 disrupt workloads running in that pool and so in order to avoid this you can
897:58 and so in order to avoid this you can create a new node pool with the desired
898:01 create a new node pool with the desired version and migrate the workload then
898:04 version and migrate the workload then after migration you can delete the old
898:06 after migration you can delete the old node pool now surge upgrades let you
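The migration approach just described could be sketched like this; the cluster, pool, node, and version values are placeholders:

```sh
# Create a replacement pool at the desired version
gcloud container node-pools create new-pool \
    --cluster my-cluster --zone us-central1-a \
    --node-version 1.21.6-gke.1500

# Cordon and drain each old node so its pods reschedule onto new-pool
kubectl cordon old-node-name
kubectl drain old-node-name --ignore-daemonsets --delete-emptydir-data

# Once workloads have moved, remove the old pool
gcloud container node-pools delete old-pool \
    --cluster my-cluster --zone us-central1-a
```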
898:09 Now, surge upgrades let you control the number of nodes GKE can upgrade at a time and control how disruptive upgrades are to your workloads. You can change how many nodes GKE attempts to upgrade at once by changing the surge upgrade parameters on a node pool. Surge upgrades reduce disruption to your workloads during cluster maintenance and also allow you to control the number of nodes upgraded in parallel. Surge upgrades also work with the cluster autoscaler to prevent changes to nodes that are being upgraded.
898:42 Now, surge upgrade behavior is determined by two settings: max surge upgrade and max unavailable upgrade. Max surge upgrade is the number of additional nodes that can be added to the node pool during an upgrade; increasing it raises the number of nodes that can be upgraded simultaneously. Max unavailable upgrade is the number of nodes that can be simultaneously unavailable during an upgrade; increasing it raises the number of nodes that can be upgraded in parallel. So with max surge upgrade, the higher the number, the more parallel upgrades, which will end up costing you more money; with max unavailable upgrade, the higher the number, the more disruptive it is, and so the more risk you are taking. And so, during upgrades, GKE brings down at most the sum of max surge upgrade and max unavailable upgrade.
899:44 with the max unavailable upgrade so as you can see here there are a slew of
899:47 you can see here there are a slew of options when it comes to deciding on the
899:49 options when it comes to deciding on the type of cluster you want as well as the
899:51 type of cluster you want as well as the type of upgrades that are available
899:54 type of upgrades that are available along with when you want them to occur
899:56 along with when you want them to occur and so your deciding factor in the end
899:58 and so your deciding factor in the end will be the workload that you are
900:00 will be the workload that you are running and your risk tolerance and this
900:02 running and your risk tolerance and this will play a big factor in keeping up
900:05 will play a big factor in keeping up time for your cluster as well as saving
900:07 time for your cluster as well as saving money in any type of environment and so
900:10 money in any type of environment and so that's pretty much all i wanted to cover
900:12 that's pretty much all i wanted to cover when it comes to gke cluster and node
900:14 when it comes to gke cluster and node management so you can now mark this
900:16 management so you can now mark this lesson as complete and let's move on to
900:18 lesson as complete and let's move on to the next one
900:19 the next one [Music]
900:23 Welcome back, and in this lesson I will be diving into some more theory within Kubernetes and GKE, this time touching on objects and how objects are managed. Pods are only one type of object, but there are many other parts that are involved in the management of these objects, and this is what this lesson is set out to teach you. Now, there's quite a bit to cover here, so with that being said, let's dive in.
900:50 Now, Kubernetes objects are persistent entities in Kubernetes. Kubernetes uses these entities to represent the state of your cluster. For example, they can describe things like what containerized applications are running and on which nodes, and what resources are available to those applications. A Kubernetes object is a record of intent: once you create the object, Kubernetes will constantly work to ensure that the object exists. By creating an object, you're effectively telling Kubernetes what you want your cluster's workload to look like, and this is your cluster's desired state. You've heard me speak about this many times before, and this is what I was referring to.
901:35 Now, almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the object status. For objects that have a spec, you have to set this when you create the object, providing a description of the characteristics you want the resource to have: its desired state. The status describes the current state of the object, supplied and updated by Kubernetes and its components. The Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.
902:13 Now, each object in your cluster has a name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique across your whole cluster. Only one object of a given kind can have a given name at a time; however, if you delete the object, you can make a new object with that same name. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. These distinct UIDs are also known as UUIDs, which we discussed earlier on in the course.
902:50 Now, when creating, updating, or deleting objects in Kubernetes, this is done through the use of a manifest file, where you specify the desired state of an object that Kubernetes will maintain when you apply the manifest. Each configuration file can contain multiple manifests, and it is common practice to do so when possible. A manifest file is defined in the form of a YAML file or a JSON file, and it is recommended to use YAML. Now, in each YAML file for the Kubernetes object that you want to create, there are some required values that need to be set. The first one is the apiVersion, and this defines which version of the Kubernetes API you're using to create this object. The kind, described in this example as a Pod, is the kind of object you want to create. Next up is the metadata, and this is the data that helps uniquely identify the object, including a string name, a UID, and an optional namespace. And the last required value is the spec, and this is what state you desire for the object. The spec in this example is a container by the name of bowtie-web-server, and it is to be built with the latest nginx web server image, as well as having port 80 open on the container.
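Putting those four required values together, a minimal manifest along the lines of the example described might look like this; the metadata name is my own placeholder:

```sh
# Apply a manifest from stdin: apiVersion, kind, metadata, and spec
# are the required top-level fields
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bowtie-web-server
spec:
  containers:
  - name: bowtie-web-server
    image: nginx:latest
    ports:
    - containerPort: 80
EOF
```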
904:17 Now, when it comes to objects, pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers, and when a pod runs multiple containers, the containers are managed as a single entity and share the pod's resources, which also includes shared networking and shared storage for their containers. Generally, one pod is meant to run a single instance of an application on your cluster, which is self-contained and isolated.
904:56 Now, although a pod is meant to run a single instance of your application on your cluster, it is not recommended to create individual pods directly. Instead, you generally create a set of identical pods, called replicas, to run your application. A set of replicated pods are created and managed by a controller, such as a deployment. Controllers manage the life cycle of their pods, as well as performing horizontal scaling, changing the number of pods as necessary. Now, although you might occasionally interact with pods directly to debug, troubleshoot, or inspect them, it's recommended that you use a controller to manage your pods. And so, once your pods are created, they are then run on nodes in your cluster, which we discussed earlier. The pod will then remain on its node until its process is complete, the pod is deleted, the pod is evicted from the node due to lack of resources, or the node fails. If a node fails, pods on the node are automatically scheduled for deletion.
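As a quick sketch of using a controller instead of bare pods, a Deployment can be created and scaled like this; the names and replica count are illustrative:

```sh
# Let a Deployment controller manage a set of replica pods
kubectl create deployment bowtie-web --image=nginx:latest
kubectl scale deployment bowtie-web --replicas=3

# If a pod dies or is deleted, the controller creates a replacement
# to maintain the desired replica count
kubectl get pods --selector app=bowtie-web
```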
906:04 Now, a single GKE cluster should be able to satisfy the needs of multiple users or groups of users, and Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster. You can think of a namespace as a virtual cluster inside of your Kubernetes cluster, and you can have multiple namespaces logically isolated from each other. They can help you and your teams with organization and security. Now, you can name your namespaces whatever you'd like, but Kubernetes starts with four initial namespaces. The first one is the default namespace, and this is for objects with no other namespace; so, when creating new objects without a namespace, your object will automatically be assigned to this namespace. kube-system is the next one, and this is for objects created by Kubernetes. kube-public is created automatically and is readable by all users, but is mostly reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. And finally, kube-node-lease is the namespace for the lease objects associated with each node, which improves the performance of the node heartbeats as the cluster scales.
907:27 and so like most resources in google cloud labels are key value pairs that help you organize your resources in this case kubernetes objects
907:36 labels can be attached to objects at creation time and can be added or modified at any time
907:43 each object can have a set of key value labels defined and each key must be unique for a given object and labels can be found under metadata in your manifest file
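as a sketch of where labels live, here is a hypothetical pod manifest with labels under metadata — the names and image are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inventory-pod        # example name
  labels:                    # arbitrary key value pairs
    app: inventory
    env: dev
spec:
  containers:
  - name: inventory
    image: nginx:1.21        # example image
```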
907:57 and so the one thing to remember about pods is that they are ephemeral they are not designed to run forever and when a pod is terminated it cannot be brought back
908:05 in general pods do not disappear until they are deleted by a user or by a controller
908:10 pods do not heal or repair themselves for example if a pod is scheduled on a node which later fails the pod is deleted as well if a pod is evicted from a node for any reason the pod does not replace itself
908:27 and so here is a diagram of a pod life cycle that shows the different phases of its running time to give you some better clarity of its ephemeral nature
908:38 when first creating the pod the pod will start in pending and this is the pod's initial phase and is waiting for one or more of the containers to be set up and made ready to run this includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network
908:59 once the pod has completed the pending phase it is scheduled and once it is scheduled it will move into the running phase and this is the phase where the pod has been bound to a node and all of the containers have been created the running phase has at least one container in the pod running or in the process of starting or restarting
909:22 and once the workload is complete the pod will move into the succeeded phase and this is where all the containers in the pod have terminated in success and will not be restarted
909:34 now if all the containers in the pod have not terminated successfully the pod will move into a failed phase and this is where all the containers in the pod have terminated and at least one container has terminated in failure
909:48 now there's one more phase in the pod life cycle that i wanted to bring up which is the unknown phase and this is the state of a pod that could not be obtained this phase typically occurs due to an error in communicating with the node where the pod should be running
910:05 so now when you're creating pods, using a deployment is a common way to do this a deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive
910:20 deployments help ensure that one or more instances of your application are available to serve user requests
910:28 deployments use a pod template which contains a specification for its pods the pod specification determines what each pod should look like for instance what applications should run inside its containers which volumes the pods should mount its labels and more
910:47 and so when a deployment's pod template is changed new pods are automatically created one at a time
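a deployment manifest along these lines ties this together — the pod template sits under spec.template, and the names and image here are placeholders, not from the lesson:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment  # example name
spec:
  replicas: 3                 # run three replicas of the pod
  selector:
    matchLabels:
      app: inventory          # must match the pod template labels
  template:                   # the pod template
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory
        image: nginx:1.21     # example image
        ports:
        - containerPort: 80
```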
910:55 now i wanted to quickly bring up replica sets for just a moment you'll hear about replica sets and i wanted to make sure that i covered it
911:03 a replica set ensures that a specified number of pod replicas are running at any given time however a deployment is a higher level concept that manages replica sets and provides updates to pods along with other features
911:18 and so using deployments is recommended over using replica sets directly unless your workload requires it and i will be including a link to replica sets in the lesson text
911:33 in the lesson text so speaking of workloads in kubernetes workloads are
911:36 workloads in kubernetes workloads are objects that set deployment rules four
911:39 objects that set deployment rules four pods based on these rules kubernetes
911:42 pods based on these rules kubernetes performs the deployment and updates the
911:44 performs the deployment and updates the workload with the current state of the
911:47 workload with the current state of the application workloads let you define the
911:50 application workloads let you define the rules for application scheduling scaling
911:53 rules for application scheduling scaling and upgrading now deployments which we
911:56 and upgrading now deployments which we just discussed is a type of workload and
911:59 just discussed is a type of workload and as we've seen a deployment runs multiple
912:02 as we've seen a deployment runs multiple replicas of your application and
912:04 replicas of your application and automatically replaces any instances
912:07 automatically replaces any instances that fail or become unresponsive
912:10 that fail or become unresponsive deployments are best used
912:12 deployments are best used for stateless applications another type
912:15 for stateless applications another type of workload is stateful sets and in
912:17 of workload is stateful sets and in contrast to deployments these are great
912:20 contrast to deployments these are great for when your application needs to
912:22 for when your application needs to maintain its identity and store data so
912:25 maintain its identity and store data so basically any application that requires
912:29 basically any application that requires some sort of persistent storage daemon
912:31 some sort of persistent storage daemon sets is another common workload that
912:34 sets is another common workload that ensures every node in the cluster runs a
912:36 ensures every node in the cluster runs a copy of that pod and this is for use
912:38 copy of that pod and this is for use cases where you're collecting logs or
912:41 cases where you're collecting logs or monitoring node performance now jobs is
912:44 monitoring node performance now jobs is a workload that launches one or more
912:46 a workload that launches one or more pods and ensures that a specified number
912:49 pods and ensures that a specified number of them successfully terminate jobs are
912:52 of them successfully terminate jobs are best used to run a finite task to
912:54 best used to run a finite task to completion as opposed to managing an
912:57 completion as opposed to managing an ongoing desired application state and
913:00 ongoing desired application state and cron jobs are similar to jobs however
913:03 cron jobs are similar to jobs however cron jobs runs to completion on a
913:06 cron jobs runs to completion on a cron-based schedule and so the last
913:08 cron-based schedule and so the last workload that i wanted to cover are
913:10 workload that i wanted to cover are config maps and these store general
913:13 config maps and these store general configuration information and so after
913:15 configuration information and so after you upload a config map any workload can
913:18 you upload a config map any workload can reference it as either an environment
913:20 reference it as either an environment variable or a volume mount and so just
913:23 variable or a volume mount and so just as a note config maps are not meant to
913:26 as a note config maps are not meant to store sensitive data if you're planning
913:28 store sensitive data if you're planning to do this please use secrets now i know
913:31 to do this please use secrets now i know this lesson has been extremely heavy in
913:34 this lesson has been extremely heavy in theory but these are fundamental
913:36 theory but these are fundamental concepts to know when dealing with
913:38 concepts to know when dealing with kubernetes and gke as well as the
913:41 kubernetes and gke as well as the objects that it supports so i recommend
913:43 objects that it supports so i recommend that if you need to go back and review
913:46 that if you need to go back and review this lesson if things aren't making
913:48 this lesson if things aren't making sense so that you can better understand
913:50 sense so that you can better understand it as these concepts all tie in together
913:54 it as these concepts all tie in together and will come up in the exam and so
913:56 and will come up in the exam and so that's pretty much all i wanted to cover
913:58 that's pretty much all i wanted to cover in this lesson on pods and object
914:01 in this lesson on pods and object management within gke so you can now
914:04 management within gke so you can now mark this lesson as complete
914:06 mark this lesson as complete and let's move on to the next one
914:08 and let's move on to the next one [Music]
914:12 welcome back and in this lesson i'm going to be diving into kubernetes services
914:16 now services are a major networking component when it comes to working in kubernetes and can play a major factor when it comes to deciding on how you want to route your traffic within your kubernetes cluster as well
914:31 in my experience services show up on the exam and so an understanding of how they work and the different types to use is essential to understanding the big picture of kubernetes
914:41 this lesson will cover an overview on what services are what they do and the different types that are available along with their use cases
914:51 now there's a lot to cover here so with that being said let's dive in
914:56 now as i had discussed earlier kubernetes pods are ephemeral pods are created and destroyed to match the state of your cluster so these resources are never permanent a perfect example of this is using a deployment object so you can create and destroy pods dynamically
915:13 now when it comes to networking in kubernetes each pod gets its own ip address however in a deployment a pod that is running once destroyed will be recreated with a new ip address
915:28 and there is no real way to keep track of these ip addresses for communication as they change very frequently and this is where services come into play
915:39 now a service is an abstraction in the sense that it is not a process that listens on some network interface
915:46 a service can be defined as a logical set of pods an abstraction on top of the pod which provides a single persistent ip address and dns name by which pods can be accessed
915:57 it allows for routing external traffic into your kubernetes cluster and is used inside your cluster for more intelligent routing
916:05 with services it is also very easy to manage load balancing configuration for traffic between replicas
916:13 it helps pods scale quickly and easily as the service will automatically handle the recreation of pods and their new ip addresses
916:22 the main goal of services in kubernetes is to provide persistent access to their pods without the necessity to look up a pod's ip each time the pod is recreated
916:37 and again services also allow for external access from users to the applications inside the cluster without having to know the ip address of the individual pod in order to reach that application
916:52 now in order for a service to route traffic to the correct pod in the cluster there are some fields in the manifest file that will help determine the endpoints on where traffic should be routed
917:02 shown here on the right is the deployment manifest for reference and on the left is the service manifest
917:10 now as you can see here in the service manifest on the left the kind is clearly defined as service
917:16 under metadata is the name of the service and this will be the dns name of the service when it is created
917:24 so when it comes to the spec there is a field here called a selector and this is what defines which pods should be included in the service and it is the labels under the selector that define which pods
917:37 and labels are what we discussed in the last lesson as arbitrary key value pairs so any pod with these matching labels is what will be added to the service
917:46 as shown here in the deployment file this workload will be a part of the service as its labels match that of the selector in the services file
917:56 for type this is the type of service that you will want to use in this example type cluster ip is used but depending on the use case you have a few different ones to choose from
918:10 now at the bottom here is a list of port configurations protocol being the network protocol to use with the port port being the port that incoming traffic goes to and finally the target port which is the port on the pod that traffic should be sent to and this will make more sense as we go through the upcoming diagrams
918:32 upcoming diagrams so touching on selectors and labels for a moment
918:34 selectors and labels for a moment kubernetes has a very unique way of
918:36 kubernetes has a very unique way of routing traffic and when it comes to
918:38 routing traffic and when it comes to services it's not any different services
918:42 services it's not any different services select pods based on their labels now
918:45 select pods based on their labels now when a selector request is made to the
918:47 when a selector request is made to the service it selects all pods in the
918:50 service it selects all pods in the cluster matching the key value pair
918:52 cluster matching the key value pair under the selector it chooses one of the
918:55 under the selector it chooses one of the pods if there are more than one with the
918:57 pods if there are more than one with the same key value pair and forwards the
918:59 same key value pair and forwards the network request to it and so here in
919:02 network request to it and so here in this example you can see that the
919:04 this example you can see that the selector specified for the service has a
919:07 selector specified for the service has a key value pair of app inventory you can
919:10 key value pair of app inventory you can see the pod on node 1 on the left holds
919:13 see the pod on node 1 on the left holds the label of app inventory as well which
919:16 the label of app inventory as well which matches the key value pair of the
919:18 matches the key value pair of the selector and so traffic will get routed
919:21 selector and so traffic will get routed to that pod because of it if you look at
919:23 to that pod because of it if you look at the label for the pod in node 2 on the
919:25 the label for the pod in node 2 on the right the label does not match that of
919:28 right the label does not match that of the selector and so it will not route
919:30 the selector and so it will not route traffic to that pod and so to sum it up
919:33 traffic to that pod and so to sum it up the label on the pod matching the
919:35 the label on the pod matching the selector in the service determines where
919:38 selector in the service determines where the network request will get routed to
919:41 and so now i will be going through the many different service types that are available for routing network traffic within gke starting with cluster ip
919:52 now a cluster ip service is the default kubernetes service it gives you a service inside your cluster that other apps inside your cluster can access
920:02 the service is not exposed outside the cluster but can be addressed from within the cluster
920:06 when you create a service of type cluster ip kubernetes creates a stable ip address that is accessible from nodes in the cluster
920:15 clients in the cluster call the service by using the cluster ip address and the port value specified in the port field of the service manifest the request is forwarded to one of the member pods on the port specified in the target port field
920:33 and just as a note this ip address is stable for the lifetime of the service
920:39 so for this example a client calls the service at 10.176 on tcp port 80 the request is forwarded to one of the member pods on tcp port 80
920:55 note that the member pod must have a container that is listening on tcp port 80 if there is no container listening on port 80 clients will see a message like failed to connect or this site can't be reached
921:08 think of the case when you have a dns record that you don't want to change and you want the name to resolve to the same ip address or you merely want a static ip address for your workload this would be a great use case for the cluster ip service
921:26 now although the service is not accessible by network requests outside of the cluster if you need to connect to the service you can still connect to it with the cloud sdk or cloud shell by using the exposed ip address of the cluster
921:41 exposed ip address of the cluster and so i wanted to take a moment to show you
921:43 i wanted to take a moment to show you what a cluster ip manifest actually
921:46 what a cluster ip manifest actually looks like and i will be going through
921:48 looks like and i will be going through the manifest for each service type for
921:51 the manifest for each service type for you to familiarize yourself with we
921:53 you to familiarize yourself with we first have the name of the service which
921:55 first have the name of the service which is cluster ip dash service
921:57 is cluster ip dash service we then have the label used for the
922:00 we then have the label used for the selector which is the key value pair of
922:02 selector which is the key value pair of app inventory and then we have the
922:05 app inventory and then we have the service type which is cluster ip and we
922:08 service type which is cluster ip and we have the port number exposed internally
922:10 have the port number exposed internally in the cluster which is port 80 along
922:13 in the cluster which is port 80 along with the target port that containers are
922:16 with the target port that containers are listening on which again is port 80. and
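the manifest being walked through here would look something along these lines — a sketch reconstructed from the fields described, not the exact slide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cluster-ip-service
spec:
  type: ClusterIP            # the default service type
  selector:
    app: inventory           # pods with this label join the service
  ports:
  - protocol: TCP
    port: 80                 # port exposed inside the cluster
    targetPort: 80           # port the containers listen on
```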
922:18 and so the next service type we have is node port so when you create a service of type node port you specify a node port value
922:28 the node port is a static port and is chosen from a pre-configured range between 30000 and 32767
922:38 you can specify your own value within this range but please note that any value outside of this range will not be accepted by kubernetes as well if you do not choose a value a random value within the range specified will be assigned
922:54 once this port has been assigned to the service then the service is accessible by using the ip address of any node along with the node port value
923:04 the service is then exposed on a port on every node in the cluster the service can then be accessed externally at the node ip along with the node port
923:15 when using node port services you must make sure that the selected port is not already open on your nodes
923:22 and so just as a note the node port type is an extension of the cluster ip type so a service of type node port naturally has a cluster ip address
923:34 and so this method isn't very secure as it opens up each node to external entry as well this method relies on knowing the ip addresses of the nodes which could change at any time
923:47 change at any time and so going through the manifest of type node port service
923:49 the manifest of type node port service we start off with the name of the
923:51 we start off with the name of the service which is node port dash service
923:54 service which is node port dash service the label used for the selector which
923:56 the label used for the selector which uses the key value pair of app inventory
924:00 uses the key value pair of app inventory the type which is node port and notice
924:02 the type which is node port and notice the case sensitivity here which you will
924:05 the case sensitivity here which you will find in most service types along with
924:07 find in most service types along with the port number exposed internally in
924:09 the port number exposed internally in the cluster which is port 80 and again
924:12 the cluster which is port 80 and again the port that the containers are
924:14 the port that the containers are listening on which is the target port
924:16 listening on which is the target port which is port 80 as well and lastly and
924:19 which is port 80 as well and lastly and most importantly we have the no port
924:22 most importantly we have the no port value which is marked as you saw in the
924:24 value which is marked as you saw in the diagram earlier as port
924:27 diagram earlier as port 32002 the next service type we have up
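pulled together, a node port manifest matching that walkthrough would look roughly like this (reconstructed from the description, not copied from the slide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-port-service
spec:
  type: NodePort             # note the case sensitivity
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 80                 # port exposed inside the cluster
    targetPort: 80           # port the containers listen on
    nodePort: 32002          # must fall within 30000-32767
```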
924:27 The next service type we have up is LoadBalancer, and this service is exposed as a load balancer in the cluster. LoadBalancer services will create an internal Kubernetes service that is connected to a cloud provider's load balancer, in this case Google Cloud's. This will create a static, publicly addressable IP address and a DNS name that can be used to access your cluster from an external source.
924:54 The LoadBalancer type is an extension of the NodePort type, so a service of type LoadBalancer naturally has a cluster IP address. If you want to directly expose a service, this is the default method: all traffic on the port you specify will be forwarded to the service. There is no filtering or routing, and it means you can send many different types of traffic to it, like HTTP, HTTPS, TCP or UDP, and more. The downside here is that for each service you expose with a load balancer, you pay for that load balancer, and so you can really rack up your bill if you're using multiple load balancers.
925:37 And shown here is the manifest for type LoadBalancer. It shows the name of the service, loadbalancer-service; the label used for the selector, which is the key-value pair of app: inventory; the service type, which is LoadBalancer (again, notice the case sensitivity); along with the port and the target port, which are both port 80.
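As with the NodePort example, here is a rough YAML sketch of the LoadBalancer manifest being described (reconstructed, not the exact slide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer-service
spec:
  type: LoadBalancer    # GKE provisions a Google Cloud load balancer for this
  selector:
    app: inventory
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 80    # port the containers listen on
```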
926:01 And so this is the end of part one of this lesson. It was getting a bit long, so I decided to break it up. This would be a great opportunity for you to get up and have a stretch, get yourself a coffee or tea, and whenever you're ready, part two will be starting immediately from the end of part one. So go ahead and mark this as complete, and I'll see you in the next one.
926:21 [Music]
926:25 Welcome back. This is part two of the Kubernetes services lesson, and we're going to continue immediately from the end of part one. So whenever you're ready, let's dive in.
926:36 And so the next service type we have is multi-port services. Now, for some services there is a need to expose more than one port. Kubernetes lets you configure multiple port definitions on a service object. So when using multiple ports for a service, you must give all your ports names, and if you have multiple service ports, these names must be unique.
926:59 In this example, if a client calls the service at 10.176.133.7 on TCP port 80, the request is forwarded to a member pod on TCP port 80 on either node 1 or node 2. But if a client calls the service at 10.176.133.7 on TCP port 9752, the request is forwarded to the pod on TCP port 9752 that resides on node 1. Each member pod must have a container listening on TCP port 80 and a container listening on TCP port 9752. This could be a single container with two threads, or two containers running in the same pod.
927:45 And of course, as shown here is a manifest for the multi-port service: the name of the service, the label used for the selector, as well as the service type; the port exposed internally for each separate workload, as well as the port that containers are listening on for each workload as well. And as you saw before, nginx was using target port 80, where appy was using port 9752.
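Here is a hedged sketch of what that multi-port manifest might look like. The port names are illustrative (only the selector, the two target ports, and the uniqueness requirement come from the walkthrough), and the appy service port is assumed to mirror its target port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multiport-service
spec:
  selector:
    app: inventory
  ports:
    - name: http        # every port must have a unique name
      port: 80
      targetPort: 80    # the nginx container
    - name: appy
      port: 9752
      targetPort: 9752  # the appy container
```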
928:15 Moving on to another service type: ExternalName. Now, a service of type ExternalName provides an internal alias for an external DNS name. Internal clients make requests using the internal DNS name, and the requests are redirected to the external name. When you create a service, Kubernetes creates a DNS name that internal clients can use to call the service. In this example, the internal DNS name is bowtie.sql; when an internal client makes a request to the internal DNS name of bowtie.sql, the request gets redirected to bowtie.sql2.bowtieinc.private.
928:57 The ExternalName service type is a bit different than other service types, as it's not associated with a set of pods or an IP address. It is a mapping from an internal DNS name to an external DNS name. This service does a simple CNAME redirection and is a great use case for any external service that resides outside of your cluster. And again, here is a view of a manifest for type ExternalName, showing the internal DNS name along with the external DNS name redirect.
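A minimal sketch of an ExternalName manifest follows. The names are approximated from the spoken example, so treat them as placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bowtie-sql      # internal clients resolve this service's DNS name
spec:
  type: ExternalName
  externalName: bowtie.sql2.bowtieinc.private  # CNAME target outside the cluster
```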
929:32 And moving on to the last service type, we have the headless service type. Now, sometimes you don't need or want load balancing and a single service IP. In this case, you can create headless services by specifying None as the cluster IP in the manifest file. This option also allows you to choose other service discovery mechanisms without being tied to Kubernetes' implementation. Applications can still use a self-registration pattern with this service.
930:00 And so a great use case for this is when you don't need any load balancing or routing; you only need the service to pass the request to the backend pod, no IPs needed. A headless service is typically used with stateful sets, where the names of the pods are fixed. This is useful in situations like when you're setting up a MySQL cluster, where you need to know the name of the master. And so here is a manifest for the headless service; again, the cluster IP is marked as None.
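A hedged sketch of a headless service manifest, reusing the selector from the earlier examples:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  clusterIP: None       # "None" is what makes the service headless
  selector:
    app: inventory
  ports:
    - port: 80
      targetPort: 80
```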
930:32 And so to sum it up: Kubernetes services provide the interfaces through which pods can communicate with each other. They also act as the main gateway for your application. Services use selectors to identify which pods they should control. They expose an IP address and a port that is not necessarily the same port at which the pod is listening, and services can expose more than one port and can also route traffic to other services, external IP addresses, or DNS names. Services make it really easy to create network services in Kubernetes; each service can be backed with as many pods as needed without having to make your code aware of how each service is backed.
931:15 Also, please note that there are many other features and use cases within the services that have been mentioned that I've not brought up. I will also include some links in the lesson text for those who are interested in diving deeper into services. This lesson was merely to summarize the different service types, and knowing these service types will put you in a great position on the exam for any questions that cover services within GKE.
931:41 Now, I know this has been another lesson that's been extremely heavy in theory and has been a tremendous amount to take in, but not to worry: next up is a demo that will put all this theory into practice, and we'll be going ahead and building a cluster, along with touching on many of the components discussed within the past few lessons. And so that's pretty much all I wanted to cover when it comes to Kubernetes service types, so you can now mark this lesson as complete, and whenever you're ready, join me in the console.
932:15 [Music]
932:19 Welcome back. In this lesson, I'll be going over Ingress for GKE, an object within GKE that defines rules for routing traffic to specific services. Ingress is a well-known topic that comes up in the exam, as well as being a common resource that is used in many GKE clusters that you will see in most environments, and something that you will get very familiar with while diving deeper into more complex environments. So whenever you're ready, let's dive in.
932:55 Now, in GKE, an Ingress object defines rules for routing HTTP and HTTPS traffic to applications running in a cluster. An Ingress object is associated with one or more service objects, each of which is associated with a set of pods. When you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP or HTTPS load balancer and configures it according to the information in the Ingress and its associated services.
933:34 GKE Ingress is a built-in and managed ingress controller. This controller implements Ingress resources as Google Cloud load balancers for HTTP and HTTPS workloads in GKE. Also, the load balancer is given a stable IP address that you can associate with a domain name. Each external HTTP and HTTPS load balancer or internal HTTP or HTTPS load balancer uses a single URL map, which references one or more backend services. One backend service corresponds to each service referenced by the Ingress.
934:21 In this example, assume that you have associated the load balancer's IP address with the domain name bowtieinc.co. When a client sends a request to bowtieinc.co, the request is routed to a Kubernetes service named products on port 80. And when a client sends a request to bowtieinc.co/discontinued, the request is routed to a Kubernetes service named discontinued on port 21337.
934:52 Ingress is probably the most powerful way to expose your services, but can also be very complex, as there are also many types of ingress controllers to choose from, along with plugins for ingress controllers. Ingress is the most useful and cost-effective if you want to expose multiple services under the same IP address, as you only pay for one load balancer if you are using the native GCP integration, and it comes with a slew of features.
935:30 And so shown here is the Ingress manifest, which is a bit different from the other manifests that you've seen, as it holds rules for the different paths explained in the previous diagram. In the manifest shown here, one path directs all traffic to the products service name, while the other path redirects traffic from /discontinued to the backend service name of discontinued. And note that each of these service names has its own independent manifest, as that is needed to create the service, and they are referenced within the Ingress manifest. So the more rules you have for different paths or ports, the more services you will need.
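The rules being described might be sketched as follows, using the current networking.k8s.io/v1 Ingress schema (which may differ slightly from the slide; the Ingress name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bowtie-ingress
spec:
  rules:
    - http:
        paths:
          - path: /discontinued
            pathType: Prefix
            backend:
              service:
                name: discontinued   # routed to this service on port 21337
                port:
                  number: 21337
          - path: /
            pathType: Prefix
            backend:
              service:
                name: products       # all other traffic goes here on port 80
                port:
                  number: 80
```

The two service names referenced here are created by their own separate Service manifests, as noted above.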
936:17 Now, I wanted to touch on network endpoint groups, or NEGs for short, for just a second. Now, this is a configuration object that specifies a group of backend endpoints or services. NEGs are useful for container-native load balancing, where each container can be represented as an endpoint to the load balancer. The NEGs are used to track pod endpoints dynamically, so the Google load balancer can route traffic to the appropriate backends.
936:54 So traffic is load-balanced from the load balancer directly to the pod IP, as opposed to traversing the VM IP and kube-proxy networking. In these conditions, services will be annotated automatically, indicating that a NEG should be created to mirror the pod IPs within the service. The NEG is what allows Compute Engine load balancers to communicate directly with pods.
937:25 The diagram shown here is the Ingress-to-Compute-Engine resource mapping of the manifest that you saw earlier, where the GKE ingress controller deploys and manages Compute Engine load balancer resources based on the Ingress resources that are deployed in the cluster.
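On GKE, container-native load balancing is requested through a service annotation. A hedged sketch (on VPC-native clusters GKE typically adds this annotation automatically, as mentioned above; the service name and selector are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: products
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # create NEGs so the LB targets pod IPs
spec:
  selector:
    app: products
  ports:
    - port: 80
      targetPort: 80
```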
937:46 Now, touching on health checks for just a minute. If there are no health check parameters specified for a corresponding service using a BackendConfig custom resource definition, a set of default and inferred parameters are used. Health check parameters for a backend service should be explicitly defined by creating a BackendConfig custom resource definition for the service, and this should be done if you're using Anthos. A BackendConfig custom resource definition should also be used if you have more than one container in the serving pods, as well as if you need control over the port that's used for the load balancer's health checks.
938:30 Now, you can specify the backend service's health check parameters using the healthCheck parameter of a BackendConfig custom resource definition referenced by the corresponding service. This gives you more flexibility and control over health checks for a Google Cloud external HTTP or HTTPS load balancer or internal HTTP or HTTPS load balancer created by an Ingress.
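As a rough sketch of that pattern, a BackendConfig is defined and then referenced from the service via an annotation. The resource names and the /healthz path here are illustrative, not from the lesson:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: products-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # path the load balancer probes (assumed)
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: products
  annotations:
    cloud.google.com/backend-config: '{"default": "products-backendconfig"}'
spec:
  selector:
    app: products
  ports:
    - port: 80
      targetPort: 80
```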
939:04 And lastly, I wanted to touch on SSL certificates, and there are three ways to provide SSL certificates to an HTTP or HTTPS load balancer. The first way is Google-managed certificates, and these are provisioned, deployed, renewed, and managed for your domains. And just as a note, managed certificates do not support wildcard domains.
939:30 The second way to provide SSL certificates is through self-managed certificates that are shared with Google Cloud. You can provision your own SSL certificate and create a certificate resource in your Google Cloud project. You can then list the certificate resource in an annotation on an Ingress to create an HTTP or HTTPS load balancer that uses the certificate.
939:59 And the last way to provide SSL certificates is through self-managed certificates as Secret resources. So you can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP or HTTPS load balancer that uses this certificate. And just as a note, you can specify multiple certificates in an Ingress manifest; the load balancer chooses a certificate if the common name in the certificate matches the hostname used in the request.
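For the third option, the Secret-based one, the Ingress side might look like this sketch (the Ingress name, secret name, and backend service are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bowtie-ingress
spec:
  tls:
    - secretName: bowtie-tls   # Secret holding tls.crt and tls.key
  defaultBackend:
    service:
      name: products
      port:
        number: 80
```

The Secret itself can be created with, for example, kubectl create secret tls bowtie-tls --cert=cert.pem --key=key.pem, where the file names are placeholders for your own certificate and key.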
940:42 And so that pretty much covers all the main topics in this short lesson on Ingress for GKE. So you can now mark this lesson as complete, and let's move on to the next one.
940:53 [Music]
940:57 Welcome back. In this lesson, I'll be going over GKE storage options. Now, Kubernetes currently offers a slew of different storage options, and this is only enhanced by the added features available in Google Cloud for GKE. We'll also be getting into the different abstractions that Kubernetes offers to manage storage and how they can be used for different types of workloads. Now, there's quite a bit to go over here, so with that being said, let's dive in.
941:32 Now, as I stated before, there are several storage options for applications running on GKE. The choices vary in terms of flexibility and ease of use. Google Cloud offers several storage options that can be used for your specific workload, and Kubernetes also provides storage abstractions, which I will be getting into in just a bit.
941:56 The easiest storage options are Google Cloud's managed storage products. If you need to connect a database to your cluster, you can consider using Cloud SQL, Datastore, or Cloud Spanner. And when it comes to object storage, Cloud Storage would be an excellent option to fill the gap. Filestore is a great option for when your application requires managed network-attached storage. And if your application requires block storage, the best option is to use persistent disks, which can be provisioned manually or provisioned dynamically through Kubernetes.
942:39 Now, I wanted to first start off with Kubernetes storage abstractions, but in order to understand Kubernetes storage abstractions, I wanted to take a moment to explain how storage is mounted in the context of Docker. Now, Docker has a concept of volumes, though it is somewhat looser and less managed than in Kubernetes. A Docker volume is a directory on disk or in another container. Docker provides volume drivers, but the functionality is somewhat limited.
943:10 A Docker container has a writable layer, and this is where data is stored by default, making the data ephemeral; data is not persisted when the container is removed, so storing data inside a container is not always recommended.
943:28 Now, there are three ways to mount data inside a Docker container. The first way is a Docker volume, which sits inside the Docker area within the host's file system and can be shared amongst other containers. This volume is a Docker object and is decoupled from the container; volumes can be attached and shared across multiple containers as well. Bind mounting is the second way to mount data, and it comes directly from the host's file system. Bind mounts are great for local application development, yet cannot be shared across containers. And the last way to mount data is by using tmpfs, which is stored in the host's memory. This way is great for ephemeral data and
944:17 way is great for ephemeral data and increases performance as it no longer
944:20 increases performance as it no longer lies in the container's writable layer
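The three mount types can be sketched side by side in a docker compose file (a hedged illustration; the service name, image, and paths are made up):

```yaml
services:
  app:
    image: nginx:alpine
    volumes:
      # 1. named docker volume, managed by docker and shareable across containers
      - app-data:/var/lib/app
      # 2. bind mount, a path taken directly from the host's file system
      - ./src:/usr/share/nginx/html
    tmpfs:
      # 3. tmpfs mount, kept in the host's memory and never written to disk
      - /tmp/scratch
volumes:
  app-data:
```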
944:23 now with kubernetes storage abstractions, file system and block based storage are provided to your pods, but they are different than docker in nature
944:32 volumes are the basic storage unit in kubernetes, and they decouple the storage from the container and tie it to the pod, not the container like in docker
944:44 a regular volume, simply called a volume, is basically a directory that the containers in a pod have access to
944:53 the particular volume type used is what will determine its purpose
944:59 some volume types are backed by ephemeral storage, like emptydir, configmap, and secret, and these volumes do not persist after the pod ceases to exist
945:12 these volumes are useful for caching temporary information, sharing files between containers, or to load data into a pod
945:21 other volume types are backed by durable storage and persist beyond the lifetime of a pod, like persistent volumes and persistent volume claims
945:32 a persistent volume is a cluster resource that pods can use for durable storage
945:40 a persistent volume claim can be used to dynamically provision a persistent volume backed by persistent disks
945:49 persistent volume claims can also be used to provision other types of backing storage, like nfs
945:57 and i will be getting more into persistent volumes and persistent volume claims in just a bit
946:05 now as you saw in docker, on-disk files in a container are the simplest place for an application to write data
946:11 but files are lost when the container crashes or stops for any other reason, as well as being inaccessible to other containers running in the same pod
946:26 in kubernetes, the volume source declared in the pod specification determines how the directory is created, the storage medium used, and the directory's initial contents
946:40 a pod specifies what volumes it contains and the path where containers mount the volume
946:46 ephemeral volume types live the same amount of time as the pods they are connected to
946:52 these volumes are created when the pod is created and persist through container restarts
946:58 only when the pod terminates or is deleted are the volumes terminated as well
947:04 other volume types are interfaces to durable storage that exist independently of a pod
947:14 unlike ephemeral volumes, data in a volume backed by durable storage is preserved when the pod is removed
947:23 the volume is merely unmounted, and the data can be handed off to another pod
947:30 now volumes differ in their storage implementation and their initial contents
947:36 you can choose the volume source that best fits your use case
947:40 and i will be going over some common volume sources that are used, and that you will see in many gke implementations
947:49 the first volume that i want to bring up is emptydir
947:53 now an emptydir volume provides an empty directory that containers in the pod can read and write from
948:00 when the pod is removed from a node for any reason, the data in the emptydir is deleted forever
948:08 an emptydir volume is stored on whatever medium is backing the node, which might be a disk, ssd, or network storage
948:19 emptydir volumes are useful for scratch space and for sharing data between multiple containers in a pod
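A minimal sketch of a pod using an emptydir volume for scratch space (the pod and volume names here are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    # created empty with the pod, deleted forever when the pod leaves the node
    - name: scratch
      emptyDir: {}
```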
948:28 the next type of volume that i wanted to go over is configmap
948:32 and a configmap is a resource that provides a way to inject configuration data into pods
948:39 the data stored in a configmap object can be referenced in a volume of type configmap, and then consumed through files by applications running in a pod
948:50 the next volume type is secret, and a secret volume is used to make sensitive data, such as passwords, oauth tokens, and ssh keys, available to applications
949:04 the data stored in a secret object can be referenced in a volume of type secret, and then consumed through files by applications running in a pod
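As a hedged illustration, a single pod can mount both volume types, assuming a configmap named `my-config` and a secret named `my-secret` already exist (both names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: app-config
          mountPath: /etc/config
          readOnly: true
        - name: app-secret
          mountPath: /etc/secret
          readOnly: true
  volumes:
    # each key in the configmap or secret becomes a file under the mount path
    - name: app-config
      configMap:
        name: my-config
    - name: app-secret
      secret:
        secretName: my-secret
```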
949:14 the next volume type is downward api, and this volume makes downward api data available to applications
949:22 so this data includes information about the pod and the container in which an application is running
949:30 an example of this would be to expose information about the pod's namespace and ip address to applications
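A hedged sketch of that example: the namespace is exposed as a file through a downward api volume, while the pod ip (which the downward api only exposes through environment variables, not volume files) comes in as an env var:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      env:
        # pod ip is available via fieldRef as an environment variable
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    # pod metadata exposed to the application as files
    - name: podinfo
      downwardAPI:
        items:
          - path: namespace
            fieldRef:
              fieldPath: metadata.namespace
```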
949:40 and the last volume type that i wanted to touch on is persistent volume claim
949:45 now a persistentvolumeclaim volume can be used to provision durable storage so that it can be used by applications
949:54 a pod uses a persistent volume claim to mount a volume that is backed by this durable storage
950:02 and so now that i've covered volumes, i wanted to go into a bit of detail about persistent volumes
950:08 persistent volume resources are used to manage durable storage in a cluster
950:14 in gke, a persistent volume is typically backed by a persistent disk, or filestore can be used as an nfs solution
950:23 unlike volumes, the persistent volume lifecycle is managed by kubernetes and can be dynamically provisioned, without the need to manually create and delete the backing storage
950:38 persistent volume resources are cluster resources that exist independently of pods, and continue to persist as the cluster changes and as pods are deleted and recreated
950:50 moving on to persistent volume claims, this is a request for, and claim to, a persistent volume resource
950:57 persistent volume claim objects request a specific size, access mode, and storage class for the persistent volume
951:07 if an existing persistent volume can satisfy the request or can be provisioned, the persistent volume claim is bound to that persistent volume
951:18 and just as a note, pods use claims as volumes
951:22 the cluster inspects the claim to find the bound volume and mounts that volume for the pod
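Putting the claim and the pod together, a minimal sketch (the claim name and sizes are made up; on gke this claim would be satisfied by a dynamically provisioned persistent disk via the default storage class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    # the pod uses the claim as a volume; the cluster mounts the bound pv
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```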
951:31 now i wanted to take a moment to go over storage classes and how they apply to the overall storage in gke
951:39 now these volume implementations, such as gce persistent disk, are configured through storage class resources
951:48 gke creates a default storage class for you, which uses the standard persistent disk type with an ext4 file system, as shown here
951:57 the default storage class is used when a persistent volume claim doesn't specify a storage class name, and it can also be replaced with one of your choosing
952:08 you can even create your own storage class resources to describe different classes of storage, which is helpful when using windows node pools
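A hedged sketch of a custom storage class using ssd persistent disks (the class name `faster` is made up, and this shows the in-tree `kubernetes.io/gce-pd` provisioner used at the time; newer clusters use the `pd.csi.storage.gke.io` csi provisioner instead):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
# gce persistent disk provisioner; pd-ssd selects ssd-backed disks
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

A claim opts into this class by setting `storageClassName: faster` in its spec.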
952:20 now as i stated before, persistent volume claims can automatically provision persistent disks for you
952:28 when you create this persistent volume claim object, kubernetes dynamically creates a corresponding persistent volume object, due to the gke default storage class
952:41 this persistent volume is backed by a new, empty compute engine persistent disk
952:46 you use this disk in a pod by using the claim as a volume
952:50 when you delete a claim, the corresponding persistent volume object and the provisioned compute engine persistent disk are also deleted
953:00 now to prevent deletion, you can set the reclaim policy of the persistent volume resource, or its storage class resource, to retain
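As a hedged sketch of that second option, a storage class can set the reclaim policy so the backing disk survives claim deletion (the class name is made up):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# keep the persistent volume and its compute engine disk when the claim is deleted
reclaimPolicy: Retain
```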
953:12 now deployments, as shown here in this diagram, are designed for stateless applications
953:18 so all replicas of a deployment share the same persistent volume claim
953:24 which is why stateful sets are the recommended method of deploying stateful applications that require a unique volume per replica
953:34 by using stateful sets with persistent volume claim templates, you can have applications that can scale up automatically, with unique persistent volume claims associated to each replica pod
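A minimal sketch of a stateful set with a claim template (names and sizes are made up); each replica gets its own claim, named after the template and ordinal, such as `data-web-0` and `data-web-1`:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  # one persistent volume claim is created per replica from this template
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```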
953:50 now lastly, i wanted to touch on some topics that will determine the storage access that is available for any gke cluster in your environment
954:02 now i first wanted to start off with access modes, and there are three supported access modes for your persistent disks, which are listed here
954:12 read write once is where the volume can be mounted as read-write by a single node
954:18 read only many is where the volume can be mounted as read-only by many nodes
954:25 and lastly, read write many is where the volume can be mounted as read-write by many nodes
954:33 and just as a note, read write once is the most common use case for persistent disks and works as the default access mode for most applications
954:43 next i wanted to touch on the types of persistent disks that are available and the benefits and caveats of access for each
954:51 now going through the persistent disks lesson of this course, you probably know by now about the available persistent disks when it comes to zonal versus regional availability, and so this may be a refresher for some
955:09 now going into regional persistent disks, these are multi-zonal resources that replicate data between two zones in the same region, and can be used similarly to zonal persistent disks
955:21 in the event of a zonal outage, kubernetes can fail over workloads using the volume to the other zone
955:30 regional persistent disks are great for highly available solutions for stateful workloads on gke
955:40 now zonal persistent disks are zonal resources, and so unless a zone is specified, gke assigns the disk to a single zone, choosing the zone at random
955:52 once a persistent disk is provisioned, any pods referencing the disk are scheduled to the same zone as the disk
956:03 and just as a note, using anti-affinity on zones allows stateful set pods to be spread across zones, along with the corresponding disks
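A hedged sketch of a storage class that provisions regional persistent disks via the in-tree gce-pd provisioner (the class name is made up; `replication-type: regional-pd` is the documented parameter):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  # replicate the disk between two zones in the region
  replication-type: regional-pd
# delay binding until a pod is scheduled, so zones are chosen sensibly
volumeBindingMode: WaitForFirstConsumer
```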
956:16 and the last point that i wanted to cover, when it comes to persistent volume access, is the speed of access
956:23 now as stated in an earlier lesson, the size of a persistent disk determines the iops and throughput of the disk
956:31 gke typically uses persistent disks as boot disks and to back kubernetes persistent volumes
956:40 so whenever possible, use larger and fewer disks to achieve higher iops and throughput
956:48 and so that pretty much covers everything that i wanted to go over in this lesson on gke storage options
956:56 so you can now mark this lesson as complete, and let's move on to the next one
957:01 [Music]
957:05 welcome back, in these next few demos i'm going to be doing a complete walkthrough, putting all the theory we learned into practice through building and interacting with gke clusters
957:18 and you'll be building and deploying your own containerized application on this cluster, called box of bowties
957:26 so in this demo we're going to be setting up our own gke cluster in the console, along with going through all the options that are available when deploying it
957:35 we're also going to use the command line to configure the kubectl command line tool so that we can interact with the cluster
957:43 so with that being said, let's dive in
957:47 and so here in the console, i am logged in as tonybowties@gmail.com under the project of bowtie inc
957:53 and so before launching the cluster, i need to make sure that my default vpc has been created
957:58 so i'm going to go over to the navigation menu, and i'm going to scroll down to vpc network
958:04 and as expected, the default network is here, so i can go ahead and create my cluster
958:08 and so in order to get to my kubernetes engine console, i'm going to go up to the navigation menu, and i'm going to scroll down under compute, and you will find here kubernetes engine
958:20 and you'll see a few different options to choose from, and over here on the left-hand menu
958:24 i will be going through these options in the upcoming demos, but for now i want to concentrate on creating our cluster
958:31 now gke makes things pretty easy, as i have the option to create a cluster, to deploy a container, or even taking the quick start
958:39 and so we're going to go ahead and click on create our cluster
958:45 and so here we are prompted with our cluster basics
958:48 now if i really wanted to, i could simply fill out all the fields that you see here and click on create, and it would use all the defaults to build my cluster
958:57 but we're going to customize it a little bit, so we're going to go ahead and go through all these options
959:00 so first, under name, we're going to name this cluster bowtie-cluster
959:06 and so under location type, we want to keep things as zonal
959:08 and if i check off specify default node locations, i'll be able to make this a multi-zonal cluster, as i have the option of selecting multiple zones where i can situate my nodes
959:22 and so i could select a bunch of different zones if i chose to, but we want to keep it as a single zonal cluster, so i'm going to uncheck all of these
959:33 and under zone, i'm going to click on the drop-down menu and i'm going to select us-east1-b
959:37 and just as a note, for each zone that you select, this is where the control plane will live
959:43 so if i was to create a multi-zonal cluster, as you can see, the master zone is the zone where the control plane will be created, and it is selected as us-east1-b, as that is the zone that i had selected
959:58 and so if i change this to, let's say, us-east1-d, you can see that the control plane will change with it, so i'm going to change it back to us-east1-b
960:07 and you also have the option of creating a regional cluster, and the location selection will change from zone to region, and here you will have to specify at least one zone to select
960:20 but please also remember that the same number of nodes will be deployed to each selected zone
960:25 so if i have three nodes in this cluster and i decide to select three zones, then i will have nine nodes in this cluster
960:33 and so doing something like this could get quite pricey when you're looking to be cost conscious
960:40 okay, so moving on, i'm going to uncheck specify default node locations, change the location type back to zonal, and make sure that my zone is us-east1-b
960:49 moving down to the master version, this is where we would select either a static version, or opt in to a release channel, for the version of kubernetes that you want for your cluster
961:01 and so with the static version, i can choose from a bunch of different versions here, all the way back from 1.14.10 up to the latest version
961:12 and so with the release channel, i have the release channel selection here, and i can choose from the rapid channel, the regular channel, or the stable channel
961:21 and so i'm going to keep things as the default with the regular channel, as well as keeping the default version as the version of my choice
961:28 now i could go ahead and simply click on create here, but as this demo is a walkthrough, i'm going to go through all the available options
961:38 so i'm going to start by going over to the left-hand menu and clicking on default-pool under node pools
961:44 now here i have one node pool already with three nodes, and this is the default node pool that comes with any cluster
961:51 but if i was doing something specific, i could add another node pool and configure it from here
961:58 but because i don't have a need for two node pools, i'm going to go ahead and remove node-pool-1
962:04 so i'm going to go up here to remove node pool, and as you can see, gke makes it really easy for me to add or remove node pools
962:12 so i'm going to go back to the default pool, and i'm going to keep the name as is
962:15 i'm going to keep my number of nodes as three, and if i wanted to change the number of nodes, i can simply select this, i can choose six or however many
962:25 this i can choose six or however many nodes you need for your workload and so
962:28 nodes you need for your workload and so because we're not deploying a large
962:29 because we're not deploying a large workload i'm gonna keep this number at 3
962:32 workload i'm gonna keep this number at 3 and moving right along we do want to
962:34 and moving right along we do want to check off enable auto scaling and so
962:37 check off enable auto scaling and so this way we don't have to worry about
962:39 this way we don't have to worry about scaling up or scaling down and here i'm
962:42 scaling up or scaling down and here i'm going to put the minimum number of nodes
962:44 going to put the minimum number of nodes as one and i'm going to keep my maximum
962:46 as one and i'm going to keep my maximum number of nodes at 3. and so here i'm
962:48 number of nodes at 3. and so here i'm given the option to select the zone
962:51 given the option to select the zone location for my nodes but again for each
962:54 location for my nodes but again for each zone that i select it will run the same
962:56 zone that i select it will run the same amount of nodes so basically i have
962:59 amount of nodes so basically i have another option in order to choose from
963:01 another option in order to choose from having a zonal or multi-zonal cluster
963:04 having a zonal or multi-zonal cluster and because we're creating our cluster
963:06 and because we're creating our cluster in a single zone i'm going to uncheck
963:08 in a single zone i'm going to uncheck this and under automation as you can see
963:11 this and under automation as you can see enable auto upgrade and enable auto
963:14 enable auto upgrade and enable auto repair are both checked off and this is
963:16 repair are both checked off and this is due to the fact that the auto upgrade
963:19 due to the fact that the auto upgrade feature is always enabled for the
963:21 feature is always enabled for the release channel that i selected but as i
963:24 release channel that i selected but as i pointed out in a previous lesson that
963:26 pointed out in a previous lesson that this is google's best practice
963:28 this is google's best practice to have auto upgrade and auto repair
963:31 to have auto upgrade and auto repair enabled and so moving down to the bottom
963:33 enabled and so moving down to the bottom are some fields to change the surge
963:35 are some fields to change the surge upgrade behavior and so just as a
963:37 upgrade behavior and so just as a refresher surge upgrades allow you to
963:40 refresher surge upgrades allow you to control the number of nodes gke can
963:42 control the number of nodes gke can upgrade at a time and control how
963:44 upgrade at a time and control how disruptive those upgrades are to your
963:47 disruptive those upgrades are to your workloads so max surge being the number
963:49 workloads so max surge being the number of additional nodes that can be added to
963:51 of additional nodes that can be added to the node pool during an upgrade and max
963:53 the node pool during an upgrade and max unavailable being the number of nodes
963:55 unavailable being the number of nodes that can be simultaneously unavailable
963:58 that can be simultaneously unavailable during that upgrade and because we're
964:00 during that upgrade and because we're not worried about disruptions we'll just
964:02 not worried about disruptions we'll just leave it set as the default and so
964:04 leave it set as the default and so moving on we're going to move back over
964:06 moving on we're going to move back over to the left hand menu and under no pools
964:08 to the left hand menu and under no pools we're going to click on nodes and here
964:10 we're going to click on nodes and here is where i can choose the type of
964:12 is where i can choose the type of instance that i want to be using for my
964:15 instance that i want to be using for my nodes and so i'm going to keep the image
964:17 nodes and so i'm going to keep the image type as container optimize os and this
964:20 type as container optimize os and this is the default image type but i also
964:22 is the default image type but i also have the option of choosing from others
964:24 have the option of choosing from others like ubuntu or windows and so i'm going
964:27 like ubuntu or windows and so i'm going to keep it as the default and under
964:29 to keep it as the default and under machine configuration i'm going to keep
964:32 machine configuration i'm going to keep it under general purpose
964:34 it under general purpose with series e2 but i do want to change
964:36 with series e2 but i do want to change the machine type to e2 micro just to be
964:40 the machine type to e2 micro just to be cost conscious and under boot disk size
964:43 cost conscious and under boot disk size i want to keep it as 10 gigabytes as we
964:46 i want to keep it as 10 gigabytes as we don't really need 100 gigabytes for what
964:48 don't really need 100 gigabytes for what we're doing here and you also have the
964:50 we're doing here and you also have the option of choosing from a different boot
964:53 option of choosing from a different boot disk type you can change it from
964:55 disk type you can change it from standard persistent disk to ssd but i'm
964:57 standard persistent disk to ssd but i'm going to keep things as standard as well
965:00 going to keep things as standard as well i also have the option here to use
965:02 i also have the option here to use customer manage keys for encryption on
965:04 customer manage keys for encryption on my boot disk as well as selecting from
965:06 my boot disk as well as selecting from preemptable nodes for some cost savings
965:09 preemptable nodes for some cost savings and so i'm going to now move down to
965:11 and so i'm going to now move down to networking and here if i wanted to get
965:13 networking and here if i wanted to get really granular i can add a maximum pods
965:16 really granular i can add a maximum pods per node as well as some network tags
965:19 per node as well as some network tags but our demo doesn't require this so i'm
965:21 but our demo doesn't require this so i'm going to leave it as is and i'm going to
965:23 going to leave it as is and i'm going to go back over to the left hand menu and
965:26 go back over to the left hand menu and click on security and under node
965:28 click on security and under node security you have the option of changing
965:30 security you have the option of changing your service account along with the
965:33 your service account along with the access scopes and so for this demo we
965:35 access scopes and so for this demo we can keep things as the default service
965:38 can keep things as the default service account and the access scopes can be
965:40 account and the access scopes can be left as is i'm going to go back over to
965:42 left as is i'm going to go back over to the left hand menu and click on metadata
965:45 the left hand menu and click on metadata and here i can add kubernetes labels as
965:48 and here i can add kubernetes labels as well as the instance metadata and so i
965:50 well as the instance metadata and so i know i didn't get into node taints but
965:53 know i didn't get into node taints but just to fill you in on no taints when
965:55 just to fill you in on no taints when you submit a workload to run in a
965:57 you submit a workload to run in a cluster the scheduler determines where
966:00 cluster the scheduler determines where to place the pods associated with the
966:02 to place the pods associated with the workload and so the scheduler will place
966:05 workload and so the scheduler will place a pod on any node that satisfies the
966:08 a pod on any node that satisfies the resource requirements for that workload
966:11 resource requirements for that workload so no taints will give you some more
966:13 so no taints will give you some more control over which workloads can run on
966:16 control over which workloads can run on a particular pool of nodes and so they
966:19 a particular pool of nodes and so they let you mark a node so that the
966:21 let you mark a node so that the scheduler avoids or prevents using it
966:24 scheduler avoids or prevents using it for certain pods so for instance if you
966:26 for certain pods so for instance if you had a node pool that is dedicated to
966:29 had a node pool that is dedicated to gpus you'd want to keep that node pool
966:32 gpus you'd want to keep that node pool specifically for the workload that
966:34 specifically for the workload that requires it and although it is in beta
966:37 requires it and although it is in beta this is a great feature to have and so
966:39 this is a great feature to have and so that pretty much covers no pools as we
966:42 that pretty much covers no pools as we see it here and so this is the end of
966:44 see it here and so this is the end of part one of this demo it was getting a
966:46 part one of this demo it was getting a bit long so i decided to break it up
966:49 bit long so i decided to break it up this would be a great opportunity for
966:51 this would be a great opportunity for you to get up and have a stretch get
966:53 you to get up and have a stretch get yourself a coffee or a tea and whenever
966:56 yourself a coffee or a tea and whenever you're ready part two will be starting
966:58 you're ready part two will be starting immediately from the end of part one so
967:01 immediately from the end of part one so you can now mark this as complete and
967:03 you can now mark this as complete and i'll see you in the next one
967:05 i'll see you in the next one [Music]
967:09 [Music] this is part two of creating a gke
967:11 this is part two of creating a gke cluster part 2 will be starting
967:14 cluster part 2 will be starting immediately from the end of part 1. so
967:17 immediately from the end of part 1. so with that being said let's dive in and
967:19 with that being said let's dive in and so i'm going to go back over to the left
967:21 so i'm going to go back over to the left hand menu and under cluster i'm going to
967:24 hand menu and under cluster i'm going to click on automation and here i have the
967:26 click on automation and here i have the option of enabling a maintenance window
967:29 option of enabling a maintenance window for aligning times when auto upgrades
967:31 for aligning times when auto upgrades are allowed i have the option of adding
967:34 are allowed i have the option of adding the window here and i can do it at
967:36 the window here and i can do it at specified times during the week or i can
967:39 specified times during the week or i can create a custom maintenance window and
967:41 create a custom maintenance window and so we don't need a maintenance window
967:43 so we don't need a maintenance window right now so i'm going to uncheck this
967:45 right now so i'm going to uncheck this and as well you have the option of doing
967:48 and as well you have the option of doing maintenance exclusions for when you
967:50 maintenance exclusions for when you don't want maintenance to occur ngk
967:53 don't want maintenance to occur ngk gives you the option of doing multiple
967:55 gives you the option of doing multiple maintenance exclusions for whenever you
967:58 maintenance exclusions for whenever you need them and because we don't need any
968:00 need them and because we don't need any maintenance exclusions i'm going to
968:02 maintenance exclusions i'm going to delete these and here you have the
968:04 option to enable vertical pod auto
968:06 scaling and this is where gke will
968:09 automatically adjust the cpu and memory
968:12 requests of your pods to match what
968:14 your workload actually needs as well here
968:18 i can enable node auto provisioning
968:21 and enabling this option allows gke to
968:24 automatically manage a set of node pools
968:27 that can be created and deleted as
968:29 needed and i have a bunch of fields that
968:31 i can choose from the resource type the
968:34 minimum and maximum for cpu and memory
968:37 the service account
968:39 as well as adding even more resources
968:41 like gpus but our workload doesn't
968:44 require anything this fancy so i'm going
968:46 to delete this and i'm going to uncheck
968:49 enable auto provisioning and lastly we
968:51 have the auto scaling profile and i have
968:54 the option of choosing the balanced
968:56 profile which is the default as well as
968:58 optimize utilization which is still
969:01 in beta and so i'm going to keep things
969:03 as the default
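For reference, the automation options covered here also map to create-time flags. This is a sketch only — the demo actually leaves auto provisioning off, and the cpu/memory limits below are illustrative values, not the demo's; the command is printed rather than run:

```shell
# Sketch of the automation options as create-time flags.
# The min/max CPU and memory limits are illustrative, and this demo
# does not actually enable auto provisioning. Printed, not executed.
AUTO_CMD="gcloud container clusters create bowtie-cluster --zone us-east1-b \
 --enable-vertical-pod-autoscaling \
 --enable-autoprovisioning --min-cpu 1 --max-cpu 8 --min-memory 1 --max-memory 32 \
 --autoscaling-profile optimize-utilization"
echo "$AUTO_CMD"
```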
969:07 and so i'm going to move back over to the left hand menu to networking and here i can get
969:09 really granular with my cluster when it
969:12 comes to networking i have the option of
969:14 choosing from a public or a private
969:17 cluster as well i can choose a
969:19 different network and since we only have
969:21 the default that's what shows up but if
969:23 you had different networks here you could
969:25 choose from them as well as the subnets
969:28 i can also choose from other networking
969:30 options like pod address range maximum
969:33 pods per node and there's a bunch of
969:35 other options which i won't get into any
969:37 detail with but i encourage you if
969:40 you're very curious to go through the
969:42 docs and to check out these different
969:44 options now the one thing that i wanted
969:46 to note here is enable http load
969:49 balancing and this is an add-on that is
969:51 required in order to use the google cloud
969:54 load balancer and so as we discussed
969:56 previously in the services lesson when
969:59 you enable service type load balancer a
970:02 load balancer will be created for you by
970:05 the cloud provider and so google
970:07 requires you to check this off so that a
970:09 controller can be installed in the
970:12 cluster upon creation and will allow a
970:15 load balancer to be created when the
970:17 service is created and so i'm going to
970:19 leave this checked as we will be
970:21 deploying a load balancer a little bit
970:23 later
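At the command line, that checkbox corresponds to the HttpLoadBalancing add-on at cluster creation. A minimal sketch (printed rather than executed):

```shell
# Sketch: the "enable HTTP load balancing" checkbox maps to the
# HttpLoadBalancing add-on when creating a cluster from the CLI.
# Printed rather than executed.
ADDON_CMD="gcloud container clusters create bowtie-cluster --zone us-east1-b --addons HttpLoadBalancing"
echo "$ADDON_CMD"
```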
970:25 and so moving back over to the left hand menu i'm going to now click on
970:27 security and there are many options here
970:29 to choose from that will allow you to
970:32 really lock down your cluster and again
970:34 this would all depend on your specific
970:36 type of workload now i'm not going to go
970:38 through all these options here but i did
970:41 want to highlight them for those who are
970:42 looking to be more security focused with
970:45 their cluster and so moving down the list
970:47 in the menu i'm going to click on
970:48 metadata and so here i can enter a
970:51 description for my cluster as well as
970:53 adding labels and so the last option on
970:56 the cluster menu is features and here i
970:58 have the option of running cloud run for
971:01 anthos which allows you to deploy
971:03 serverless workloads to anthos clusters
971:06 and runs on top of gke and here you can
971:09 enable monitoring for gke and have it be
971:12 natively monitored by google cloud
971:14 monitoring and if i was running a
971:16 third-party product to monitor my
971:18 cluster i could simply uncheck this and
971:21 use my third-party monitoring and
971:23 there's a whole bunch of other features
971:25 that i won't dive into right now but if
971:28 you're curious you can always hover over
971:30 the question mark and get some more
971:32 information about what it does and so
971:34 now i've pretty much covered all the
971:36 configuration that's needed for this
971:39 cluster and so now i'm going to finally
971:41 head down to the bottom and click on
971:43 create and so it may take a few minutes
971:45 to create this cluster so i'm going to
971:47 go ahead and pause this video here and
971:50 i'll be back faster than you can say cat
971:52 in the hat okay and the cluster has been
971:54 created as you can see it's in the
971:56 location of us east 1b with three nodes
971:59 six vcpus and three gigabytes of memory
972:03 and i can drill down and see the exact
972:05 details of the cluster as well if i
972:08 wanted to edit any of these options i
972:10 can simply go up to the top click on
972:12 edit and make the necessary changes and
972:15 so now you're probably wondering what
972:17 you would need to do in order to create
972:19 this cluster through the command line
972:21 well it's a bit simpler than you might
972:23 think and i'm going to show you right
972:25 now i'm going to simply go over to the
972:26 right hand menu and activate cloud shell
972:29 and bring this up for better viewing and
972:31 i'm going to paste in my command gcloud
972:34 container clusters create bowtie-cluster
972:37 with the num-nodes flag and the
972:40 number of nodes that i choose which is
972:42 three and so like i said before if i
972:44 wanted to create a simple cluster
972:47 i can do so like this but if i wanted to
972:50 create the cluster exactly how i built
972:52 my last cluster then i can use a
972:54 command which has all the necessary
972:57 flags that i need to make it customized
972:59 to my liking a not so very exciting
973:02 demonstration but at the same time it shows
973:05 you how easy yet powerful gke really is
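As a sketch of that customized command, the flags below mirror the console choices made earlier in this demo (zone us-east1-b, three e2-micro nodes, 10 gigabyte boot disk, regular release channel, autoscaling between one and three nodes); the command is printed rather than executed since the cluster already exists:

```shell
# Sketch of the customized create command, mirroring the console
# walkthrough in this demo. Printed rather than executed since the
# cluster already exists.
CREATE_CMD="gcloud container clusters create bowtie-cluster \
 --zone us-east1-b \
 --num-nodes 3 \
 --machine-type e2-micro \
 --disk-size 10 \
 --release-channel regular \
 --enable-autoscaling --min-nodes 1 --max-nodes 3"
echo "$CREATE_CMD"
```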
973:08 and so i'm not going to launch this
973:10 cluster as i already have one and so now
973:13 i wanted to show you how to interact
973:15 with your new gke cluster so i'm going
973:17 to simply clear my screen and so now in
973:20 order for me to interact with my cluster
973:22 i'm going to be using the kubectl
973:25 command line tool and this is the tool
973:27 that is used to interact with any
973:29 kubernetes cluster no matter the
973:31 platform now i could use the gcloud
973:34 container commands but they won't allow
973:36 me to get as granular as the kubectl
973:39 tool and so a caveat of creating your
973:41 cluster through the console is that you
973:44 need to run a command in order to
973:46 retrieve the cluster's credentials and
973:49 configure the kubectl command line tool
973:52 and i'm going to go ahead and paste that
973:53 in now and the command is gcloud
973:56 container clusters get-credentials
974:00 followed by the name of my cluster which is
974:02 bowtie-cluster along with the zone
974:04 flag dash dash zone followed by the zone
974:07 itself which is us-east1-b i'm going to
974:10 go ahead and hit enter and as you can
974:12 see kubectl has now been configured and
974:15 so now i'm able to interact with my
974:17 cluster so just to verify i'm going to
974:20 run the command kubectl
974:22 get pods and naturally as no workloads
974:26 are currently deployed in the cluster
974:28 there are no pods so i'm going to run
974:30 the command kubectl get nodes
974:34 and as you can see the kubectl command
974:36 line tool is configured correctly and so
974:39 now this cluster is ready to have
974:40 workloads deployed to it and is also
974:42 configured with the kubectl command line
974:45 tool so that you're able to manage the
974:48 cluster and troubleshoot if necessary
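To recap, here is the verification sequence from this part of the demo collected in one place, printed rather than executed here:

```shell
# The three commands used above: fetch cluster credentials so kubectl
# can talk to the cluster, then verify with get pods / get nodes.
# Printed rather than executed.
VERIFY_CMDS='gcloud container clusters get-credentials bowtie-cluster --zone us-east1-b
kubectl get pods
kubectl get nodes'
echo "$VERIFY_CMDS"
```

Running get-credentials writes an entry into your kubeconfig, which is why kubectl starts working immediately afterwards.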
974:50 cluster and troubleshoot if necessary now i know that there has been a ton of
974:52 now i know that there has been a ton of features that i covered but i wanted to
974:55 features that i covered but i wanted to give you the full walkthrough so that
974:57 give you the full walkthrough so that you are able to tie in some of the
974:59 you are able to tie in some of the theory from the last few lessons and get
975:01 theory from the last few lessons and get a feel for the gke cluster as we will be
975:05 a feel for the gke cluster as we will be getting more involved with it over the
975:07 getting more involved with it over the next couple of demos and so that's
975:09 next couple of demos and so that's pretty much all i wanted to cover when
975:11 pretty much all i wanted to cover when it comes to creating and setting up a
975:14 it comes to creating and setting up a gke cluster so you can now mark this as
975:16 gke cluster so you can now mark this as complete and whenever you're ready join
975:19 complete and whenever you're ready join me in the console in the next one where
975:21 me in the console in the next one where you will be building your box a bow ties
975:23 you will be building your box a bow ties container to deploy to your new cluster
975:26 container to deploy to your new cluster but if you are not planning to go
975:27 but if you are not planning to go straight into the next demo i do
975:29 straight into the next demo i do recommend that you delete your cluster
975:32 recommend that you delete your cluster to avoid any unnecessary costs and
975:34 to avoid any unnecessary costs and recreate it when you are ready to go
975:36 recreate it when you are ready to go into the next demo
975:38 into the next demo [Music]
975:42 [Music] welcome back now in the last lesson you
975:45 welcome back now in the last lesson you built a custom gke cluster and
975:47 built a custom gke cluster and configured the cube ctl command line
975:50 configured the cube ctl command line tool to interact with the cluster in
975:52 tool to interact with the cluster in this lesson you're going to be building
975:54 this lesson you're going to be building a docker image for a box of bow ties
975:58 a docker image for a box of bow ties using cloud build which will then be
976:00 using cloud build which will then be pushed over to google cloud container
976:03 pushed over to google cloud container registry so that you can deploy it to
976:05 registry so that you can deploy it to your current gke cluster and so as you
976:08 your current gke cluster and so as you can see there's a lot to do here so with
976:10 can see there's a lot to do here so with that being said let's dive in so now the
976:13 that being said let's dive in so now the first thing that you want to do is to
976:15 first thing that you want to do is to clone your repo within cloud shell so
976:18 clone your repo within cloud shell so you can run the necessary commands to
976:20 you can run the necessary commands to build your image so i'm going to go up
976:22 build your image so i'm going to go up here to the top right and i'm going to
976:24 here to the top right and i'm going to open up cloud shell i'm going to make
976:25 open up cloud shell i'm going to make sure that i'm in my home directory so
976:27 sure that i'm in my home directory so i'm going to run the command cd space
976:30 i'm going to run the command cd space tilde
976:31 tilde hit enter and i'm in my home directory
976:33 hit enter and i'm in my home directory if i run the command ls i can see that i
976:36 if i run the command ls i can see that i only have cloud shell.txt and so now i'm
976:39 only have cloud shell.txt and so now i'm going to clone my github repository and
976:42 going to clone my github repository and i'll have a link in the instructions in
976:44 i'll have a link in the instructions in the github repo as well as having it in
976:47 the github repo as well as having it in the lesson text below and so the command
976:49 the lesson text below and so the command would be git clone along with the https
976:52 would be git clone along with the https address of the github repo and i'm going
976:55 address of the github repo and i'm going to hit enter
976:56 to hit enter and it's finished cloning my repo i'm
976:58 and it's finished cloning my repo i'm going to quickly clear my screen
977:00 going to quickly clear my screen and i'm going to run the command ls and
977:03 and i'm going to run the command ls and i can see my repo here and now i'm going
977:05 i can see my repo here and now i'm going to drill down into the directory by
977:07 to drill down into the directory by running cd google cloud associate cloud
977:10 running cd google cloud associate cloud engineer if i run an ls i can see all my
977:13 engineer if i run an ls i can see all my clone files and folders and so now the
977:15 clone files and folders and so now the files that we need are going to be found
977:18 files that we need are going to be found in the box of bowties folder under
977:20 in the box of bowties folder under kubernetes engine and containers so i'm
977:23 kubernetes engine and containers so i'm going to change directories to that
977:24 going to change directories to that location and run ls and under box of bow
977:28 location and run ls and under box of bow ties is a folder called container which
977:31 ties is a folder called container which will have all the necessary files that
977:33 will have all the necessary files that you need in order to build your image we
977:36 you need in order to build your image we have the jpeg for box of bow ties we
977:39 have the jpeg for box of bow ties we have the docker file and we have our
977:41 have the docker file and we have our index.html and so these are the three
977:44 index.html and so these are the three files that we need in order to build the
977:46 files that we need in order to build the image and so as i said before we are
977:48 image and so as i said before we are going to be using a tool called cloud
977:51 going to be using a tool called cloud build which we have not discussed yet
977:53 build which we have not discussed yet cloudbuild is a serverless ci cd
977:56 cloudbuild is a serverless ci cd platform that allows me to package
977:58 platform that allows me to package source code into containers and you can
978:01 source code into containers and you can get really fancy with cloud build but
978:03 get really fancy with cloud build but we're not going to be setting up any ci
978:05 we're not going to be setting up any ci cd pipelines we're merely using cloud
978:08 cd pipelines we're merely using cloud build to build our image and to push it
978:11 build to build our image and to push it out to container registry as well
978:13 out to container registry as well container registry is google cloud's
978:16 container registry is google cloud's private docker repository where you can
978:18 private docker repository where you can manage your docker images and integrates
978:21 manage your docker images and integrates with cloud build gke app engine cloud
978:25 with cloud build gke app engine cloud functions
978:26 functions and other repos like github or bitbucket
978:29 and other repos like github or bitbucket and it allows for an amazing build
978:31 and it allows for an amazing build experience with absolutely no heavy
978:34 experience with absolutely no heavy lifting and because you're able to build
978:36 lifting and because you're able to build images without having to leave google
978:38 images without having to leave google cloud i figured that this would be a
978:40 cloud i figured that this would be a great time to highlight these services
978:43 great time to highlight these services so getting back to it we've cloned the
978:45 so getting back to it we've cloned the repo and so we have our files here in
978:48 repo and so we have our files here in cloud shell and so what you want to do
978:50 cloud shell and so what you want to do now is you want to make sure the cloud
978:52 now is you want to make sure the cloud build api has been enabled as this is a
978:55 build api has been enabled as this is a service that we haven't used before now
978:57 service that we haven't used before now we can go through the console and enable
978:59 we can go through the console and enable the api there but i'm going to run it
979:01 the api there but i'm going to run it here from cloud shell and i'm going to
979:03 here from cloud shell and i'm going to paste in the command gcloud services
979:06 paste in the command gcloud services enable cloudbuild.googleapis.com
979:09 enable cloudbuild.googleapis.com i'm going to hit enter and you should
979:11 i'm going to hit enter and you should get a prompt asking you to authorize the
979:14 get a prompt asking you to authorize the api call you definitely want to
979:16 api call you definitely want to authorize
979:17 authorize should take a few seconds all right and
979:19 should take a few seconds all right and the api has been enabled for cloud build
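the enable command from this step, with an optional verification (the grep check is my addition, not part of the lesson):

```shell
# enable the cloud build api for the current project
gcloud services enable cloudbuild.googleapis.com

# optionally confirm it now appears in the enabled services list
gcloud services list --enabled | grep cloudbuild
```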
979:22 so now i'm going to quickly clear my
979:24 screen and so because i want to show you
979:26 exactly what cloud build is doing i want
979:28 to head on over there through the
979:30 console and so i'm going to go over to
979:31 the navigation menu and i'm going to
979:34 scroll down to tools until you come to
979:36 cloud build
979:38 and as expected there is nothing here in
979:40 the build history as well not a lot here
979:43 to interact with and so now you're going
979:45 to run the command that builds the image
979:47 and so you're going to paste that
979:48 command into the cloud shell which is
979:50 gcloud builds submit dash dash tag
979:54 gcr.io which is the google cloud
979:56 container registry our variable for our
979:59 google cloud project along with the
980:01 image name of box bow ties version 1.0.0
980:06 and please don't forget the trailing dot
980:08 at the end i'm going to go ahead and hit
980:10 enter
980:12 cloud build will now compress the files and move them to a cloud storage
980:15 bucket and then cloud build takes those
980:17 files from the bucket and uses the
980:20 docker file to execute the docker build
980:22 process and so i'm going to pause the
980:24 video here till the build completes and
980:27 i'll be back in a flash okay and the
980:29 image is complete and is now showing up
980:32 in the build history in the cloud build
980:34 dashboard and so if i want to drill down
980:36 into the actual build right beside the
980:39 green check mark you will see the hot
980:41 link so you can just simply click on
980:42 that and here you will see a build
980:45 summary with the build log
980:48 the execution details along with the
980:51 build artifacts and as well the
980:53 compressed files are stored in cloud
980:55 storage and it has a hot link right here
980:57 if i wanted to download the build log i
981:00 can do so here and i conveniently have a
981:03 hot link to the image of box of bow ties
981:07 and this will bring me to my container
981:09 registry so you can go ahead and click
981:11 on the link
981:12 it should open up another tab and bring
981:15 you right to the page of the image that
981:17 covers a lot of its details now the
981:19 great thing i love about container
981:21 registry is again it's so tightly
981:24 coupled with a lot of the other
981:25 resources within google cloud that i am
981:28 able to simply deploy right from here
981:31 and i can deploy to cloud run to gke as
981:35 well as compute engine now i could
981:37 simply deploy this image right from here
981:39 but i wanted to do it from gke so i'm
981:42 going to go back over to gke in the
981:44 other tab i'm going to go to the
981:45 navigation menu go down to kubernetes
981:48 engine
981:48 and i'm going to go up to the top menu
981:51 and click on deploy it's going to ask
981:53 for the image you want to deploy and you
981:55 want to click on select to select a new
981:57 container image and you should have a
981:59 menu pop up from the right hand side of
982:01 your screen and under container registry
982:04 you should see box of bow ties you can
982:06 expand the node here and simply click on
982:09 the image and then hit select
982:11 the image and then hit select and so now the container image has been
982:13 and so now the container image has been populated into my image path and you
982:16 populated into my image path and you want to scroll down and if i wanted to i
982:18 want to scroll down and if i wanted to i could add another container and even add
982:21 could add another container and even add some environment variables and so we're
982:23 some environment variables and so we're not looking to do that right now so you
982:25 not looking to do that right now so you can simply click on continue and you're
982:27 can simply click on continue and you're going to be prompted with some fields to
982:29 going to be prompted with some fields to fill out for your configuration on your
982:31 fill out for your configuration on your deployment and so the application name
982:34 deployment and so the application name is going to be called box of bow ties
982:37 is going to be called box of bow ties i'm going to keep it in the default
982:38 i'm going to keep it in the default namespace as well i'm going to keep the
982:41 namespace as well i'm going to keep the key value pair as app box of bow ties
982:45 key value pair as app box of bow ties for my labels and because this
982:47 for my labels and because this configuration will create a deployment
982:49 configuration will create a deployment file for me you can always have a look
982:52 file for me you can always have a look at the manifest by clicking on the view
982:54 at the manifest by clicking on the view yaml button before it's deployed and
982:56 yaml button before it's deployed and this is always good practice before you
982:59 this is always good practice before you deploy any workload so as you can see
983:01 deploy any workload so as you can see here at the top i have the kind as
983:04 here at the top i have the kind as deployment the name as well as the
983:06 deployment the name as well as the namespace my labels
983:09 namespace my labels replicas of three as well as my selector
983:12 replicas of three as well as my selector and my spec down here at the bottom as
983:15 and my spec down here at the bottom as well this manifest also holds another
983:17 well this manifest also holds another kind
983:18 kind of horizontal pod auto scaler and is
983:21 of horizontal pod auto scaler and is coupled with the deployment in this
983:22 coupled with the deployment in this manifest due to the reference of the
983:25 manifest due to the reference of the deployment itself and so it's always
983:27 deployment itself and so it's always common practice to try and group the
983:29 common practice to try and group the manifest together whenever you can and
983:32 manifest together whenever you can and so this is a really cool feature to take
983:34 so this is a really cool feature to take advantage of on gke so i'm going to
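the manifest described above pairs two kinds in one file — a deployment and a horizontal pod autoscaler that references it. a trimmed sketch of that shape (names, image, and autoscaler numbers are my approximations; the file gke generates will differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: box-of-bowties
  namespace: default
  labels:
    app: box-of-bowties
spec:
  replicas: 3
  selector:
    matchLabels:
      app: box-of-bowties
  template:
    metadata:
      labels:
        app: box-of-bowties
    spec:
      containers:
      - name: box-of-bowties
        image: gcr.io/<PROJECT_ID>/box-bowties:1.0.0
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: box-of-bowties-hpa
spec:
  scaleTargetRef:          # this reference is what couples the two kinds
    apiVersion: apps/v1
    kind: Deployment
    name: box-of-bowties
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```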
983:36 so i'm going to close this now and i'm actually going to
983:38 close cloud shell as i don't need it
983:40 right now as well you can see here that
983:43 it's going to deploy to my kubernetes
983:45 cluster of bow tie cluster in us-east1-b
983:49 and if i wanted to i can deploy it to a
983:52 new cluster and if i had any other
983:54 clusters in my environment they would
983:57 show up here and i'd be able to select
983:59 from them as well but bow tie cluster is
984:02 the only one that i have and so now that
984:04 you've completed your configuration for
984:06 your deployment you can simply click on
984:08 deploy this is just going to take a
984:10 couple minutes so i'm just going to
984:12 pause the video here and i'll be back as
984:14 soon as the deployment is done okay the
984:16 workload has been deployed and i got
984:19 some default messages that popped up i
984:21 can set an automated pipeline for this
984:23 workload but we're not going to do that
984:25 for this demo but feel free to try it on
984:27 your own later if you'd like and we will
984:29 want to expose our service as we want to
984:32 see if it's up and running and we're
984:34 going to take care of that in just a bit
984:36 and so if i scroll through some of the
984:37 details here i can see that i have some
984:40 metrics here for cpu memory and disk the
984:43 cluster
984:44 namespace
984:45 labels and all the pods that it's
984:48 running on basically a live visual
984:50 representation of my deployment if i
984:53 scroll back up to the top i can dive
984:55 into some details events
984:58 and even my manifest i can also copy my
985:01 manifest and download it if i'd like so
985:03 as you can see a lot of different
985:05 options and so now i want to verify my
985:08 deployment and so i'm going to use the
985:10 kubectl command line tool to run some
985:13 commands to verify the information so
985:16 i'm going to open back up my cloud shell
985:18 and make this a little bit bigger for
985:19 better viewing and i'm going to run the
985:21 command kubectl get all
985:24 and as you can see here i have a list of
985:27 all the pods that are running the name
985:29 of the service the deployment the
985:32 replica set everything about my cluster
985:34 and my deployment and you should be
985:36 seeing the same when running this
985:38 command and so next you want to pull up
985:40 the details on your deployments in the
985:42 cluster and so the command for that is
985:44 kubectl get deployments and it came out
985:48 kind of crammed at the bottom so i'm
985:50 going to simply clear my screen and run
985:52 that command again
985:54 and as you can see the box of bowties
985:56 deployment is displayed how many
985:58 replicas are available how many of
986:00 those replicas achieved their desired
986:02 state along with how long the
986:04 application has been running and so now
986:07 i want to dive into my pods and in order
986:09 to do that i'm going to run the command
986:12 kubectl get pods and here i can see all
986:16 my pods now if i wanted to look at a
986:18 list of events
986:20 for a specific pod the command for that
986:23 would be kubectl describe pod and then
986:27 the name of one of the pods so i'm going
986:30 to pick this first one copy that i'm
986:32 going to paste it and i'm going to hit
986:34 enter and here i can see all the events
986:36 that have occurred for this pod as well
986:39 i also have access to some other
986:41 information with regards to volumes
986:44 conditions and even the container and
986:47 image ids and this is a great command to
986:50 use for when you're troubleshooting your
986:52 pods and you're trying to get to the
986:54 bottom of a problem
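the verification commands from this step, collected in one place (the pod name in the last command is a placeholder — copy one from the output of get pods):

```shell
# list everything in the current namespace: pods, services,
# deployments, and replica sets
kubectl get all

# check the deployment's replica counts and age
kubectl get deployments

# list the pods, then describe one to see its events, volumes,
# conditions, and container/image ids
kubectl get pods
kubectl describe pod <pod-name>
```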
986:56 and so now the final step that you want to do is you want to
986:58 be able to expose your application so
987:01 you can check to see if it's running
987:03 properly and so we're going to go ahead
987:05 and do that through the console so i'm
987:06 going to close down cloud shell and i'm
987:08 going to go to overview and scroll down
987:11 to the bottom click on the button that
987:12 says expose and if i wanted to i can do
987:15 it from up here in the top right hand
987:17 corner where it says expose deployment
987:20 so i'm going to click on expose and this
987:22 probably looks very familiar to you as
987:25 this is a graphical representation of
987:27 the service's manifest and so the port
987:30 mapping here will cover the ports
987:32 configuration of the service's manifest
987:34 starting here with port target port as
987:37 well as protocol for target port i'm
987:40 going to open up port 80 here under
987:42 service type you have the option of
987:44 selecting cluster ip
987:46 node port or load balancer and the
987:49 service type you want to use is going to
987:51 be load balancer and we can keep the
987:53 service name as box of bowties service
987:56 and again you can view the manifest file
987:59 for this service and you can copy or
988:01 download it if you need to but we don't
988:03 need this right now so i'm going to
988:04 close it and it's a pretty simple process
988:07 so all i need to do is click on expose
988:10 and within a minute or two you should
988:12 have your service up and running with
988:14 your shiny new load balancer
988:17 your shiny new low balancer okay and the service has been created and as you can
988:19 service has been created and as you can see we're under the services and ingress
988:22 see we're under the services and ingress from the left hand menu and if i go back
988:24 from the left hand menu and if i go back to the main page of services in ingress
988:26 to the main page of services in ingress you can see that box a bow tie service
988:29 you can see that box a bow tie service is the only one that's here i also have
988:31 is the only one that's here i also have the option of creating a service type
988:34 the option of creating a service type ingress but we don't want to do that
988:36 ingress but we don't want to do that right now so i'm going to go back to
988:37 right now so i'm going to go back to services and here you will see your
988:40 services and here you will see your endpoint and this is the hot link that
988:42 endpoint and this is the hot link that should bring you to your application so
988:45 should bring you to your application so you can click on it now you'll get a
988:46 you can click on it now you'll get a redirect notice as it is only http and
988:50 redirect notice as it is only http and not https so it's safe to click on it so
988:53 not https so it's safe to click on it so i'm going to click on it now and success
988:56 i'm going to click on it now and success and here is your box of bow ties
988:59 and here is your box of bow ties what were you expecting and so i wanted
989:01 what were you expecting and so i wanted to congratulate you on deploying your
989:03 to congratulate you on deploying your first application box of bow ties on
989:06 first application box of bow ties on your gke cluster and so just as a recap
989:09 your gke cluster and so just as a recap you've cloned your repo into your cloud
989:11 you've cloned your repo into your cloud shell environment you then built a
989:13 shell environment you then built a container image using cloud build and
989:16 container image using cloud build and pushed the image to container registry
989:18 pushed the image to container registry you then created a deployment using this
989:21 you then created a deployment using this image and verified the deployment using
989:24 image and verified the deployment using the cube ctl command line tool you then
989:26 the cube ctl command line tool you then launched a service of type low balancer
989:29 launched a service of type low balancer to expose your application and verified
989:32 to expose your application and verified that your application was working so
989:35 that your application was working so fantastic job on your part and that's
989:37 fantastic job on your part and that's pretty much all i wanted to cover in
989:39 pretty much all i wanted to cover in this part of the demo
989:40 this part of the demo so you can now mark this as complete and
989:43 so you can now mark this as complete and whenever you're ready join me in the
989:45 whenever you're ready join me in the console for the next part of the demo
989:47 console for the next part of the demo where you will manage your workload on
989:49 where you will manage your workload on the gke cluster so please be aware of
989:52 the gke cluster so please be aware of the charges incurred on your currently
989:54 the charges incurred on your currently deployed cluster if you plan to do the
989:57 deployed cluster if you plan to do the next demo at a later date again you can
990:00 next demo at a later date again you can mark this as complete and i'll see you
990:02 mark this as complete and i'll see you in the next
990:09 welcome back in the last couple of demo lessons you built a custom gke cluster
990:13 and deployed the box of bowties
990:14 application in this lesson you will be
990:17 interacting with this workload on gke by
990:20 scaling the application editing your
990:23 application and rebuilding your docker
990:25 image so you can do a rolling update to
990:28 the current workload in your cluster now
990:30 there's a lot to do here so with that
990:32 being said let's dive in so continuing
990:35 where we left off you currently have
990:38 your box of bow ties workload deployed
990:41 on your gke cluster and so the first
990:43 thing you want to do is scale your
990:45 deployment and you are looking to scale
990:48 down your cluster to one pod and then
990:51 back up again to three and this is just
990:53 to simulate scaling your workload so
990:56 whether it be ten pods or one the action
990:59 is still the same so now we can easily
991:02 do it through the console by drilling
991:04 down into the box of bowties workload
991:07 going up to the top menu and clicking on
991:09 actions and clicking on scale and here i
991:12 can indicate how many replicas i'd like
991:14 and scale it accordingly and so i wanted
991:16 to do this using the command line so i'm
991:19 going to cancel out of here and then i'm
991:21 going to open up cloud shell instead
991:23 okay and now that you have cloud shell
991:25 open you want to run the command
991:28 kubectl get pods to show the currently
991:31 running available pods for the box of
991:34 bowties workload and you may get a
991:36 pop-up asking you to authorize the api
991:38 call using your credentials and you
991:40 definitely want to authorize and here
991:42 you will get a list of all the pods that
991:44 are running your box of bow ties
991:46 workload and so now since you want to
991:49 scale your replicas down to one you can
991:52 run this command kubectl scale
991:55 deployment and your workload which is
991:57 box of bowties dash dash replicas is
992:00 equal to one you can hit enter and it is
992:03 now scaled
992:05 and in order to verify that i'm going to
992:07 run kubectl get pods and notice that
992:11 there is only one pod running with my
992:13 box of bow ties workload and in order
992:16 for me to scale my deployment back up to
992:18 three replicas i can simply run the same
992:20 command but change the replicas from 1
992:23 to 3 hit enter it's been scaled i'm
992:26 going to run kubectl get pods and
992:29 notice that i am now back up to 3
992:31 replicas and so as you can see
992:33 increasing or decreasing the number of
992:35 replicas in order to scale your
992:38 application is pretty simple to do
992:41 Okay, so now that you've learned how to scale your application, you're going to learn how to perform a rolling update. But in order to do that, you need to make changes to your application, and so what you're going to do is edit your application, then rebuild your Docker image and apply a rolling update. In order to do that, we can stay here in Cloud Shell, as you're going to edit the file in the Cloud Shell Editor. I'm going to first clear my screen and change directory into my home directory, and now you want to change directories to your container folder, where the files are that I need to edit. I'm going to run ls, and here are the files that I need.
993:18 And so what you're going to do now is edit the index.html file, and the easiest way to do that is to simply type in edit index.html and hit Enter, and this will open up your editor so you can edit your index.html file. And if you remember, when we launched our application it looked exactly like this, and so instead of "what were you expecting" we're going to actually change that text to something a little different. So I'm going to go back to the editor in my other tab, and where it says "what were you expecting" I'm going to actually change this to "well, I could always use something to eat". Then I'm going to go back up to the menu, click on File, and click on Save.
994:00 And so now, in order for me to deploy this, I need to rebuild my container. So I'm going to go back to my terminal, clear the screen, and run the same command that I did the last time, which is gcloud builds submit --tag gcr.io/, with the variable for your Google Cloud project, followed by the image box-of-bowties:1.0.1, and so this will be a different version of the image. Also, don't forget that trailing dot at the end. You can hit Enter, and again, this is the process where Cloud Build compresses the files, moves them to a Cloud Storage bucket, and then takes the files from the bucket and uses the Dockerfile to execute the Docker build process. This will take a couple of minutes, so I'm going to pause the video here and I'll be back before you can say "cat in the hat".
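Spelled out as a command, the rebuild step looks like this. The project ID is a placeholder (Cloud Shell normally sets $GOOGLE_CLOUD_PROJECT for you), and the command is echoed rather than executed:

```shell
# Helper that prints each command instead of running it (sketch only).
run() { echo "+ $*"; }

# Placeholder project ID; Cloud Shell usually provides this variable.
GOOGLE_CLOUD_PROJECT=${GOOGLE_CLOUD_PROJECT:-bowtie-project}

# Rebuild the image as version 1.0.1 (note the trailing dot: the build context)
run gcloud builds submit --tag "gcr.io/${GOOGLE_CLOUD_PROJECT}/box-of-bowties:1.0.1" .
```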
994:51 Okay, and my new image has been created, and so I want to head over to Cloud Build just to make sure that there are no errors. So I'm going to close down Cloud Shell, because I don't need it right now, head back up to the navigation menu, and scroll down to Cloud Build. Under Build History you should see your second build, and if you drill down into it you will see that the build was successful. Heading over to Build Artifacts, you should now see your new image as version 1.0.1. And so now I'm going to head over to the registry and verify the image there, and it seems like everything looks okay.
995:24 So now I'm going to head back on over to my GKE cluster. I'm going to go to the navigation menu, down to Kubernetes Engine, and here I'm going to click on Workloads. I'm going to select box-of-bowties, and up at the top menu you can click on Actions and select Rolling update. Here you are prompted with a pop-up where you can enter in your minimum seconds ready, your maximum surge percentage, as well as your maximum unavailable percentage. And so here, under Container Images, I am prompted to enter in the SHA-256 hash of this Docker image. Now, a Docker image's ID is a digest which contains a SHA-256 hash of the image's configuration, and if I go back over to the open tab for Container Registry, you can see here the digest details, to give you a little bit more context, along with the SHA-256 hash for the image that I need to deploy.
996:20 And so you can copy this digest by simply clicking on the copy button, and then you can head back on over to the GKE console, head over to the container images, highlight the hash, and paste in the new hash. And so when you copy it in, make sure it's still in the same format of gcr.io/your-project-name/box-of-bowties, the @ symbol, followed by the hash. Once you've done that you can click on the Update button, and this will schedule an update for your application. And as you can see here at the top, it says that pods are pending.
996:57 As well, if I go down to Active Revisions, you can see here that there is a summary, and the status that pods are pending. And so, just as a note, rolling updates allow the deployment's update to take place with zero downtime by incrementally replacing pod instances with new ones. So the pods will be scheduled on nodes with available resources, and if the nodes do not have enough resources, the pods will stay in a pending state, but I don't think we're going to have any problems with these nodes, as this application is very light on resources.
997:31 And if I open up Cloud Shell and run a kubectl get pods command, you will see that new pods have started, and you can tell this by the age of the pod. As well, if you ran the command kubectl describe pod along with the pod name, you could also see in the event logs when the pod was created. And if I close Cloud Shell, I can see up here at the top of my deployment details that my replicas have one updated, four ready, three available, and one unavailable. And if I click on Refresh, I can see now that my replicas are all updated and available.
998:11 And so now, in order to check your new update, you can simply go down to Exposing Services and click on the endpoint's link. You'll get that redirect notice, and you can simply click on the link, and because the old site may be cached in your browser, you may have to refresh your web page. And success! You have now completed a rolling update in GKE.
998:32 So I wanted to congratulate you on making it to the end of this multi-part demo, and I hope that it's been extremely useful in advancing your knowledge of GKE. And so, just as a recap: you scaled your application to accommodate both fewer and more replicas, you edited your application in the Cloud Shell Editor and rebuilt your container image using Cloud Build, and you then applied the new digest to your rolling update and applied that rolling update to your deployment, while verifying it all in the end. Fantastic job on your part, as this was a pretty complex and long multi-part demo, and you can expect things like what you've experienced in this demo to pop up in your role as a cloud engineer when dealing with GKE. And so that's pretty much all I wanted to cover with this multi-part demo working with GKE.
999:27 So before you go, I wanted to take a few moments to delete all the resources you've created, one by one. So I'm going to go up to the top, close all my tabs, and head on over to Clusters. Now, I don't want to delete my cluster just yet, but the first thing that I want to do is delete my container images. So I'm going to head up to the top, open up Cloud Shell, and use the command gcloud container images delete gcr.io/, your Google Cloud project variable, followed by your first image, box-of-bowties:1.0.0. Hit Enter; it's going to prompt you if you want to continue, and you want to hit Y for yes, and it has now deleted the image. As well, you want to delete your latest image, which is 1.0.1, so I'm going to change the zero to a one and hit Enter; it's going to ask if you want to continue, yes, and so the container images have now been deleted.
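The image cleanup can be condensed into a short loop. The project ID is a placeholder, and the commands are echoed rather than executed (--quiet is the gcloud flag that skips the y/n prompt):

```shell
# Helper that prints each command instead of running it (sketch only).
run() { echo "+ $*"; }

# Placeholder project ID.
GOOGLE_CLOUD_PROJECT=${GOOGLE_CLOUD_PROJECT:-bowtie-project}

# Delete both image versions; --quiet skips the confirmation prompt.
for TAG in 1.0.0 1.0.1; do
  run gcloud container images delete \
    "gcr.io/${GOOGLE_CLOUD_PROJECT}/box-of-bowties:${TAG}" --quiet
done
```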
1000:25 And so now, along with the images, you want to delete the artifacts as well, and those are stored in Cloud Storage. So I'm going to close down Cloud Shell, head on up to the navigation menu, and head down to Storage. You want to select the bucket that has your project name followed by _cloudbuild, select the source folder, and click on Delete. You're going to get a prompt asking you to delete the selected folder, but in order to do this you need to type in the name of the folder, so I'm going to type it in now. You can click on Confirm, and so now the folder has been deleted along with the artifacts.
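The same artifact cleanup could be done from Cloud Shell with gsutil instead of the console. The bucket name follows Cloud Build's default [PROJECT_ID]_cloudbuild pattern; the project ID and folder path are placeholders here, and the command is echoed rather than executed:

```shell
# Helper that prints each command instead of running it (sketch only).
run() { echo "+ $*"; }

# Placeholder project ID.
GOOGLE_CLOUD_PROJECT=${GOOGLE_CLOUD_PROJECT:-bowtie-project}

# Recursively remove the source folder Cloud Build staged in Cloud Storage
run gsutil rm -r "gs://${GOOGLE_CLOUD_PROJECT}_cloudbuild/source"
```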
1000:59 artifacts and so now that we've taken care of the images along with the
1001:01 care of the images along with the artifacts we need to clean up our gke
1001:04 artifacts we need to clean up our gke cluster so i'm going to head back on up
1001:06 cluster so i'm going to head back on up to the navigation menu and i'm going to
1001:08 to the navigation menu and i'm going to head on over to kubernetes engine and
1001:10 head on over to kubernetes engine and the first thing that i want to delete is
1001:12 the first thing that i want to delete is the low balancer so i'm going to head on
1001:14 the low balancer so i'm going to head on up to services and ingress and you can
1001:16 up to services and ingress and you can select box of bow tie service and go up
1001:19 select box of bow tie service and go up to the top and click on delete you're
1001:21 to the top and click on delete you're going to get a confirmation and you want
1001:23 going to get a confirmation and you want to click on delete and it's going to
1001:24 to click on delete and it's going to take a couple minutes you do quick
1001:26 take a couple minutes you do quick refresh and the service has finally been
1001:28 refresh and the service has finally been deleted i now want to delete my workload
1001:31 deleted i now want to delete my workload so i'm going to go over to the left hand
1001:32 so i'm going to go over to the left hand menu click on workloads select the
1001:35 menu click on workloads select the workload box of bowties and go up to the
1001:38 workload box of bowties and go up to the top and click on delete and you want to
1001:40 top and click on delete and you want to delete all resources including the
1001:42 delete all resources including the horizontal pod auto scaler so you can
1001:44 horizontal pod auto scaler so you can simply click on delete and it may take a
1001:46 simply click on delete and it may take a few minutes to delete gonna go up to the
1001:48 few minutes to delete gonna go up to the top and hit refresh and my workload has
1001:50 top and hit refresh and my workload has been deleted and so now all that's left
1001:53 been deleted and so now all that's left to delete is the gke cluster itself so
1001:55 to delete is the gke cluster itself so i'm going to go back to clusters so
1001:57 i'm going to go back to clusters so you're going to select the cluster and
1001:59 you're going to select the cluster and go up to the top and click on delete and
1002:01 go up to the top and click on delete and you're going to get a prompt asking you
1002:03 you're going to get a prompt asking you if you want to delete these storage pods
1002:05 if you want to delete these storage pods and these are default storage pods that
1002:07 and these are default storage pods that are installed with the cluster as well
1002:09 are installed with the cluster as well you can delete the cluster while the
1002:11 you can delete the cluster while the workload is still in play but i have
1002:13 workload is still in play but i have this habit of being thorough so i wanted
1002:15 this habit of being thorough so i wanted to delete the workload before deleting
1002:18 to delete the workload before deleting the cluster and so you want to go ahead
1002:20 the cluster and so you want to go ahead and click on delete and so that's pretty
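The console clicks above map to a few CLI calls. The service, deployment, cluster, and zone names here are placeholders, and the commands are echoed rather than executed:

```shell
# Helper that prints each command instead of running it (sketch only).
run() { echo "+ $*"; }

# Delete the service first (this removes the load balancer), then the workload
run kubectl delete service box-of-bowties-service
run kubectl delete deployment box-of-bowties

# Finally delete the cluster itself (cluster name and zone are placeholders)
run gcloud container clusters delete my-first-cluster --zone us-central1-a --quiet
```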
1002:22 And so that's pretty much all I have for this demo and this section on Google Kubernetes Engine. And again, congrats on the great job. You can now mark this as complete, and I'll see you in the next one.
[Music]
1002:38 Welcome back, and in this lesson I will be covering the features of Cloud VPN, an essential service for any engineer to know about when looking to connect another network to Google Cloud, whether it be your on-premises network, another cloud provider, or even when connecting VPCs. This service is a must-know for any engineer and for the exam, so with that being said, let's dive in.
1003:04 Now, Cloud VPN securely connects your peer network to your VPC network through an IPsec VPN connection. When I talk about a peer network, this is referring to an on-premises VPN device or VPN service, a VPN gateway hosted by another cloud provider such as AWS or Azure, or another Google Cloud VPN gateway. And so this is an IPsec, or encrypted, tunnel from your peer network to your VPC network that traverses the public internet. For those who don't know, IPsec is short for Internet Protocol Security, and this is a set of protocols using algorithms allowing the transport of secure data over an IP network. IPsec operates at the network layer, layer 3 of the OSI model, which allows it to be independent of any applications, although it does come with some additional overhead, so please be aware. And so when creating your Cloud VPN, traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway.
1004:15 and then decrypted by the other vpn gateway now moving on to some details
1004:17 gateway now moving on to some details about cloud vpn this is a regional
1004:20 about cloud vpn this is a regional service and so please take that into
1004:22 service and so please take that into consideration when connecting your
1004:24 consideration when connecting your on-premises location to google cloud for
1004:27 on-premises location to google cloud for the least amount of latency it also
1004:30 the least amount of latency it also means that if that region were to go
1004:31 means that if that region were to go down you would lose your connection
1004:33 down you would lose your connection until the region is back up and running
1004:36 until the region is back up and running now cloud vpn is also a site-to-site vpn
1004:39 now cloud vpn is also a site-to-site vpn only and therefore it does not support
1004:42 only and therefore it does not support site-to-client so this means that if you
1004:44 site-to-client so this means that if you have a laptop or a computer at home you
1004:47 have a laptop or a computer at home you cannot use this option with a vpn client
1004:50 cannot use this option with a vpn client to connect to google cloud cloudvpn can
1004:53 to connect to google cloud cloudvpn can also be used in conjunction with private
1004:56 also be used in conjunction with private google access for your on-premises hosts
1004:59 google access for your on-premises hosts so if you're using private google access
1005:02 so if you're using private google access within gcp you can simply connect to
1005:05 within gcp you can simply connect to your data center with vpn and have
1005:08 your data center with vpn and have access as if you were already in gcp so
1005:11 access as if you were already in gcp so if you're looking to extend private
1005:13 if you're looking to extend private google access to your on-premises data
1005:15 google access to your on-premises data center cloud vpn would be the perfect
1005:18 center cloud vpn would be the perfect choice and so when it comes to speeds
1005:20 choice and so when it comes to speeds each cloud vpn tunnel can support up to
1005:23 each cloud vpn tunnel can support up to three gigabits per second total for
1005:26 three gigabits per second total for ingress and egress as well routing
1005:29 ingress and egress as well routing options that are available are both
1005:31 options that are available are both static and dynamic but are only
1005:33 static and dynamic but are only available as dynamic for aha vpn and
1005:37 available as dynamic for aha vpn and lastly cloudvpn supports ik version 1
1005:40 lastly cloudvpn supports ik version 1 and ike version 2 using shared secret
1005:44 and ike version 2 using shared secret and for those of you who are unaware ike
1005:46 and for those of you who are unaware ike stands for internet key exchange and
1005:48 stands for internet key exchange and this helps establish a secure
1005:50 this helps establish a secure authenticated communication channel by
1005:53 authenticated communication channel by using a key exchange algorithm to
1005:55 using a key exchange algorithm to generate a shared secret key to encrypt
1005:58 generate a shared secret key to encrypt communications so know that when you
1006:00 communications so know that when you choose cloudvpn that your connection is
1006:03 choose cloudvpn that your connection is both private and secure so now there are
1006:06 So now, there are two types of VPN options that are available in Google Cloud, one being Classic VPN and the other being HA VPN, and I'm going to take a moment to go through the differences. Now, Classic VPN provides a Service Level Agreement of 99.9 percent, also known as an SLA of three nines, while HA VPN provides a four-nines SLA when configured with two interfaces and two external IPs. Now, when it comes to routing, Classic VPN supports both static and dynamic routing, whereas HA VPN supports dynamic routing only, and this must be done through BGP using Cloud Router. Classic VPN gateways have a single interface and a single external IP address, and support tunnels using static routing as well as dynamic routing, and the static routing can be either route-based or policy-based; whereas HA VPN can be configured for two interfaces and two external IPs for true HA capabilities, and as mentioned earlier, when it comes to routing for HA VPN, dynamic routing is the only available option.
1007:20 Now, the one thing about Classic VPN is that Google Cloud is deprecating certain functionality on October 31st of 2021 and is recommending all their customers move to HA VPN. And so know that this has not been reflected in the exam, and I'm not sure if and when it will be, but know that when you are creating a Cloud VPN connection in your current environment, HA VPN is the recommended option.
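Since HA VPN is the recommended option, here is a minimal sketch of the gcloud calls involved in standing one up. The gateway, network, router, region, and ASN values are hypothetical, and the commands are echoed rather than executed:

```shell
# Helper that prints each command instead of running it (sketch only).
run() { echo "+ $*"; }

# Create the HA VPN gateway (it gets two external IPs, one per interface)
run gcloud compute vpn-gateways create bowtie-ha-gateway \
  --network bowtie-network --region us-central1

# HA VPN requires dynamic routing via BGP, which is handled by Cloud Router
run gcloud compute routers create bowtie-router \
  --network bowtie-network --region us-central1 --asn 65001
```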
1007:48 And so now I wanted to dive into some architecture of how Cloud VPN is set up for these two options, starting with Classic VPN. Now, as I said before, Classic VPN is a Cloud VPN solution that lets you connect your peer network to your VPC network through an IPsec VPN connection in a single region. Now, unlike HA VPN, Classic VPN offers no redundancy out of the box; you would have to create another VPN connection, and if the connection were to go down, you would have to manually switch over the connection from one to the other. Now, as you can see here, when you create a VPN gateway, Google Cloud automatically chooses only one external IP address for its interface, and the diagram shown here shows that of a Classic VPN network connected from the bowtie-network VPC in the bowtie project to an on-premises network, configured using a static route to connect.
1008:54 Now, moving on to HA VPN: again, this is a highly available Cloud VPN solution that lets you connect your peer network to your VPC network using an IPsec VPN connection in a single region, exactly like Classic VPN. Where HA VPN differs is that it provides a four-nines SLA, and as you can see here, it supports double the connections. So when you create an HA VPN gateway, Google Cloud automatically chooses two external IP addresses, one for each of its fixed number of two interfaces, and each IP address is automatically chosen from a unique address pool to support high availability. Each of these HA VPN gateway interfaces supports multiple tunnels, which allows you to create multiple HA VPN gateways, and you can
1009:44 multiple h a vpn gateways and you can configure an h a vpn gateway with only
1009:47 configure an h a vpn gateway with only one active interface and one public ip
1009:50 one active interface and one public ip address however this configuration does
1009:53 address however this configuration does not provide a four nines sla now for h a
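As a quick aside, the difference between the 99.99% ("four nines") SLA and a lower availability target can be made concrete with a little arithmetic — a rough sketch only, not tied to any Google tooling:

```python
# Allowed downtime implied by an availability target.
# 99.99% ("four nines") is the SLA ha vpn offers; a single-interface
# configuration does not carry that guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes per year a connection may be down while meeting the target."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_minutes_per_year(0.9999), 1))  # four nines -> 52.6
print(round(downtime_minutes_per_year(0.999), 1))   # three nines -> 525.6
```

So four nines leaves you under an hour of allowed downtime per year, versus almost nine hours at three nines — which is why the two-interface setup is the recommended one.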
1009:57 not provide a four nines sla now for h a vpn gateway you configure an external
1010:00 vpn gateway you configure an external peer vpn gateway resource that
1010:03 peer vpn gateway resource that represents your physical peer gateway in
1010:05 represents your physical peer gateway in google cloud you can also create this
1010:07 google cloud you can also create this resource as a standalone resource and
1010:09 resource as a standalone resource and use it later in this diagram the two
1010:12 use it later in this diagram the two interfaces of an h a vpn gateway in the
1010:16 interfaces of an h a vpn gateway in the bowtie network vpc living in bowtie
1010:19 bowtie network vpc living in bowtie project are connected to two peer vpn
1010:22 project are connected to two peer vpn gateways in an on-premises network and
1010:25 gateways in an on-premises network and this connection is using dynamic routing
1010:28 this connection is using dynamic routing with bgp connecting to a cloud router in
1010:31 with bgp connecting to a cloud router in google cloud now when it comes to the
1010:33 google cloud now when it comes to the times when using cloudvpn makes sense
1010:37 times when using cloudvpn makes sense one of the first things you should think
1010:38 one of the first things you should think about is whether or not you need public
1010:41 about is whether or not you need public internet access so when you're sharing
1010:43 internet access so when you're sharing files or your company needs a specific
1010:46 files or your company needs a specific sas product that's only available on the
1010:48 sas product that's only available on the internet vpn would be your only option
1010:51 internet vpn would be your only option as well when you're looking to use
1010:53 as well when you're looking to use interconnect and your peering location
1010:56 interconnect and your peering location is not available so you're not able to
1010:58 is not available so you're not able to connect your data center to the
1011:00 connect your data center to the colocation facility of your choice vpn
1011:03 colocation facility of your choice vpn would be the only other option that you
1011:05 would be the only other option that you have as well if budget constraints come
1011:08 have as well if budget constraints come into play when deciding on connecting to
1011:10 into play when deciding on connecting to your peer network vpn would always be
1011:13 your peer network vpn would always be the way to go as cloud interconnect is
1011:16 the way to go as cloud interconnect is going to be the more expensive option
1011:18 going to be the more expensive option and lastly if you don't need a high
1011:20 and lastly if you don't need a high speed network and low latency is not
1011:23 speed network and low latency is not really a concern for you and you only
1011:25 really a concern for you and you only have regular outgoing traffic coming
1011:28 have regular outgoing traffic coming from google cloud then vpn would suffice
1011:31 from google cloud then vpn would suffice for your everyday needs and so the
1011:33 for your everyday needs and so the options shown here are also the deciding
1011:36 options shown here are also the deciding factors to look for when it comes to
1011:39 factors to look for when it comes to questions in the exam that refer to
1011:41 questions in the exam that refer to cloudvpn or connecting networks and so
1011:44 cloudvpn or connecting networks and so that's pretty much all i have for this
1011:46 that's pretty much all i have for this short lesson on cloudvpn so you can now
1011:49 short lesson on cloudvpn so you can now mark this lesson as complete and let's
1011:52 mark this lesson as complete and let's move on to the next one
1011:57 [Music]
1011:59 welcome back and in this lesson i'm
1012:02 going to go over another connection type
1012:05 that allows for on-premises connectivity
1012:08 to your google cloud vpcs which is cloud
1012:11 interconnect other than vpn this is the
1012:13 other connection type that allows
1012:15 connectivity from your on-premises
1012:18 environment to your google cloud vpc
1012:20 cloud interconnect is the most common
1012:23 connection for most larger organizations
1012:26 and is for those that demand fast low
1012:28 latency connections this lesson will
1012:31 cover the features of cloud interconnect
1012:32 and the different types that are
1012:34 available so with that being said let's
1012:37 dive in so getting right into it cloud
1012:40 interconnect is a low latency highly
1012:42 available connection between your
1012:45 on-premises data center and google cloud
1012:48 vpc networks also cloud interconnect
1012:51 connections provide internal ip address
1012:54 connectivity which means internal ip
1012:57 addresses are directly accessible from
1013:00 both networks and so on-premises hosts
1013:03 can use internal ip addresses and take
1013:06 advantage of private google access
1013:09 rather than external ip addresses to
1013:12 reach google apis and services traffic
1013:14 between your on-premises network and
1013:17 your vpc network doesn't traverse the
1013:19 public internet traffic traverses a
1013:21 dedicated connection or goes through a
1013:23 service provider with a dedicated
1013:26 connection your vpc network's internal
1013:29 ip addresses are directly accessible
1013:32 from your on-premises network now unlike
1013:35 vpn this connection is not encrypted if
1013:37 you need to encrypt your traffic at the
1013:39 ip layer you can create one or more
1013:43 self-managed vpn gateways in your vpc
1013:46 network and assign a private ip address
1013:48 to each gateway now although this may be
1013:51 a very fast connection it also comes
1014:12 with a very high price tag
1014:15 and is the highest priced connection type
1014:17 and is the highest price connection type cloud interconnect offers two options
1014:20 cloud interconnect offers two options for extending your on-premises network
1014:22 for extending your on-premises network dedicated interconnect which provides a
1014:25 dedicated interconnect which provides a direct physical connection between your
1014:27 direct physical connection between your on-premises network and google's network
1014:30 on-premises network and google's network as well as partner interconnect which
1014:32 as well as partner interconnect which provides connectivity between your
1014:34 provides connectivity between your on-premises and vpc networks through a
1014:38 on-premises and vpc networks through a supported service provider and so i
1014:40 supported service provider and so i wanted to take a moment to highlight the
1014:42 wanted to take a moment to highlight the different options for cloud interconnect
1014:44 different options for cloud interconnect starting with dedicated interconnect now
1014:47 starting with dedicated interconnect now dedicated interconnect provides a direct
1014:50 dedicated interconnect provides a direct physical connection between your
1014:52 physical connection between your on-premises network and google's network
1014:55 on-premises network and google's network dedicated interconnect enables you to
1014:58 dedicated interconnect enables you to transfer large amounts of data between
1015:00 transfer large amounts of data between your network and google cloud which can
1015:03 your network and google cloud which can be more cost effective than purchasing
1015:06 be more cost effective than purchasing additional bandwidth over the public
1015:08 additional bandwidth over the public internet for dedicated interconnect you
1015:10 internet for dedicated interconnect you provision a dedicated interconnect
1015:13 provision a dedicated interconnect connection between the google network
1015:15 connection between the google network and your own router in a common location
1015:18 and your own router in a common location the following example shown here shows a
1015:21 the following example shown here shows a single dedicated interconnect connection
1015:24 single dedicated interconnect connection between a vpc network and an on-premises
1015:27 between a vpc network and an on-premises network for this basic setup a dedicated
1015:30 network for this basic setup a dedicated interconnect connection is provisioned
1015:32 interconnect connection is provisioned between the google network and the
1015:34 between the google network and the on-premises router in a common
1015:37 on-premises router in a common co-location facility when you create a
1015:39 co-location facility when you create a vlan attachment you associate it with a
1015:42 vlan attachment you associate it with a cloud router this cloud router creates a
1015:44 cloud router this cloud router creates a bgp session for the vlan attachment and
1015:48 bgp session for the vlan attachment and its corresponding on-premises peer
1015:50 its corresponding on-premises peer router these routes are added as custom
1015:53 router these routes are added as custom dynamic routes in your vpc network and
1015:55 dynamic routes in your vpc network and so for dedicated interconnect connection
1015:57 so for dedicated interconnect connection capacity is delivered over one or more
1016:01 capacity is delivered over one or more 10 gigabits per second or 100 gigabits
1016:04 10 gigabits per second or 100 gigabits per second ethernet connections with the
1016:06 per second ethernet connections with the follow-on maximum capacity supported per
1016:09 follow-on maximum capacity supported per interconnect connection so with your 10
1016:12 interconnect connection so with your 10 gigabit per second connections you can
1016:14 gigabit per second connections you can get up to eight connections totaling a
1016:17 get up to eight connections totaling a speed of 80 gigabits per second with the
1016:19 speed of 80 gigabits per second with the 100 gigabit per second connection you
1016:22 100 gigabit per second connection you can connect two of them together to have
1016:24 can connect two of them together to have a total speed of 200 gigabits per second
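The capacity figures just mentioned are simple multiples of the two link options; a quick sketch of the arithmetic:

```python
# Maximum dedicated interconnect capacity per connection, from the figures
# above: up to eight 10-Gbps links or up to two 100-Gbps links.
LINK_OPTIONS = {
    "10 Gbps ethernet": {"per_link_gbps": 10, "max_links": 8},
    "100 Gbps ethernet": {"per_link_gbps": 100, "max_links": 2},
}

def max_capacity_gbps(option: str) -> int:
    """Total capacity when the maximum number of links is bundled."""
    opt = LINK_OPTIONS[option]
    return opt["per_link_gbps"] * opt["max_links"]

for name in LINK_OPTIONS:
    print(f"{name}: up to {max_capacity_gbps(name)} Gbps")  # 80 and 200
```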
1016:27 and so for dedicated interconnect your
1016:30 network must physically meet google's
1016:32 network in a supported co-location
1016:35 facility also known as an interconnect
1016:37 connection location this facility
1016:40 is where a vendor the co-location
1016:42 facility provider provisions a circuit
1016:45 between your network and a google edge
1016:47 point of presence also known as a pop
1016:50 the setup shown here is suitable for
1016:52 non-critical applications that can
1016:55 tolerate some downtime but for sensitive
1016:57 production applications at least two
1017:00 interconnect connections in two
1017:03 different edge availability domains are
1017:05 recommended now partner interconnect
1017:08 provides connectivity between your
1017:10 on-premises network and your vpc network
1017:13 through a supported service provider so
1017:15 this is not a direct connection from
1017:17 your on-premises network to google as
1017:20 the service provider provides a conduit
1017:22 between your on-premises network and
1017:25 google's pop now a partner interconnect
1017:27 connection is useful if a dedicated
1017:30 interconnect co-location facility is
1017:33 physically out of reach or your
1017:35 workloads don't warrant an entire 10
1017:38 gigabit per second connection for
1017:40 partner interconnect 50 megabits per
1017:42 second to 50 gigabits per second vlan
1017:45 attachments are available with a
1017:47 maximum supported attachment size of 50
1017:51 gigabits per second now service
1017:53 providers have existing physical
1017:55 connections to google's network that
1017:57 they make available for their customers
1017:59 to use so in this example shown here you
1018:02 would provision a partner interconnect
1018:05 connection with a service provider
1018:07 connecting your on-premises network to
1018:10 that service provider after connectivity
1018:12 is established with the service provider
1018:15 a partner interconnect connection is
1018:17 requested from the service provider and
1018:19 the service provider configures your vlan
1018:22 attachment for use once your connection
1018:24 is provisioned you can start passing
1018:26 traffic between your networks by using
1018:28 the service provider's network now there
1018:31 are many more detailed steps involved to
1018:33 get a connection established along with
1018:35 traffic flowing but i just wanted to
1018:38 give you a high-level summary of how a
1018:40 connection would be established with a
1018:42 service provider now as well to build a
1018:45 highly available topology you can use
1018:48 multiple service providers as well you
1018:50 must build redundant connections for
1018:53 each service provider in each
1018:55 metropolitan area and so now there's a couple
1018:57 more connection types that run through
1018:59 service providers that are not on the
1019:01 exam but i wanted you to be aware of
1019:03 them if ever the situation arises in
1019:06 your role as a cloud engineer so the
1019:08 first one is direct peering and direct
1019:10 peering enables you to establish a
1019:13 direct peering connection between your
1019:15 business network and google's edge
1019:18 network and exchange high-throughput
1019:20 cloud traffic this capability is
1019:23 available at any of more than 100
1019:26 locations in 33 countries around the
1019:28 world when established direct peering
1019:31 provides a direct path from your
1019:33 on-premises network to google services
1019:36 including google cloud products that can
1019:39 be exposed through one or more public ip
1019:42 addresses traffic from google's network
1019:45 to your on-premises network also takes
1019:47 that direct path
1019:49 including traffic from vpc networks in
1019:52 your projects now you can also save
1019:54 money and receive direct egress pricing
1019:57 for your projects after they have
1019:59 established direct peering with google
1020:02 direct peering exists outside of google
1020:04 cloud so unless you need to access google
1020:07 workspace applications the recommended
1020:09 methods of access to google cloud are
1020:12 dedicated interconnect or partner
1020:15 interconnect establishing a direct
1020:17 peering connection with google is free
1020:19 there are no costs per port and no
1020:22 per-hour charges you just have to meet
1020:25 google's technical peering requirements
1020:27 and can then be considered for the
1020:29 direct peering service
1020:31 and moving on to the last connection
1020:33 type cdn interconnect now i know we
1020:36 haven't gotten into cdns in the course
1020:39 as the exam does not require you to know
1020:41 them but cdn standing for content delivery
1020:45 network is what caches content at the
1020:48 network edge to deliver files faster to
1020:51 those requesting it one of the main ways
1020:53 to improve website performance now
1020:56 moving on to cdn interconnect this
1020:58 connection type enables select
1021:01 third-party cdn providers like akamai
1021:04 and cloudflare along with others to
1021:07 establish direct peering
1021:09 links with google's edge network which
1021:12 helps optimize your cdn population costs and
1021:15 enables you to direct your traffic from
1021:17 your vpc networks to the provider's
1021:20 network and so your egress traffic from
1021:23 google cloud through one of these links
1021:26 benefits from the direct connectivity to
1021:29 the cdn provider and is billed
1021:31 automatically with reduced pricing
1021:34 typical use cases for cdn interconnect
1021:37 are if you're populating your cdn with
1021:40 large data files from google cloud or
1021:42 you have frequent content updates stored
1021:45 in different cdn locations and so
1021:47 getting into the use cases of when to
1021:50 use cloud interconnect a big purpose for
1021:53 it would be to prevent traffic from
1021:55 traversing the public internet it is a
1021:58 dedicated physical connection right to
1022:00 google's data centers so when you need
1022:02 an extension of your vpc network to your
1022:05 on-premises network interconnect is
1022:08 definitely the way to go now if speed
1022:10 and low latency is of extreme importance
1022:13 interconnect is always the best option
1022:16 and will support up to 200 gigabits per
1022:18 second as well when you have heavy
1022:21 outgoing traffic or egress traffic
1022:24 leaving google cloud cloud interconnect
1022:26 fits the bill perfectly and lastly when
1022:29 it comes to private google access this
1022:31 travels over the backbone of google's
1022:34 network and so when you are connected
1022:35 with interconnect this is an extension
1022:38 of that backbone and therefore your
1022:40 on-premises hosts will be able to take
1022:43 advantage of private google access and
1022:45 so i hope this has given you some
1022:47 clarity on the differences between the
1022:49 different connection types and how to
1022:52 extend your google cloud network to a
1022:55 peer or on-premises network so that's
1022:58 pretty much all i had to cover when it
1023:00 comes to cloud interconnect so you can
1023:02 now mark this lesson as complete and
1023:05 let's move on to the next one
1023:06 [Music]
1023:10 welcome back in this lesson i'm going to
1023:13 be covering an overview of app engine
1023:16 now this is not a deep dive lesson for
1023:18 app engine as there is so much to cover
1023:21 with this service but i will be listing
1023:23 a lot of the features of app engine to
1023:25 give you a good feel for what it can do
1023:28 and what you will need to know for the
1023:30 exam so with that being said let's dive
1023:33 in now app engine is a fully managed
1023:36 serverless platform for developing and
1023:39 hosting web applications at scale this
1023:42 is google's platform as a service
1023:44 offering that was designed for
1023:46 developers so that they can develop
1023:49 their application and let app engine do
1023:52 all the heavy lifting by taking care of
1023:54 provisioning the servers and scaling the
1023:57 instances needed based on demand app
1024:00 engine gives you the flexibility of
1024:02 launching your code as is or you can
1024:04 launch it as a container and it uses
1024:07 runtime environments for a variety of
1024:09 different programming languages like
1024:11 python java node.js go ruby php or .net
1024:18 applications deployed on app engine that
1024:21 experience regular traffic fluctuations
1024:24 or newly deployed applications where
1024:26 you're simply unsure about the load are
1024:28 auto scaled accordingly and
1024:30 automatically your apps scale up the
1024:33 number of instances that are running to
1024:36 provide consistent performance or scale
1024:39 down to minimize idle instances and
1024:42 reduce costs app engine also has the
1024:45 capability of being able to deal with
1024:48 rapid scaling for sudden extreme spikes
1024:51 of traffic having multiple versions of
1024:53 your application within each service
1024:56 allows you to quickly switch between
1024:58 different versions of that application
1025:00 for rollbacks testing or other temporary
1025:04 events you can route traffic to one or
1025:07 more specific versions of your
1025:08 application by migrating or splitting
1025:11 traffic and you can use traffic
1025:13 splitting to specify a percentage
1025:16 distribution of traffic across two or
1025:19 more of the versions within a service
1025:21 which allows you to do a/b testing or
1025:24 blue-green deployment between your versions when rolling out new features
1025:27 green deployment between your versions when rolling out new features app engine
1025:30 App Engine supports connecting to back-end storage services such as Cloud Firestore, Cloud SQL, and Cloud Storage, along with connecting to on-premises databases and even external databases that are hosted on other public clouds. App Engine is available in two separate flavors, the standard and flexible environments, and each environment offers its own set of features that I will get into in just a sec.
1025:57 Now, as I mentioned before, App Engine is available in standard and flexible environments, and depending on your application needs, either one will support what you need for your workload, or you could even use both simultaneously. The features shown here will give you a feel for both types of environments, and I'm going to be doing a quick run-through summarizing the features of each, starting with the standard environment.

1026:22 Now, with the standard environment, applications run in a secure, sandboxed environment, allowing App Engine standard to distribute requests across multiple servers and scale servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server. The source code is written in specific versions of the supported programming languages. App Engine standard is intended to run for free or at a very low cost, where you pay only for what you need and when you need it, and your application can scale to zero instances when there is no traffic. App Engine standard is designed for sudden and extreme spikes of traffic which require immediate scaling, and pricing for App Engine standard is based on instance hours.
1027:27 And so when it comes to features for App Engine flexible, the application instances run within Docker containers that include a custom runtime or source code written in other programming languages, and these Docker containers are then run on Compute Engine VMs. App Engine flexible will run any source code that is written in a version of any of the programming languages supported by App Engine flexible, and unlike the standard environment, unfortunately there is no free quota for App Engine flexible. As well, App Engine flexible is designed for consistent traffic, or for applications that experience regular traffic fluctuations, and pricing is based on the VM resources and not on instance hours like App Engine standard.

1028:14 And so where App Engine flexible really shines over App Engine standard is in how the VMs are managed: instances are health-checked, healed as necessary, and co-located with other services within the project. The VM's operating system updates are applied automatically, and VMs are restarted on a weekly basis to make sure any necessary operating system and security updates are applied. SSH along with root access are available to the VM instances running your containers.
1028:48 Now, deploying applications to App Engine is as simple as using the gcloud app deploy command. This command automatically builds a container image from your configuration file by using the Cloud Build service, and then deploys that image to App Engine.
1029:08 Now, an App Engine application is made up of a single application resource that consists of one or more services, and each service can be configured to use different runtimes and to operate with different performance settings. Services in App Engine are used to factor your large applications into logical components that can securely share App Engine features and communicate with one another; these App Engine services become loosely coupled, behaving like microservices. Now, within each service you deploy versions of that service, and each version then runs within one or more instances, depending on how much traffic you configured it to handle.

1029:52 Having multiple versions of your application within each service allows you to quickly switch between different versions of that application for rollbacks, testing, or other temporary events. You can route traffic to one or more specific versions of your application by migrating traffic to one specific version, or splitting your traffic between two separate versions. And so the versions within your services run on one or more instances, and by default App Engine scales your application to match the load: your applications will scale up the number of instances that are running to provide consistent performance, or scale down to minimize idle instances and reduce costs.
1030:37 Now, when it comes to managing instances, App Engine can automatically create and shut down instances as traffic fluctuates, or you can specify a number of instances to run regardless of the amount of traffic. You can also configure how and when new instances are created by specifying a scaling type for your application, and how you do this is you specify the scaling type in your application's app.yaml file. Now, there are three different scaling types to choose from.

1031:11 The first one is automatic scaling, and this scaling type creates instances based on request rate, response latencies, and other application metrics. You can specify thresholds for each of these metrics, as well as a minimum number of instances to keep running at all times. If you use automatic scaling, each instance in your application has its own queue for incoming requests; before the queues become long enough to have a visible effect on your app's latency, App Engine automatically creates one or more new instances to handle the load.

1031:48 The second type is basic scaling, and this creates instances when your application receives requests; each instance is shut down when the application becomes idle. Basic scaling is fantastic for intermittent workloads, or if you're looking to drive your application by user activity. App Engine will try to keep your costs low, even though it might result in higher latency as the volume of incoming requests increases.

1032:16 And so the last scaling type is manual scaling, and this is where you specify the number of instances that continuously run regardless of the load. So these are instances that are constantly running, and this allows complex startup tasks on the instances to have already been completed when receiving requests, and suits applications that rely on the state of the memory over time. So this is ideal for instances whose configuration scripts require some time to fully run their course.
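Each of the three scaling types is declared in the service's app.yaml. As a rough sketch only (the field names follow the App Engine standard environment configuration reference, but the values here are illustrative, not recommendations), the three choices look something like this:

```yaml
# app.yaml -- exactly one scaling block would appear in a real file.

# Automatic scaling: instances created from request rate, latency,
# and other metrics (illustrative thresholds).
automatic_scaling:
  min_instances: 1              # keep at least one instance warm
  max_instances: 10
  target_cpu_utilization: 0.65

# Basic scaling: instances created on request, shut down when idle.
# basic_scaling:
#   max_instances: 5
#   idle_timeout: 10m

# Manual scaling: a fixed number of instances running continuously.
# manual_scaling:
#   instances: 3
```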
1032:48 So now that I've gone over managing the instances, I wanted to take a few moments to go over how App Engine manages traffic, starting with traffic migration. Now, traffic migration switches the request routing between the versions within a service of your application, moving traffic from one or more versions to a single new version. So when deploying a new version with the same name as an existing version, it causes an immediate traffic migration, and all instances of the old version are immediately shut down. In App Engine standard, you can choose to route requests to the target version either immediately or gradually, and you can also choose to enable warm-up requests if you want the traffic gradually migrated to a version. Gradual traffic migration is not supported in App Engine flexible; there, traffic is migrated immediately. Now, one thing to note is that when you immediately migrate traffic to a new version without any running instances, your application will have a spike in latency for loading requests while instances are being created.
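The warm-up requests mentioned here are opted into through the service's app.yaml in the standard environment. A minimal sketch (the `inbound_services` field is from the standard app.yaml reference; note your app must also handle the `/_ah/warmup` request path):

```yaml
inbound_services:
  - warmup   # App Engine sends a GET to /_ah/warmup before routing
             # traffic to a newly created instance
```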
1033:59 And so another way to manage traffic on App Engine is through traffic splitting. Now, you can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service. So in this example, if I'm deploying a new version of my service, I can decide on how I want to distribute traffic to each version of my application. And so I decide that I want to keep my current version in play, but roll out the new version of my application to 10% of my users, leaving the old version with the other 90% of the traffic going to it. And so splitting traffic allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features. And just as a note, when you've specified two or more versions for splitting, you must choose whether to split traffic by IP address, by HTTP cookie, or randomly.
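To make the split-by choices concrete, here is a small Python sketch of why splitting by IP address keeps a client pinned to one version: hashing the address gives every client a stable position in [0, 1), and the cumulative split percentages carve that range up. This is an illustration only, not App Engine's actual routing algorithm, and `pick_version` is a hypothetical helper name.

```python
import hashlib

def pick_version(ip, splits):
    """Map a client IP to a version according to traffic-split shares.

    Illustrative only -- not App Engine's real hashing scheme.
    `splits` maps version name -> fraction of traffic, summing to 1.0.
    """
    # Hash the IP into a uniform float in [0, 1); the same IP always
    # hashes to the same value, which is what makes IP splitting sticky.
    h = int(hashlib.sha256(ip.encode()).hexdigest(), 16) / 16**64
    cumulative = 0.0
    for version, share in splits.items():
        cumulative += share
        if h < cumulative:
            return version
    return version  # guard against float rounding on the last bucket

# Roll the new version out to 10% of clients, keep 90% on the old one.
splits = {"v2": 0.10, "v1": 0.90}
print(pick_version("203.0.113.7", splits))
```

With the gcloud CLI, the equivalent configuration is applied with something like `gcloud app services set-traffic --splits v1=0.9,v2=0.1 --split-by ip` (check the current reference for exact flags); splitting by HTTP cookie works the same way but hashes a cookie value instead, and random splitting re-draws on every request.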
1035:02 Now again, this has not been a deep-dive lesson on App Engine, but I hope this has given you an overview of the features that are available, as the exam touches on these features. I also wanted to give you some familiarity with the service itself, as coming up next I will be going into a demo where we will be launching an application using App Engine and trying out some of these features. And so that's pretty much all I wanted to cover when it comes to App Engine, so you can now mark this lesson as complete, and whenever you're ready, join me in the console, where you will deploy an application on App Engine and try out some of these features for yourself.

1035:40 [Music]
1035:44 Welcome back, and in this demo you're going to build another application to deploy on App Engine, called Serverless Bowties. This demo will run you through the ins and outs of deploying a website application on App Engine, along with managing it while experiencing no downtime. So there's quite a bit of work to do here, so with that being said, let's dive in. And so here in my console, I am logged in as tonybowtieace@gmail.com under project Bowtie Inc.
1036:15 And so the first thing I want to do here is head on over to App Engine. So in order to do that, I'm going to go to the top left-hand navigation menu and go down to App Engine, and because I haven't created any applications, I'm going to be brought to this splash page. Now, in order to deploy this application, we're not going to be doing it through the console, but through the command line, and so to get started with that, I'm going to go up to the top and open up Cloud Shell, and I'm going to make this bigger for better viewing.

1036:45 And so in order for me to get the code to launch this application, I'm going to be cloning my GitHub repository into Cloud Shell. And so for those of you who haven't deleted your repository from the last demo, you can go ahead and skip the cloning step; for those of you who need to clone your repository, you will find a link to the instructions in the lesson text, and there you'll be able to retrieve the command, which will be git clone along with the address of the repo. I'm going to hit enter, and because I've already cloned this repo, I'm receiving this error. I'm going to do an ls, and as you can see here, the google cloud associate cloud engineer repo has already been cloned, so I'm going to cd into that directory, and in order to get the latest code I'm going to simply run the command git pull, and then simply clear my screen.
1037:34 And so now that I've retrieved all the code that I need in order to deploy it, I need to go to that directory, and that directory is going to be 11 serverless services, forward slash, 01 serverless bowties. Hit enter, run ls, and here you will find two versions of the website application, site v1 and site v2, along with the instructions if you want to follow straight from here. And so I want to go ahead and deploy my first website application, so I'm going to cd into site v1, run ls, and here you will see the app.yaml, which is the configuration file that you will need in order to run the application on App Engine.

1038:15 And so before I go ahead and deploy this, I wanted to take a moment to show you the application configuration, so I'm going to go ahead and open it up in Cloud Shell Editor by typing edit app.yaml and hitting enter. As you can see here, my runtime is Python 3.7, and I have a default expiration of two seconds, along with an expiration underneath each handler. This is due to the caching that happens with App Engine: in order to simulate traffic splitting between the two website applications, and to make things easy, I needed to expire the cache, and this is an easy way to do it. Now, there may be applications out there that do need that caching, and so the expiration may be a lot higher, but for the purposes of this demo, a two-second expiration should suffice. As well, to explain the two handlers here: the first one shows the files that will be uploaded to the Cloud Storage bucket, and the second states what static files will be presented.
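Based on that description, the app.yaml for a static site like this one is roughly the following shape. This is a hedged reconstruction, not the exact file from the repo: the handler paths and file patterns are illustrative, while `runtime`, `default_expiration`, and the per-handler `expiration` fields match what was just described:

```yaml
runtime: python37          # App Engine standard, Python 3.7 runtime

# Short cache lifetime so switching traffic between site versions
# becomes visible almost immediately (real sites usually cache longer).
default_expiration: "2s"

handlers:
  # Static assets uploaded with the app and served directly.
  - url: /static
    static_dir: static
    expiration: "2s"

  # Serve the site's landing page as a static file (illustrative).
  - url: /
    static_files: index.html
    upload: index.html
    expiration: "2s"
```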
1039:16 And so I'm going to head back over to my terminal, clear my screen, and run the command gcloud app deploy with the flag --version, and this is going to be version one, so I'm going to go ahead and hit enter. You may get a pop-up asking you to authorize this API call using your credentials, and you want to click on Authorize. You're then going to be prompted to enter the region that you want to deploy your website application to; we want to keep this in us-east1, so I'm going to type in 15 and hit enter. You're going to be prompted to verify the configuration for your application before it's deployed, and you're also going to be prompted if you want to continue: definitely yes, so I'm going to hit y and enter. And so now, as you've seen, the files have been uploaded to Cloud Storage, and App Engine is going to take a few minutes to create the service along with the version, so I'm going to let it do the needful, and I'll be back before you know it.
1040:13 Okay, and my application has been deployed. Now, although you don't see it here in the console, it has been deployed; all I need to do is refresh my screen. But I wanted to just point out a couple of things that are shown here in the terminal, the first one being the default service. Now, the first time you deploy a version of your application, it will always deploy to the default service initially, and only then will you be able to deploy another named service to App Engine. Now, here where it says setting traffic split for service, this is referring to the configuration for traffic splitting being applied in the background, which I will be getting into a little bit later. And lastly, the URL shown for the deployed service will always start with the name of your project, followed by .ue.r.appspot.com, which is why in production Google recommends running App Engine in a completely separate project. But for this demo, running it in the same project that we've been using will suffice.
1041:17 let's go ahead and take a look at the
1041:19 application so i'm going
1041:21 to go back up to the top here to the
1041:24 navigation menu and i'm gonna go down to
1041:26 app engine and go over to services and
1041:29 so here you will see the default service
1041:31 with version one and if i go over to
1041:34 versions i will see here my version the
1041:35 status
1041:38 the traffic allocation along with any
1041:41 instances that it needs the runtime the
1041:44 specific environment and i'll have some
1041:46 diagnostic tools here that i could use
1041:48 and so because this is a static website
1041:51 application we won't be using any
1041:53 instances and so this will always show a
1041:56 zero so now i want to head back on over
1041:58 to services and i'm going to launch my
1042:00 application by simply clicking on this
1042:01 link
1042:04 and success serverless bow ties for all
1042:06 and so it looks like my application has
1042:08 been successfully deployed so i'm going
1042:10 to close down this tab now there's a
1042:12 couple of things that i wanted to run
1042:14 through here on the left hand menu just
1042:17 for your information so here i can click
1042:19 on instances and if i was running any
1042:22 instances i am able to see a summary of
1042:25 those instances and i can click on the
1042:27 drop down here and choose a different
1042:29 metric and find out any information that
1042:32 i need as well i can click on this drop
1042:34 down and select a version if i had
1042:37 multiple versions which i do not
1042:39 clicking on task queues here is where i
1042:41 can manage my task queues but this is a
1042:43 legacy service that will soon be
1042:46 deprecated clicking on cron jobs here i
1042:48 can schedule any tasks that i need to
1042:51 run at a specific time on a recurring
1042:54 basis i can edit or add any firewall
1042:56 rules if i need to and as you can see
1042:59 the default firewall rule is open to the
1043:02 world now you probably noticed memcache
1043:04 as being one of the options here in the
1043:07 menu but this is a legacy service that
1043:09 will soon be deprecated
1043:11 memcache is a distributed in-memory data
1043:14 store that is bundled into the python 2
1043:18 runtime acting as a cache for specific
1043:21 tasks and google recommends moving to
1043:23 memorystore for redis if you're
1043:25 planning on applying caching for your
1043:27 app engine application and so i'm not
1043:30 sure how much longer this will be here
1043:32 and lastly under settings here is where
1043:34 you can change the settings for your
1043:37 application i can add any custom domains
1043:40 any ssl certificates as well as setting
1043:42 up email for any applications that want
1043:45 to send email out to your users okay and
1043:45 to send email out to your users okay and now that we've done that walkthrough i
1043:47 now that we've done that walkthrough i want to go ahead and deploy my second
1043:49 want to go ahead and deploy my second version of the application and so i'm
1043:51 version of the application and so i'm going to go ahead back down to cloud
1043:53 going to go ahead back down to cloud shell i'm going to quickly clear my
1043:55 shell i'm going to quickly clear my screen and i want to move into the site
1043:57 screen and i want to move into the site v2 directory so i'm going to hit cd dot
1044:00 v2 directory so i'm going to hit cd dot dot which will bring you back one
1044:02 dot which will bring you back one directory you do an ls and i'm going to
1044:05 directory you do an ls and i'm going to change directories into site v2 and do
1044:08 change directories into site v2 and do an ls just to verify and yes you will
1044:10 an ls just to verify and yes you will see serverless bow ties too i'm going to
1044:12 see serverless bow ties too i'm going to quickly clear my screen and i'm going to
1044:15 quickly clear my screen and i'm going to run the same command as before which is
1044:17 run the same command as before which is gcloud app deploy with the version flag
1044:20 gcloud app deploy with the version flag dash dash version and instead of one i'm
1044:23 dash dash version and instead of one i'm going to launch version 2. so i'm going
1044:25 going to launch version 2. so i'm going to hit enter i'm going to be prompted if
1044:27 to hit enter i'm going to be prompted if i want to continue yes i do and as you
1044:29 i want to continue yes i do and as you can see the files have been uploaded to
1044:31 can see the files have been uploaded to cloud storage for version 2 of the
1044:34 cloud storage for version 2 of the website application and app engine is
1044:36 website application and app engine is going to take a few minutes to create
1044:39 going to take a few minutes to create the service along with the version so
1044:41 the service along with the version so i'm going to let it cook here for a
1044:42 i'm going to let it cook here for a couple minutes and i'll be back before
1044:44 couple minutes and i'll be back before you can say cat in the hat okay so
1044:47 you can say cat in the hat okay so version 2 has been deployed and so if i
1044:49 version 2 has been deployed and so if i go up here to the console and i click on
1044:52 go up here to the console and i click on refresh you should see version 2 of your
1044:54 refresh you should see version 2 of your service and as you can see 100 of the
1044:57 service and as you can see 100 of the traffic has been allocated to version 2
1045:00 traffic has been allocated to version 2 automatically and this is the default
1045:02 automatically and this is the default behavior for whenever you launch a new
1045:05 behavior for whenever you launch a new version of your service the only way to
1045:07 version of your service the only way to avoid this is to deploy your new version
1045:10 avoid this is to deploy your new version with the no promote flag and so if i go
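The deploy commands from this demo can be sketched as a short script. This is a sketch rather than the course's exact session: it assumes an authenticated Cloud Shell with an app.yaml in the current directory, and the DRY_RUN wrapper only prints each command so the sketch is safe to run anywhere.

```shell
# Sketch of the demo's deploy commands. DRY_RUN=1 (the default here)
# prints each command instead of executing it, so no GCP project is needed.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"      # show what would run
  else
    "$@"
  fi
}

# Deploy version 2; by default App Engine promotes it to serve 100% of traffic.
run gcloud app deploy --version=2 --quiet

# Add --no-promote to keep traffic on the currently serving version instead.
run gcloud app deploy --version=2 --no-promote --quiet
```

Set DRY_RUN=0 in a real project to actually execute the commands.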
1045:13 and so if i go back to services here on
1045:15 the left and i click on the default service
1045:17 you should see success for version two
1045:20 and so i know that my website
1045:21 application for version 2 has been
1045:24 deployed successfully so i'm going to
1045:25 close down this tab again and i'm going
1045:27 to go back to versions and so what i
1045:29 want to do now is i want to simulate an
1045:32 a/b test or blue-green deployment by
1045:35 migrating my traffic back to the old
1045:37 version in this case being version one
1045:40 so in production let's say that you
1045:42 release a new version and the
1045:45 version doesn't go according to plan you
1045:47 can always go back to the previous
1045:49 version and app engine allows you to do
1045:52 that very easily and so i'm going to
1045:54 click on version 1 and i'm going to go
1045:56 up to the top menu and click on migrate
1045:58 traffic you'll be prompted if you want
1046:00 to migrate traffic yes i do so i'm going
1046:03 to click on migrate and it should take a
1046:05 minute here and traffic should migrate
1046:07 over to version one and success traffic
1046:10 has been migrated and so we want to
1046:12 verify that this has happened i'm gonna
1046:14 go back to services i'm gonna click on
1046:16 the default service and yes the traffic
1046:19 has been allocated to version one okay
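The same migration can be done without the console. A sketch, assuming the default service with a version named 1 as in the demo; the DRY_RUN wrapper prints the command instead of running it.

```shell
# Command-line equivalent of the console's "migrate traffic" button.
# DRY_RUN=1 (default) prints the command; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Route all traffic to version 1; --migrate shifts it gradually
# instead of switching all requests at once.
run gcloud app services set-traffic default --splits=1=1 --migrate --quiet
```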
1046:21 so i'm going to shut down this tab i'm
1046:24 going to go back to versions and so now
1046:26 what i want to do is i want to simulate
1046:28 splitting the traffic between the two
1046:30 versions and so in order for you to do
1046:32 this you can go up to the top menu
1046:35 click on split traffic and you'll be
1046:37 prompted with a new menu here and here i
1046:40 can choose from different versions and
1046:42 because i only have two versions i'm
1046:44 going to add version 2 and in order to
1046:47 allocate the traffic between the two i
1046:49 can either use this slider
1046:52 and as you can see the allocation
1046:53 percentage will change or i can simply
1046:56 just type it in and so i'm going to
1046:58 leave this at 50 percent so fifty
1047:00 percent of version one fifty percent of
1047:02 version two i'm going to split traffic
1047:04 randomly i'm gonna move this down just a
1047:07 little bit and so that's exactly how you
1047:09 wanna allocate your traffic and so once
1047:11 you've completed that you can simply
1047:12 click on save it's going to take a
1047:14 moment to update the settings and it's
1047:16 been successful so if i head back on
1047:18 over to the previous page you can see
1047:20 here that traffic has been allocated to
1047:22 both versions and so now in order to
1047:25 verify this what you're going to do is
1047:27 go over to services and click on the
1047:29 default link and you'll see version
1047:32 one but if i continuously refresh my
1047:34 screen i can see that here i have
1047:36 version two so because it's random i
1047:39 have a 50% chance of getting version 1
1047:42 and a 50% chance of getting version 2
1047:44 and so this is a simulation of splitting
1047:47 traffic to different versions and
1047:49 usually with a/b testing only a small
1047:52 percentage of the traffic is routed to
1047:54 the new version until verification can
1047:56 be made that the new version deployed
1047:59 has indeed been successful and this can
1048:02 be done by receiving feedback from the
1048:04 users and so now i wanted to take a
1048:06 quick moment to congratulate you on
1048:08 making it through this demo and hope
1048:10 that it has been extremely useful in
1048:13 expanding your knowledge in deploying
1048:15 and managing applications on app engine
1048:18 so just as a recap you've cloned the
1048:20 repo to cloud shell you then deployed
1048:22 version one of your application into app
1048:25 engine you verified its launch and then
1048:27 you deployed version two of the
1048:29 application and verified its launch as
1048:31 well you then migrated traffic from
1048:34 version two over to version one and then
1048:36 you went ahead and split traffic between
1048:39 both versions and allotted 50% of the
1048:42 traffic allocation to each version and
1048:45 so now before you go i want to make sure
1048:47 that we clean up any resources that
1048:49 we've deployed so that we don't incur
1048:51 any unnecessary costs and so the way to
1048:54 do this is very simple so first step you
1048:57 want to go over to the left hand menu
1048:59 and click on settings and simply click
1049:02 on disable application you're going to
1049:03 be prompted to type in the app's id for
1049:06 me it's bowtie inc so i'm going to type
1049:08 that in and i'm going to click on
1049:09 disable now unfortunately with app
1049:11 engine you can't actually delete the
1049:13 application it can only be disabled and
1049:16 so now here i'm going to hit the
1049:18 link to go over to the cloud storage
1049:20 bucket and as you can see here i have no
1049:22 files but i'm going to move back to my
1049:24 buckets
1049:25 and i'm going to move into the staging
1049:27 bucket which is appended with your
1049:29 project id followed by .appspot.com
1049:31 and as you can see here there's a whole
1049:33 bunch of different files as well if i
1049:35 drill down into the directory marked as
1049:37 ae for app engine i can see here that i
1049:40 have some more directories along with
1049:42 the manifest and so now if you want to
1049:44 keep your application in order to run it
1049:47 later you don't need to delete this
1049:48 bucket but because i don't need it i'm
1049:51 going to go ahead and delete the bucket
1049:53 hit delete paste in my bucket name and hit
1049:56 delete as well now under us.artifacts
1049:59 you will find a directory called
1050:00 containers and as explained in the last
1050:03 lesson cloud build builds a container for
1050:05 your application before deploying it to
1050:08 app engine so i'm going to drill down
1050:10 into images so here's all the container
1050:12 digests and i don't need any of these so
1050:15 i'm gonna go ahead and delete this
1050:17 bucket as well and so this is the last
1050:19 step in order to delete all the
1050:21 directories and files that we used to
1050:23 deploy our application in app engine
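The bucket cleanup can also be scripted. A sketch, not the demo's exact clicks: the project id is a hypothetical stand-in for the demo's, bucket names can differ per project (list them first), disabling the app itself is done in the console as shown above, and DRY_RUN only prints the commands.

```shell
# Cleanup sketch for the App Engine storage buckets.
# DRY_RUN=1 (default) prints the commands; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

PROJECT_ID="bowtie-inc"   # hypothetical project id; substitute your own

# List the buckets before deleting anything, since names vary by project:
run gsutil ls

# -r removes each bucket together with all of its objects.
run gsutil rm -r "gs://${PROJECT_ID}.appspot.com"
run gsutil rm -r "gs://us.artifacts.${PROJECT_ID}.appspot.com"
```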
1050:25 okay and so i'm gonna head back on over
1050:28 to app engine and so now that cleanup
1050:31 has been taken care of that's pretty
1050:33 much all i wanted to cover in this demo
1050:35 for deploying and managing applications
1050:38 on app engine so you can now mark this
1050:40 as complete and i'll see you in the next
1050:42 one and again congrats on a job well
1050:45 done
1050:46 [Music]
1050:50 welcome back in this lesson i will be
1050:52 diving into another serverless product
1050:55 from google cloud by the name of cloud
1050:57 functions an extremely useful and
1050:59 advanced service that can be used with
1051:02 almost every service on the platform now
1051:04 there's quite a bit to cover here so
1051:06 with that being said let's dive in now
1051:09 cloud functions as i said before are a
1051:11 serverless execution environment and
1051:14 what i mean by this is like app engine
1051:16 there is no need to provision any
1051:18 servers or update vms as the
1051:21 infrastructure is all handled by google
1051:24 but unlike app engine you will never see
1051:26 the servers so the provisioning of
1051:28 resources happens when the code is
1051:30 executed now cloud functions are a
1051:33 function as a service offering and this
1051:35 is where you upload code that is
1051:38 purposefully written in a supported
1051:40 programming language and when your code
1051:42 is triggered it is executed in a fully
1051:45 managed environment and you're billed
1051:48 when that code is executed cloud
1051:50 functions run in a runtime environment
1051:53 and support many different runtimes like
1051:55 python java node.js go and .net core
1052:01 cloud functions are event driven so when
1052:03 something happens in your environment
1052:05 you can choose whether or not you'd like
1052:07 to respond to this event if you do then
1052:10 your code can be executed in response to
1052:12 the event these triggers can be one of a
1052:15 few different types such as http
1052:18 pub sub cloud storage and now firestore
1052:22 and firebase which are in beta and have
1052:24 yet to be seen in the exam cloud
1052:27 functions are priced according to how
1052:29 long your function runs and how many
1052:31 resources you provision for your
1052:33 function if your function makes an
1052:35 outbound network request there are also
1052:38 additional data transfer fees cloud
1052:40 functions also include a perpetual free
1052:43 tier which allows you 2 million
1052:45 invocations or executions of your
1052:48 function now cloud functions themselves
1052:50 are very simple but have a few steps to
1052:53 execute before actually running so i
1052:56 wanted to give you a walkthrough on
1052:58 exactly how cloud functions work now
1053:01 after selecting the name and region you
1053:03 want your function to live in you would
1053:05 then select the trigger you wish to use
1053:08 and you can choose from the many i
1053:10 listed earlier being http cloud storage
1053:13 pub sub cloud firestore and firebase a
1053:16 trigger is a declaration that you are
1053:19 interested in a certain event or set of
1053:21 events binding a function to a trigger
1053:25 allows you to capture and act on these
1053:27 events authentication configuration is
1053:29 the next step and can be selected with
1053:32 public access
1053:34 or configured through iam now there are
1053:37 some optional settings that can be
1053:39 configured where you would provide the
1053:41 amount of memory the function will need
1053:43 to run
1053:44 networking preferences and even
1053:46 selection of a service account now once
1053:49 all the settings have been solidified
1053:51 your written code can then be put into
1053:53 the function now the function's code
1053:55 supports a variety of languages as
1053:58 stated before like python java node.js
1054:01 or go now when writing your code there
1054:04 are two distinct types of cloud
1054:06 functions that you could use http
1054:09 functions and background functions with
1054:12 http functions you invoke them from
1054:14 standard http requests these http
1054:18 requests wait for the response and
1054:20 support handling of common http request
1054:24 methods like get put
1054:26 post delete and options when you use
1054:29 cloud functions a tls certificate is
1054:32 automatically provisioned for you so all
1054:35 http functions can be invoked via a
1054:38 secure connection now when it comes to
1054:41 background functions
1054:42 these are used to handle events from
1054:44 your gcp infrastructure such as messages
1054:48 on a pub sub topic or changes in a cloud
1054:51 storage bucket now once you have put all
1054:53 this together you are ready to deploy
1054:55 your code now there are two things that
1054:57 will happen when deploying your code the
1055:00 first one is the binding of your trigger
1055:02 to your function once you bind a trigger
1055:05 you cannot bind another one to the same
1055:07 function
1055:08 only one trigger can be bound to a
1055:10 function at a time now the second thing
1055:12 that will happen when you deploy your
1055:14 function's source code to cloud
1055:16 functions is that the source code is stored
1055:18 in a cloud storage bucket as a zip file
1055:21 cloud build then automatically builds
1055:24 your code into a container image and
1055:27 pushes that image to container registry
1055:30 cloud functions accesses this image when
1055:32 it needs to run the container to execute
1055:35 your function the process of building
1055:37 the image is entirely automatic and
1055:40 requires no manual intervention and so
1055:42 at this point of the process the
1055:44 building of your function is now
1055:46 complete
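Deploying the two function types described above can be sketched with gcloud. The function names, runtime, and topic here are hypothetical examples, not from the course; the DRY_RUN wrapper prints each command so the sketch needs no GCP project.

```shell
# Sketch of deploying an HTTP function and a background function.
# DRY_RUN=1 (default) prints the commands; set DRY_RUN=0 to execute.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# HTTP function: invoked by standard HTTP requests over TLS;
# --allow-unauthenticated corresponds to the "public access" option.
run gcloud functions deploy hello-http --runtime=python39 --trigger-http --allow-unauthenticated

# Background function: bound to exactly one trigger, here a Pub/Sub topic.
run gcloud functions deploy hello-pubsub --runtime=python39 --trigger-topic=my-topic
```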
1055:48 complete now that the function has been created we now wait for an event to
1055:51 created we now wait for an event to happen and events are things that happen
1055:54 happen and events are things that happen within your cloud environment that you
1055:56 within your cloud environment that you might want to take action on these might
1055:58 might want to take action on these might be changes to data in cloud sql files
1056:01 be changes to data in cloud sql files added to cloud storage or a new vm being
1056:04 added to cloud storage or a new vm being created currently cloud functions
1056:06 created currently cloud functions supports
1056:08 supports events from the same services used for
1056:10 events from the same services used for triggers that i have just mentioned
1056:13 triggers that i have just mentioned including other google services like
1056:15 including other google services like bigquery cloud sql and cloud spanner now
1056:19 bigquery cloud sql and cloud spanner now when an event triggers the execution of
1056:21 when an event triggers the execution of your cloud function
1056:23 your cloud function data associated with the event is passed
1056:26 data associated with the event is passed via the functions parameters the type of
1056:28 via the functions parameters the type of event determines the parameters that are
1056:31 event determines the parameters that are passed to your function cloud functions
Cloud Functions handles incoming requests by assigning them to instances of your function. Depending on the volume of requests, as well as the number of existing function instances, Cloud Functions may assign a request to an existing instance or create a new one. The service will grab the image from Container Registry and hand it off, along with the event data, to the instance for processing. Now, each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance, so the original request can use the full amount of resources that you requested; this is the memory that you assign to your cloud function when deploying it. To allow Google to automatically manage and scale the functions, they must be stateless: functions are not meant to be persistent, nor is the data that is passed to the function. Once the function has run and all data has been processed, the result is passed on to either a VPC or to the internet.
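To illustrate the statelessness point, here is a hedged Python sketch showing why in-memory state can't be relied on across requests. The function name and counter are my own illustrative choices; the per-instance behavior of global scope is the reason durable state belongs in an external service rather than in the function itself.

```python
# Hedged illustration of the statelessness requirement described above.
# A counter kept in global scope survives only within a single function
# instance; because Cloud Functions may create or discard instances at
# any time, different requests can land on instances with different
# counts. State that must persist belongs in an external service such
# as Cloud Storage or a database.

invocation_count = 0  # per-instance only: NOT shared across instances

def handle_request(request=None):
    global invocation_count
    invocation_count += 1
    # Another instance serving the next request would report its own
    # count, so this value cannot be trusted as a global total.
    return f"This instance has served {invocation_count} request(s)"
```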
Now, by default, functions have public internet access unless configured otherwise. Functions can also be private and used within your VPC, but this must be configured before deployment. There are so many use cases for cloud functions, and many have already been created by Google for you to try out; they can be located in the documentation that I've supplied in the lesson text below. Now, the exam doesn't go into too much depth on Cloud Functions, but I did want to give you some exposure to this fantastic serverless product from Google, as it is so commonly used in many production environments as a simple and easy way to take in data, process it, and return a result from any event you are given. I have no doubt that once you get the hang of deploying them, you will be a huge fan of them as well. And so that's pretty much all I had to cover when it comes to Cloud Functions, so you can now mark this lesson as complete, and whenever you're ready, join me in the next one, where we go hands-on in the console, creating and deploying your very first function.
Welcome back, and in this demo we will be diving into creating and deploying our very first cloud function. We're going to take a tour of all the options in the console, but we're going to do most of the work in Cloud Shell to get a good feel for doing it on the command line. So with that being said, let's dive in.

So I'm logged in here as tonybowties@gmail.com, and I'm in the project of Bowtie Inc. The first thing I want to do is head on over to Cloud Functions in the console, so I'm going to go up to the top left to the navigation menu and scroll down to Cloud Functions. As you can see here, Cloud Functions is getting ready; this is because we've never used it before, and the API is being enabled. Okay, the API has been enabled, and we can go ahead and start creating our function. Click Create Function, and you will be prompted with some fields to fill out for the configuration of your cloud function. Under Basics, for the function name I'm going to enter hello_world, and for the region I'm going to select us-east1. Under Trigger, for the trigger type we're going to keep this as HTTP, although if I click on the drop-down menu, you can see that I have options for Cloud Pub/Sub, Cloud Storage, and the ones that I mentioned before that are in beta. So we're going to keep things as HTTP, and here under URL is the URL for the actual cloud function. Under Authentication, I have the option of choosing "Require authentication" or "Allow unauthenticated invocations", and as you can see, this is clearly marked: check this if you are creating a public API or website, which we are, so this is the authentication method that you want to select. Now that we have all the fields filled out for the basic configuration, I'm going to go ahead and click on Save.

Just to give you a quick run-through of what else is available, I'm going to click on the drop-down here, which gives me access to variables, networking, and advanced settings. The first field here is memory allocated: I can add more memory depending on what I am doing with my cloud function, but I'm going to keep it as the default. If you have a cloud function that runs a little bit longer and you need more time to run it, you can add additional time for the timeout, and I also have the option of choosing a different service account for this cloud function. Moving on, under environment variables, you will see the options to add build environment variables along with runtime environment variables. The last option is connections, where you can change the networking settings for ingress and egress traffic. Under ingress settings, I can allow all traffic (which is the default), allow internal traffic only, or allow internal traffic and traffic from Cloud Load Balancing. When it comes to the egress settings, as I said before, by default your cloud function is able to send requests to the internet but not to resources in your VPC network, and so this is where you would create a VPC connector to send requests from your cloud function to resources in your VPC. If I click on "Create a connector", it'll open up a new tab and bring me to VPC network to add serverless VPC access. I don't want to do that right now, so I'm going to close down this tab, leave everything else as is, and click on Next.

Now that the configuration is done, I can dive right into the code. Google Cloud gives you an inline editor right here, along with the different runtime environments. If I click on the drop-down menu, you can see I have the options of .NET Core, Go, Java, Node.js, and Python 3.7 and 3.8. For this demo, I'm going to keep it as Node.js 10. The entry point will be helloWorld, and I'm going to keep the code exactly as is; this is the default cloud function that is packaged with any runtime whenever you create a function from the console. If I had any different code, I could change it here, but I'm not going to do that. I'm going to leave everything else as is and click on Deploy. It'll take a couple of minutes here to create my cloud function, so I'm going to pause the video for just a quick sec, and I'll be back in a flash.

Okay, my cloud function has been deployed, and I got a green check mark, which means that I'm all good. I want to dive right into it for just a second so I can get some more details. Here I have the metrics for my cloud function: the invocations per second, execution time, memory utilization, and active instances. I have my versions up here at the top, but since I only have one version, only one shows up. If I click on Details, it'll show me the general information along with the networking settings; Source will show me the code for this cloud function, as well as the variables, the trigger, permissions, logs, and testing, where I can write in some code and test the function. In order to invoke this function, I can simply go to Trigger and it'll show me the URL, but a quick way to do this through the command line is to open up Cloud Shell, make it a little bigger for better viewing, and paste in gcloud functions describe, along with the function name, which is hello_world, and the region flag, --region, with the region that my cloud function has been deployed in, which is us-east1. So the full command is gcloud functions describe hello_world --region us-east1, and I'm going to hit Enter. It's going to ask me to authorize my API call; yes, I want to authorize it. This command should output some information on your screen, and what we're looking for here is the HTTP trigger, which you will find under httpsTrigger; it is the same as what you see here in the console. So just know that if you want to grab the HTTP trigger URL, you can also do it from the command line. I'm going to now trigger the function by going to this URL, and you should see, in the top left-hand side of your screen, "Hello World!". Not as exciting as spinning bow ties, but this example gives you an idea of what an HTTP function can do. So I'm going to close down this tab.

Now what I want to do is deploy another function, but this time through the command line, so I'm going to quickly clear my screen. Since I've already uploaded the code to the repo, I'm going to simply clone that repo and run it from here. I'm going to do a cd ~ to make sure I'm in my home directory; for those of you who haven't deleted the directory, you can simply cd into it. So I'm going to run cd google-cloud-associate-cloud-engineer, hit Enter, and run a git pull command, and it pulled down all the files that I needed. I'm going to quickly clear my screen and change directories into the directory that has my code; you're going to find it under "11 Serverless Services", under "02 You Called". Hit Enter, and again, I will have a link in the lesson text with the full instructions for this demo, and it will list the directory where you can find this code. Okay, so moving forward, I'm going to run ls, and you should see three files here: main.py, requirements.txt, and the text file with the instructions. Now that I have everything in place, in order to deploy my code I'm going to paste in the command to actually deploy my function: gcloud functions deploy, the name of the function, which is you_called, the flag for the runtime, --runtime python38, and the flag for the trigger, --trigger-http. And because I'm a nice guy and I want everyone to have access to this, I'm going to tag it with the flag --allow-unauthenticated. So I'm going to hit Enter. This function should take a couple of minutes to deploy, so I'm going to sit here and let it cook, and I'll be back before you can say Cat in the Hat.

Okay, our function has been deployed. I'm going to do a quick refresh here in the console, and it deployed successfully; as you can see, the green check mark is here. Now that it's been deployed, we want to trigger our function, and because I just deployed it, the URL trigger is conveniently located here on my screen, so you can go ahead and click on it. And there it is: "Hello lover of bow ties, you called". Now, although this may be similar to the hello world demo, I did add a small feature that might spice things up. If you go up to the URL and type in ?name= followed by your name (since my name is Anthony, I'm going to type in Anthony) and hit Enter: "Hello Anthony, you called".
1067:42 hit enter and hello anthony you called and so this is a perfect example of the
1067:44 and so this is a perfect example of the many different ways you can use
1067:46 many different ways you can use functions and although i've only
1067:47 functions and although i've only highlighted some very simple
1067:49 highlighted some very simple demonstrations there are many different
1067:51 demonstrations there are many different ways that you can use functions such as
1067:54 ways that you can use functions such as running pipelines running batch jobs and
1067:57 running pipelines running batch jobs and even event driven security now although
1067:59 even event driven security now although the exam doesn't go into too much depth
1068:02 the exam doesn't go into too much depth on cloud functions it's always good to
1068:04 on cloud functions it's always good to know its use cases and where its
1068:06 know its use cases and where its strengths lie for when you do decide to
1068:09 strengths lie for when you do decide to use it in your role as a cloud engineer
1068:12 use it in your role as a cloud engineer now before you go be sure to delete all
1068:14 now before you go be sure to delete all the resources you've created by deleting
1068:16 the resources you've created by deleting the functions and the storage buckets
1068:19 the functions and the storage buckets that house the code for the cloud
1068:20 that house the code for the cloud functions and i will walk you through
1068:23 functions and i will walk you through the steps right now okay so first i'm
1068:25 the steps right now okay so first i'm going to close down this tab and next
1068:27 going to close down this tab and next you're going to select all the functions
1068:29 you're going to select all the functions and you're going to simply click on
1068:30 and you're going to simply click on delete you're going to get a prompt to
1068:32 delete you're going to get a prompt to delete the functions you're going to
1068:33 delete the functions you're going to click on delete and it's going to take a
1068:35 click on delete and it's going to take a minute or two and the functions are
1068:37 minute or two and the functions are deleted i'm going to close down my cloud
1068:39 deleted i'm going to close down my cloud shell and i'm going to head over to
1068:41 shell and i'm going to head over to cloud storage
1068:42 cloud storage and as you can see here both these
1068:44 and as you can see here both these buckets that start with gcf standing for
1068:46 buckets that start with gcf standing for google cloud functions can be safely
1068:49 google cloud functions can be safely deleted as inside them are the files
1068:52 deleted as inside them are the files that were used for the cloud function so
1068:54 that were used for the cloud function so i'm going to go back out i'm going to
1068:55 i'm going to go back out i'm going to select both of these and i'm going to
1068:57 select both of these and i'm going to click on delete you get a prompt to
1068:59 click on delete you get a prompt to delete two buckets you can simply type
1069:02 delete two buckets you can simply type in delete and click on delete and the
1069:04 in delete and click on delete and the buckets have now been deleted and you've
1069:06 buckets have now been deleted and you've pretty much finished your cleanup and so
1069:08 pretty much finished your cleanup and so just as a recap you created a default
1069:11 just as a recap you created a default cloud function that was available from
1069:13 cloud function that was available from the console and then verified it by
1069:16 the console and then verified it by triggering the http url you then
1069:19 triggering the http url you then deployed another function from the
1069:20 deployed another function from the command line by pulling the code from
1069:23 command line by pulling the code from the repo and using it for deployment and
1069:26 the repo and using it for deployment and then you verified that function by
1069:28 then you verified that function by triggering it using the http url as well
1069:32 triggering it using the http url as well and then you modify the url for a
1069:35 and then you modify the url for a different output great job on another
1069:37 different output great job on another successful demo so you can now mark this
1069:39 successful demo so you can now mark this as complete and let's move on to the
1069:42 as complete and let's move on to the next one
1069:42 next one [Music]
Welcome back. In this lesson, we're going to dive into Cloud Storage, the go-to storage service from Google Cloud. If you're an engineer working in Google Cloud, you've probably used this many times as a storage solution, and if you haven't, this is definitely a service that you will need to know for both the exam and your day-to-day role as a cloud engineer. Now, there's quite a bit to cover here, so with that being said, let's dive in.

Cloud Storage is consistent, scalable, large-capacity, highly durable object storage, offering unlimited storage for objects with no minimum object size. But please remember that this is object storage: it is not designed to store an operating system on, but to store whole objects like pictures or videos. Cloud Storage has worldwide accessibility and worldwide storage locations, so anywhere there is a region or zone, Cloud Storage is available and can be accessed at any time through an internet connection. Cloud Storage is great for storing data from data analytics jobs, text files with code, pictures of the latest fashion from Paris, and videos of your favorite house DJ at the Shelter. Cloud Storage excels at content delivery, big data sets, and backups, which are all stored as objects in buckets, and this is the heart of Cloud Storage that I will be diving into.

So, starting with buckets: these are the basic containers, or constructs, that hold your data. Everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data and control access to your data, but unlike directories and folders, you cannot nest buckets, and I'll get into that in just a minute. Now, when you create a bucket, you must specify a globally unique name, as every bucket resides in a single Cloud Storage namespace.
1071:48 namespace as well as a name you must specify a geographic location where the
1071:52 specify a geographic location where the bucket and its contents are stored and
1071:54 bucket and its contents are stored and you have three available geography
1071:56 you have three available geography choices to choose from from region dual
1071:59 choices to choose from from region dual region and multi-region and so just as a
1072:02 region and multi-region and so just as a note choosing dual region and
1072:04 note choosing dual region and multi-region is considered geo-redundant
1072:07 multi-region is considered geo-redundant for dual region geo-redundancy is
1072:10 for dual region geo-redundancy is achieved using a specific pair of
1072:12 achieved using a specific pair of regions for multi-region geo-redundancy
1072:15 regions for multi-region geo-redundancy is achieved using a continent that
1072:18 is achieved using a continent that contains two or more geographic places
1072:21 contains two or more geographic places basically the more regions your data is
1072:23 basically the more regions your data is available in the greater your
1072:25 available in the greater your availability for that data after you've
1072:28 availability for that data after you've chosen a geographic location a default
1072:31 chosen a geographic location a default storage class must be chosen and this
1072:33 storage class must be chosen and this applies to objects added to the bucket
1072:36 applies to objects added to the bucket that don't have a storage class
1072:38 that don't have a storage class explicitly specified and i'll be diving
1072:40 explicitly specified and i'll be diving into storage classes in just a bit and
1072:42 into storage classes in just a bit and so after you create a bucket you can
1072:45 so after you create a bucket you can still change its default storage class
1072:47 still change its default storage class to any class supported in the buckets
1072:50 to any class supported in the buckets location with some stipulations
1072:52 location with some stipulations you can only change the bucket name
1072:55 you can only change the bucket name and location by deleting and recreating
1072:57 and location by deleting and recreating the bucket as well once dual region is
1073:00 the bucket as well once dual region is selected it cannot be changed to
1073:02 selected it cannot be changed to multi-region and when selecting
1073:04 multi-region and when selecting multi-region you will not be able to
1073:06 multi-region you will not be able to change the bucket to be dual region and
1073:09 change the bucket to be dual region and lastly you will need to choose what
1073:11 lastly you will need to choose what level of access you want others to have
1073:13 level of access you want others to have on your bucket whether you want to apply
1073:15 on your bucket whether you want to apply permissions using uniform or fine
1073:18 permissions using uniform or fine grained access uniform bucket level
1073:21 grained access uniform bucket level access allows you to use iam alone to
1073:24 access allows you to use iam alone to manage permissions iam applies
1073:26 manage permissions iam applies permissions to all the objects contained
1073:29 permissions to all the objects contained inside the bucket or groups of objects
1073:32 inside the bucket or groups of objects with common name prefixes the find green
1073:35 with common name prefixes the find green option enables you to use iam and access
1073:38 option enables you to use iam and access control lists or acls
1073:40 control lists or acls together to manage permissions acls are
1073:44 together to manage permissions acls are a legacy access control system for cloud
1073:46 a legacy access control system for cloud storage designed for interoperability
1073:49 storage designed for interoperability with amazon s3 for those of you who use
1073:52 with amazon s3 for those of you who use aws you can specify access and apply
1073:55 aws you can specify access and apply permissions at both the bucket level and
1073:58 permissions at both the bucket level and per individual object and i will also be
1074:01 per individual object and i will also be diving more into depth with access
1074:03 diving more into depth with access control
1074:04 control in just a bit and just as a note labels
1074:07 in just a bit and just as a note labels are an optional item for bucket creation
1074:10 are an optional item for bucket creation like every other resource creation
1074:12 like every other resource creation process in gcp now that we've covered
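The bucket creation choices described above (unique name, location, default storage class, uniform access, optional labels) can be sketched with gsutil; the bucket name and label here are placeholders, not from the lesson:

```shell
# create a bucket with a globally unique name, a region (-l),
# a default storage class (-c), and uniform bucket-level access (-b on)
gsutil mb -l us-east1 -c standard -b on gs://my-unique-bucket-name/

# labels are optional and can be added or changed after creation
gsutil label ch -l env:dev gs://my-unique-bucket-name/
```

Note that the name and location cannot be changed afterwards without deleting and recreating the bucket, as the lesson explains.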
1074:14 now that we've covered buckets i wanted to cover what is stored
1074:17 in those buckets which is objects and
1074:20 objects are the individual pieces of
1074:22 data or data chunks that you store in a
1074:25 cloud storage bucket and there is no
1074:27 limit on the number of objects that you
1074:29 can create in a bucket so you can think
1074:31 of objects kind of like files objects
1074:34 have two components object data and
1074:36 object metadata
1074:38 object data is typically a file that you
1074:41 want to store in cloud storage and in
1074:43 this case it is the picture of the plaid
1074:46 bow tie and object metadata is a
1074:48 collection of name value pairs that
1074:51 describe the various properties of that
1074:53 object an object's name is treated as a
1074:56 piece of object metadata in cloud
1074:58 storage and must be unique within the
1075:00 bucket cloud storage uses a flat
1075:03 namespace to store objects which means
1075:05 that cloud storage isn't a file system
1075:08 hierarchy but sees all objects in a
1075:10 given bucket as independent with no
1075:13 relationship towards each other for
1075:15 convenience
1075:16 tools such as the console and gsutil
1075:20 work with objects that use the slash
1075:23 character as if they were stored in a
1075:25 virtual hierarchy for example you can
1075:28 name one object slash bow ties slash
1075:31 spring 2021 slash plaid bowtie.jpg when
1075:35 using the cloud console you can then
1075:37 navigate to these objects as if they
1075:39 were in a hierarchical directory
1075:41 structure under the folders bow ties and
1075:44 spring 2021
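The flat-namespace virtual hierarchy described above can be sketched with gsutil; the bucket name and local file are placeholders, and the object name mirrors the lesson's bow-tie example:

```shell
# upload a local file to an object name containing slashes
gsutil cp plaidbowtie.jpg \
  "gs://my-unique-bucket-name/bowties/spring2021/plaidbowtie.jpg"

# listing with a prefix treats the slash-delimited parts like folders,
# even though the bucket itself stores everything in a flat namespace
gsutil ls gs://my-unique-bucket-name/bowties/spring2021/
```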
1075:47 now i mentioned before that part of bucket creation is the
1075:49 selection of a storage class the storage
1075:51 class you set for an object affects the
1075:54 object's availability and pricing model
1075:56 so when you create a bucket you can
1075:58 specify a default storage class for the
1076:01 bucket when you add objects to the
1076:02 bucket they inherit this storage class
1076:05 unless explicitly set otherwise now i
1076:08 wanted to touch on these four storage
1076:09 classes now to give you a better
1076:11 understanding of the differences between
1076:13 them the first one is standard storage
1076:16 and is considered best for hot data or
1076:19 frequently accessed data and is best for
1076:22 short-term use as it does not have any
1076:25 specified storage duration and this is
1076:27 excellent for use in analytical
1076:29 workloads and transcoding and the price
1076:32 for this storage class comes in at two
1076:34 cents per gigabyte per month next up is
1076:37 nearline storage and this is considered
1076:39 warm data and is a low-cost
1076:42 storage class for storing infrequently
1076:45 accessed data nearline storage has a
1076:48 slightly lower availability a 30-day
1076:50 minimum storage duration and comes with
1076:53 a cost for data access nearline
1076:55 storage is ideal if you're looking to
1076:57 continuously add files but only plan to
1077:00 access them once a month and is perfect
1077:03 for data backup and data archiving the
1077:06 price for this storage class comes in at
1077:08 a penny per gigabyte per month now
1077:11 coldline storage is considered cold data as
1077:14 it enters into more of the longer term
1077:16 storage classes and is a very low cost
1077:20 storage class for storing infrequently
1077:22 accessed data it comes with slightly
1077:24 lower availability than nearline storage
1077:27 a 90-day minimum storage duration and
1077:30 comes with a cost for data access that
1077:32 is higher than the retrieval cost for
1077:35 nearline storage coldline storage is
1077:37 ideal for data you plan to read or
1077:40 modify at most once a quarter and is
1077:43 perfect for data backup and data
1077:46 archiving the price for this storage
1077:47 class comes in at less than half of a
1077:50 penny per gigabyte per month and finally
1077:53 archive storage is the lowest cost
1077:56 highly durable storage service for data
1077:58 archiving online backup and disaster
1078:01 recovery and even coming in at the lowest
1078:04 cost the data access is still available
1078:08 within milliseconds archive storage
1078:10 comes in at a higher cost for data
1078:12 retrieval as well as a
1078:15 365-day minimum storage duration and is the
1078:18 best choice for data that you plan to
1078:20 access less than once a year archive
1078:23 storage also comes with the highest
1078:25 price for data retrieval and it is ideal
1078:28 for archive data storage that's used for
1078:31 regulatory purposes or disaster recovery
1078:34 data in the event that there is an
1078:36 oopsies in your environment the price of
1078:38 this storage class comes in at a
1078:40 ridiculously low fraction of a penny per
1078:43 gigabyte per month
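The default-class inheritance and class changes described above can be sketched with gsutil; the bucket and object names are placeholders:

```shell
# change the bucket's default storage class
# (this only affects objects added afterwards)
gsutil defstorageclass set nearline gs://my-unique-bucket-name/

# move an existing object to a different class by rewriting it in place
gsutil rewrite -s coldline gs://my-unique-bucket-name/backups/2020-archive.tar.gz
```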
1078:46 now when it comes to
1078:48 choosing your geographic location this
1078:51 will determine the availability of your
1078:53 data here as you can see the highest
1078:56 availability is the standard
1078:58 multi-region whereas archive has the
1079:01 lowest availability when stored in a
1079:03 regional setting now when it comes to
1079:05 the durability of your data meaning the
1079:08 measurement of how healthy and resilient
1079:10 your data is against data loss or data
1079:12 corruption google cloud boasts 11 9's
1079:16 durability annually on all data stored
1079:20 in any storage class on cloud storage so
1079:23 know that your data is stored safely and
1079:26 will be there holding the same integrity
1079:28 from the day you stored it now when it
1079:30 comes to granting permissions to your
1079:32 cloud storage buckets and the objects
1079:35 within them there are four different
1079:37 options to choose from the first is iam
1079:40 permissions and these are the standard
1079:42 permissions that control all your other
1079:44 resources in google cloud and follow the
1079:47 same top-down hierarchy that we
1079:49 discussed earlier the next available
1079:51 option is access control lists or acls
1079:55 and these define who has access to your
1079:57 buckets and objects as well as what type
1080:00 of access they have and these can work
1080:03 in tandem with iam permissions moving on
1080:06 to signed urls these are time-limited
1080:09 read or write access urls that can be
1080:12 created by you to give access to the
1080:15 object in question for the duration that
1080:17 you specify and lastly is signed policy
1080:20 documents and these are documents that
1080:23 specify what can be uploaded to a bucket
1080:25 and i will be going into each one of
1080:27 these in a bit of detail now cloud
1080:30 storage offers two systems for granting
1080:32 users permission to access your buckets
1080:35 and objects iam and access control lists
1080:39 these systems act in parallel in order
1080:42 for a user to access a cloud storage
1080:44 resource only one of the systems needs
1080:47 to grant the user permission iam is
1080:49 always the recommended method when it
1080:51 comes to giving access to buckets and
1080:54 the objects within those buckets
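Granting a bucket-level IAM role as recommended above can be sketched with gsutil; the bucket name and email are placeholders:

```shell
# grant a user read access to the objects in one specific bucket
# (objectViewer is gsutil shorthand for roles/storage.objectViewer)
gsutil iam ch user:jane@example.com:objectViewer gs://my-unique-bucket-name/

# inspect the bucket's current iam policy
gsutil iam get gs://my-unique-bucket-name/
```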
1080:56 granting roles at the bucket level does
1080:58 not affect any existing roles that you
1081:00 granted at the project level and vice
1081:03 versa giving you two levels of
1081:05 granularity to customize your
1081:07 permissions so for instance you can give
1081:09 a user permission to read objects in any
1081:12 bucket but permission to create objects
1081:15 only in one specific bucket the roles
1081:17 that are available through iam are the
1081:19 primitive roles standard storage roles or the
1081:22 legacy roles which are equivalent to
1081:24 acls now acls are there if you need to
1081:28 customize access and really get granular
1081:31 with individual objects within a bucket
1081:33 and are used to define who has access to
1081:36 your buckets and objects as well as what
1081:38 level of access they have each acl
1081:41 consists of one or more entries and
1081:43 gives a specific user or group the
1081:46 ability to perform specific actions each
1081:49 entry consists of two pieces of
1081:51 information a permission which defines
1081:54 what actions can be performed and a
1081:56 scope which defines who can perform the
1081:58 specified actions now acls should be
1082:01 used with caution as when iam roles and acls
1082:04 overlap cloud storage will grant the
1082:07 broader permission so if you allow
1082:09 specific users access to an object in a
1082:11 bucket and then an acl is applied to
1082:14 that object to make it public then it
1082:16 will be publicly accessible so please be
1082:19 aware
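The per-object ACL entries described above (a permission plus a scope) can be sketched with gsutil; the bucket, object, and email are placeholders:

```shell
# give one user read (R) access to a single object
gsutil acl ch -u jane@example.com:R gs://my-unique-bucket-name/report.pdf

# careful: the AllUsers scope makes the object public,
# and the broader grant wins when iam and acls overlap
gsutil acl ch -u AllUsers:R gs://my-unique-bucket-name/report.pdf
```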
1082:22 now a signed url is a url that provides limited permission and time to
1082:24 make a request signed urls contain
1082:27 authentication information allowing
1082:30 users without credentials to perform
1082:32 specific actions on a resource when you
1082:35 generate a signed url you specify a user
1082:38 or service account which must have
1082:41 sufficient permission to make the
1082:43 request that the signed url will make
1082:45 after you generate a signed url anyone
1082:48 who possesses it can use the signed url to
1082:51 perform specified actions such as
1082:54 reading an object within a specified
1082:56 period of time now if you want to
1082:58 provide public access to a user who
1083:00 doesn't have an account you can provide
1083:03 a signed url to that user which gives
1083:05 the user read write or delete access to
1083:08 that resource for a limited time you
1083:11 specify an expiration date when you
1083:13 create the signed url so anyone who knows
1083:16 the url can access the resource until
1083:19 the expiration time for the url is
1083:22 reached or the key used to sign the url
1083:25 is rotated and the command to create the
1083:27 signed url is shown here and as you can
1083:30 see it has been signed for a limited time
1083:33 of 10 minutes
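The 10-minute signed URL mentioned above (the command was shown on a slide) would look like this with gsutil; the service account key file, bucket, and object names are placeholders:

```shell
# create a signed url valid for 10 minutes (-d 10m),
# signed with a service account's private key file
gsutil signurl -d 10m service-account-key.json \
  gs://my-unique-bucket-name/plaidbowtie.jpg
```

The command prints a URL that anyone can use to fetch the object until the 10 minutes expire or the signing key is rotated.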
1083:36 so as you've seen when it comes to cloud storage there are so many
1083:38 configuration options to choose from and
1083:41 lots of different ways to store and give
1083:43 access and this makes this resource from
1083:46 google cloud such a flexible option and
1083:49 full of great potential for many
1083:51 different types of workloads this is
1083:53 also a service that comes up a lot in
1083:56 the exam as one of the many different
1083:58 storage options to choose from and so
1084:01 knowing the features storage classes
1084:03 pricing and access options will
1084:06 definitely give you a leg up when you
1084:08 are presented with questions regarding
1084:10 storage and so that's pretty much all i
1084:12 wanted to cover when it comes to this
1084:14 overview on cloud storage so you can now
1084:16 mark this lesson as complete and let's
1084:19 move on to the next one
1084:20 [Music]
1084:24 welcome back and in this lesson i will
1084:27 be covering object versioning and life
1084:29 cycle management two features within cloud
1084:31 storage that are used to manage and sort
1084:34 through older files that need to be
1084:36 deleted along with files that are not in
1084:39 high need of regular access knowing the
1084:42 capabilities of these two features
1084:44 can really help organize accumulated
1084:47 objects in storage buckets and cut down
1084:50 on costs so without further ado let's
1084:53 dive in now to understand a bit more
1084:55 about objects i wanted to dive into
1084:58 immutability and versioning now objects
1085:01 are immutable which means that an
1085:03 uploaded object cannot change throughout
1085:05 its storage lifetime an object's storage
1085:08 lifetime is the time between a
1085:11 successful object creation or upload and
1085:14 successful object deletion this means
1085:16 that you cannot edit objects in place
1085:19 instead objects are always replaced with
1085:22 a new version so after the upload of the
1085:25 new object completes the new version of
1085:27 the object is served to readers this
1085:30 replacement marks the end of one
1085:32 object's life cycle and the beginning of
1085:34 a new one now to support the retrieval
1085:37 of objects that are deleted or replaced
1085:39 cloud storage offers the object
1085:41 versioning feature object versioning
1085:43 retains a non-current object version
1085:46 when the live object version gets
1085:48 replaced or deleted enabling object
1085:51 versioning increases storage costs which
1085:53 can be partially mitigated by
1085:56 configuring object lifecycle management
1085:58 to delete older object versions but more
1086:01 on that in just a bit cloud storage uses
1086:03 two properties that together identify
1086:06 the version of an object the generation
1086:09 which identifies the version of the
1086:11 object's data
1086:12 and the metageneration which identifies
1086:15 the version of the object's metadata
1086:17 these properties are always present with
1086:20 every version of the object even if
1086:22 object versioning is not enabled these
1086:25 properties can be used to enforce
1086:27 ordering of updates so in order to
1086:29 enable object versioning you would do
1086:32 that by enabling it on a bucket once
1086:34 enabled older versions remain in your
1086:36 bucket when a replacement or deletion
1086:38 occurs so by default when you replace an
1086:41 object cloud storage deletes the old
1086:44 version and adds a new version these
1086:46 older versions retain the name of the
1086:48 object but are uniquely identified by
1086:51 their generation number when object
1086:53 versioning has created an older version
1086:55 of an object you can use the generation
1086:58 number to refer to the older version
1087:01 this allows you to restore a replaced
1087:03 object in your bucket or permanently
1087:06 delete older object versions that you no
1087:08 longer need
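Enabling versioning and working with generation numbers as described above can be sketched with gsutil; the bucket, object, and generation number are placeholders:

```shell
# turn on object versioning for the bucket
gsutil versioning set on gs://my-unique-bucket-name/

# list all versions of an object including non-current ones (-a)
gsutil ls -a gs://my-unique-bucket-name/report.pdf

# restore an older version by copying its generation number
# over the live object
gsutil cp "gs://my-unique-bucket-name/report.pdf#1234567890123456" \
  gs://my-unique-bucket-name/report.pdf
```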
1087:11 longer need and so touching back on cost for just a minute these versions can
1087:13 for just a minute these versions can really add up and start costing you some
1087:16 really add up and start costing you some serious money if you have thousands of
1087:18 serious money if you have thousands of files with hundreds of versions and this
1087:21 files with hundreds of versions and this is where life cycle management comes
1087:23 is where life cycle management comes into play now cloud storage offers the
1087:25 into play now cloud storage offers the object lifecycle management feature in
1087:28 object lifecycle management feature in order to support some common use cases
1087:30 order to support some common use cases like setting a time to live or ttl for
1087:33 like setting a time to live or ttl for objects
1087:34 objects retaining non-current versions of
1087:36 retaining non-current versions of objects or downgrading storage classes
1087:39 objects or downgrading storage classes of objects to help manage costs now in
1087:42 of objects to help manage costs now in order to apply this feature to your
1087:44 order to apply this feature to your objects you would assign a lifecycle
1087:46 objects you would assign a lifecycle management configuration to a bucket the
1087:49 management configuration to a bucket the configuration contains a set of rules
1087:51 configuration contains a set of rules which apply to current and feature
1087:53 which apply to current and feature objects in the bucket when an object
1087:55 objects in the bucket when an object meets the criteria of one of the rules
1087:58 meets the criteria of one of the rules cloud storage automatically performs the
1088:00 cloud storage automatically performs the specified action on the object and so
1088:03 specified action on the object and so some example use cases are shown here so
1088:06 some example use cases are shown here so if you're looking to downgrade the
1088:07 if you're looking to downgrade the storage class
1088:08 storage class of objects older than 365 days to cold
1088:12 of objects older than 365 days to cold line storage for compliance purposes
1088:15 line storage for compliance purposes along with saving money life cycle
1088:17 along with saving money life cycle management is perfect for this another
1088:19 management is perfect for this another use case is when you want to delete
1088:21 use case is when you want to delete objects created before january 1st of
1088:24 objects created before january 1st of 2020 and this is another great use case
1088:26 2020 and this is another great use case to save money as well with keeping only
1088:29 to save money as well with keeping only the three most recent versions of each
1088:32 the three most recent versions of each object in a bucket with versioning
1088:34 object in a bucket with versioning enabled to keep from version objects
1088:36 enabled to keep from version objects building up object lifecycle management
1088:39 building up object lifecycle management has so many other use cases across a
1088:41 has so many other use cases across a myriad of industries and when used
1088:43 myriad of industries and when used correctly is a great way to achieve
1088:46 correctly is a great way to achieve object management along with saving
1088:48 object management along with saving money now i wanted to take a moment to
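the three example use cases above can all be captured in one lifecycle configuration file; here is a rough sketch that builds one in python (the file name is made up) which you could then apply to a bucket with the real command `gsutil lifecycle set lifecycle.json gs://your-bucket`

```python
import json

# sketch of a lifecycle configuration covering the three use cases above;
# the output file name is arbitrary
config = {
    "rule": [
        # downgrade objects older than 365 days to coldline storage
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 365}},
        # delete objects created before january 1st 2020
        {"action": {"type": "Delete"},
         "condition": {"createdBefore": "2020-01-01"}},
        # keep only the three most recent non-current versions of each object
        {"action": {"type": "Delete"},
         "condition": {"isLive": False, "numNewerVersions": 3}},
    ]
}

with open("lifecycle.json", "w") as f:
    json.dump(config, f, indent=2)

print(len(config["rule"]))  # prints 3
```

once applied, `gsutil lifecycle get gs://your-bucket` will echo the active configuration back to you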
1088:51 now i wanted to take a moment to dive into the lifecycle management
1088:53 configuration each lifecycle management
1088:55 configuration contains a set of
1088:58 components these are a set of rules
1089:00 conditions and the action when the
1089:02 conditions are met rules are any set of
1089:05 conditions for any action a condition is
1089:08 something an object must meet before the
1089:11 action defined in the rule occurs on the
1089:14 object and there are various conditions
1089:16 to choose from that allow you to get
1089:18 pretty granular and finally the action
1089:21 which is where you would have the option
1089:23 to delete or set storage class now when
1089:26 you delete current versions this will
1089:28 move the current version into a
1089:30 non-current state and when you delete a
1089:33 non-current version you will permanently
1089:36 delete the version and cannot get it
1089:38 back and so when you set the storage
1089:40 class it will transition the object to a
1089:43 different storage class so when defining
1089:45 a rule you can specify any set of
1089:48 conditions for any action if you specify
1089:51 multiple conditions in a rule an object
1089:54 has to match all of the conditions for
1089:57 the action to be taken so if you have
1089:59 three conditions and one of those
1090:01 conditions has not been met then the
1090:03 action will not take place if you
1090:05 specify multiple rules that contain the
1090:08 same action the action is taken when an
1090:11 object matches the conditions in any of
1090:13 these rules now if multiple rules have
1090:16 their conditions satisfied
1090:18 simultaneously for a single object cloud
1090:21 storage will either perform the delete
1090:23 action as it takes precedence over the
1090:25 set storage class action or the set
1090:28 storage class action that switches the
1090:30 object to the storage class with the
1090:33 lowest at-rest storage pricing takes
1090:35 precedence so for example if you have
1090:38 one rule that deletes an object and
1090:41 another rule that changes the object
1090:43 storage class but both rules use the
1090:46 exact same condition the delete action
1090:49 always occurs when the condition is met
1090:51 or if you have one rule that changes the
1090:54 object storage class to nearline
1090:56 storage and another rule that changes
1090:58 the object storage class to coldline
1091:00 storage but both rules use the exact
1091:03 same condition the object storage class
1091:06 always changes to coldline storage when
1091:08 the condition is met
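that precedence logic can be sketched in a few lines of python; this is only an illustration of the rules just described, not google's implementation, and the relative price ordering of the classes is the one from the storage classes lesson

```python
# among the actions whose rule conditions all matched for one object:
# delete wins outright; otherwise the SetStorageClass action that moves
# the object to the cheapest at-rest storage class wins
# relative at-rest price order, cheapest last
PRICE_ORDER = ["STANDARD", "NEARLINE", "COLDLINE", "ARCHIVE"]

def resolve(matched_actions):
    """matched_actions: list of ('Delete', None) or ('SetStorageClass', cls)."""
    if any(kind == "Delete" for kind, _ in matched_actions):
        return ("Delete", None)
    classes = [cls for kind, cls in matched_actions if kind == "SetStorageClass"]
    # later entries in PRICE_ORDER have lower at-rest pricing
    return ("SetStorageClass", max(classes, key=PRICE_ORDER.index))

# delete beats a storage-class change with the same condition
print(resolve([("Delete", None), ("SetStorageClass", "COLDLINE")]))
# coldline beats nearline because its at-rest price is lower
print(resolve([("SetStorageClass", "NEARLINE"), ("SetStorageClass", "COLDLINE")]))
```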
1091:10 and so some considerations that i wanted to point
1091:12 out when it comes to cloud storage is
1091:15 that when it comes to object lifecycle
1091:17 management
1091:18 changes are based on the object
1091:20 creation date as well once an object is
1091:23 deleted it cannot be undeleted so please
1091:26 be careful when permanently deleting a
1091:29 version as well lifecycle rules can
1091:31 take up to 24 hours to take effect so be
1091:35 aware when setting them and always be
1091:37 sure to test these lifecycle rules in
1091:40 development first before rolling them
1091:42 out into production and so that's pretty
1091:45 much all i had to cover when it comes to
1091:47 versioning and object lifecycle
1091:49 management and so you can now mark this
1091:51 lesson as complete and whenever you're
1091:53 ready join me in the console where we go
1091:56 hands-on with versioning object lifecycle
1091:58 management and cloud storage as a
1092:01 whole
1092:02 [Music]
1092:06 welcome back in this demo we're going to
1092:08 cement the knowledge that we learned
1092:10 from the past couple lessons on cloud
1092:12 storage and really dive into the
1092:15 nitty-gritty when it comes to the features and
1092:17 configuration you're first going to
1092:19 create a cloud storage bucket and upload
1092:22 some files to it and then interact with
1092:24 the bucket and the files using the
1092:26 console as well you're going to get your
1092:29 hands dirty using the gsutil command
1092:32 line tool and this is the tool for
1092:34 managing cloud storage from the command
1092:36 line now there's quite a bit of work to
1092:38 do here so with that being said let's
1092:41 dive in and so i am logged in here as
1092:44 tony bowties at gmail.com along with
1092:47 being in project bowtie inc and so the
1092:50 first thing i want to do is i want to
1092:52 create a cloud storage bucket so in
1092:54 order for me to do that i'm going to
1092:55 head over to the navigation menu and i'm
1092:58 going to scroll down to storage
1093:00 and here i already have a couple of
1093:02 buckets that i created from earlier
1093:04 lessons and you may have a couple
1093:06 buckets as well but you're going to go
1093:08 ahead and create a new bucket by going
1093:10 up to the top here and click on create
1093:12 bucket now i know that we've gone
1093:14 through this before in previous lessons
1093:17 but this time i wanted to go through all
1093:19 the configuration options that are
1093:21 available and so the first thing that
1093:23 you're prompted to do here is to name
1093:25 your bucket as explained in an earlier
1093:27 lesson it needs to be a globally unique
1093:30 name and so you can pick any name you
1093:32 choose and so for me i'm going to call
1093:34 this bucket bowtie-inc-2021 i'm
1093:38 going to hit continue and if it wasn't a
1093:40 globally unique name it would error out
1093:43 and you would have to enter in a new
1093:45 name but since this bucket name is
1093:47 globally unique i'm able to move forward
1093:49 for location type you can select from
1093:52 region dual-region and multi-region with
1093:55 multi-region under location you can
1093:57 select from either the americas europe
1094:00 or asia pacific and under dual-region
1094:03 you have the options of again choosing
1094:06 from the americas europe and asia pacific
1094:09 and you will be given the regions for
1094:11 each and so for this demo we're going to
1094:13 go ahead and choose region and we're
1094:15 going to keep the location as
1094:17 us-east1 and once you've selected that you
1094:19 can go ahead and hit continue and you're
1094:21 going to be prompted to choose a default
1094:23 storage class and here you have the
1094:25 option of selecting from the four
1094:27 storage classes that we discussed in an
1094:29 earlier lesson and so for this demo you
1094:31 can keep it as standard and simply click
1094:34 on continue and so here you're prompted
1094:36 to choose access control and because
1094:38 we're going to be diving into acls you
1094:41 can keep this as the default fine-grained
1094:43 access control you can go ahead and
1094:45 click continue and under encryption you
1094:47 can keep it as the default google-managed
1094:50 key but know that you always have the
1094:52 option of choosing a customer-managed key
1094:54 and once you've uploaded your
1094:56 customer-managed key you can select it from here
1094:58 and because i have no customer-managed
1095:00 keys no other keys show up so i'm going
1095:02 to click on google-managed keys and here
1095:04 under retention policy i know i haven't
1095:07 touched on that but just to give you
1095:09 some context when placing a retention
1095:12 policy on a bucket it ensures that all
1095:15 current and future objects in the bucket
1095:17 can't be deleted or replaced until they
1095:20 reach the age that you define in the
1095:23 retention policy so if you try to delete
1095:26 or replace objects where the age is less
1095:28 than the retention period it will
1095:30 obviously fail and this is great for
1095:33 compliance purposes in areas where logs
1095:36 need to be audited by regulators every
1095:38 year or where government-required
1095:40 retention periods apply as well with the
1095:43 retention policy you have the option of
1095:46 locking that retention policy and when
1095:48 you lock a retention policy on a bucket
1095:50 you prevent the policy from ever being
1095:53 removed or the retention period from
1095:56 ever being reduced and this feature is
1095:59 irreversible so please be aware if
1096:01 you're ever experimenting with locked
1096:03 retention policies so if i set a
1096:06 retention policy here i can retain
1096:08 objects for a certain amount of seconds
1096:11 days months and years
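as a side note whichever unit you pick in the console the retention period is ultimately stored as a number of seconds; a small sketch using the conversion factors cloud storage documents (1 month counted as 31 days, 1 year as 365.25 days)

```python
# the console accepts seconds, days, months or years, but the retention
# period on the bucket is always a seconds value; these factors are the
# documented cloud storage conversions
SECONDS_PER = {
    "seconds": 1,
    "days": 86_400,
    "months": 2_678_400,    # 31 days
    "years": 31_557_600,    # 365.25 days
}

def retention_seconds(amount, unit):
    return amount * SECONDS_PER[unit]

print(retention_seconds(1, "days"))    # 86400
print(retention_seconds(1, "years"))   # 31557600
```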
1096:14 and for this demo we're not going to set any retention
1096:15 policies so i'm going to check that off
1096:17 and i'm going to go ahead and add a
1096:19 label with the key being environment and
1096:22 the value being test and just as a note
1096:24 before you go ahead and click on create
1096:26 over on the right-hand side you will see
1096:28 a monthly cost estimate and you will be
1096:31 given an estimate with storage and
1096:33 retrieval as well as how much it costs
1096:35 for operations your sla and your
1096:38 estimated monthly cost and so before
1096:40 creating any buckets you can always do a
1096:43 price check to see how much it'll cost
1096:46 for storage size and retrieval to get a good
1096:49 idea of how much it'll cost you monthly
1096:51 okay so once you're all done here you
1096:53 can simply click on create
1096:56 and it'll go ahead and create your
1096:57 bucket and so now that your bucket is
1096:59 created we want to add some files and so
1097:02 we first want to go into copying files
1097:05 from an instance to your cloud storage
1097:07 bucket and so in order to do that we
1097:10 need to create an instance and so we're
1097:12 gonna go back over to the navigation
1097:13 menu we're gonna scroll down to compute
1097:16 engine and we're gonna create our
1097:17 instance and for those who do not have
1097:20 your default vpc set up please be sure
1097:22 to create one before going ahead and
1097:24 creating your instance i'm going to go
1097:26 ahead and click on create i'm going to
1097:28 name this instance
1097:29 bowtie-instance i'm going to give it a label
1097:32 of environment test click on save
1097:35 the region is going to be
1097:37 us-east1 and you can keep the default
1097:39 zone as us-east1-b the machine type
1097:42 we're going to change it to e2-micro and
1097:44 you're going to scroll down to access
1097:46 scopes and here your instance is going
1097:48 to need access to your cloud storage
1097:51 bucket and so it's going to need cloud
1097:53 storage access so you're going to click
1097:54 on set access for each api scroll down
1097:57 to storage and for this demo we'll
1097:59 select full gonna leave everything else
1098:01 as the default and simply click on
1098:03 create and so we'll give it a couple
1098:05 minutes here for the instance to create okay
1098:08 and my instance has been created and so
1098:10 now i want to create some files and copy
1098:12 them over to cloud storage so i'm going
1098:14 to first navigate over to cloud storage
1098:17 and into my bucket and this way you can
1098:19 see the files that you upload and so
1098:22 next you're going to open up cloud shell
1098:23 and make this a little bigger for better
1098:25 viewing and so now you're going to ssh
1098:27 into your instance by using the command
1098:30 gcloud compute ssh along with your
1098:32 instance name and the zone flag
1098:35 --zone with the zone of us-east1-b i'm
1098:38 going to go ahead and hit enter and you
1098:40 may be prompted with a message asking to
1098:42 authorize this api call and you want to
1098:44 hit authorize and you're going to be
1098:46 prompted to enter a passphrase for your
1098:48 key pair enter it in again
1098:51 and one more time
1098:52 and success we're logged into the
1098:54 instance i'm going to quickly clear my
1098:56 screen and so i know i could have sshed
1098:59 into the instance from the compute
1099:01 engine console but i wanted to display
1099:03 both the console and the shell on the
1099:06 same screen to make viewing a bit easier
1099:08 as i add and remove files to and from
1099:11 the bucket okay and so now that you're
1099:13 logged in you want to create your first
1099:15 file that you can copy over to your
1099:17 bucket so you can enter in the command
1099:20 sudo nano fileofbowties.txt hit
1099:23 enter and this will allow you to open up
1099:25 the nano editor to edit the file
1099:28 fileofbowties.txt and here you can enter in
1099:30 any message that you'd like for me i'm
1099:32 going to enter in learning to tie a bow
1099:35 tie takes time okay and i'm going to hit
1099:38 ctrl o to save hit enter to verify the
1099:41 file name to write and ctrl x to exit
1099:44 and so now i want to copy this file up
1099:46 to my bucket and so here is where i'm
1099:48 going to use the gsutil command so i'm
1099:50 going to type in gsutil cp for copy the
1099:54 name of the file which is
1099:55 fileofbowties.txt
1099:57 along with gs://
1100:00 and the name of your
1100:02 bucket which in my case is
1100:05 bowtie-inc-2021 and this should copy my file
1100:09 fileofbowties.txt up to my bucket of
1100:12 bowtie-inc-2021
1100:13 i'm gonna hit enter
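spelled out the copy command used in this step has the shape `gsutil cp <local-file> gs://<bucket-name>`; here is a tiny python sketch that just assembles that command line so you can see the pieces (the file name is a stand-in, the bucket is the demo's)

```python
# assemble the gsutil copy command from this step of the demo
bucket = "bowtie-inc-2021"      # the demo bucket; yours will differ
local_file = "example.txt"      # hypothetical file name
cmd = ["gsutil", "cp", local_file, f"gs://{bucket}"]
print(" ".join(cmd))            # gsutil cp example.txt gs://bowtie-inc-2021
```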
1100:16 okay and it's finished copying over and
1100:18 if i go up here to the top right and
1100:20 click on refresh i can see that my file
1100:23 successfully uploaded and this is a
1100:25 great and easy method to upload any
1100:28 files that you may have to cloud storage
1100:30 okay and so now that you've copied files
1100:33 from your instance to your bucket you're
1100:35 going to now copy some files from the
1100:37 repo to be uploaded to cloud storage for
1100:40 our next step so you're gonna go ahead
1100:42 and exit out of the instance by just
1100:45 simply typing in exit i'm gonna quickly
1100:47 clear the screen and so here i need to
1100:49 clone my repo if you already have cloned
1100:52 the repo then you can skip this step i'm
1100:55 going to cd ~ to make sure i'm in my
1100:57 home directory i'm going to do an ls and
1100:59 so i can see here that i've already
1101:01 cloned my repo so i'm going to cd into
1101:03 that directory and i'm going to run the
1101:05 command git pull to get the latest files
1101:08 fantastic i'm going to now clear my
1101:09 screen and i'm going to cd back to my
1101:12 home directory and so now i want to copy
1101:14 up the files that i want to work with to
1101:16 my cloud storage bucket and they are two
1101:19 jpegs by the name of
1101:21 pink-elephant-bowtie as well as plaid-bowtie
1101:24 and these files can be found in the repo
1101:26 marked 12 storage services under
1101:30 01 cloud storage management and i will
1101:32 be providing this in the lesson text as
1101:34 well as can be found in the instructions
1101:36 and so i'm going to simply cd into that
1101:38 directory by typing in cd google cloud
1101:41 associate cloud engineer 12 storage
1101:43 services and 01 cloud storage
1101:46 management i'm going to list all the
1101:47 files in the directory and as you can
1101:49 see here pink-elephant-bowtie and
1101:52 plaid-bowtie are both here and so i'm
1101:54 going to quickly clear my screen and so
1101:56 now for me to copy these files i'm going
1101:58 to use the command gsutil
1102:01 cp for copy *.jpg which is all the
1102:05 jpegs that are available along with
1102:07 gs:// and
1102:10 the bucket name which is
1102:12 bowtie-inc-2021 i'm going to hit enter and it
1102:15 says that it's successfully copied the
1102:17 files i'm going to simply go up to the
1102:19 top right-hand corner and do another
1102:21 refresh and success the files have been
1102:24 successfully uploaded another perfect
1102:26 example of copying files from another
1102:29 source to your bucket using the gsutil
1102:32 command line tool and so this is the end
1102:34 of part one of this demo it was getting
1102:36 a bit long so i decided to break it up
1102:38 and this would be a great opportunity
1102:40 for you to get up and have a stretch get
1102:43 yourself a coffee or tea and whenever
1102:45 you're ready part two will be starting
1102:47 immediately from the end of part one so
1102:50 you can complete this video and i will
1102:52 see you in part two
1102:53 [Music]
1102:57 this is part two of the managing cloud
1103:00 storage access demo and we'll be
1103:02 starting exactly where we left off in
1103:04 part one so with that being said let's
1103:06 dive in and so now that we've uploaded
1103:09 all these files we next want to make
1103:11 this bucket publicly available now
1103:13 please know that leaving a bucket public
1103:16 is not common practice and should only
1103:19 be used on the rare occasion that you
1103:21 are hosting a static website from your
1103:23 bucket buckets should always be kept private
1103:26 whenever possible especially in a
1103:28 production environment so please note
1103:31 that this is only for the purposes of
1103:33 this demo and so i'm going to quickly
1103:35 show this to you in the console so i'm
1103:37 going to shut down the cloud shell for
1103:39 just a minute and i'm going to go to the
1103:40 top menu and click on permissions and
1103:42 under permissions i'm going to click on
1103:44 add here you can add new members and
1103:47 because you want to make it publicly
1103:48 available you want to use the all users
1103:51 member so you type in all and you should
1103:53 get a pop-up bringing up all users and
1103:56 all authenticated users you want to
1103:58 click on all users and the role that you
1104:00 want to select for this demo is going to
1104:03 be storage object viewer so i'm going to
1104:05 type in storage object viewer and here
1104:08 it should pop up and select that and
1104:10 then you can click on save you're going
1104:12 to be prompted to make sure that this is
1104:14 what you want to do that you want to
1104:15 make this bucket public and so yes we do
1104:18 so you can simply click on allow public
1104:20 access and you will get a banner up here
1104:22 at the top saying that this bucket is
1104:24 public to the internet and this is a great fail
1104:27 safe to have in case you were to ever
1104:29 mistakenly make your bucket public and
1104:31 if i head back over to objects you can
1104:34 see that public access is available to
1104:37 all the files in the bucket and so just
1104:39 to verify this i'm going to copy the
1104:41 public url for pink elephant dash bowtie
1104:44 i'm going to open up a new tab paste in
1104:46 the url hit enter and as you can see i
1104:48 have public access to this picture i'll
1104:51 close this tab and so now that we've
1104:53 done our demo to make the bucket
1104:54 publicly accessible we should go ahead
1104:57 and remove public access so in order to
1105:00 remove public permissions i can simply
1105:02 go up to permissions and click on
1105:05 remove public permissions i'm going to
1105:07 get a prompt to make sure this is
1105:08 exactly what i want to do and yes it is
1105:11 so you can click on remove public
1105:13 permissions a very simple and elegant
1105:15 way to remove public
1105:18 access from your bucket and if you go
1105:20 back to objects you'll see that
1105:22 public access has been removed from all
1105:24 the files
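As a quick aside, the same public grant and its removal can also be done with gsutil instead of the console. A rough sketch, assuming the bucket is named bowtie-inc-2021 as in this demo:

```shell
# sketch: bucket name bowtie-inc-2021 is assumed from this demo
# grant public read on the whole bucket (same effect as the console steps)
gsutil iam ch allUsers:objectViewer gs://bowtie-inc-2021

# remove the public grant again (-d deletes the binding)
gsutil iam ch -d allUsers:objectViewer gs://bowtie-inc-2021
```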
1105:26 and so now that you've experienced how to add public access to
1105:29 a bucket i wanted to get a little bit
1105:31 more granular and so we're going to go
1105:33 ahead and apply acl permissions for one
1105:36 specific object and because i like pink
1105:39 elephants let's go ahead and select pink
1105:41 elephant dash bow tie and so here i can
1105:43 go up to the top menu and click on edit
1105:46 permissions and i'll be prompted with a
1105:48 new window for permissions that are
1105:50 currently available for this object you
1105:52 can click on add entry click on the drop
1105:54 down and select public from the
1105:57 drop-down and it will automatically
1105:59 populate the name which is all users and
1106:02 the access which will be reader i'm
1106:04 going to go ahead and click on save and
1106:06 a public url will be generated and so
1106:08 just to verify this i'm going to click
1106:10 on the public url and success i now have
1106:13 public access to this picture yet once
1106:15 again i'm going to close down this tab
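The per-object ACL just set in the console can be sketched as a single gsutil command; the bucket and object names here are assumed from this demo:

```shell
# sketch: add an AllUsers READ entry to one object's acl
# (bucket and object names are assumptions from this demo)
gsutil acl ch -u AllUsers:R gs://bowtie-inc-2021/pinkelephant-bowtie.jpg
```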
1106:17 and so now that you've configured this
1106:19 object for public access i want to show
1106:22 you how to remove public access using
1106:24 the command line this time so you're
1106:26 going to go up to the top right hand
1106:27 corner and open up cloud shell i'm going
1106:29 to quickly clear my screen and i'm going
1106:31 to paste in the command here which is
1106:33 gsutil acl ch for change minus d which
1106:38 is delete the name of the user which is
1106:40 all users and if this was a regular user
1106:43 you could enter in their email address
1106:45 along with gs colon forward slash
1106:47 forward slash the bucket name which in
1106:50 my case is bowtie inc dash 2021 and the
1106:53 name of the file which is pink elephant
1106:55 bow tie dot jpg i'm going to hit enter
1106:58 and it says that it's been successfully
1106:59 updated and so if i go back up here to
1107:02 the console and i back out and go back
1107:04 into the file i can see here that the
1107:06 public url has been removed
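Written out in full, the removal command described above looks like this (bucket and object names follow this demo's naming):

```shell
# acl ch changes the acl and -d deletes the entry for the named user
# for a regular user you would put their email address instead of AllUsers
gsutil acl ch -d AllUsers gs://bowtie-inc-2021/pinkelephant-bowtie.jpg
```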
1107:09 okay and now there's one last step that we need to do
1107:11 before ending this demo and this is to
1107:14 create a signed url for the file so in
1107:17 order to create a signed url we first
1107:19 need to create a private key and so
1107:21 we're gonna do this using a service
1107:23 account and so i'm gonna head on over to
1107:24 iam so i'm going to go up to the
1107:26 navigation menu i'm going to go to iam
1107:28 and admin and here with the menu on the
1107:30 left i'm going to click on service
1107:32 accounts here up at the top menu you're
1107:34 going to click on create service account
1107:36 and under service account name you can
1107:38 enter in any name
1107:40 but for me i'm going to enter in signed
1107:43 url i'm going to leave everything else
1107:45 as is i'm going to simply click on
1107:47 create i'm going to close down cloud
1107:48 shell because i don't really need it
1107:50 right now then select a role and i'm
1107:52 going to give it the role of storage
1107:54 object viewer
1107:56 i'm going to click on continue and i'm
1107:58 going to leave the rest blank and simply
1108:00 click on done and you should see a
1108:02 service account with the name of signed
1108:04 url and so in order to create a key i'm
1108:06 going to simply go over to actions and
1108:09 i'm going to click on the three dots and
1108:11 i'm going to select create key from the
1108:13 drop down menu and here i'm going to be
1108:15 prompted for what type of key i
1108:17 want to create and you want to make sure
1108:18 that json is selected and simply click
1108:21 on create and this is where your key
1108:23 will be automatically downloaded to your
1108:25 downloads folder i'm going to click on
1108:27 close and so once you have your key
1108:29 downloaded you're able to start the
1108:31 process of generating a signed url
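The same service account and key can also be created from the command line. A sketch, where MY_PROJECT is a placeholder project id and signed-url mirrors the name used in this demo:

```shell
# sketch: MY_PROJECT is a placeholder; signed-url mirrors this demo's name
gcloud iam service-accounts create signed-url

# grant the storage object viewer role used in the demo
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:signed-url@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# create and download a json key for the account
gcloud iam service-accounts keys create privatekey.json \
  --iam-account="signed-url@MY_PROJECT.iam.gserviceaccount.com"
```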
1108:34 and so i'm going to go ahead and use cloud
1108:36 shell in order to generate this signed
1108:38 url so i'm going to go ahead back up to
1108:40 the top and open up cloud shell again
1108:42 and then you can open up the cloud shell
1108:44 editor i'm going to go up to the top menu in the
1108:46 editor and click on file and you're
1108:48 going to select upload files and here's
1108:51 where you upload your key from your
1108:53 downloads folder and i can see my key
1108:55 has been uploaded right here and you can
1108:57 rename your key file to something a
1108:59 little bit more human readable so i'm
1109:01 going to right click i'm going to click
1109:02 on rename and you can rename this file
1109:05 as privatekey.json hit ok and so once
1109:08 you have your key uploaded and renamed
1109:10 you can now go back into the terminal to
1109:13 generate a signed url i'm going to
1109:15 quickly clear the screen i'm going to
1109:16 make sure that the private key is in my
1109:18 working directory by typing in ls and as you can see
1109:21 here privatekey.json
1109:23 is indeed there and so before i
1109:25 generate this signed url i'm going to head back
1109:27 on over to cloud storage i'm going to
1109:29 drill down into my bucket and as you can
1109:31 see here pink elephant dash bow tie does
1109:34 not have a public url and so when the
1109:36 signed url is generated you will get a
1109:39 url that will not be shown here
1109:41 in the console and will be private to
1109:44 only the user that generated it and the
1109:47 users that the url has been distributed
1109:49 to okay and once you have everything in
1109:52 place you can then go ahead and paste in
1109:54 the command gsutil signurl minus d the
1109:58 allotted time which is 10 minutes the
1110:01 private key which is privatekey dot
1110:03 json along with gs colon forward slash
1110:06 forward slash your bucket name which in
1110:08 my case is bowtie inc dash 2021 along
1110:11 with the file name of
1110:14 pinkelephant-bowtie.jpg i'm going to hit
1110:16 enter and so i purposely left this error
1110:18 here so you can see that when you
1110:20 generate a signed url you need
1110:23 pyopenssl in order to generate it and so the
1110:26 caveat here is that because python 2 is
1110:28 being deprecated the command pip install
1110:31 pyopenssl will not work pyopenssl
1110:35 needs to be installed with python 3 and
1110:37 so to install it you're going to run the
1110:39 command pip3 install pyopenssl and hit
1110:44 enter and so once it's finished
1110:45 installing you can now generate your
1110:48 signed url i'm going to quickly clear my
1110:50 screen paste in the command again hit
1110:52 enter and success you've now generated a
1110:56 signed url for the object pink elephant
1110:59 bowtie.jpg and because this is a signed
1111:02 url you will see under public url there
1111:05 is no url available there even though it
1111:08 is publicly accessible and so just to
1111:10 verify this i'm going to highlight the
1111:12 link here i'm going to copy it i'm going
1111:14 to open up a new tab i'm going to paste
1111:16 in this url hit enter and success this
1111:20 signed url is working and anyone who has
1111:23 access to it has viewing permissions of
1111:25 the file for 10 minutes and so again
1111:28 this is a great method for giving
1111:30 someone access to an object who doesn't
1111:33 have an account and it will give them a
1111:35 limited time to view or edit this object
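Put together, the signed url steps above can be sketched like this, assuming the demo's file names and a 10 minute duration:

```shell
# pyopenssl must be installed for python 3 before signurl will run
pip3 install pyopenssl

# -d sets how long the url stays valid; the key and object names are from this demo
gsutil signurl -d 10m privatekey.json gs://bowtie-inc-2021/pinkelephant-bowtie.jpg
```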
1111:38 and so i wanted to congratulate you on
1111:40 making it through this demo and hope
1111:42 that it has been extremely useful in
1111:45 advancing your knowledge on managing
1111:47 buckets files and access to the buckets
1111:50 and files in cloud storage and so just
1111:52 as a recap you created a cloud storage
1111:55 bucket you then created an instance and
1111:57 copied a file from that instance to the
1112:00 bucket you then cloned your repo to cloud
1112:02 shell and copied two jpeg files to your
1112:06 cloud storage bucket you then assigned
1112:08 and then removed public access to your
1112:11 bucket and then applied an acl to a file
1112:14 in the bucket making it public as well
1112:17 as removing public access right after
1112:19 you then created a service account
1112:21 private key and generated a signed url
1112:25 to an object in that bucket
1112:27 congratulations again on a job well done
1112:29 and so that's pretty much all i wanted
1112:31 to cover in this demo on managing cloud
1112:35 storage access so you can now mark this
1112:37 as complete and let's move on to the
1112:39 next one
1112:40 [Music]
1112:44 welcome back in this demo we're going to
1112:46 be getting into the weeds with object
1112:49 versioning and life cycle management
1112:51 using both the console and the command
1112:53 line we're going to go through how
1112:55 versioning works and what happens when
1112:57 objects get promoted along with creating
1113:00 configuring and editing these life
1113:03 cycle policies and so with that being
1113:05 said let's dive in so we're going to be
1113:08 starting off from where we left off in
1113:10 the last demo with all the resources
1113:12 intact that we created before and we're
1113:15 going to go ahead and dive right into
1113:17 versioning
1113:19 and so the first thing that you want to do is turn on versioning for
1113:21 your current bucket so in my case for
1113:24 bowtie inc dash 2021 and we're going to
1113:27 do this through the command line so i'm
1113:29 going to first go up to the top right
1113:30 hand corner and open up cloud shell and
1113:33 so you first want to see if versioning
1113:36 is turned on for your bucket and you can
1113:38 do this by using the command gsutil
1113:41 versioning get along with gs colon
1113:44 forward slash forward slash with your
1113:46 bucket name and hit enter and you may be
1113:49 prompted with a message asking you to
1113:51 authorize this api call you definitely
1113:53 want to authorize it and as expected
1113:55 versioning is not turned on for this
1113:57 bucket hence the response of suspended and
1114:00 so in order to turn versioning on we're
1114:02 going to use a similar command gsutil
1114:05 versioning and instead of get we're
1114:07 going to use set on gs colon forward
1114:10 slash forward slash and the bucket name
1114:12 and hit enter and versioning has been
1114:15 enabled and so if i run the command
1114:17 gsutil versioning get again i'll get a
1114:20 response of enabled
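The versioning commands just run can be sketched as follows, assuming the demo's bucket name:

```shell
# check the current state; returns Enabled or Suspended
gsutil versioning get gs://bowtie-inc-2021

# turn versioning on for the bucket, then confirm
gsutil versioning set on gs://bowtie-inc-2021
gsutil versioning get gs://bowtie-inc-2021
```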
1114:23 okay great now that we have versioning enabled we can go
1114:25 ahead with the next step which is to
1114:27 delete one of the files in the bucket
1114:29 and so you can go ahead and select plaid
1114:31 bowtie.jpg
1114:33 and simply click on delete you can
1114:35 confirm the deletion and the file has
1114:37 been deleted now technically the file
1114:39 has not been deleted it has merely been
1114:42 converted to a non-current version and
1114:44 so in order to check the current and
1114:47 non-current versions i'm going to use
1114:49 the command
1114:50 gsutil
1114:52 ls minus a along with the bucket name of
1114:55 gs colon forward slash forward slash
1114:58 bowtie inc dash 2021 i'm gonna hit
1115:01 enter
1115:02 and as you can see here plaid bow tie
1115:04 still shows up the ls minus a flag is
1115:08 similar to the linux option that shows all files
1115:10 and here it lists every version of each object and so what's
1115:13 different about these files is right
1115:15 after the dot txt or dot jpg you will
1115:18 see a hash sign and a number and this is the
1115:20 generation number and this determines
1115:22 the version of each object and so what i
1115:25 want to do now is bring back the
1115:27 non-current version and make it current
1115:30 so i'm going to promote the non-current
1115:32 version of plaid bowtie.jpg
1115:34 to the current version and so in order
1115:36 to do this i'm going to run the command
1115:38 gsutil mv for move along with the
1115:41 bucket of gs colon forward slash forward
1115:44 slash bowtie inc hyphen 2021 and the
1115:48 name of the file of plaid bow tie dot
1115:51 jpg along with the generation number
1115:53 and i'm going to copy it from the
1115:55 versions currently listed i'm going to paste it
1115:57 in and so now we need to put in the
1115:59 target which is going to be the same
1116:01 without the generation number and paste
1116:03 that in then hit enter
1116:05 okay operation completed and so if i go
1116:08 up to the top right hand corner and
1116:10 click on refresh i can see that now
1116:13 there is a current version for plaid bow
1116:15 tie now just know that using the move
1116:18 command actually deletes the non-current
1116:20 version and gives the new current
1116:22 version a new generation number and so
1116:25 in order to verify this i'm going to
1116:27 quickly clear my screen and i'm going to
1116:29 run the command gsutil ls minus a along
1116:33 with the bucket name of bowtie inc dash
1116:35 2021
1116:38 and the generation number here is
1116:40 different than the last one now if i
1116:43 use the cp or copy command it would
1116:46 leave the non-current version and create
1116:48 a new version on top of that leaving two
1116:51 objects with two different generation
1116:53 numbers
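A sketch of the promotion just walked through, where 1234567890 stands in for the real generation number you copy from the gsutil ls -a output:

```shell
# list every version; non-current ones end in #<generation-number>
gsutil ls -a gs://bowtie-inc-2021

# promote a non-current version by moving it over the live object name
# (1234567890 is a placeholder generation number, not a real value)
gsutil mv gs://bowtie-inc-2021/plaid-bowtie.jpg#1234567890 \
  gs://bowtie-inc-2021/plaid-bowtie.jpg
```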
1116:55 okay so with that step being done you now want to log into your linux
1116:58 instance and we're going to be doing
1117:00 some versioning for file of bowties.txt
1117:03 so i'm going to go ahead and clear my
1117:05 screen again and i'm going to run the
1117:06 command gcloud compute ssh bowtie
1117:10 instance which is the name of my
1117:12 instance along with the zone flag dash
1117:14 dash zone and the zone us-east1-b i'm
1117:18 going to hit enter
1117:19 and you should be prompted for the
1117:21 passphrase of your key
1117:23 and i'm in and so here you want to edit
1117:25 file of bowties.txt to a different
1117:28 version so you can go ahead and run the
1117:30 command sudo nano file of bowties dot
1117:33 txt and hit enter and you should have
1117:36 learning to tie a bow tie takes time and
1117:38 what you want to do is append version 2
1117:42 right at the end ctrl o to save enter to
1117:45 verify the file name to write to and
1117:47 ctrl x to exit and so now we want to
1117:49 copy file of bowties dot txt to your
1117:52 current bucket mine being bowtie inc
1117:55 dash 2021 so i'm going to go ahead and
1117:57 run the command gsutil cp the name of
1118:00 the file which is file of bowties dot
1118:03 txt and the target which is going to be
1118:06 bowtie inc
1118:08 dash 2021 and hit enter
1118:10 and it's copied the file to the bucket
1118:15 and so if i hit refresh in the console you can see that there is only one
1118:17 you can see that there is only one version of file of bowties.text and so
1118:20 version of file of bowties.text and so to check on all the versions that i have
1118:22 to check on all the versions that i have i'm going to go back to my cloud shell
1118:24 i'm going to go back to my cloud shell i'm going to quickly clear my screen and
1118:26 i'm going to quickly clear my screen and i'm going to run the command gsutil ls
1118:30 i'm going to run the command gsutil ls minus a along with the target bucket
1118:33 minus a along with the target bucket hit enter and as you can see here there
1118:36 hit enter and as you can see here there are now two versions of file of
1118:38 are now two versions of file of bowties.text and if i quickly open this
1118:41 bowties.text and if i quickly open this up
1118:42 up i'm gonna click on the url you can see
1118:44 i'm gonna click on the url you can see here that this is version two and so
1118:46 here that this is version two and so this should be the latest generation of
1118:49 this should be the latest generation of file of bowties.txt that you edited over
1118:52 file of bowties.txt that you edited over in your instance i'm going to close this
1118:54 in your instance i'm going to close this tab now and so what i want to do now is
1118:56 tab now and so what i want to do now is i want to promote the non-current
1118:58 i want to promote the non-current version to be the current version in
1119:01 version to be the current version in essence making version 2 the non-current
1119:03 essence making version 2 the non-current version and so i'm going to run the
1119:05 version and so i'm going to run the command gsutil cp and i'm going to take
1119:08 command gsutil cp and i'm going to take the older generation number and i'm
1119:10 the older generation number and i'm going to copy it and paste it here and
1119:13 going to copy it and paste it here and the target is going to be the same
1119:15 the target is going to be the same without the generation number and paste
1119:17 without the generation number and paste it and hit enter okay and the file has
1119:20 it and hit enter okay and the file has been copied over so i'm going to do a
1119:22 been copied over so i'm going to do a quick refresh in the console i'm going
1119:24 quick refresh in the console i'm going to drill down into file a bowties.txt
1119:27 to drill down into file a bowties.txt and when i click on the url link it
1119:29 and when i click on the url link it should come up as version 1. and so this
1119:31 should come up as version 1. and so this is a way to promote non-current versions
1119:34 is a way to promote non-current versions to current versions using the gsutil
1119:37 to current versions using the gsutil copy command or the gsutil move command
1119:40 copy command or the gsutil move command i'm going to close on this tab now i'm
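For reference, the versioning steps just walked through can be sketched as a few gsutil commands. This is a minimal sketch; the bucket name, object name, and generation number below are hypothetical placeholders for the ones in your own project, and the commands are guarded so they only run where the Cloud SDK is installed.

```shell
#!/usr/bin/env sh
# Hypothetical names; replace with your own bucket, object, and generation.
BUCKET="gs://bowtie-inc-2021"
OBJECT="file-of-bowties.txt"
GENERATION="1612345678901234"

# Only run the gsutil commands if the Cloud SDK is actually installed.
if command -v gsutil >/dev/null 2>&1; then
  # Copy the edited local file into the bucket (creates a new generation):
  gsutil cp "$OBJECT" "$BUCKET"
  # List every generation of every object; -a includes noncurrent versions:
  gsutil ls -a "$BUCKET"
  # Promote an older (noncurrent) generation by copying it over the live object:
  gsutil cp "$BUCKET/$OBJECT#$GENERATION" "$BUCKET/$OBJECT"
fi
```

As the demo notes, gsutil mv would work here as well; cp simply leaves the generation history in place.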
1119:42 i'm going to close on this tab now i'm going to quickly clear my screen and if
1119:44 going to quickly clear my screen and if i run the command gsutil ls minus a
1119:47 i run the command gsutil ls minus a again you can see that i have even more
1119:50 again you can see that i have even more files and so these files and versions of
1119:53 files and so these files and versions of files will eventually accumulate and
1119:55 files will eventually accumulate and continuously take up space along with
1119:58 continuously take up space along with costing you money and so in order to
1120:00 costing you money and so in order to mitigate this a good idea would be to
1120:02 mitigate this a good idea would be to put life cycle policies into place and
1120:05 put life cycle policies into place and so you're gonna go ahead now and add a
1120:07 so you're gonna go ahead now and add a life cycle policy to the bucket and this
1120:10 life cycle policy to the bucket and this will help manage the ever-growing
1120:12 will help manage the ever-growing accumulation of files as more files are
1120:14 accumulation of files as more files are being added to the bucket and more
1120:17 being added to the bucket and more versions are being produced something
1120:19 versions are being produced something that is very common that is seen in many
1120:21 that is very common that is seen in many different environments and so we're
1120:23 different environments and so we're going to go ahead and get this done in
1120:24 going to go ahead and get this done in the console so i'm going to close down
1120:26 the console so i'm going to close down cloud shell and i'm going to go back to
1120:28 cloud shell and i'm going to go back to the main page of the bucket and under
1120:30 the main page of the bucket and under the menu you can click on lifecycle and
1120:33 the menu you can click on lifecycle and here you'll be able to add the lifecycle
1120:35 here you'll be able to add the lifecycle rules and so here you're going to click
1120:37 rules and so here you're going to click on add a rule and the first thing that
1120:39 on add a rule and the first thing that you're prompted to do is to select an
1120:41 you're prompted to do is to select an action and so the first rule you're
1120:43 action and so the first rule you're going to apply is to delete non-current
1120:46 going to apply is to delete non-current objects after seven days so you're gonna
1120:49 objects after seven days so you're gonna click on delete object you're gonna be
1120:51 click on delete object you're gonna be prompted with a warning gonna hit
1120:52 prompted with a warning gonna hit continue and you'll be prompted to
1120:54 continue and you'll be prompted to select object conditions and as
1120:57 select object conditions and as discussed in an earlier lesson there are
1120:59 discussed in an earlier lesson there are many conditions to choose from and
1121:01 many conditions to choose from and multiple conditions can be selected so
1121:04 multiple conditions can be selected so here you're going to select days since
1121:06 here you're going to select days since becoming non-current and in the empty
1121:08 becoming non-current and in the empty field you're going to type in 7. you can
1121:10 field you're going to type in 7. you can click on continue and before you click
1121:12 click on continue and before you click on create i wanted just to note that any
1121:15 on create i wanted just to note that any life cycle rule can take up to 24 hours
1121:18 life cycle rule can take up to 24 hours to take effect so i'm going to click on
1121:20 to take effect so i'm going to click on create and here you can see the rule has
1121:23 create and here you can see the rule has been applied to delete objects after
1121:26 been applied to delete objects after seven days when object becomes
1121:28 seven days when object becomes non-current and so now that we added a
1121:30 non-current and so now that we added a delete rule we're going to go ahead and
1121:32 delete rule we're going to go ahead and add another rule to move current files
1121:35 add another rule to move current files that are not being used to a storage
1121:37 that are not being used to a storage class that can save the company money
1121:40 class that can save the company money and so let's go ahead and create another
1121:43 and so let's go ahead and create another lifecycle rule but this time to use this
1121:46 lifecycle rule but this time to use this set storage class action and so the
1121:48 set storage class action and so the files that accumulate that have been
1121:50 files that accumulate that have been there for over 90 days you want to set
1121:52 there for over 90 days you want to set the storage class the cold line so this
1121:54 the storage class the cold line so this way it'll save you some money and so
1121:56 way it'll save you some money and so you're going to click on add a rule
1121:58 you're going to click on add a rule you're going to select set storage class
1121:59 you're going to select set storage class to cold line and as a note here it says
1122:02 to cold line and as a note here it says archive objects will not be changed to
1122:04 archive objects will not be changed to cold line so you can move forward with
1122:06 cold line so you can move forward with the storage class but you can't move
1122:08 the storage class but you can't move backwards in other words i can't move
1122:10 backwards in other words i can't move from cold line to near line or archive
1122:13 from cold line to near line or archive the cold line i can only move from near
1122:15 the cold line i can only move from near line to cold line or cold line to
1122:17 line to cold line or cold line to archive so i'm going to go ahead and
1122:19 archive so i'm going to go ahead and click continue for the object conditions
1122:21 click continue for the object conditions you want to select age and in the field
1122:24 you want to select age and in the field you want to enter 90 days and here you
1122:26 you want to enter 90 days and here you want to hit continue and finally click
1122:29 want to hit continue and finally click on create and so in order to actually
1122:31 on create and so in order to actually see these rules take effect like i said
1122:33 see these rules take effect like i said before it'll take up to 24 hours and so
1122:36 Before we end this demo, I wanted to show you another way to edit a lifecycle policy: by editing the JSON file itself. So you can head on up to the top right and open up Cloud Shell. I'm going to bring this down a little bit, and you're going to run the command gsutil lifecycle get along with the bucket name, and output it to a file called lifecycle.json, and hit enter. And no errors, so that's a good sign. Next I'm going to run the command ls, and as you can see here, the lifecycle.json file has been written. And so I'd like to edit this file to change the set-to-Coldline rule from 90 days to 120 days, as Tony Bowtie's manager thinks that they should keep the files a little bit longer before sending them to Coldline.
1123:23 And so in order to edit this file, you're going to run the command sudo nano along with the name of the file, lifecycle.json. You hit enter, and it's going to be a long string, but if you use your arrow keys and move down and then back, you'll see the set-to-Coldline rule with the age of 90 days. So I'm going to move over here, edit this to 120, and hit Ctrl+O to save, Enter to verify the file name to write, and Ctrl+X to exit. And just know that you can also edit this file in the Cloud Shell editor. And so in order for me to put this lifecycle policy in place, I need to set it as the new lifecycle policy, and to do that I'm going to run the command gsutil lifecycle set along with the name of the JSON file, which is lifecycle.json, along with the bucket name, and hit enter. And it looks like it set it, so I'm going to do a quick refresh in the console just to verify, and success, the rule has been changed from 90 days to 120 days.
1124:28 from 90 days to 120 days congratulations on completing this demo now a lot of
1124:30 on completing this demo now a lot of what you've experienced here is more of
1124:32 what you've experienced here is more of what you will see in the architect exam
1124:35 what you will see in the architect exam as the cloud engineer exam focuses on
1124:38 as the cloud engineer exam focuses on more of the high level theory of these
1124:40 more of the high level theory of these cloud storage features
1124:42 cloud storage features but i wanted to show you some real life
1124:44 but i wanted to show you some real life scenarios and how to apply the theory
1124:47 scenarios and how to apply the theory that was shown in previous lessons into
1124:49 that was shown in previous lessons into practice and so just as a recap you set
1124:52 practice and so just as a recap you set versioning on the current bucket that
1124:54 versioning on the current bucket that you are working in and you deleted a
1124:56 you are working in and you deleted a file and made it non-current you then
1124:58 file and made it non-current you then brought it back to be current again you
1125:00 brought it back to be current again you then edited a file on your instance and
1125:03 then edited a file on your instance and copied it over to replace the current
1125:05 copied it over to replace the current version of that file in your bucket you
1125:08 version of that file in your bucket you then promoted the non-current version as
1125:10 then promoted the non-current version as the new one and moved into lifecycle
1125:12 the new one and moved into lifecycle rules where you created two separate
1125:14 rules where you created two separate rules you created a rule to delete files
1125:18 rules you created a rule to delete files along with the rule to set storage class
1125:20 along with the rule to set storage class after a certain age of the file and the
1125:22 after a certain age of the file and the last step you took was to copy the
1125:25 last step you took was to copy the lifecycle policy to your cloud shell and
1125:27 lifecycle policy to your cloud shell and edited that policy and set it to a newer
1125:30 edited that policy and set it to a newer edited version and so that pretty much
1125:32 edited version and so that pretty much covers this demo on object versioning
1125:36 covers this demo on object versioning and lifecycle management congratulations
1125:38 and lifecycle management congratulations again on a job well done and so before
1125:40 again on a job well done and so before you go
1125:41 you go make sure you delete all the resources
1125:44 make sure you delete all the resources you've created for the past couple of
1125:46 you've created for the past couple of demos as you want to make sure that
1125:48 demos as you want to make sure that you're not accumulating any unnecessary
1125:51 you're not accumulating any unnecessary costs and so i'm going to do a quick run
1125:53 costs and so i'm going to do a quick run through on deleting these resources and
1125:55 through on deleting these resources and so i'm going to quickly close down cloud
1125:56 so i'm going to quickly close down cloud shell and i'm going to head on over to
1125:58 shell and i'm going to head on over to the navigation menu go to compute engine
1126:01 the navigation menu go to compute engine i'm going to delete my instance and i'm
1126:03 i'm going to delete my instance and i'm going to head back on over to cloud
1126:05 going to head back on over to cloud storage and delete the bucket there i'm
1126:07 storage and delete the bucket there i'm going to confirm the deletion i'm going
1126:09 going to confirm the deletion i'm going to click on delete and so that covers
1126:11 to click on delete and so that covers the deletion of all the resources so you
1126:14 the deletion of all the resources so you can now mark this as complete and i'll
1126:16 can now mark this as complete and i'll see you in the next one
1126:24 Welcome back. In this lesson I'm going to be covering Cloud SQL, one of Google Cloud's many database offerings, which offers reliable, secure, and scalable SQL databases without having to worry about the complexity of setting it all up. Now, there's quite a bit to cover here, so with that being said, let's dive in.
1126:42 Cloud SQL is a fully managed, cloud-native relational database service that offers MySQL, PostgreSQL, and SQL Server engines with built-in support for replication. Cloud SQL is a database-as-a-service offering from Google, where Google takes care of all the underlying infrastructure for the database, along with the operating system and the database software. Now, because there are a few different types of database offerings from Google, Cloud SQL was designed for low-latency, transactional, relational database workloads. It's also available in three different flavors of databases, MySQL, PostgreSQL, and the newest addition, SQL Server, and all of them support standard APIs for connectivity.
1127:30 of them support standard apis for connectivity cloud sql offers
1127:33 connectivity cloud sql offers replication using different types of
1127:35 replication using different types of read replicas which i will get into a
1127:38 read replicas which i will get into a little bit later and offers capabilities
1127:40 little bit later and offers capabilities for high availability for continuous
1127:43 for high availability for continuous access to your data cloud sql also
1127:46 access to your data cloud sql also offers backups in two different flavors
1127:49 offers backups in two different flavors and allows you to restore your database
1127:52 and allows you to restore your database from these backups with the same amount
1127:54 from these backups with the same amount of ease now along with your backups
1127:57 of ease now along with your backups comes point in time recovery for when
1127:59 comes point in time recovery for when you want to restore a database from a
1128:02 you want to restore a database from a specific point in time cloud sql storage
1128:05 specific point in time cloud sql storage relies on connected persistent disks in
1128:07 relies on connected persistent disks in the same zone that are available in
1128:10 the same zone that are available in regular hard disk drives or ssds that
1128:14 regular hard disk drives or ssds that currently give you up to 30 terabytes of
1128:16 currently give you up to 30 terabytes of storage capacity and because the same
1128:19 storage capacity and because the same technologies lie in the background for
1128:21 technologies lie in the background for persistent disks
1128:23 persistent disks automatic storage increase is available
1128:26 automatic storage increase is available to resize your disks for more storage
1128:29 to resize your disks for more storage cloud sql also offers encryption at rest
1128:32 cloud sql also offers encryption at rest and in transit for securing data
1128:34 and in transit for securing data entering and leaving your instance and
1128:37 entering and leaving your instance and when it comes to costs you are billed
1128:39 when it comes to costs you are billed for cpu memory and storage of the
1128:42 for cpu memory and storage of the instance along with egress traffic as
1128:45 instance along with egress traffic as well please be aware that there is a
1128:47 well please be aware that there is a licensing cost when it comes to windows
1128:50 licensing cost when it comes to windows instances now cloud sql instances are
1128:52 Now, Cloud SQL instances are not available in the same instance types as Compute Engine; they are only available in the shared-core, standard, and high-memory CPU types, and when you see them they will be clearly marked with "db-" at the beginning of the CPU type. You cannot customize these instances like you can with Compute Engine, and so memory will be predefined when choosing the instance type. Now, storage types for Cloud SQL are only available as hard disk drives and SSDs. You are able to size them according to your needs and, as stated earlier, they can be sized up to 30 terabytes. And when entering the danger zone of having a full disk, you do have the option of enabling automatic storage increase, so you never have to worry about filling up your disk before that 30-terabyte limit.
1129:43 your disk before that 30 terabyte limit now when it comes to connecting to your
1129:45 now when it comes to connecting to your cloud sql instance you can configure it
1129:48 cloud sql instance you can configure it with a public or private ip but know
1129:52 with a public or private ip but know that after configuring the instance with
1129:54 that after configuring the instance with a private ip it cannot be changed
1129:56 a private ip it cannot be changed although connecting with the private ip
1129:59 although connecting with the private ip is preferred when connecting from a
1130:01 is preferred when connecting from a client on a resource with access to a
1130:04 client on a resource with access to a vpc as well it is always best practice
1130:08 vpc as well it is always best practice to use private i p addresses for any
1130:11 to use private i p addresses for any database in your environment whenever
1130:13 database in your environment whenever you can now moving on to authentication
1130:15 you can now moving on to authentication options the recommended method to
1130:18 options the recommended method to connecting to your cloud sql instance is
1130:21 connecting to your cloud sql instance is using cloud sql proxy the cloud sql
1130:23 using cloud sql proxy the cloud sql proxy allows you to authorize and secure
1130:26 proxy allows you to authorize and secure your connections using iam permissions
1130:29 your connections using iam permissions unless using the cloud sql proxy
1130:32 unless using the cloud sql proxy connections to an instance's public ip
1130:34 connections to an instance's public ip address are only allowed if the
1130:36 address are only allowed if the connection comes from an authorized
1130:39 connection comes from an authorized network authorized networks are ip
1130:41 network authorized networks are ip addresses or ranges that the user has
1130:44 addresses or ranges that the user has specified as having permission to
1130:46 specified as having permission to connect once you are authorized you can
1130:48 connect once you are authorized you can connect to your instance through
1130:50 connect to your instance through external clients or applications and
1130:53 external clients or applications and even other google cloud services like
1130:56 even other google cloud services like compute engine gke app engine cloud
1131:00 compute engine gke app engine cloud functions and cloud run now i wanted to
1131:03 Now, I wanted to focus for a moment here on the recommended method for connecting to your instance, which is the Cloud SQL proxy. As mentioned before, the Cloud SQL proxy allows you to authorize and secure your connections using IAM permissions. The proxy validates connections using credentials for a user or service account, wrapping the connection in an SSL/TLS layer that is authorized for a Cloud SQL instance. Using the Cloud SQL proxy is the recommended method for authenticating connections to a Cloud SQL instance, as it is the most secure.
1131:41 The client proxy is an open-source library distributed as an executable binary, and it is available for Linux, macOS, and Windows. The client proxy acts as an intermediary server that listens for incoming connections, wraps them in SSL or TLS, and then passes them to a Cloud SQL instance. The Cloud SQL proxy handles authentication with Cloud SQL, providing secure access to Cloud SQL instances without the need to manage allowed IP addresses or configure SSL connections. As well, this is also the best solution for applications that hold ephemeral IPs. And while the proxy can listen on any port, it only creates outgoing connections to your Cloud SQL instance on port 3307.
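A typical invocation of the Cloud SQL proxy binary can be sketched as below. The connection name (in PROJECT:REGION:INSTANCE form) is hypothetical, and the block is guarded so it only runs where the proxy binary is installed.

```shell
#!/usr/bin/env sh
# Hypothetical connection name in PROJECT:REGION:INSTANCE form.
CONNECTION_NAME="bowtie-project:us-east1:bowtie-mysql"

if command -v cloud_sql_proxy >/dev/null 2>&1; then
  # Listen locally on 3306 and forward to the instance (outbound on 3307):
  cloud_sql_proxy -instances="$CONNECTION_NAME"=tcp:3306 &
  # A standard client then connects to the local listener, for example:
  # mysql --host=127.0.0.1 --port=3306 --user=root --password
fi
```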
1132:31 Now, when it comes to database replication, it's more than just copying your data from one database to another. The primary reason for using replication is to scale the use of data in a database without degrading performance. Other reasons include migrating data between regions and platforms, and from an on-premises database to Cloud SQL. You could also promote a replica if the original instance becomes corrupted, and I'll be getting into promoting replicas a little bit later.
1133:03 Now, when it comes to a Cloud SQL instance, the instance that is replicated is called the primary instance, and the copies are called read replicas. The primary instance and read replicas all reside in Cloud SQL. Read replicas are read-only, and you cannot write to them. The read replica processes queries, read requests, and analytics traffic, thus reducing the load on the primary instance. Read replicas can have more CPUs and memory than the primary instance, but they cannot have any less, and you can have up to 10 read replicas per primary instance. You can connect to a replica directly using its connection name and IP address.
1133:46 name and ip address cloud sql supports the following types of replicas
1133:49 the following types of replicas read replicas cross region read replicas
1133:52 read replicas cross region read replicas external read replicas and cloud sql
1133:55 external read replicas and cloud sql replicas when replicating from an
1133:58 replicas when replicating from an external server now when it comes to
1134:00 external server now when it comes to read replicas you would use it to
1134:02 read replicas you would use it to offload work from a cloud sql instance
1134:05 offload work from a cloud sql instance the read replica is an exact copy of the
1134:08 the read replica is an exact copy of the primary instance and data and other
1134:10 primary instance and data and other changes on the primary instance are
1134:13 changes on the primary instance are updated in almost real time on the read
1134:16 updated in almost real time on the read replica a read replica is created in a
1134:19 replica a read replica is created in a different region from the primary
1134:21 different region from the primary instance and you can create a cross
1134:23 instance and you can create a cross region read replica the same way as you
1134:25 region read replica the same way as you would create an in-region replica this
1134:28 would create an in-region replica this improves read performance by making
1134:30 improves read performance by making replicas available closer to your
1134:32 replicas available closer to your application's region it also provides
1134:34 application's region it also provides additional disaster recovery capability
1134:37 additional disaster recovery capability to guard you against a regional failure
1134:39 to guard you against a regional failure it also lets you migrate data from one
1134:42 it also lets you migrate data from one region to another with minimum downtime
1134:45 region to another with minimum downtime and lastly when it comes to external
1134:47 and lastly when it comes to external read replicas these are external mysql
1134:50 read replicas these are external mysql instances that replicate from a cloud
1134:53 instances that replicate from a cloud sql primary instance
1134:55 sql primary instance for example a mysql instance running on
1134:58 for example a mysql instance running on compute engine is considered an external
1135:01 compute engine is considered an external instance and so just as a quick note
1135:03 instance and so just as a quick note here before you can create a read
1135:05 here before you can create a read replica of a primary cloud sql instance
1135:08 replica of a primary cloud sql instance the instance must meet the following
1135:10 the instance must meet the following requirements automated backups must be
1135:13 requirements automated backups must be enabled binary logging must be enabled
1135:16 enabled binary logging must be enabled which requires point-in-time recovery to
1135:19 which requires point-in-time recovery to be enabled and at least one backup must
1135:22 be enabled and at least one backup must have been created after binary logging
1135:24 have been created after binary logging was enabled and so when you have read
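the replica setup above can be sketched with the gcloud cli — the instance names and regions below are hypothetical, and this assumes the primary already meets the backup and binary logging requirements just listed:

```shell
# Hypothetical names; assumes "my-primary" already has automated
# backups and binary logging enabled as described above.

# Create an in-region read replica of the primary:
gcloud sql instances create my-replica \
    --master-instance-name=my-primary

# A cross-region replica is created the same way, just by choosing
# a region different from the primary's:
gcloud sql instances create my-replica-eu \
    --master-instance-name=my-primary \
    --region=europe-west1
```

you would then connect to the replica directly using its own connection name or ip address, as noted above.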
1135:27 was enabled and so when you have read replicas in your environment it gives
1135:29 replicas in your environment it gives you the flexibility of promoting those
1135:31 you the flexibility of promoting those replicas if needed now promoting
1135:33 replicas if needed now promoting replicas is a feature that can be used
1135:36 replicas is a feature that can be used for when your primary database becomes
1135:38 for when your primary database becomes corrupted or unreachable now you can
1135:41 corrupted or unreachable now you can promote an in-region read replica or
1135:44 promote an in-region read replica or cross-region read replica depending on
1135:46 cross-region read replica depending on where you have your read replicas hosted
1135:49 where you have your read replicas hosted so when you promote a read replica the
1135:52 so when you promote a read replica the instance stops replication and converts
1135:55 instance stops replication and converts the instance to a standalone cloud sql
1135:57 the instance to a standalone cloud sql primary instance with read and write
1136:00 primary instance with read and write capabilities please note that this
1136:02 capabilities please note that this cannot be undone and also note that when
1136:05 cannot be undone and also note that when your new primary instance has started
1136:08 your new primary instance has started your other read replicas are not
1136:10 your other read replicas are not transferred over from the old primary
1136:12 transferred over from the old primary instance you will need to reconnect your
1136:15 instance you will need to reconnect your other read replicas to your new primary
1136:18 other read replicas to your new primary instance and as you can see here
1136:20 instance and as you can see here promoting a replica is done manually and
1136:23 promoting a replica is done manually and intentionally whereas high availability
1136:26 intentionally whereas high availability has a standby instance that
1136:28 has a standby instance that automatically becomes the primary in
1136:31 automatically becomes the primary in case of a failure or zonal outage now
1136:34 case of a failure or zonal outage now when it comes to promoting cross-region
1136:35 when it comes to promoting cross-region replicas there are two common scenarios
1136:38 replicas there are two common scenarios for promotion
1136:40 for promotion regional migration which performs a
1136:42 regional migration which performs a planned migration of a database to a
1136:44 planned migration of a database to a different region and disaster recovery
1136:47 different region and disaster recovery and this is where you would fail over a
1136:49 and this is where you would fail over a database to another region in the event
1136:52 database to another region in the event that the primary instance's region
1136:54 that the primary instance's region becomes unavailable both use cases
1136:57 becomes unavailable both use cases involve setting up cross-region
1136:58 involve setting up cross-region replication and then promoting the
1137:01 replication and then promoting the replica the main difference between them
1137:03 replica the main difference between them is whether the promotion of the replica
1137:06 is whether the promotion of the replica is planned or unplanned now if you're
1137:08 is planned or unplanned now if you're promoting your replicas for a regional
1137:11 promoting your replicas for a regional migration you can use a cross region
1137:13 migration you can use a cross region replica to migrate your database to
1137:16 replica to migrate your database to another region with minimal downtime and
1137:18 another region with minimal downtime and this is so you can create a replica in
1137:21 this is so you can create a replica in another region wait until the
1137:23 another region wait until the replication catches up promote it and
1137:25 replication catches up promote it and then direct your applications to the
1137:27 then direct your applications to the newly promoted instance the steps
1137:29 newly promoted instance the steps involved in promotion are the same as
1137:31 involved in promotion are the same as for promoting an in-region replica and
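the promotion step itself is a single gcloud command, sketched here with a hypothetical replica name — remember from above that this stops replication and cannot be undone:

```shell
# Promote a read replica to a standalone Cloud SQL primary instance
# with read and write capabilities (irreversible):
gcloud sql instances promote-replica my-replica
```

after promoting, any remaining read replicas would need to be reconnected to the new primary, as noted earlier.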
1137:34 for promoting an in-region replica and so when you're promoting replicas for
1137:36 so when you're promoting replicas for disaster recovery cross-region replicas
1137:39 disaster recovery cross-region replicas can be used as part of this disaster
1137:41 can be used as part of this disaster recovery procedure you can promote a
1137:43 recovery procedure you can promote a cross-region replica to fail over to
1137:46 cross-region replica to fail over to another region should the primary
1137:48 another region should the primary instance's region become unavailable for
1137:51 instance's region become unavailable for an extended period of time so in this
1137:54 an extended period of time so in this example the entire us-east1 region has
1137:57 example the entire us-east1 region has gone down yet the read replica in the
1137:59 gone down yet the read replica in the europe region is still up and running
1138:01 europe region is still up and running and although there may be a little bit
1138:03 and although there may be a little bit more latency for your customers in north
1138:05 more latency for your customers in north america i'm able to promote this read
1138:08 america i'm able to promote this read replica connect it to the needed
1138:10 replica connect it to the needed resources and get back to business now
1138:13 resources and get back to business now moving along to high availability cloud
1138:16 moving along to high availability cloud sql offers ha capabilities out of the
1138:19 sql offers ha capabilities out of the box the ha configuration sometimes
1138:22 box the ha configuration sometimes called a cluster provides data
1138:24 called a cluster provides data redundancy so a cloud sql instance
1138:27 redundancy so a cloud sql instance configured for ha is also called a
1138:30 configured for ha is also called a regional instance and is located in a
1138:32 regional instance and is located in a primary and secondary zone within the
1138:35 primary and secondary zone within the configured region within a regional
1138:38 configured region within a regional instance the configuration is made up of
1138:40 instance the configuration is made up of a primary instance and a standby
1138:43 a primary instance and a standby instance and through synchronous
1138:45 instance and through synchronous replication to each zone's persistent
1138:47 replication to each zone's persistent disk all rights made to the primary
1138:49 disk all rights made to the primary instance are also made to the standby
1138:52 instance are also made to the standby instance each second the primary
1138:55 instance each second the primary instance writes to a system database as
1138:57 instance writes to a system database as a heartbeat signal if multiple
1139:00 a heartbeat signal if multiple heartbeats aren't detected
1139:01 heartbeats aren't detected failover is initiated and so if an
1139:04 failover is initiated and so if an ha-configured instance becomes
1139:06 ha-configured instance becomes unresponsive cloud sql automatically
1139:09 unresponsive cloud sql automatically switches to serving data from the
1139:11 switches to serving data from the standby instance and this is called a
1139:13 standby instance and this is called a failover in this example the primary
1139:16 failover in this example the primary instance or zone fails and failover is
1139:19 instance or zone fails and failover is initiated so if the primary instance is
1139:22 initiated so if the primary instance is unresponsive for approximately 60
1139:25 unresponsive for approximately 60 seconds or the zone containing the
1139:27 seconds or the zone containing the primary instance experiences an outage
1139:30 primary instance experiences an outage failover will initiate the standby
1139:33 failover will initiate the standby instance immediately starts serving data
1139:36 instance immediately starts serving data upon reconnection through a shared
1139:38 upon reconnection through a shared static ip address with the primary
1139:41 static ip address with the primary instance and the standby instance now
1139:44 instance and the standby instance now serves data from the secondary zone and
1139:46 serves data from the secondary zone and now when the primary instance is
1139:48 now when the primary instance is available again a fail back will happen
1139:51 available again a fail back will happen and this is when traffic will be
1139:53 and this is when traffic will be redirected back to the primary instance
1139:56 redirected back to the primary instance and the standby instance will go back
1139:59 and the standby instance will go back into standby mode as well the regional
1140:02 into standby mode as well the regional persistent disk will pick up replication
1140:04 persistent disk will pick up replication to the persistent disk in that same zone
1140:07 to the persistent disk in that same zone and with regards to billing an ha
1140:10 and with regards to billing an ha configured instance is charged at double
1140:13 configured instance is charged at double the price of a standalone instance
1140:15 the price of a standalone instance and this includes cpu ram and storage
1140:18 and this includes cpu ram and storage also note that the standby instance
1140:21 also note that the standby instance cannot be used for read queries and this
1140:23 cannot be used for read queries and this is where it differs from read replicas
1140:26 is where it differs from read replicas as well a very important note here is
1140:28 as well a very important note here is that automatic backups and point in time
1140:31 that automatic backups and point in time recovery must be enabled for high
1140:33 recovery must be enabled for high availability and so the last topic that
1140:36 availability and so the last topic that i wanted to touch on is backups
1140:38 i wanted to touch on is backups and backups help you restore lost data
1140:41 and backups help you restore lost data to your cloud sql instance you can also
1140:43 to your cloud sql instance you can also restore an instance that is having
1140:45 restore an instance that is having problems from a backup you enable
1140:48 problems from a backup you enable backups for any instance that contains
1140:50 backups for any instance that contains necessary data backups protect your data
1140:53 necessary data backups protect your data from loss or damage enabling automated
1140:56 from loss or damage enabling automated backups along with binary logging is
1140:59 backups along with binary logging is also required for some operations such
1141:02 also required for some operations such as clone and replica creation by default
1141:05 as clone and replica creation by default cloud sql stores backup data in two
1141:08 cloud sql stores backup data in two regions for redundancy one region can be
1141:10 regions for redundancy one region can be the same region that the instance is in
1141:13 the same region that the instance is in and the other is a different region if
1141:15 and the other is a different region if there are two regions in a continent the
1141:17 there are two regions in a continent the backup data remains on the same
1141:20 backup data remains on the same continent cloud sql also lets you select
1141:22 continent cloud sql also lets you select a custom location for your backup data
1141:25 a custom location for your backup data and this is great if you need to comply
1141:27 and this is great if you need to comply with data residency regulations for your
1141:30 with data residency regulations for your business now cloud sql performs two
1141:33 business now cloud sql performs two types of backups on-demand backups and
1141:36 types of backups on-demand backups and automated backups now with on-demand
1141:39 automated backups now with on-demand backups you can create a backup at any
1141:41 backups you can create a backup at any time and this is useful for when you're
1141:44 time and this is useful for when you're making risky changes that may go
1141:46 making risky changes that may go sideways you can always create on-demand
1141:49 sideways you can always create on-demand backups for any instance whether the
1141:52 backups for any instance whether the instance has automatic backups enabled
1141:54 instance has automatic backups enabled or not and these backups persist until
1141:57 or not and these backups persist until you delete them or until their instance
1142:00 you delete them or until their instance is deleted now when it comes to
1142:02 is deleted now when it comes to automated backups these use a four hour
1142:05 automated backups these use a four hour backup window these backups start during
1142:07 backup window these backups start during the backup window and just as a note
1142:10 the backup window and just as a note when possible you should schedule your
1142:12 when possible you should schedule your backups when your instance has the least
1142:14 backups when your instance has the least activity automated backups occur every
1142:17 activity automated backups occur every day when your instance is running at any
1142:20 day when your instance is running at any time in the 36 hour window and by
1142:23 time in the 36 hour window and by default up to seven most recent backups
1142:26 default up to seven most recent backups are retained you can also configure how
1142:28 are retained you can also configure how many automated backups to retain from 1
1142:31 many automated backups to retain from 1 to
1142:32 to 365. now i've touched on this topic many
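both backup types above map to gcloud commands — the instance name, window start time, and retention count here are hypothetical examples:

```shell
# On-demand backup (works whether or not automated backups are on):
gcloud sql backups create --instance=my-primary

# Configure automated backups: a daily window start time (UTC) and
# how many backups to retain (1-365, default 7):
gcloud sql instances patch my-primary \
    --backup-start-time=23:00 \
    --retained-backups-count=14
```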
1142:35 365. now i've touched on this topic many times in this lesson and i wanted to
1142:37 times in this lesson and i wanted to highlight it for just a second and this
1142:39 highlight it for just a second and this is point-in-time recovery so
1142:41 is point-in-time recovery so point-in-time recovery helps you recover
1142:43 point-in-time recovery helps you recover an instance to a specific point in time
1142:47 an instance to a specific point in time for example if an error causes a loss of
1142:49 for example if an error causes a loss of data you can recover a database to its
1142:52 data you can recover a database to its state before the error happened a point
1142:55 state before the error happened a point in time recovery always creates a new
1142:57 in time recovery always creates a new instance and you cannot perform a point
1143:00 instance and you cannot perform a point in time recovery to an existing instance
1143:03 in time recovery to an existing instance and point in time recovery is enabled by
1143:06 and point in time recovery is enabled by default when you create a new cloud sql
1143:08 default when you create a new cloud sql instance and so when it comes to billing
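since point-in-time recovery always creates a new instance, it is performed as a clone — the target name and rfc 3339 timestamp below are illustrative:

```shell
# Recover to a NEW instance at a specific point in time; you cannot
# recover into an existing instance. Timestamp is an example only.
gcloud sql instances clone my-primary my-recovered-instance \
    --point-in-time='2021-06-01T10:00:00.000Z'
```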
1143:11 instance and so when it comes to billing by default cloud sql retains seven days
1143:14 by default cloud sql retains seven days of automated backups plus all on-demand
1143:18 of automated backups plus all on-demand backups for an instance and so i know
1143:20 backups for an instance and so i know there is a lot to retain in this lesson
1143:22 there is a lot to retain in this lesson on cloud sql but be sure that these
1143:25 on cloud sql but be sure that these concepts and knowing the difference
1143:27 concepts and knowing the difference between them as well as when to use each
1143:29 between them as well as when to use each feature will be a sure help in the exam
1143:33 feature will be a sure help in the exam along with giving you the knowledge you
1143:35 along with giving you the knowledge you need to use cloud sql in your role as a
1143:38 need to use cloud sql in your role as a cloud engineer and so that's pretty much
1143:40 cloud engineer and so that's pretty much all i had to cover when it comes to
1143:42 all i had to cover when it comes to cloud sql so you can now mark this
1143:45 cloud sql so you can now mark this lesson as complete and let's move on to
1143:47 lesson as complete and let's move on to the next one
1143:55 welcome back and in this lesson i wanted to touch on google cloud's global
1143:57 to touch on google cloud's global relational database called cloud spanner
1144:01 relational database called cloud spanner now cloud spanner is the same in some
1144:03 now cloud spanner is the same in some ways as cloud sql when it comes to acid
1144:06 ways as cloud sql when it comes to acid transactions sql querying and strong
1144:09 transactions sql querying and strong consistency but differs in the way that
1144:12 consistency but differs in the way that data is handled under the hood than
1144:14 data is handled under the hood than cloud sql and so knowing this database
1144:17 cloud sql and so knowing this database only at a high level is needed for the
1144:19 only at a high level is needed for the exam but i'll be going into a bit more
1144:22 exam but i'll be going into a bit more detail just to give you a better
1144:24 detail just to give you a better understanding on how it works so with
1144:26 understanding on how it works so with that being said let's dive in now cloud
1144:29 that being said let's dive in now cloud spanner is a fully managed relational
1144:32 spanner is a fully managed relational database service that is both strongly
1144:35 database service that is both strongly consistent and horizontally scalable
1144:37 consistent and horizontally scalable cloud spanner is another database as a
1144:39 cloud spanner is another database as a service offering from google and so it
1144:42 service offering from google and so it strips away all the headaches of setting
1144:44 strips away all the headaches of setting up and maintaining the infrastructure
1144:47 up and maintaining the infrastructure and software needed to run your database
1144:50 and software needed to run your database in the cloud now being strongly
1144:52 in the cloud now being strongly consistent in this context is when data
1144:55 consistent in this context is when data will get passed on to all the replicas
1144:58 will get passed on to all the replicas as soon as a write request comes to one
1145:00 as soon as a write request comes to one of the replicas of the database cloud
1145:02 of the replicas of the database cloud spanner uses truetime a highly available
1145:06 spanner uses truetime a highly available distributed atomic clock system that is
1145:09 distributed atomic clock system that is provided to applications on all google
1145:11 provided to applications on all google servers it applies a time stamp to every
1145:15 servers it applies a time stamp to every transaction on commit and so
1145:17 transaction on commit and so transactions in other regions are always
1145:20 transactions in other regions are always executed sequentially cloud spanner can
1145:22 executed sequentially cloud spanner can distribute and manage data at a global
1145:25 distribute and manage data at a global scale and support globally consistent
1145:28 scale and support globally consistent reads along with strongly consistent
1145:30 reads along with strongly consistent distributed transactions now being fully
1145:33 distributed transactions now being fully managed cloud spanner handles any
1145:36 managed cloud spanner handles any replicas that are needed for
1145:38 replicas that are needed for availability of your data and optimizes
1145:41 availability of your data and optimizes performance by automatically sharding
1145:43 performance by automatically sharding the data based on request load and size
1145:46 the data based on request load and size of the data part of why cloud spanner's
1145:48 of the data part of why cloud spanner's high availability is due to its
1145:51 high availability is due to its automatic synchronous data replication
1145:54 automatic synchronous data replication between all replicas in independent
1145:57 between all replicas in independent zones cloud spanner scales horizontally
1145:59 zones cloud spanner scales horizontally automatically within regions but it can
1146:02 automatically within regions but it can also scale across regions for workloads
1146:05 also scale across regions for workloads that have higher availability
1146:07 that have higher availability requirements making data available
1146:10 requirements making data available faster to users at a global scale along
1146:13 faster to users at a global scale along with node redundancy quietly added for
1146:16 with node redundancy quietly added for every node deployed in the instance and
1146:19 every node deployed in the instance and when you quickly add up all these
1146:21 when you quickly add up all these features of cloud spanner it's no wonder
1146:24 features of cloud spanner it's no wonder that it's able to achieve five
1146:26 that it's able to achieve five nines availability on a multi-regional
1146:28 nines availability on a multi-regional instance and four nines availability on
1146:32 instance and four nines availability on a regional instance cloud spanner is
1146:34 a regional instance cloud spanner is highly secure and offers data layer
1146:37 highly secure and offers data layer encryption audit logging and iam
1146:40 encryption audit logging and iam integration cloud spanner was designed
1146:42 integration cloud spanner was designed to fit the needs of specific industries
1146:45 to fit the needs of specific industries such as financial services
1146:47 such as financial services ad tech retail and global supply chain
1146:51 ad tech retail and global supply chain along with gaming and pricing for cloud
1146:53 along with gaming and pricing for cloud spanner comes in at 90 cents per node
1146:56 spanner comes in at 90 cents per node per hour with the cost of storage coming
1146:58 per hour with the cost of storage coming in at 30 cents per gigabyte per month
1147:02 in at 30 cents per gigabyte per month definitely not cheap but the features
1147:04 definitely not cheap but the features are plentiful now this isn't in the exam
1147:07 are plentiful now this isn't in the exam but i did want to take a moment to dive
1147:09 but i did want to take a moment to dive into the architecture for a bit more
1147:11 into the architecture for a bit more context as to why this database is of a
1147:14 context as to why this database is of a different breed than the typical sql
1147:17 different breed than the typical sql database now to use cloud spanner you
1147:20 database now to use cloud spanner you must first create a cloud spanner
1147:22 must first create a cloud spanner instance this instance is an allocation
1147:25 instance this instance is an allocation of resources that is used by cloud
1147:28 of resources that is used by cloud spanner databases created in that
1147:30 spanner databases created in that instance instance creation includes two
1147:33 instance instance creation includes two important choices the instance
1147:35 important choices the instance configuration and the node count and
1147:38 configuration and the node count and these choices determine the location and
1147:40 these choices determine the location and the amount of the instances cpu and
1147:43 the amount of the instances cpu and memory along with its storage resources
1147:46 memory along with its storage resources your configuration choice is permanent
1147:48 your configuration choice is permanent for an instance and only the node count
1147:51 for an instance and only the node count can be changed later if needed an
1147:53 can be changed later if needed an instance configuration defines the
1147:56 instance configuration defines the geographic placement and replication of
1147:58 geographic placement and replication of the database in that instance either
1148:01 the database in that instance either regional or multi-region and please note
1148:04 regional or multi-region and please note that when you choose a multi-region
1148:06 that when you choose a multi-region configuration it allows you to replicate
1148:09 configuration it allows you to replicate the database's data not just in multiple
1148:12 the database's data not just in multiple zones but in multiple zones across
1148:14 zones but in multiple zones across multiple regions and when it comes to
1148:17 multiple regions and when it comes to the node count this determines the
1148:19 the node count this determines the number of nodes to allocate to that
1148:21 number of nodes to allocate to that instance these nodes allocate the amount
1148:23 instance these nodes allocate the amount of cpu memory and storage needed for
1148:26 of cpu memory and storage needed for your instance to either increase
1148:28 your instance to either increase throughput or storage capacity there are
1148:31 throughput or storage capacity there are no instance types to choose from like
1148:33 no instance types to choose from like cloud sql and so when you need more
1148:36 cloud sql and so when you need more power you simply add another node now
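the two creation-time choices above (configuration and node count) show up directly in the gcloud commands — the instance name and config below are hypothetical examples:

```shell
# Create a regional Spanner instance at the 3-node production
# minimum mentioned in this lesson; names are illustrative.
gcloud spanner instances create my-spanner-instance \
    --config=regional-us-east1 \
    --nodes=3 \
    --description="example instance"

# Only the node count can be changed later, not the configuration:
gcloud spanner instances update my-spanner-instance --nodes=5
```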
1148:38 power you simply add another node now for any regional configuration cloud
1148:41 for any regional configuration cloud spanner maintains exactly three read
1148:44 spanner maintains exactly three read write replicas each within a different
1148:46 write replicas each within a different zone in that region each read write
1148:48 zone in that region each read write replica contains a full copy of your
1148:51 replica contains a full copy of your operational database that is able to
1148:54 operational database that is able to serve read-write and read-only requests
1148:57 serve read-write and read-only requests cloud spanner uses replicas in different
1148:59 cloud spanner uses replicas in different zones so that if a single zone failure
1149:02 zones so that if a single zone failure occurs your database remains available
1149:05 occurs your database remains available in a multi-region instance configuration
1149:08 in a multi-region instance configuration the instance is allotted a combination
1149:10 the instance is allotted a combination of four read write and read only
1149:12 of four read write and read only replicas and just as a note a three node
1149:16 replicas and just as a note a three node configuration minimum is what is
1149:18 configuration minimum is what is recommended for production by google and
1149:21 recommended for production by google and as cloud spanner gets populated with
1149:23 as cloud spanner gets populated with data
1149:24 data sharding happens which is also known as
1149:26 sharding happens which is also known as a split and cloud spanner creates
1149:29 a split and cloud spanner creates replicas of each database split to
1149:31 replicas of each database split to improve performance and availability all
1149:34 improve performance and availability all of the data in a split is physically
1149:36 of the data in a split is physically stored together in a replica and cloud
1149:39 stored together in a replica and cloud spanner serves each replica out of an
1149:41 spanner serves each replica out of an independent failure zone and within each
1149:44 independent failure zone and within each replica set
1149:45 replica set one replica is elected to act as the
1149:48 one replica is elected to act as the leader leader replicas are responsible
1149:50 leader leader replicas are responsible for handling writes while any read write
1149:54 for handling writes while any read write or read only replica can serve a read
1149:56 or read only replica can serve a read request without communicating with the
1149:58 request without communicating with the leader and so this is the inner workings
1150:01 leader and so this is the inner workings of cloud spanner at a high level and not
1150:04 of cloud spanner at a high level and not meant to confuse you but to give you a
1150:06 meant to confuse you but to give you a better context of how cloud spanner
1150:09 better context of how cloud spanner although it is a relational sql database
1150:12 although it is a relational sql database is so different than its cloud sql
1150:14 is so different than its cloud sql cousin now before ending this lesson i
1150:16 cousin now before ending this lesson i wanted to touch on node performance for
1150:18 wanted to touch on node performance for a quick moment and so each cloud spanner
1150:21 a quick moment and so each cloud spanner node can provide up to 10 000 queries
1150:24 node can provide up to 10 000 queries per second or qps of reads or 2000 qps
1150:29 per second or qps of reads or 2000 qps of writes each node provides up to two
1150:32 of writes each node provides up to two terabytes of storage and so if you need
1150:35 terabytes of storage and so if you need to scale up the serving and storage
1150:37 to scale up the serving and storage resources in your instance you add more
1150:40 resources in your instance you add more nodes to that instance
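the per-node figures above (10,000 read QPS, 2,000 write QPS, 2 TB of storage) make sizing a simple max-over-three-dimensions calculation — a rough sketch using only the lesson's numbers:

```python
# Smallest node count that covers reads, writes, and storage,
# using the per-node capacity figures quoted in the lesson.
import math

READ_QPS_PER_NODE = 10_000
WRITE_QPS_PER_NODE = 2_000
STORAGE_TB_PER_NODE = 2

def nodes_needed(read_qps, write_qps, storage_tb):
    """Node count is driven by whichever dimension is tightest."""
    return max(
        math.ceil(read_qps / READ_QPS_PER_NODE),
        math.ceil(write_qps / WRITE_QPS_PER_NODE),
        math.ceil(storage_tb / STORAGE_TB_PER_NODE),
        1,  # at least one node
    )

# 25k read QPS, 3k write QPS, 5 TB: reads need 3, writes 2, storage 3
print(nodes_needed(25_000, 3_000, 5))  # 3
```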
1150:42 nodes to that instance and remember as noted earlier that
1150:44 and remember as noted earlier that adding a node does not increase the
1150:46 adding a node does not increase the number of replicas but rather increases
1150:49 number of replicas but rather increases the resources each replica has in the
1150:52 the resources each replica has in the instance adding nodes gives each replica
1150:55 instance adding nodes gives each replica more cpu and ram which increases the
1150:58 more cpu and ram which increases the replicas throughput and so if you're
1151:00 replicas throughput and so if you're looking to scale up automatically you
1151:02 looking to scale up automatically you can scale the numbers of nodes in your
1151:05 can scale the numbers of nodes in your instance based on the cloud monitoring
1151:07 instance based on the cloud monitoring metrics on cpu or storage utilization
1151:11 metrics on cpu or storage utilization in conjunction with using cloud
1151:13 in conjunction with using cloud functions to trigger and so when you are
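The automatic approach described here — a Cloud Function reacting to Cloud Monitoring CPU metrics — ultimately boils down to a small scaling decision. A minimal sketch of that decision logic in plain Python; the 65% target and the proportional heuristic are illustrative assumptions, not values from the lesson:

```python
import math

def recommended_nodes(current_nodes: int, cpu_utilization: float,
                      target_utilization: float = 0.65) -> int:
    """Return a node count that would bring CPU back under the target.

    cpu_utilization and target_utilization are fractions (0.0-1.0).
    A Cloud Function triggered by a Cloud Monitoring alert could run
    logic like this, then update the Spanner instance's node count.
    """
    if cpu_utilization <= target_utilization:
        return current_nodes  # already under target; no scale-up needed
    # Scale the node count proportionally to the utilization overshoot.
    return math.ceil(current_nodes * cpu_utilization / target_utilization)

# 3 nodes running at 90% CPU against a 65% target -> scale to 5 nodes
print(recommended_nodes(3, 0.90))
```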
1151:15 And so when you are deciding on a
1151:18 relational database that provides global
1151:20 distribution and horizontal scalability
1151:23 and handles transactional workloads in
1151:25 Google Cloud, Cloud Spanner will always
1151:28 be the obvious choice over Cloud SQL.
1151:31 And so that's pretty much all I have to
1151:33 cover when it comes to this overview on
1151:35 Cloud Spanner, so you can now mark this
1151:37 lesson as complete and let's move on to
1151:39 the next one.
1151:39 [Music]
1151:43 Welcome back. In this lesson we will
1151:46 be going over the NoSQL
1151:48 databases available in Google Cloud. This
1151:52 lesson is meant to be another overview,
1151:54 just to familiarize you with the NoSQL
1151:56 database options as they show up in the
1151:59 exam. This lesson is not meant to go in
1152:02 depth on databases, but it
1152:04 will give you a good understanding of
1152:06 what features are available for each
1152:09 and their use cases. So with that being
1152:11 said, let's dive in. Now, there are four
1152:15 managed NoSQL databases available in
1152:18 Google Cloud, and I will briefly go
1152:20 over them, starting off
1152:22 by discussing Bigtable.
1152:24 Now, Cloud Bigtable is a fully managed,
1152:27 wide-column NoSQL database designed for
1152:31 terabyte- and petabyte-scale workloads
1152:34 that offers low latency and high
1152:36 throughput. Bigtable is built for
1152:38 real-time application serving workloads
1152:41 as well as large-scale analytical
1152:44 workloads. Cloud Bigtable is a regional
1152:47 service, and if using replication, a copy
1152:50 is stored in a different zone or region
1152:53 for durability. Cloud Bigtable is
1152:55 designed for storing very large amounts
1152:58 of single-keyed data while still being
1153:01 able to provide very low latency, and
1153:04 because throughput scales linearly, you
1153:06 can increase the queries per second by
1153:09 adding more Bigtable nodes when you need
1153:12 them. Bigtable throughput can be
1153:14 dynamically adjusted by adding or
1153:16 removing cluster nodes without
1153:18 restarting, meaning you can increase the
1153:21 size of a Bigtable cluster for just a
1153:24 few hours to handle a large load and
1153:26 then reduce the cluster size again,
1153:28 all without any downtime.
1153:31 do it all without any downtime bigtable is an ideal source
1153:33 is an ideal source for map reduce operations and integrates
1153:36 for map reduce operations and integrates easily with all the existing big data
1153:38 easily with all the existing big data tools such as hadoop dataproc and
1153:42 tools such as hadoop dataproc and dataflow along with apache hbase and
1153:45 dataflow along with apache hbase and when it comes to price bigtable is
1153:47 when it comes to price bigtable is definitely no joke pricing for bigtable
1153:50 definitely no joke pricing for bigtable starts at 65 cents per hour per node
1153:53 starts at 65 cents per hour per node or over 450 dollars a month for a one
1153:57 or over 450 dollars a month for a one node configuration with no data now you
1153:59 node configuration with no data now you can use bigtable to store and query all
1154:02 can use bigtable to store and query all of the following types of data such as
1154:04 of the following types of data such as cpu and memory usage over time for
1154:07 cpu and memory usage over time for multiple servers marketing data such as
1154:10 multiple servers marketing data such as purchase histories and customer
1154:11 purchase histories and customer preferences financial data such as
1154:14 preferences financial data such as transaction histories stock prices and
1154:17 transaction histories stock prices and currency exchange rates iot data or
1154:20 currency exchange rates iot data or internet of things such as usage reports
1154:23 internet of things such as usage reports from energy meters and home appliances
1154:25 from energy meters and home appliances and lastly graph data such as
1154:28 and lastly graph data such as information about how users are
1154:30 information about how users are connected to one another cloud bigtable
1154:32 connected to one another cloud bigtable excels as a storage engine as it can
1154:35 excels as a storage engine as it can batch mapreduce operations
1154:37 batch mapreduce operations stream processing or analytics as well
1154:40 stream processing or analytics as well as being used for storage for machine
1154:42 as being used for storage for machine learning applications now moving on to
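Because Bigtable stores single-keyed, wide-column data, getting good performance for workloads like the server-metrics example above hinges on row-key design: keys are usually built so related rows sort together. A small illustrative sketch in plain Python — the `server#metric#timestamp` key layout and the data are my own example, not from the lesson:

```python
# Bigtable rows are sorted lexicographically by a single row key, so for
# time-series data a common pattern is to pack the entity and timestamp
# into the key itself. This particular layout is an illustrative assumption.
def make_row_key(server_id: str, metric: str, epoch_seconds: int) -> str:
    # Zero-pad the timestamp so lexicographic order matches time order.
    return f"{server_id}#{metric}#{epoch_seconds:012d}"

rows = {
    make_row_key("web-1", "cpu", 1_700_000_060): 0.71,
    make_row_key("web-1", "cpu", 1_700_000_000): 0.42,
    make_row_key("web-2", "cpu", 1_700_000_000): 0.10,
}

# A "scan" over a key prefix returns one server's metrics in time order,
# which is what makes single-keyed storage fast for this workload.
prefix = "web-1#cpu#"
series = [v for k, v in sorted(rows.items()) if k.startswith(prefix)]
print(series)  # [0.42, 0.71]
```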
1154:45 Now, moving on, the next NoSQL database
1154:47 is Cloud Datastore, and Cloud Datastore is a
1154:50 highly scalable NoSQL document database
1154:53 built for automatic scaling, high
1154:56 performance, and ease of application
1154:58 development. Datastore is redundant
1155:00 within your location
1155:02 to minimize impact from points of
1155:04 failure, and therefore can offer high
1155:07 availability of reads and writes. Cloud
1155:10 Datastore can execute atomic
1155:12 transactions, where a set of operations
1155:15 either all succeed or none occur. Cloud
1155:18 Datastore uses a distributed
1155:20 architecture to automatically manage
1155:23 scaling, so you never have to worry about
1155:25 scaling manually. As well, what's very
1155:28 unique about Cloud Datastore is that it
1155:30 has a SQL-like query language
1155:33 available, called GQL.
1155:37 GQL maps roughly to SQL; however, a SQL
1155:41 row-column lookup is limited to a
1155:43 single value, whereas in GQL a property
1155:46 can be a multi-valued property.
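To make the multi-valued property point concrete, here is a small in-memory sketch, with plain Python standing in for Datastore; the entities and the `tags` property are invented for illustration. A GQL equality filter on a list-valued property matches an entity if any element of the list equals the value:

```python
# Hypothetical product entities with a multi-valued "tags" property —
# something a single SQL row-column cell could not hold as one scalar.
products = [
    {"name": "mug",    "tags": ["kitchen", "sale"]},
    {"name": "lamp",   "tags": ["office"]},
    {"name": "poster", "tags": ["office", "sale"]},
]

def gql_equals(entities, prop, value):
    """Mimic GQL equality on a list property: match if ANY element equals."""
    return [e["name"] for e in entities if value in e[prop]]

# Roughly: SELECT * FROM Product WHERE tags = 'sale'
print(gql_equals(products, "tags", "sale"))  # ['mug', 'poster']
```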
1155:49 This consistency model allows an application
1155:52 to handle large amounts of data and
1155:54 users while still being able to deliver
1155:57 a great user experience. Data is
1155:59 automatically encrypted before it is
1156:01 written to disk and automatically
1156:03 decrypted when read by an authorized
1156:06 user. Now, this is not reflected in the
1156:08 exam as of yet, and I will be updating
1156:11 this lesson if and when it happens, but
1156:14 Firestore is the newest version of
1156:16 Datastore and introduces several
1156:19 improvements over Datastore. Existing
1156:21 Datastore users can access these
1156:23 improvements by creating a new Firestore
1156:26 database instance in Datastore mode, and
1156:29 in the near future all existing
1156:31 Datastore databases will be
1156:33 automatically upgraded to Firestore in
1156:36 Datastore mode. Now, moving right along,
1156:39 Cloud Datastore holds a really cool
1156:41 feature for developers called the
1156:44 Datastore emulator, which provides
1156:46 local emulation of the production
1156:49 Datastore environment that you can
1156:51 use to develop and test your application
1156:54 locally. This is a component of the
1156:56 Google Cloud SDK's gcloud tool and can be
1157:00 installed by using the gcloud components
1157:02 install command that we discussed
1157:04 earlier on in the course. And so, moving
1157:06 on to use cases for Datastore:
1157:09 it is ideal for applications that rely
1157:12 on highly available structured data at
1157:14 scale. You can use Datastore for things
1157:17 like product catalogs that provide
1157:19 real-time inventory and product details
1157:22 for a retailer; user profiles that
1157:25 deliver a customized experience based on
1157:28 the user's past activities and
1157:30 preferences;
1157:31 as well as transactions based on ACID
1157:34 properties, for example, transferring
1157:36 funds from one bank account to another.
1157:39 Next up, we have Firestore, from Firebase,
1157:43 and so this is a flexible, scalable NoSQL
1157:46 cloud database to store and sync data
1157:49 for client- and server-side development,
1157:51 and it is available with native C++, Unity,
1157:55 Node.js, Java, Go, and Python SDKs,
1158:00 in addition to REST and RPC APIs, pretty
1158:03 much covering the gamut of most major
1158:05 programming languages. Now, with Cloud
1158:07 Firestore, you store data in documents
1158:11 that contain fields mapping to values.
1158:14 These documents are stored in
1158:15 collections, which are containers for
1158:18 your documents that you can use to
1158:20 organize your data and build queries.
1158:23 Documents support many different data
1158:25 types as well. You can also create
1158:28 subcollections within documents and build
1158:31 hierarchical data structures. Cloud
1158:33 Firestore is serverless, with absolutely
1158:36 no servers to manage, update, or maintain,
1158:39 and with automatic multi-region
1158:41 replication and strong consistency,
1158:44 Google is able to hold
1158:46 a five-nines availability guarantee. And
1158:49 so when it comes to querying, Cloud
1158:51 Firestore is expressive, efficient, and
1158:55 flexible. You can create shallow queries
1158:57 to retrieve data at the document level
1159:00 without needing to retrieve the entire
1159:02 collection or any nested subcollections.
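A tiny in-memory sketch of that document model and a shallow query, with plain Python dictionaries standing in for Firestore; the collection, document, and field names are invented for illustration:

```python
# Collections hold documents; each document has fields and may have its
# own subcollections. This nesting mirrors Firestore's data model.
db = {
    "users": {
        "alice": {
            "fields": {"city": "Tokyo"},
            "subcollections": {
                "orders": {
                    "o1": {"fields": {"total": 42}, "subcollections": {}},
                },
            },
        },
        "bob": {"fields": {"city": "Lima"}, "subcollections": {}},
    },
}

def shallow_get(db, collection):
    """Return document fields only, leaving subcollections unfetched."""
    return {doc_id: doc["fields"] for doc_id, doc in db[collection].items()}

print(shallow_get(db, "users"))
# {'alice': {'city': 'Tokyo'}, 'bob': {'city': 'Lima'}}
```

The point of the shallow query is visible in the output: Alice's `orders` subcollection is never touched, so retrieving the `users` collection stays cheap no matter how deep the hierarchy below it grows.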
1159:05 Cloud Firestore uses data
1159:07 synchronization to update data in real
1159:10 time on any connected device. As well, it
1159:14 also caches data that your application
1159:16 is actively using, so that the
1159:18 application can write, read, listen to, and
1159:22 query data even if the device is offline;
1159:25 when the device comes back online, Cloud
1159:28 Firestore synchronizes any local changes
1159:31 back to Cloud Firestore. You can also
1159:33 secure your data in Cloud Firestore with
1159:36 Firebase Authentication and Cloud
1159:38 Firestore security rules for Android, iOS,
1159:42 and JavaScript, or you can use IAM for
1159:45 server-side languages. And when it comes
1159:47 to costs, Firestore falls into the
1159:51 always-available free tier, where you can
1159:54 use one database holding five gigabytes, or,
1159:57 if you need more, you can move into the
1159:59 paid option.
1160:02 Now, Firebase also has
1160:04 another database sharing similar
1160:06 features, like having no servers to
1160:09 deploy and maintain, real-time updates,
1160:11 along with a free tier. This
1160:14 database is called Realtime Database
1160:16 and is used for more basic querying,
1160:19 simple data structures, and keeping things
1160:21 to one database;
1160:23 it's something I like to call Firestore
1160:26 Lite. Realtime Database does not show up
1160:28 in the exam, but I wanted to bring it to
1160:30 light as it is part of the Firebase
1160:32 family. Just know that you can use both
1160:35 databases within the same Firebase
1160:37 application or project, as both can store
1160:40 the same types of data, client libraries
1160:43 work in a similar manner, and both offer
1160:46 real-time updates. Now, although Firebase
1160:48 is a development platform and not a
1160:50 database service, I wanted to give it a
1160:52 quick mention for those of you who are
1160:55 unfamiliar with its tie-in to Firestore.
1160:58 Firebase is a mobile
1161:00 application development platform that
1161:03 provides tools and cloud services to
1161:05 help developers develop
1161:08 applications faster and more easily, and
1161:10 since it ties in nicely with Firestore,
1161:13 it becomes the perfect platform for mobile application development.
1161:16 Okay, so moving on to our last NoSQL
1161:19 database: Memorystore. Memorystore is a fully
1161:21 managed service from Google Cloud for
1161:23 either Redis or Memcached, an in-memory
1161:27 datastore used to build application caches,
1161:30 and this is a common service used in
1161:32 many production environments,
1161:34 specifically when the need for caching
1161:36 arises. Memorystore automates the
1161:38 administration tasks for Redis and
1161:41 Memcached, like enabling high
1161:43 availability, failover, patching, and
1161:46 monitoring, so you don't have to. And when
1161:48 it comes to Memorystore for Redis,
1161:50 instances in the Standard tier are
1161:52 replicated across zones,
1161:55 monitored for health, and have fast
1161:57 automatic failover. Standard tier
1162:00 instances also provide an SLA of
1162:03 three-nines availability. Memorystore for
1162:06 Redis also provides the ability to scale
1162:09 instance sizes seamlessly, so that you can
1162:12 start small and increase the size of the
1162:14 instance as needed. Memorystore is
1162:16 protected from the internet using VPC
1162:19 networks and private IP, and also comes
1162:22 with IAM integration. Systems are
1162:24 monitored around the clock, ensuring that
1162:27 your data is protected at all times, and
1162:30 know that the versions are always kept
1162:32 up to date with the latest critical
1162:34 patches, ensuring your instances are
1162:37 secure. Now, when it comes to use cases, of
1162:40 course the first thing you will see is
1162:42 caching, and this is the main reason to
1162:44 use Memorystore, as it provides
1162:47 low-latency access and high throughput for
1162:50 heavily accessed data, compared to
1162:52 accessing the data from disk. Common
1162:54 examples of caching are session
1162:56 management, frequently accessed queries,
1162:58 scripts, or pages.
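The caching pattern described here is usually implemented as cache-aside: check the cache first, and only fall back to the database on a miss. A minimal sketch in plain Python — a dict stands in for the Redis client, and `slow_db_lookup` is an invented placeholder for a real database call:

```python
cache = {}        # stand-in for a Memorystore (Redis) instance
db_calls = 0      # counts how often we hit the "slow" backing store

def slow_db_lookup(key):
    """Placeholder for an expensive database query."""
    global db_calls
    db_calls += 1
    return f"value-for-{key}"

def get(key):
    # Cache-aside: serve from cache when possible, populate on a miss.
    if key in cache:
        return cache[key]
    value = slow_db_lookup(key)
    cache[key] = value
    return value

get("session:42")   # miss: goes to the database
get("session:42")   # hit: served from the cache
print(db_calls)     # 1
```

With a real Redis client the dict operations become `GET` and `SET` (typically with a TTL), but the control flow is the same, which is why the pattern maps so directly onto session management and frequently accessed queries.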
1163:01 Next, using Memorystore for
1163:03 leaderboards: this is a common use case in the gaming
1163:05 industry, as well as using it for player
1163:08 profiles. Memorystore is also a perfect
1163:10 solution for stream processing: combined
1163:13 with Dataflow, Memorystore for Redis
1163:16 provides a scalable, fast in-memory store
1163:19 for storing intermediate data that
1163:22 thousands of clients can access with
1163:24 very low latency. And so when it comes to
1163:26 NoSQL databases, these are all the
1163:29 available options on Google Cloud, and as
1163:32 I said before, they will only show up on
1163:34 the exam at merely a high level, and so
1163:37 knowing what each of these databases is
1163:39 used for
1163:40 will be a huge benefit, along with being
1163:43 an entry to diving deeper into possibly
1163:46 using these services within your
1163:48 day-to-day job as a cloud engineer. And
1163:50 so that's pretty much all I wanted to
1163:52 cover when it comes to NoSQL databases
1163:55 available in Google Cloud, so you can now
1163:58 mark this lesson as complete and let's
1164:00 move on to the next one.
1164:01 [Music]
1164:05 Welcome back. In this lesson we'll be
1164:08 going over the big data ecosystem in an
1164:11 overview, just to familiarize you with
1164:14 the services that are available in
1164:16 Google Cloud and that
1164:18 will show up in the exam. This lesson is
1164:21 not meant to go in depth, but is an
1164:24 overview that will give you a good
1164:26 understanding of what these services can
1164:28 do and how they all work together to
1164:31 make sense of big data as a whole.
1164:33 So, getting right into it, I wanted to
1164:36 first ask the question: what is big data?
1164:38 I mean, many people talk about it, but
1164:40 what is it really? Well, big data refers
1164:43 to massive amounts of data that would
1164:46 typically be too expensive to store,
1164:48 manage, and analyze using traditional
1164:51 database systems, either relational or
1164:54 monolithic. As the amount of data that we
1164:56 have been seeing over the past few years
1164:59 has started to increase, these systems
1165:01 have become very inefficient because of
1165:04 their lack of flexibility for storing
1165:07 unstructured data such as images, text, or
1165:10 video, as well as for accommodating
1165:13 high-velocity or real-time data, or scaling to
1165:16 support very large,
1165:18 petabyte-scale data volumes. For this
1165:21 reason, the past few years have seen the
1165:23 mainstream adoption of new approaches to
1165:26 managing and processing big data,
1165:29 including Apache Hadoop and NoSQL
1165:32 database systems; however, those options
1165:35 often prove to be complex to deploy,
1165:38 manage, and use in an on-premises
1165:40 situation.
1165:42 Now, the ability to consistently get
1165:44 business value from data,
1165:46 fast and efficiently, is becoming the
1165:49 de facto standard for successful organizations
1165:52 across every industry. The more data a
1165:54 company has access to, the more business
1165:57 insights and business value they're able
1165:59 to achieve,
1166:00 like gaining useful insights, increasing
1166:03 revenue,
1166:04 getting or retaining customers, and even improving
1166:07 operations. And because machine learning
1166:09 models get more efficient as they are
1166:11 trained with more data, machine learning
1166:14 and big data are highly complementary.
1166:17 All in all, big data brings some really
1166:19 great value to the table that is
1166:22 impossible for any organization to turn
1166:24 down. And so now that we've gone through
1166:26 that overview of what big data is, I
1166:29 wanted to dive into some shorter
1166:31 overviews of the services available in
1166:34 the big data ecosystem on Google Cloud.
1166:37 And so the first service that I'd like
1166:39 to start with is BigQuery. Now, BigQuery
1166:42 is a fully managed, serverless data
1166:44 warehouse that enables scalable analysis
1166:48 over petabytes of data. This service
1166:51 supports querying using SQL and holds
1166:54 built-in machine learning capabilities.
1166:56 You start by ingesting data into
1166:58 BigQuery, and then you are able to take
1167:00 advantage of all the power it provides.
1167:03 BigQuery would ingest that data by
1167:05 doing a batch upload or by streaming it
1167:08 in real time, and you can use any of the
1167:11 currently available Google Cloud
1167:13 services to load data into BigQuery. You
1167:16 can take a manual batch ingestion
1167:18 approach,
1167:19 or stream ETL data in using Pub/Sub; and
1167:22 with the BigQuery Data Transfer Service you
1167:24 can automatically transfer data from
1167:27 external Google data sources and partner
1167:30 SaaS applications to BigQuery on a
1167:33 scheduled and fully managed basis, and
1167:36 the best part is that batch load and export
1167:38 are free. BigQuery's high-speed streaming API
1167:42 provides an incredible foundation for
1167:44 real-time analytics, making business data
1167:47 immediately available for analysis, and
1167:50 you can also leverage Pub/Sub and
1167:52 Dataflow to stream data into BigQuery.
1167:56 BigQuery transparently and automatically
1167:59 provides highly durable, replicated
1168:01 storage in multiple locations for high
1168:04 availability, as well as making
1168:07 restores easy: BigQuery keeps a
1168:10 seven-day history of changes in case
1168:13 something were to go wrong.
1168:15 something were to go wrong bigquery supports standard sql querying which
1168:18 supports standard sql querying which reduces the need for code rewrites you
1168:20 reduces the need for code rewrites you can simply use it as you would for
1168:22 can simply use it as you would for querying any other sql compliant
1168:25 querying any other sql compliant database and with dataproc and dataflow
1168:28 database and with dataproc and dataflow bigquery provides integration with the
1168:31 bigquery provides integration with the apache big data ecosystem allowing
1168:34 apache big data ecosystem allowing existing hadoop spark and beam workloads
1168:37 existing hadoop spark and beam workloads to read or write data directly from
1168:40 to read or write data directly from bigquery using the storage api bigquery
1168:43 bigquery using the storage api bigquery also makes it very easy to access this
1168:45 also makes it very easy to access this data by using the cloud console using
1168:48 data by using the cloud console using the bq command line tool or making calls
1168:51 the bq command line tool or making calls to the bigquery rest api using a variety
1168:55 to the bigquery rest api using a variety of client libraries such as java.net or
1168:58 of client libraries such as java.net or python there are also a variety of
1169:01 python there are also a variety of third-party tools that you can use to
1169:03 third-party tools that you can use to interact with bigquery when visualizing
1169:06 interact with bigquery when visualizing the data or loading the data bigquery
1169:09 the data or loading the data bigquery provides strong security and governance
1169:11 provides strong security and governance controls with fine-grained controls
1169:14 controls with fine-grained controls through integration with identity and
1169:16 through integration with identity and access management bigquery gives you the
1169:18 access management bigquery gives you the option of geographic data control
1169:21 option of geographic data control without the headaches of setting up and
1169:23 without the headaches of setting up and managing clusters and other computing
1169:26 managing clusters and other computing resources in different zones and regions
1169:29 resources in different zones and regions bigquery also provides fine grain
1169:31 bigquery also provides fine grain identity and access management and rest
1169:33 identity and access management and rest assured that your data is always
1169:35 assured that your data is always encrypted at rest and in transit now the
1169:38 encrypted at rest and in transit now the way that bigquery calculates billing
1169:41 way that bigquery calculates billing charges is by queries and by storage
1169:44 charges is by queries and by storage storing data in bigquery is comparable
1169:46 storing data in bigquery is comparable in price with storing data in cloud
1169:48 in price with storing data in cloud storage which makes it an easy decision
1169:51 storage which makes it an easy decision for storing data in bigquery there is no
1169:54 for storing data in bigquery there is no upper limit to the amount of data that
1169:55 upper limit to the amount of data that can be stored in bigquery so if tables
1169:58 can be stored in bigquery so if tables are not edited for 90 days the price of
1170:01 are not edited for 90 days the price of storage for that table drops by 50
1170:04 storage for that table drops by 50 percent query costs are also available
1170:06 percent query costs are also available as on-demand and flat rate pricing and
1170:09 as on-demand and flat rate pricing and when it comes to on-demand pricing you
1170:12 when it comes to on-demand pricing you are only charged for bytes read not
1170:15 are only charged for bytes read not bytes returned in the end bigquery
1170:18 bytes returned in the end bigquery scales seamlessly to store and analyze
1170:21 scales seamlessly to store and analyze petabytes to exabytes of data with ease
1170:24 petabytes to exabytes of data with ease now there are so many more features to
1170:26 now there are so many more features to list but if you are interested feel free
1170:29 list but if you are interested feel free to dive into the other features with the
1170:31 to dive into the other features with the supplied link in the lesson text now
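as a rough illustration of the billing model just described, here is a minimal cost-estimator sketch. the per-TiB query rate and active storage rate used below are illustrative assumptions, not authoritative pricing; only the "bytes read, not bytes returned" rule and the 50 percent long-term storage discount come from the lesson, so always check the current bigquery pricing page.

```python
# illustrative sketch only: the dollar rates are assumptions, not real pricing.
TIB = 1024 ** 4

def on_demand_query_cost(bytes_read, rate_per_tib=5.0):
    """on-demand queries bill for bytes READ, never bytes returned."""
    return bytes_read / TIB * rate_per_tib

def monthly_storage_rate(days_since_last_edit, active_rate_per_gib=0.02):
    """tables not edited for 90+ days drop to half the active storage rate."""
    if days_since_last_edit >= 90:
        return active_rate_per_gib / 2   # long-term storage discount
    return active_rate_per_gib

print(on_demand_query_cost(1 * TIB))   # cost of a query that scans 1 TiB
print(monthly_storage_rate(120))       # per-GiB rate after 90 idle days
```

the key takeaway the sketch encodes is that selecting fewer columns (fewer bytes read) lowers cost, while `LIMIT` (fewer bytes returned) does not.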
1170:34 now, moving on to the next service: pub sub.
1170:37 pub sub is a fully managed real-time messaging service that allows you to send and receive messages between independent applications.
1170:45 it acts as messaging-oriented middleware, or as event ingestion and delivery for streaming analytics pipelines.
1170:54 and so a publisher application creates and sends messages to a topic, and subscriber applications create a subscription to a topic and receive messages from it.
1171:05 i wanted to take a moment to show you exactly how it works.
1171:10 first, the publisher creates messages and sends them to the messaging service on a specified topic.
1171:15 a topic is a named entity that represents a feed of messages.
1171:21 a publisher application creates a topic in the pub sub service and sends messages to that topic.
1171:27 a message contains a payload and optional attributes that describe the content.
1171:34 the service as a whole ensures that published messages are retained on behalf of subscriptions.
1171:41 a published message is retained for a subscription in a message queue, shown here as message storage, until it is acknowledged by a subscriber consuming messages from that subscription.
1171:51 pub sub then forwards messages from a topic to each of its subscriptions individually.
1172:00 a subscriber then receives messages either by pub sub pushing them to the subscriber's chosen endpoint, or by the subscriber pulling them from the service.
1172:10 the subscriber sends an acknowledgement to the pub sub service for each received message, and the service then removes acknowledged messages from the subscription's message queue.
1172:20 some of the use cases for pub sub are balancing large task queues, distributing event notifications, and real-time data streaming from various sources.
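the publish, fan-out, and acknowledge flow just described can be sketched with a tiny in-memory model. this is purely illustrative (it is not the google-cloud-pubsub client): a topic forwards each published message to every subscription's queue individually, and an acknowledgement removes the message from that one subscription's queue only.

```python
from collections import defaultdict, deque

class Topic:
    """minimal pub/sub sketch: one queue ('message storage') per subscription."""
    def __init__(self):
        self.subscriptions = defaultdict(deque)

    def subscribe(self, name):
        self.subscriptions[name]  # create an empty queue for this subscription

    def publish(self, message):
        # the topic forwards the message to every subscription individually
        for queue in self.subscriptions.values():
            queue.append(message)

    def pull(self, name):
        # a subscriber pulls the oldest unacknowledged message (not removed yet)
        return self.subscriptions[name][0] if self.subscriptions[name] else None

    def ack(self, name):
        # acknowledging removes the message from this subscription's queue only
        if self.subscriptions[name]:
            self.subscriptions[name].popleft()

topic = Topic()
topic.subscribe("billing")
topic.subscribe("audit")
topic.publish({"event": "order_created"})
print(topic.pull("billing"))   # both subscriptions received their own copy
topic.ack("billing")
print(topic.pull("billing"))   # billing's copy is gone; audit still holds its own
```

note how acking on one subscription never affects another: that independence is what lets multiple downstream systems consume the same topic at their own pace.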
1172:33 and so the next service that i wanted to get into is composer.
1172:35 composer is a managed workflow orchestration service that is built on apache airflow, a workflow automation tool for developers based on the open source apache airflow project.
1172:48 similar to an on-premises deployment, cloud composer deploys multiple components to run airflow in the cloud.
1172:57 airflow is a platform created by the community to programmatically author, schedule, and monitor workflows.
1173:05 the airflow scheduler, as you see here, executes the tasks on an array of workers while following the specified dependencies, storing the data in a database, and providing a ui component for easy management.
1173:21 now, breaking down these workflows for just a sec: in data analytics, a workflow represents a series of tasks for ingesting, transforming, analyzing, or utilizing data.
1173:33 in airflow, workflows are created using dags, which are collections of tasks that you want to schedule and run, and a dag organizes these tasks to ensure that each task is executed at the right time, in the right order, and with the right issue handling.
1173:52 now, in order to run these specialized workflows, provisioned environments are needed, and so composer deploys these self-contained environments on google kubernetes engine, where they work with other google cloud services using connectors built into airflow.
1174:09 the beauty of composer is that you can create one or more of these environments in a single google cloud project, using any supported region, without having to do all the heavy lifting of creating a full-blown apache airflow environment.
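the "right order" guarantee of a dag can be illustrated with a small topological-order sketch in plain python. this is an illustration of the scheduling idea only, not the apache airflow api: a task becomes runnable only once every task it depends on has finished.

```python
def run_dag(dependencies):
    """dependencies: task name -> set of upstream task names.
    returns one valid execution order, mimicking dag scheduling.
    illustrative sketch of the concept, not apache airflow itself."""
    done, order = set(), []
    pending = dict(dependencies)
    while pending:
        # a task is ready once all of its upstream tasks have completed
        ready = [t for t, ups in pending.items() if ups <= done]
        if not ready:
            raise ValueError("cycle detected: not a valid dag")
        for task in sorted(ready):   # sorted only to make the order deterministic
            order.append(task)
            done.add(task)
            del pending[task]
    return order

# a tiny hypothetical etl workflow: ingest, then transform, then analyze, then report
deps = {
    "ingest": set(),
    "transform": {"ingest"},
    "analyze": {"transform"},
    "report": {"transform", "analyze"},
}
print(run_dag(deps))  # ['ingest', 'transform', 'analyze', 'report']
```

in real airflow a dag file declares the same dependency edges declaratively, and the scheduler (not your code) decides when each task actually runs on the workers.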
1174:27 now, when it comes to dataflow: dataflow is a serverless, fully managed processing service for executing apache beam pipelines for batch and real-time data streaming.
1174:39 the apache beam sdk is an open source programming model that enables you to develop both batch and streaming pipelines.
1174:48 using one of the apache beam sdks, you build a program that defines the pipeline, and then one of apache beam's supported distributed processing back-ends, such as dataflow, executes that pipeline.
1175:01 the dataflow service then takes care of all the low-level details, like coordinating individual workers, sharding data sets, auto scaling, and exactly-once processing.
1175:12 now, in its simplest form, google cloud dataflow reads the data from a source, transforms it, and then writes the data back to a sink.
1175:23 getting a bit more granular with how this pipeline works: dataflow reads the data presented from a data source, and once the data has been read, it is put together into a collection of data sets called a pcollection, which allows the data to be read, distributed, and processed across multiple machines.
1175:43 at each step in which the data is transformed, a new pcollection is created, and once the final collection has been created, it is written to a sink.
1175:53 this is the full pipeline of how data goes from source to sink, and this pipeline within dataflow is called a job.
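that source, transform, sink flow can be mimicked in a few lines of plain python. this is a conceptual sketch of the pipeline shape, not the apache beam sdk: each transform step produces a new collection (mimicking pcollections), ending with a write to the sink.

```python
def run_pipeline(source, transforms, sink):
    """conceptual sketch of a dataflow job: read, transform, write.
    each step yields a NEW collection, mimicking beam's pcollections."""
    pcollection = list(source)             # read from the source
    for transform in transforms:
        # every transform produces a fresh collection of elements
        pcollection = [transform(elem) for elem in pcollection]
    sink.extend(pcollection)               # write the final collection to the sink
    return sink

events = ["ORDER_CREATED", "ORDER_PAID"]   # a pretend streaming source
output = run_pipeline(
    source=events,
    transforms=[str.lower, lambda e: {"event": e}],
    sink=[],
)
print(output)  # [{'event': 'order_created'}, {'event': 'order_paid'}]
```

the real service adds what this sketch omits: the collections are distributed across many workers, and dataflow handles the sharding, scaling, and exactly-once bookkeeping for you.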
1176:03 and here is a high-level overview of what a dataflow job would look like when you involve other services within google cloud, put together in an end-to-end solution from retrieving the data to visualizing it.
1176:16 finally, when it comes to pricing, dataflow jobs are billed in per-second increments, so you're only charged for when you are processing your data.
1176:26 now, moving on to dataproc: this is a fast and easy way to run spark, hadoop, hive, or pig on google cloud.
1176:33 in an on-premises environment it takes 5 to 30 minutes to create spark and hadoop clusters; dataproc clusters take 90 seconds or less on average to be built in google cloud.
1176:48 dataproc has built-in integration with other google cloud platform services, and you can use spark and hadoop clusters without any admin assistance.
1176:56 so when you're done with a cluster, you can simply turn it off so you don't spend money on an idle cluster.
1177:03 as well, there's no need to worry about data loss, because dataproc is integrated with cloud storage, bigquery, and cloud bigtable.
1177:11 the great thing about dataproc is you don't need to learn new tools or apis to use it: spark, hadoop, pig, and hive are all supported and frequently updated.
1177:24 and when it comes to pricing, you are billed at one cent per vcpu in your cluster per hour, on top of the other resources you use, and you also have the flexibility of using preemptible instances for an even lower compute cost.
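as a rough illustration of that pricing model, here is a minimal fee sketch. the one-cent-per-vcpu-hour figure comes from the lesson; everything else (and current rates generally) should be checked against the dataproc pricing page, since the underlying compute engine resources are billed separately and are not modeled here.

```python
# illustrative sketch: models only the dataproc service fee, which is layered
# ON TOP of the underlying compute engine resources (not modeled here).
DATAPROC_RATE = 0.01   # dollars per vcpu per hour, as quoted in the lesson

def dataproc_fee(vcpus, hours):
    """dataproc service fee: one cent per vcpu in the cluster per hour."""
    return round(vcpus * hours * DATAPROC_RATE, 2)

print(dataproc_fee(16, 2))     # a 16-vcpu cluster running for 2 hours
print(dataproc_fee(100, 0.5))  # the fee is proportional, so partial hours count
```

the sketch also makes the cost argument from the lesson concrete: because the fee scales with cluster-hours, turning an idle cluster off immediately stops the meter.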
1177:41 now, although cloud dataproc and cloud dataflow can both be used to implement etl data warehousing solutions, they each have their strengths and weaknesses, and so i wanted to take a quick moment to point them out.
1177:54 with dataproc, you can easily spin up clusters through the console, the sdk, or the api, and turn them off when you don't need them.
1178:01 dataflow is serverless and fully managed, so there are never any servers to worry about.
1178:09 and when it comes to having dependencies on tools in the hadoop or spark ecosystem, dataproc would be the way to go; but if you're looking to make your jobs more portable across different execution engines, apache beam allows you to do this, and it is only available on dataflow.
1178:29 moving on to the next service: cloud datalab.
1178:32 cloud datalab is an interactive developer tool created to explore, analyze, transform, and visualize data, and to build machine learning models from your data.
1178:42 datalab uses open source jupyter notebooks, a well-known format used in the world of data science.
1178:49 it runs on compute engine and connects to multiple cloud services easily, so you can focus on your data science tasks.
1178:58 it also integrates with all of the google services that help you simplify data processing, like bigquery and cloud storage.
1179:06 cloud datalab is packaged as a container and run in a vm instance.
1179:11 cloud datalab uses notebooks instead of text files containing code: notebooks bring together code, documentation written as markdown, and the results of code execution, whether that's text, image, html, or javascript.
1179:25 like a code editor or ide, notebooks help you write code, and they allow you to execute code in an interactive and iterative manner, rendering the results alongside the code.
1179:39 cloud datalab notebooks can be stored in a google cloud source repository; this git repository is cloned onto a persistent disk attached to the vm.
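the notebook idea, code cells executed top to bottom in a shared namespace with each cell's result kept alongside its source, can be sketched in a few lines of plain python. this is an illustration of the concept only, not jupyter or datalab themselves.

```python
import ast

def run_notebook(cells):
    """tiny notebook sketch: cells share one namespace and run top to bottom;
    a cell ending in an expression shows that value as its rendered result,
    mimicking how jupyter displays output alongside the code."""
    namespace = {}
    rendered = []
    for source in cells:
        node = ast.parse(source, mode="exec")
        result = None
        if node.body and isinstance(node.body[-1], ast.Expr):
            # run everything but the final expression, then evaluate it
            exec(compile(ast.Module(node.body[:-1], []), "<cell>", "exec"), namespace)
            result = eval(compile(ast.Expression(node.body[-1].value), "<cell>", "eval"), namespace)
        else:
            exec(source, namespace)   # pure statements produce no rendered output
        rendered.append({"source": source, "result": result})
    return rendered

cells = run_notebook([
    "x = [1, 2, 3]",   # an assignment cell: no rendered output
    "sum(x)",          # an expression cell: its value is rendered alongside it
])
print([c["result"] for c in cells])  # [None, 6]
```

the shared `namespace` is the important bit: it is why later cells can use variables defined earlier, which is what makes notebooks interactive and iterative.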
1179:50 now, when it comes to prepping your data before consumption, whether it be data cleansing, cleaning, prepping, or alteration, this is where dataprep hits it out of the park.
1180:01 dataprep is a serverless, intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis, reporting, and machine learning.
1180:14 it automatically detects schemas, data types, possible joins, and anomalies such as missing values, outliers, and duplicates, so you don't have to.
1180:23 the architecture that i'm about to show you is where dataprep shines: the raw data that's available from various different sources is ingested into cloud dataprep to clean and prepare the data.
1180:38 dataprep then sends the data off to cloud dataflow to refine that data, and it is then sent off to cloud storage or bigquery for storage, before being analyzed by one of the many available bi tools.
1180:52 available bi tools now these big data services are used by many data analysts
1180:54 services are used by many data analysts in the field and it's great to know what
1180:57 in the field and it's great to know what services that can be used to help
1180:59 services that can be used to help process the data needed for their
1181:01 process the data needed for their specific job as well for the exam you
1181:04 specific job as well for the exam you only need to know these services at a
1181:06 only need to know these services at a high level and not to know them in depth
1181:09 high level and not to know them in depth but if you seem interested in diving
1181:11 but if you seem interested in diving into any of these services to know more
1181:14 into any of these services to know more about them i highly encourage you to
1181:16 about them i highly encourage you to dive in after the course and really take
1181:19 dive in after the course and really take a look at them and that's pretty much
1181:21 a look at them and that's pretty much all i have to cover in this lesson on
1181:23 all i have to cover in this lesson on the services that are available for the
1181:25 the services that are available for the big data ecosystem in google cloud so
1181:28 big data ecosystem in google cloud so you can now mark this lesson as complete
1181:30 you can now mark this lesson as complete and let's move on to the next one
1181:32 and let's move on to the next one [Music]
1181:36 welcome back.
1181:37 this lesson is going to be based on the foundation of machine learning.
1181:40 i'm going to go over what machine learning is, what it can do for us, and the machine learning ecosystem on google cloud, and hopefully answer any questions along the way.
1181:52 this lesson will be a high-level overview of the services available on google cloud, yet these services are a need-to-know as they come up in the exam, and hopefully this will give you some really cool ideas on the possibilities of building something truly fantastic on google cloud.
1182:13 so what is machine learning?
1182:17 well, machine learning is functionality that helps enable software to perform tasks without any explicit programming or rules.
1182:26 traditionally considered a subcategory of artificial intelligence, machine learning involves statistical techniques such as deep learning, also known as neural networks, that are inspired by theories about how the human brain processes information.
1182:44 it is trained to recognize patterns in collected data using algorithmic models, and this collected data includes video, images, speech, or text.
1182:54 and because machine learning is very expensive to run on-premises, the cloud is an efficient place for machine learning due to the use of massive computation at scale.
1183:08 and as explained before, machine learning is always better with big data.
1183:11 so now i wanted to touch on what machine learning can do for us.
1183:18 well, it can categorize images such as photos, faces, or satellite imagery.
1183:24 it can look for keywords in text documents or emails.
1183:28 it can flag potentially fraudulent transactions when it comes to credit cards or debit cards.
1183:35 it can enable software to respond accurately to voice commands, and it can also translate languages in text or audio.
1183:41 and these are just some of the common functions that machine learning can do for us.
1183:48 learning can do for us so getting into google's machine learning platform
1183:50 google's machine learning platform itself machine learning has been a
1183:53 itself machine learning has been a cornerstone of google's internal systems
1183:56 cornerstone of google's internal systems for years primarily because their need
1183:59 for years primarily because their need to automate data-driven systems on a
1184:02 to automate data-driven systems on a massive scale
1184:04 massive scale and doing this has provided unique
1184:06 and doing this has provided unique insight into the right techniques
1184:09 insight into the right techniques infrastructure and frameworks that help
1184:12 infrastructure and frameworks that help their customers get optimal value out of
1184:14 their customers get optimal value out of machine learning the originally
1184:16 machine learning the originally developed open source framework for use
1184:19 developed open source framework for use inside of google
1184:20 inside of google called tensorflow
1184:22 called tensorflow is now the standard in the data science
1184:24 is now the standard in the data science community in addition to heavily
1184:27 community in addition to heavily contributing to the academic and open
1184:30 contributing to the academic and open source communities
1184:31 source communities google's machine learning researchers
1184:34 google's machine learning researchers helped bring that functionality into
1184:36 helped bring that functionality into google products such as g suite search
1184:40 google products such as g suite search and photos in addition to google's
1184:43 and photos in addition to google's internal operations when it comes to
1184:46 internal operations when it comes to data center automation
1184:48 data center automation now here is an overview of all the
1184:50 now here is an overview of all the machine learning services that we will
1184:52 machine learning services that we will be covering and that you will need to
1184:54 be covering and that you will need to know
1184:55 know only at a high level for the exam and
1184:58 only at a high level for the exam and we'll start off with the site api
1185:00 we'll start off with the site api services
1185:02 services starting with the vision api
1185:04 starting with the vision api the vision api offers powerful
1185:06 the vision api offers powerful pre-trained machine learning models
1185:09 pre-trained machine learning models that allow you to assign labels to
1185:12 that allow you to assign labels to images
1185:13 images and quickly classify them into millions
1185:16 and quickly classify them into millions of pre-defined categories
1185:18 of pre-defined categories vision api
1185:19 vision api can read printed and handwritten text it
1185:23 can read printed and handwritten text it can detect objects and faces
1185:25 can detect objects and faces and build metadata into an image catalog
1185:28 and build metadata into an image catalog of your choice now when it comes to
1185:30 of your choice now when it comes to video intelligence
1185:32 video intelligence it has pre-trained machine learning
1185:35 it has pre-trained machine learning models that automatically recognizes
1185:38 models that automatically recognizes more than 20 000 objects
1185:41 more than 20 000 objects places and actions in stored and
1185:45 places and actions in stored and streaming video you can gain insights
1185:47 streaming video you can gain insights from video in near real time using the
1185:50 from video in near real time using the video intelligence streaming video apis
1185:54 video intelligence streaming video apis and trigger events based on objects
1185:56 and trigger events based on objects detected you can easily search a video
1185:58 detected you can easily search a video catalog the same way you search text
1186:00 catalog the same way you search text documents and extract metadata that can
1186:04 documents and extract metadata that can be used to index organize and search
1186:07 be used to index organize and search video content
1186:09 Now, moving on to the Language APIs, we start off with the Natural Language API, and this uses machine learning to reveal the structure and meaning of text. You can extract information about people, places, and events, and better understand social media sentiment and customer conversations. Natural Language enables you to analyze text and also integrate it with your document storage on Cloud Storage.
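To make this concrete, here is a hedged CLI example (the sample sentences are placeholders) showing the two most common Natural Language operations, sentiment and entity analysis:

```shell
# Illustrative only: analyze sentiment and extract entities from inline text.
gcloud ml language analyze-sentiment \
  --content="I love this product, but shipping was slow."
gcloud ml language analyze-entities \
  --content="Larry Page co-founded Google in California."
```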
1186:39 document storage on cloud storage now with the translation api it enables you
1186:42 with the translation api it enables you to dynamically translate between
1186:44 to dynamically translate between languages using google's pre-trained or
1186:48 languages using google's pre-trained or custom machine learning models
1186:50 custom machine learning models translation api
1186:52 translation api instantly translates text into more than
1186:55 instantly translates text into more than 100 languages for your website and apps
1186:59 100 languages for your website and apps with optional customization features
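As a hedged illustration of the dynamic-translation workflow, the v2 REST endpoint can be called directly; `API_KEY` is a placeholder for your own key:

```shell
# Illustrative only: translate a string from English to Spanish via the
# Translation API v2 REST endpoint. The response JSON carries the result
# under data.translations[0].translatedText.
curl -s -X POST \
  "https://translation.googleapis.com/language/translate/v2?key=${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"q": "Hello, world", "source": "en", "target": "es", "format": "text"}'
```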
1187:03 Following on, another grouping of machine learning services is the Conversation APIs. First up we have Dialogflow. Dialogflow is a natural language understanding platform that makes it easy to design and integrate a conversational user interface into your application or device; it could be a mobile app, a web application, a bot, or an interactive voice response system. Using Dialogflow, you can provide new and engaging ways for users to interact with your product. Dialogflow can analyze multiple types of input from your customers, including text or audio inputs, like from a phone or voice recording, and it can also respond to your customers in a couple of ways: either through text or with synthetic speech.
1187:53 with synthetic speech now with the speech-to-text api this api accurately
1187:57 speech-to-text api this api accurately converts speech into text it can
1188:00 converts speech into text it can transcribe content with accurate
1188:02 transcribe content with accurate captions and deliver better user
1188:05 captions and deliver better user experience in products through voice
1188:08 experience in products through voice commands going the other way from text
1188:10 commands going the other way from text to speech this api enables developers to
1188:14 to speech this api enables developers to synthesize natural sounding speech with
1188:17 synthesize natural sounding speech with over a hundred different voices
1188:19 over a hundred different voices available in multiple languages and
1188:22 available in multiple languages and variants text to speech
1188:24 variants text to speech allows you to create lifelike
1188:26 allows you to create lifelike interactions with their users across
1188:29 interactions with their users across many applications and devices and to
1188:32 many applications and devices and to finish off our machine learning segment
1188:35 finish off our machine learning segment i wanted to touch on auto ml automl is a
1188:39 i wanted to touch on auto ml automl is a suite of machine learning products that
1188:41 suite of machine learning products that enables developers with very limited
1188:44 enables developers with very limited machine learning expertise
1188:46 machine learning expertise to train high quality models specific to
1188:50 to train high quality models specific to their business needs in other words
1188:52 their business needs in other words using automl allows making deep learning
1188:55 using automl allows making deep learning easier to use and relies on google's
1188:58 easier to use and relies on google's state-of-the-art transfer learning and
1189:00 state-of-the-art transfer learning and neural architecture search technology so
1189:03 neural architecture search technology so you can now generate high quality
1189:04 you can now generate high quality training data and be able to deploy new
1189:07 training data and be able to deploy new models based on your data in minutes
1189:10 models based on your data in minutes automl is available for vision
1189:14 automl is available for vision video intelligence translation
1189:16 video intelligence translation natural language tables
1189:19 natural language tables inference and recommendation apis
1189:23 inference and recommendation apis now i know this has been a lot to cover
1189:26 now i know this has been a lot to cover for this machine learning lesson and the
1189:28 for this machine learning lesson and the ecosystem around it but is a necessity
1189:31 ecosystem around it but is a necessity for the exam and will also help you
1189:34 for the exam and will also help you build really cool products when it comes
1189:37 build really cool products when it comes to your role as an engineer again all
1189:40 to your role as an engineer again all the services that i have discussed in
1189:42 the services that i have discussed in this lesson should be known at a high
1189:45 this lesson should be known at a high level only although my recommendation
1189:48 level only although my recommendation would be to dive deeper into these
1189:50 would be to dive deeper into these services by checking out the links in
1189:53 services by checking out the links in the lesson text below and having some
1189:55 the lesson text below and having some fun with these products getting to know
1189:58 fun with these products getting to know these services will really help up your
1190:01 these services will really help up your game when it comes to getting to know
1190:03 game when it comes to getting to know these services a little bit more in
1190:05 these services a little bit more in depth and will really help you gain more
1190:08 depth and will really help you gain more momentum when it comes to building any
1190:10 momentum when it comes to building any applications or applying them to any
1190:13 applications or applying them to any currently running applications i
1190:16 currently running applications i personally found it extremely valuable
1190:18 personally found it extremely valuable and really cemented my knowledge when it
1190:21 and really cemented my knowledge when it came to machine learning i also had a
1190:23 came to machine learning i also had a ton of fun doing it and so that's all i
1190:26 ton of fun doing it and so that's all i have for this lesson on machine learning
1190:28 have for this lesson on machine learning so you can now mark this lesson as
1190:30 so you can now mark this lesson as complete and let's move on to the next
1190:32 complete and let's move on to the next one
1190:33 one [Music]
1190:37 Welcome back! In this lesson we'll be diving into a suite of tools on the Google Cloud Platform that allow you to operate, monitor, and troubleshoot your environment, known as the Operations Suite and previously known as Stackdriver. This lesson will be mostly conceptual and geared more towards what the suite of tools does, as it plays a big part not only in the exam but also in gaining insight from all the resources that exist in your environment. Now, there are a few tools to cover here, so with that being said, let's dive in.
1191:13 Now, the Operations Suite is a suite of tools for logging, monitoring, and application diagnostics. The Operations Suite ingests this data and generates insights using dashboards, charts, and alerts. This suite of tools is available for both GCP and AWS; you can connect to AWS using an AWS role and a GCP service account. You can also monitor VMs with specific agents that run on both GCP Compute Engine and AWS EC2, and the Operations Suite adds the ability to monitor any applications running on those VMs. The Operations Suite is also available for on-premises infrastructure and hybrid cloud environments. It has native integration within GCP out of the box, so there are no real configurations that you need to do, and it integrates with almost all the resources on Google Cloud, such as the previously mentioned Compute Engine, GKE, App Engine, and BigQuery. You can find and fix issues faster thanks to the many different tools, and the Operations Suite can reduce downtime with real-time alerting. You can also find support from a growing partner ecosystem of technology integrations to expand your operations, security, and compliance capabilities. Now, the Operations Suite comprises six available products that cover the gamut of tools you will need to monitor, troubleshoot, and improve application performance in your Google Cloud environment, and I will be going over these products in a bit of
1193:05 be going over these products in a bit of detail starting with monitoring now
1193:08 detail starting with monitoring now cloud monitoring collects measurements
1193:10 cloud monitoring collects measurements or metrics to help you understand how
1193:13 or metrics to help you understand how your applications and system services
1193:16 your applications and system services are performing giving you the
1193:18 are performing giving you the information about the source of the
1193:20 information about the source of the measurements time stamped values and
1193:23 measurements time stamped values and information of those values that can be
1193:26 information of those values that can be broken down through time series data
1193:28 broken down through time series data cloud monitoring can then take the data
1193:31 cloud monitoring can then take the data provided and use pre-defined dashboards
1193:35 provided and use pre-defined dashboards that require no setup or configuration
1193:38 that require no setup or configuration effort cloud monitoring also gives you
1193:40 effort cloud monitoring also gives you the flexibility to create custom
1193:43 the flexibility to create custom dashboards that display the content you
1193:45 dashboards that display the content you select you can use the widgets available
1193:48 select you can use the widgets available or you can install a dashboard
1193:50 or you can install a dashboard configuration that is stored in github
1193:52 configuration that is stored in github now in order for you to start using
1193:54 now in order for you to start using cloud monitoring you need to configure a
1193:57 cloud monitoring you need to configure a workspace
1193:58 Now, workspaces organize monitoring information in Cloud Monitoring. This is a single pane of glass where you can view everything that you're monitoring in your environment. It is also best practice to use a multi-project workspace, so you can monitor multiple projects from that single pane of glass. Now, as I mentioned earlier, Cloud Monitoring has an agent, and this gathers system and application metrics from your VMs and sends them to Cloud Monitoring. You can monitor your VMs without the agent, but you will only get specific metrics such as CPU, disk traffic, network traffic, and uptime. Using the agent is optional but recommended by Google, and with the agent you can also monitor many third-party applications.
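As a hedged sketch for a Debian/Ubuntu Compute Engine VM, installing the (legacy Stackdriver-era) monitoring agent follows the documented add-repo-then-install pattern:

```shell
# Illustrative only (Debian/Ubuntu VM): add Google's agent package repo,
# then install the monitoring agent from it.
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
sudo apt-get install -y stackdriver-agent
```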
1194:47 third-party applications and just as a note cloud logging has an agent as well
1194:51 note cloud logging has an agent as well and works well together with cloud
1194:53 and works well together with cloud monitoring to create visualize and alert
1194:57 monitoring to create visualize and alert on metrics based on log data but more on
1195:00 on metrics based on log data but more on that a little bit later cloud monitoring
1195:02 that a little bit later cloud monitoring is also available for gke and this will
1195:05 is also available for gke and this will allow you to monitor your clusters as it
1195:08 allow you to monitor your clusters as it manages the monitoring and logging
1195:10 manages the monitoring and logging together and this will monitor clusters
1195:12 together and this will monitor clusters infrastructure its workloads and
1195:15 infrastructure its workloads and services as well as your nodes pods and
1195:18 services as well as your nodes pods and containers so when it comes to alerting
1195:21 So, when it comes to alerting, this is defined by policies and conditions. An alerting policy defines the conditions under which a service is considered unhealthy; when these conditions are met, the policy is triggered, and it opens a new incident and sends off a notification. A policy belongs to an individual workspace, and each workspace can contain up to 500 policies. Now, conditions determine when an alerting policy is triggered, and all conditions watch for three separate things: the first is a metric, the second is a behavior of some kind, and the third is a period of time. Describing a condition includes a metric to be measured and a test for determining when that metric reaches a state that you want to know about. So, when an alert is triggered, you can be notified using notification channels such as email and SMS, as well as third-party tools such as PagerDuty and Slack.
1196:19 pagerduty and slack now moving on to cloud logging cloud logging is a central
1196:22 cloud logging cloud logging is a central repository for log data from multiple
1196:25 repository for log data from multiple sources and as described earlier logging
1196:29 sources and as described earlier logging can come not just from google but with
1196:32 can come not just from google but with aws as well as on-premises environments
1196:35 aws as well as on-premises environments cloud logging handles real-time log
1196:37 cloud logging handles real-time log management and analysis and has tight
1196:41 management and analysis and has tight integration with cloud monitoring it
1196:43 integration with cloud monitoring it collects platform system and application
1196:47 collects platform system and application logs and you also have the option of
1196:49 logs and you also have the option of exporting logs to other sources such as
1196:52 exporting logs to other sources such as long-term storage like cloud storage or
1196:55 long-term storage like cloud storage or for analysis like bigquery you can also
1196:58 for analysis like bigquery you can also export to third-party tools as well now
1197:01 export to third-party tools as well now diving into the concepts of cloud
1197:03 diving into the concepts of cloud logging these are associated primarily
1197:06 logging these are associated primarily with gcp projects so logs viewer only
1197:10 with gcp projects so logs viewer only shows logs from one specific project now
1197:13 shows logs from one specific project now when it comes to log entries log entry
1197:15 when it comes to log entries log entry records a status or an event a project
1197:18 records a status or an event a project receives log entries when services being
1197:20 receives log entries when services being used produce log entries and to get down
1197:23 used produce log entries and to get down to the basics
1197:25 to the basics logs are a named collection of log
1197:27 logs are a named collection of log entries within a google cloud resource
1197:30 entries within a google cloud resource and just as a note each log entry
1197:33 and just as a note each log entry includes the name of its log logs only
1197:35 includes the name of its log logs only exist if they have log entries and the
1197:38 exist if they have log entries and the retention period is the length of time
1197:41 retention period is the length of time for which your logs are kept so digging
1197:44 for which your logs are kept so digging into the types of logs that cloud
1197:46 into the types of logs that cloud logging handles there are three
1197:48 logging handles there are three different types of logs there are audit
1197:50 different types of logs there are audit logs transparency logs and agent logs
1197:54 logs transparency logs and agent logs now with audit logs these are logs that
1197:56 now with audit logs these are logs that define who did what where and when they
1198:00 define who did what where and when they also show admin activity and data access
1198:03 also show admin activity and data access as well as system events continuing on
1198:06 as well as system events continuing on to access transparency logs these are
1198:08 to access transparency logs these are logs for actions taken by google so when
1198:11 logs for actions taken by google so when google staff is accessing your data due
1198:14 google staff is accessing your data due to a support ticket the actions that are
1198:16 to a support ticket the actions that are taken by the google staff are logged
1198:19 taken by the google staff are logged within cloud logging now when it comes
1198:21 within cloud logging now when it comes to agent logs these are the logs that
1198:24 to agent logs these are the logs that come from agents that are installed on
1198:26 come from agents that are installed on vms
1198:27 vms the logging agent sends system and
1198:30 the logging agent sends system and third-party logs on the vm instance to
1198:32 third-party logs on the vm instance to cloud logging moving on to error
1198:34 cloud logging moving on to error reporting this looks at real-time error
1198:37 reporting this looks at real-time error monitoring and alerting it counts
1198:39 monitoring and alerting it counts analyzes and aggregates the errors that
1198:42 analyzes and aggregates the errors that happen in your gcp environment and then
1198:45 happen in your gcp environment and then alerts you when a new application error
1198:47 alerts you when a new application error occurs details of the error can be sent
1198:50 occurs details of the error can be sent through the api and notifications are
1198:53 through the api and notifications are still in beta error reporting is
1198:56 still in beta error reporting is integrated into cloud functions and
1198:58 integrated into cloud functions and google app engine standard which is
1199:01 google app engine standard which is enabled automatically error reporting is
1199:03 enabled automatically error reporting is in beta for compute engine kubernetes
1199:06 in beta for compute engine kubernetes engine and app engine flexible as well
1199:09 engine and app engine flexible as well as aws ec2 air reporting can be
1199:13 as aws ec2 air reporting can be installed in a variety of languages such
1199:16 installed in a variety of languages such as go java.net
1199:19 as go java.net node.js python php and ruby now moving
1199:23 node.js python php and ruby now moving into debugger this tool debugs a running
1199:26 into debugger this tool debugs a running application without slowing it down it
1199:29 application without slowing it down it captures and inspects the call stack and
1199:33 captures and inspects the call stack and local variables in your application this
1199:36 local variables in your application this tool debugs a running application
1199:38 tool debugs a running application without slowing it down it captures and
1199:41 without slowing it down it captures and inspects the call stack and local
1199:43 inspects the call stack and local variables in your application this is
1199:45 variables in your application this is also known as taking a snapshot once the
1199:48 also known as taking a snapshot once the snapshot has been taken
1199:50 snapshot has been taken a log point can be injected to allow you
1199:53 a log point can be injected to allow you to start debugging debugger can be used
1199:56 to start debugging debugger can be used with or without access to your
1199:58 with or without access to your application source code and if your repo
1200:01 application source code and if your repo is not local it can be hooked into a
1200:04 is not local it can be hooked into a remote git repo such as github git lab
1200:07 remote git repo such as github git lab or bitbucket debugger is integrated with
1200:10 or bitbucket debugger is integrated with google app engine automatically and can
1200:13 google app engine automatically and can be installed on google compute engine
1200:16 be installed on google compute engine gke
1200:17 gke and google app engine debugger is
1200:19 and google app engine debugger is integrated with google app engine
1200:21 integrated with google app engine automatically and can be installed on
1200:23 automatically and can be installed on gke debugger is integrated with google
1200:26 gke debugger is integrated with google app engine automatically and can be
1200:28 app engine automatically and can be installed on google compute engine
1200:31 installed on google compute engine google kubernetes engine google app
1200:33 google kubernetes engine google app engine and cloud run and just as a note
1200:36 engine and cloud run and just as a note installation on these products is all
1200:38 installation on these products is all dependent on the library and again
1200:41 dependent on the library and again debugger can be installed like trace on
1200:44 debugger can be installed like trace on non-gcp environments and is available to
1200:47 non-gcp environments and is available to be installed using a variety of
1200:49 be installed using a variety of different languages next up is trace and
1200:53 different languages next up is trace and trace helps you understand how long it
1200:55 trace helps you understand how long it takes your application to handle
1200:57 takes your application to handle incoming requests from users and
1201:00 incoming requests from users and applications trace collects latency data
1201:02 applications trace collects latency data from app engine https load balancers and
1201:06 from app engine https load balancers and applications using the trace api this is
1201:09 applications using the trace api this is also integrated with google app engine
1201:12 also integrated with google app engine standard and is applied automatically so
1201:14 standard and is applied automatically so you would use trace for something like a
1201:16 you would use trace for something like a website that is taking forever to load
1201:19 website that is taking forever to load to troubleshoot that specific issue
1201:21 to troubleshoot that specific issue trace can be installed on google compute
1201:24 trace can be installed on google compute engine google kubernetes engine and
1201:26 engine google kubernetes engine and google app engine as well it can also be
1201:29 google app engine as well it can also be installed on non-gcp environments and it
1201:32 installed on non-gcp environments and it can be installed using a variety of
1201:34 can be installed using a variety of different languages as shown here and
1201:37 different languages as shown here and coming up on the last tool of the bunch
1201:39 coming up on the last tool of the bunch is profiler now profiler gathers cpu
1201:42 is profiler now profiler gathers cpu usage and memory allocation information
1201:45 usage and memory allocation information from your applications continuously and
1201:48 from your applications continuously and this helps you discover patterns of
1201:50 this helps you discover patterns of resource consumption to help you better
1201:53 resource consumption to help you better troubleshoot profiler is low profile and
1201:56 troubleshoot profiler is low profile and therefore won't take up a lot of memory
1201:58 therefore won't take up a lot of memory or cpu on your system as well in order
1202:02 or cpu on your system as well in order to use profiler an agent needs to be
1202:04 to use profiler an agent needs to be installed profiler can be installed on
1202:07 installed profiler can be installed on compute engine kubernetes engine and app
1202:10 compute engine kubernetes engine and app engine as well and of course it can be
1202:13 engine as well and of course it can be installed on non-gcp environments and
1202:16 installed on non-gcp environments and profiler can be installed using the
1202:18 profiler can be installed using the following languages just go
1202:21 following languages just go java node.js and python and so just as a
1202:25 java node.js and python and so just as a note for the exam only a high level
1202:27 note for the exam only a high level overview of these tools are needed and
1202:29 overview of these tools are needed and so this concludes this lesson on a high
1202:32 so this concludes this lesson on a high level overview of operation suite so you
1202:35 level overview of operation suite so you can now mark this lesson as complete and
1202:38 can now mark this lesson as complete and let's move on to the next one