0:02 as you all probably know by now, Vercel
0:04 and I broke up. I still use them for a
0:05 lot of the things I'm shipping, but they
0:07 are no longer a channel sponsor. That
0:09 means I can talk about things they might
0:10 not have wanted me to talk about in the
0:12 past, and today we're talking about a big
0:15 one: how to not have a crazy Vercel bill.
0:17 I see a lot of fear around how expensive
0:19 Vercel is and these terrible bills that
0:21 float around online. I've had the
0:23 pleasure of auditing almost all of them,
0:25 as in I've dived into the code bases that
0:27 caused these huge bills, and I've learned a
0:29 ton about how to use Vercel right and, more
0:31 importantly, the ways you can use it
0:33 wrong. So I did something I have not done
0:36 before: I built an app to showcase just
0:39 how bad things can be. I did all of the
0:41 stuff that I have seen cause these
0:42 big Vercel bills, and we're going to go
0:45 through and fix them so that you can
0:46 find them in your own code base and
0:48 prevent one of these crazy bills. As I
0:49 mentioned before, Vercel has nothing to
0:51 do with this video; they did not sponsor
0:52 it. But we do have a sponsor, so let's
0:55 hear from them really quick.
1:05 this seems like the most innocent thing
1:07 in the world: you put a video in the
1:09 public directory, you put it in a video
1:12 tag, and then you go to your website. Now
1:14 the video is playing. This is great, right?
1:18 Totally safe, fine. Except that Vercel's
1:21 infra is expensive for bandwidth. I know
1:23 people look at it, and then they compare
1:24 it to things like Hetzner, and they're like,
1:26 "wow, Vercel charges so much for
1:28 bandwidth." The reason is that everything you
1:30 put in this public directory gets thrown
1:33 on a CDN, and good CDNs are expensive. The
1:34 reason you'd want things on a CDN is
1:36 that stuff like a favicon, which is
1:39 really, really small, is really, really
1:42 beneficial to have close to your users.
1:44 Even Cloudflare has acknowledged this
1:46 when they built R2, because R2, despite
1:49 being cheaper for hosting files, is much, much
1:52 slower than the CDN here. Because of that,
1:53 putting stuff in this folder is
1:55 expensive, and if it's something that you
1:58 can't reasonably respond with in a
2:00 single chunk of a request, it
2:02 shouldn't go in here. My general rule is:
2:05 if it's more than about 4 kilobytes, do
2:07 not put it in here. If you want the
2:09 easiest place to put it instead (we'll have a
2:11 small self-plug), throw it on UploadThing.
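The actual change is just the video source; a sketch, where the external file URL is a made-up placeholder:

```tsx
// Before: the file lives in /public, so every view is Vercel CDN bandwidth.
// <video src="/demo.mp4" controls />

// After: the file lives on an external host (placeholder URL below),
// so Vercel serves none of the video bytes.
export function DemoVideo() {
  return <video src="https://utfs.io/f/your-file-key" controls />;
}
```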
2:12 I'm going to go to my dashboard, we're
2:15 going to create a static asset host:
2:16 create app,
2:19 files,
2:25 upload. Go to the public folder, grab, drop,
2:29 upload. Now all I have to do is copy the file
2:31 URL,
2:33 go back here, and just swap the source
2:37 out. That's it. We just potentially saved
2:39 ourselves from a very, very expensive
2:41 bill, because we don't charge for egress
2:43 on UploadThing. So instead of
2:44 potentially spending thousands of
2:47 dollars, go drag and drop it into UploadThing
2:49 and spend zero instead. You can
2:52 also throw it on S3 or R2 or other
2:53 products all over the internet, but this
2:56 is the one we built for Next devs, and it
2:58 makes avoiding these things hilariously
3:00 easy. On the topic of assets, though, there
3:02 is one other edge case I see people
3:04 running into, and I made a dedicated
3:07 example for this. This page grabs a
3:09 thousand random Pokemon
3:11 sprites, and there are a lot of them, and
3:12 they take quite a bit to
3:15 load. This is doing something right that
3:18 I think is really, really important: we're
3:20 using the Next.js Image component, and this
3:23 is awesome, because if we were using our
3:25 own images, like the ones we had put in public,
3:27 instead of serving the 4 megabyte Theo
3:29 face, this could compress it all the way
3:31 down to like 3 kilobytes, depending
3:33 on the use case. But the way that Vercel
3:35 bills on the image optimizer is really
3:38 important to note. By default, on a free
3:40 plan on Vercel, you get a thousand image
3:43 optimizations, but then they
3:45 cost $5 per thousand. You get 5,000 for
3:47 free on the Pro tier, but that $5 per
3:49 thousand optimizations is not cheap,
3:52 and we made a couple mistakes in this
3:55 implementation. One is that we are just
3:57 referencing these files that are already
3:58 really small; the ones we're grabbing
4:00 from this GitHub repo, PokeAPI, these
4:01 are already small files. They don't
4:03 really need to be optimized; it's nice if
4:05 they're on the Vercel CDN, but it's not
4:07 necessary. The much bigger mistake we
4:10 made is how we indicate which images
4:12 we're cool with optimizing. You'll see
4:15 here that we're allowing any path from
4:18 githubusercontent, so if other people
4:22 are hosting random images on GitHub, they
4:25 could use your optimization endpoint to
4:27 generate tens of thousands of additional
4:29 image optimizations. And I want to be
4:30 clear about what an image
4:32 optimization is: if you were to rerender
4:34 these below at a different size, so we
4:37 were to change this to 200, a lot of
4:39 platforms will bill you separately for
4:40 the different optimizations. If we make a
4:42 version of this image that's 1,000
4:44 pixels wide and tall and a version
4:46 that's 200, you would pay for both, but on
4:48 Vercel you're only paying based on the
4:50 unique URLs. The important thing to make
4:52 sure you do right here is that you
4:54 configure the pathname to be more
4:57 restrictive. So the quick fix for this
4:59 one is pretty simple: you grab more of the
5:03 URL. So we go here and we say
5:07 /PokeAPI/sprites/master/**, and now this app will only
5:09 optimize images that come from PokeAPI,
5:12 so as long as this repo isn't
5:14 compromised, you're good. This also goes
5:17 for UploadThing, by the way: if you just
5:19 allow utfs.io here, which a lot of
5:23 people do, you've just set it up so any
5:25 image on UploadThing is optimizable
5:27 through your app. What you want to do is
5:31 use the /a/ style URLs, because these
5:33 URLs allow you to specify an ID that's
5:35 unique to your app. So, in the example I
5:37 gave earlier, if we were to use UploadThing
5:40 as the original host, the app ID
5:42 is just this part right
5:45 here, and now we can only optimize
5:47 images as long as they are coming
5:48 from my app, because this is the path for
5:50 files that are from my app, and you
5:52 cannot serve files from other people's
5:53 apps if you put the app ID in it like
5:55 this. So if you're using UploadThing and
5:57 you're also using the Next.js Image
5:58 component to optimize the images
5:59 UploadThing serves, please make sure you do it
6:00 this way. And if you want to change
6:02 the URLs over: the API will start doing
6:04 these by default soon, but if you're
6:05 doing this early enough that that
6:08 hasn't happened, copy the file URL, grab this
6:11 part, put that after here. So if we wanted
6:13 to put this optimized image on the
6:16 homepage, let's import the Image
6:20 component from next/image; the
6:25 source will be https://utfs.io/a/...
6:28 did it get that correct from my config?
6:30 It did. Look at that, good job Cursor. Now
6:32 that we've done this, I can grab an
6:35 optimized image from my host, which is
6:38 UploadThing. You don't have to pay for
6:39 somebody potentially going directly to
6:41 that URL, because we eat that with UploadThing,
6:42 and users are now getting a much
6:45 more optimized image sent down to them
6:47 instead of the giant 10 megabyte thing
6:48 that you might be hosting with UploadThing.
6:50 And you don't have to worry about
6:52 users abusing it, because if they don't
6:54 have the file in your service, they can't
6:56 generate an optimized image. This covers
6:59 a comically large amount of the bills
7:01 and concerns I've seen, so make sure
7:03 you're doing this. Optimize your images,
7:04 especially if you're still putting them
7:06 on Vercel for some reason, and ideally
7:08 take every single asset you have that is
7:10 larger than a few kilobytes and throw it
7:13 on a real file host, because Vercel's goal
7:16 is to make things really fast
7:17 when you put them in the public folder,
7:19 because if you put something there like an SVG
7:21 or a favicon, it needs to go really quick,
7:22 which makes it more expensive. But you
7:24 can even use Vercel's Blob product,
7:26 which is similar to UploadThing, R2, S3,
7:28 all of those; it immediately wipes these
7:30 costs out. Ideally they would introduce
7:33 something in the build system that flags
7:34 when you have large files here and the
7:36 potential risk. I might even make an
7:38 ESLint plugin that does this in the
7:40 future, but for now, just make sure you're
7:42 not embedding large assets in a way that
7:44 gets them hosted on Vercel. Thing one:
7:46 complete. Okay, that's just bandwidth, but
7:48 serverless is so expensive, you've got to
7:51 make that cheap too, so let's get to it.
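Before moving on, here's what both image-allowlist fixes from above look like together as a next.config.js sketch; the hostnames follow the walkthrough, and the UploadThing app ID is a made-up placeholder:

```javascript
// next.config.js (sketch): only sprites from the PokeAPI repo and files
// from one specific UploadThing app can go through the image optimizer.
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "raw.githubusercontent.com",
        // was "/**", which let anyone's GitHub-hosted image through
        pathname: "/PokeAPI/sprites/master/**",
      },
      {
        protocol: "https",
        hostname: "utfs.io",
        pathname: "/a/your-app-id/**", // placeholder app ID
      },
    ],
  },
};

module.exports = nextConfig;
```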
7:53 Let's say you made a blog, and you have a
7:57 data model that includes posts, comments,
8:00 and of course users. Both posts and
8:02 comments reference users, and you can see
8:04 how one might write a query that gets
8:06 all of these things at once. But let's
8:09 say you started with just posts, and you
8:11 made an endpoint that returns the posts.
8:13 Then you added comments, so you added a
8:15 call to your DB to get the comments. And
8:17 then you added users, so you added a
8:18 bunch of calls to grab the right user
8:20 info for all the things you just did. You
8:23 might end up with an API that looks a
8:26 little something like
8:29 this. Hopefully y'all can quickly see the
8:32 problem here; I was surprised when one of
8:33 those companies with a really big bill
8:37 could not. The problem here is we do this
8:41 blocking call, ctx.db.post.findFirst, to
8:44 get the post. Then we have the comments,
8:47 which we get using the post ID. Then we
8:49 have the author, which we also get using
8:52 the post ID, well, the post's user ID.
8:54 Then we get the users in the comments by taking
8:56 all the comments, selecting out the user
8:58 IDs, and selecting things where those match.
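That waterfall, sketched with stand-in async calls; `fakeQuery` fakes a 50 ms database round trip, and all the names here are hypothetical:

```typescript
// Four awaits, each blocking the next: ~200ms of wall time minimum.
const fakeQuery = <T>(result: T): Promise<T> =>
  new Promise((resolve) => setTimeout(() => resolve(result), 50));

async function getPostTheSlowWay() {
  const post = await fakeQuery({ id: 1, userId: 10 });  // blocks ~50ms
  const comments = await fakeQuery([{ userId: 11 }]);   // blocks ~50ms more
  const author = await fakeQuery({ id: post.userId });  // another ~50ms
  const commentUsers = await fakeQuery(                 // and another ~50ms
    comments.map((c) => ({ id: c.userId })),
  );
  return { post, comments, author, commentUsers };
}
```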
9:03 This is really, really bad. It is
9:06 hilariously bad, because let's say your
9:08 database is relatively fast and each of
9:10 these only takes 50 milliseconds to
9:12 complete: blocking for 50 milliseconds,
9:14 blocking for another 50, blocking for
9:16 another 50, blocking for another 50. This
9:20 is 200 milliseconds minimum of compute
9:23 for what should probably be a single
9:27 query. The dumb quick fix is to take
9:29 things that can be happening at the same
9:31 time and do them at the same time. So we
9:34 can grab comments and author at the same
9:37 time. A quick way to do this: make the
9:39 comments a promise, don't block for it; make
9:43 the author a promise, don't block for it.
9:44 Now these are both going at the same
9:46 time, and if we need the comments here,
9:48 which we do: const comments = await
9:51 commentsPromise, and now we have them. At
9:53 the very, very least, we took these two
9:54 queries and allowed them to run at the
9:56 same time. But we can do much better than
9:59 this; that was a real quick hack fix. If
10:00 you don't have dependencies, like if
10:03 these queries don't share data, you
10:05 could just run all of them at once in a
10:07 Promise.allSettled, but ideally we
10:11 would use SQL. I could write this
10:13 myself, but instead we're going to tell
10:18 Cursor to change this code so a single
10:22 Prisma query is made that gets all of
10:25 the data in a single pass using relations.
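What that single-pass query boils down to, sketched as Prisma-style query arguments; the model and field names are guesses from the description, not the app's real schema:

```typescript
// One round trip: the post, its author, and every comment with its user.
const postQueryArgs = {
  orderBy: { createdAt: "desc" },
  include: {
    user: true, // the post's author
    comments: {
      include: { user: true }, // each comment's user, in the same query
    },
  },
} as const;

// With a real Prisma client this would be something like:
//   const post = await db.post.findFirst(postQueryArgs);
```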
10:32 Look at
10:35 that: hilariously simpler. db.post.findFirst,
10:37 orderBy down here, but we're also
10:39 telling it to include the user, because
10:41 that's the author, as well as comments,
10:43 but in those comments we also want to
10:45 include the user, so we get all of the data
10:48 back directly here. They're cleaning
10:49 up because the data model I actually had
10:51 for this was garbage, but honestly, when
10:54 we get this back, we have post, which has
10:56 the user in it, which is the author (I
10:57 should probably have named that properly;
10:59 whatever, too late now), and we have comments,
11:00 which have users as well as the comment
11:03 data, all in one query. This means that
11:07 this request takes four times less time
11:09 to resolve. And I kid you not, in one of
11:11 those massive Vercel bills I saw,
11:14 requests were taking over 20 seconds, and
11:17 the average request had over 15 blocking
11:20 Prisma calls, most of which didn't need
11:22 data shared with each other. So a single
11:24 Promise.all cut their request times down
11:28 by like 90%, then using relations cut it
11:31 down another like 5%, and I got the
11:33 runtime down, in an Uber, in 30 minutes,
11:35 from over 20 seconds (the requests were
11:39 often timing out) down to like two, in
11:41 very little time, in an Uber, without
11:43 even being able to run the code. You need
11:41 very very little time in an Uber without
11:43 even being able to run the code you need
11:46 to know how to use a database and one of
11:47 my spicy takes is that forel's
11:50 infrastructure scales so well that
11:52 writing absolute garbage code like that
11:55 can function if you were using a VPS and
11:58 the average request took 20 seconds to
12:01 resolve I don't care how good bps's are
12:02 you wrote something terrible and your
12:04 bill is still going to suck or users are
12:06 going to get a lot more timeouts or
12:08 requests bouncing because the server is
12:10 too busy doing all of this stuff verell
12:13 did just add a feature to make the issue
12:15 here slightly less bad which is their
12:17 serverless servers announcement check
12:19 out my dedicated video on this if you
12:20 want to understand more about it the
12:22 tldr is when something is waiting on
12:25 external work other users can make
12:27 requests on the same Lambda so each
12:29 request isn't costing you money because
12:32 if these DB calls took 20 seconds then
12:34 every user going to your app is costing
12:37 you 20 seconds of compute with the new
12:38 concurrency model at the very least when
12:40 you're waiting on data externally other
12:43 users can be doing other things so it
12:45 reduces the bill there a little bit and
12:47 by a little bit I mean half or more
12:48 sometimes so it is a big deal especially
12:51 if you have long requests like if you're
12:54 requesting to an external API for doing
12:56 generation for example very good use
12:59 case for doing something like this if
13:01 you're waiting 20 plus seconds for an AI
13:03 to generate something for your users
13:05 paying the 20 seconds of waiting for
13:07 every single user sucks and this helps a
13:09 ton. There are other things we can
13:11 do to help with that, though. One of those
13:13 things (I didn't take the time to put it
13:16 in here) is queuing: instead of having
13:18 your server wait for that data to come
13:20 back, you can throw it in a queue and
13:23 have the service that's generating your
13:25 stuff update the queue when it's done.
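The pattern, independent of any particular provider, looks roughly like this in-memory sketch: the request enqueues a job and returns immediately, and a separate worker does the slow generation and records the result (all names here are hypothetical):

```typescript
type Job = {
  id: number;
  prompt: string;
  status: "queued" | "done";
  result?: string;
};

const queue: Job[] = [];
const jobs = new Map<number, Job>();
let nextId = 1;

// The request handler: enqueue and return right away, so no server
// sits around billing you for 20 seconds of waiting.
function requestGeneration(prompt: string): number {
  const job: Job = { id: nextId++, prompt, status: "queued" };
  queue.push(job);
  jobs.set(job.id, job);
  return job.id;
}

// The worker: runs elsewhere, does the slow part, updates the record.
async function runWorker(generate: (prompt: string) => Promise<string>) {
  for (const job of queue.splice(0)) {
    job.result = await generate(job.prompt);
    job.status = "done";
  }
}
```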
13:27 There are lots of cool services for this.
13:29 Inngest is one of the most popular; I've had a
13:30 really good experience with them. They
13:32 allow you to create durable functions
13:35 that will trigger the generation and
13:37 then die, and then, when the generation is
13:39 done, trigger again to update your
13:41 database. Really cool, and it avoids
13:43 those compute costs entirely. Another one
13:45 that I've been talking with more is
13:47 Trigger.dev: open source background
13:49 jobs with no timeouts. This lets you do
13:51 one of those steps where you're waiting
13:53 a really long time for DALL-E to generate
13:55 something without having to pay for all
13:58 the time your service spends
14:00 sitting there as this thing is being
14:02 generated. So if you do have requests
14:04 that have to take long amounts of time,
14:05 you should probably throw those in a
14:08 queue of some form instead of just
14:09 letting your servers eat all of that
14:12 cost. These solutions all help a ton, be
14:14 it a queue or the concurrency stuff that
14:15 Vercel is shipping. At the very least, you
14:18 should go click the concurrency button,
14:20 because it's one click and might save
14:22 you 80% of your bill. All of the things I
14:25 just showed assume that the compute has
14:27 to be done, but you don't always have to
14:30 do the compute; sometimes you can skip it.
14:33 Let's say, theoretically, this query took
14:36 a really long time: it didn't take 100
14:39 milliseconds, maybe it takes 10 seconds.
14:41 But also, the data that this resolves
14:43 doesn't change much. We can call things a
14:47 little bit differently: if we have const
14:51 cachedPostCall = unstable_cache(...) (a
14:52 stable version of this is coming very soon,
14:54 as long as Vercel gets their stuff
14:57 together before Next.js Conf), here I need to
14:59 import db. Now we have this function,
15:02 cachedPostCall. I should name this
15:03 better, because "post call" has a specific
15:07 meaning: cachedBlogPostFetcher. Now,
15:09 with the special cachedBlogPostFetcher
15:11 function, the first time it's called it
15:13 actually does the work, but from that
15:16 point forward, all of the data is cached
15:17 and you don't have to do the call
15:21 again. So if this call took 10 seconds,
15:24 now it's only going to take 10 seconds
15:28 the first time. This is a huge win, because
15:32 now future requests are significantly
15:34 cheaper, and if you can find the points
15:35 in your app where things take a long
15:37 amount of time and don't change that
15:41 much: huge win. But they do change
15:42 sometimes, and it's important to know how
15:44 to deal with that. So let's say we have a
15:46 leaveComment
15:49 procedure: it's a procedure where a user
15:52 creates a comment, so ctx.db.comment.
15:54 create, and we create this new comment.
15:57 Let's not return just yet, though; we'll
16:00 await this, const comment = that. But
16:03 now this old cache is going to be out of
16:04 date, and it's not going to show the
16:06 comment, because this page got fetched
16:08 earlier. That's pretty easy to
16:11 fix: all you have to do is, down here,
16:15 revalidateTag("post"), and now, since we
16:17 called revalidateTag with this tag,
16:20 Vercel is smart enough to know, okay, this
16:23 cache is invalid now, so the next time
16:24 somebody needs the data, we're going to
16:26 have to call the function again. But now
16:28 you only have to call this query, which
16:31 we are pretending is very slow,
16:34 once per comment. So when a user leaves a
16:36 comment, you run this heavy query, but
16:38 when a user goes to the page, you don't
16:39 have to, because the results are already
16:41 in the cache. We've just changed the
16:43 model from "every request requires this
16:46 to run" to "every comment being made
16:49 requires it to run", but then nobody else
16:50 has to deal with it from that point
16:54 forward. Huge change. A common one I see
16:56 is users who are calling their database
16:58 to check the user and get user data on
17:01 every single request. That is a database
17:03 call that is blocking at the start of
17:06 every single request you do. If instead
17:09 you cache that user data, then most of
17:11 those requests will now be instantaneous
17:14 instead of blocking on a DB call.
17:16 Huge change. So this will not only make
17:19 your bill cheaper, it'll also make the
17:21 website feel significantly faster: you
17:23 don't have to wait for a database to be
17:24 called in order to generate this
17:26 information. I'm seeing some confusion
17:28 about unstable_cache, so I want to call these
17:30 things out. This cache isn't running on
17:33 the client at all; the client has no idea
17:35 about any of this. Things like React
17:37 Query, things like stale-while-revalidate,
17:39 all of that stuff, for the most part, are
17:41 client-side things to worry about. This
17:44 is the server: the server is making a
17:47 call to your database to get this data,
17:48 and you are telling the server, when you
17:51 wrap it with unstable_cache, hey, once
17:52 this has been done, you don't have to
17:54 call the database anymore, you can just
17:56 take the result. This is kind of just a
17:59 wrapper to store this in a KV store
18:01 in Vercel's data center, or, if you
18:02 implement it yourself, wherever else. You
18:04 could do this yourself by writing the
18:06 function a little differently; I'll show
18:08 you what the DIY version would look
18:13 like: diyCachedBlogPost. The first thing
18:15 we have to do is check our KV, so I'm
18:19 assuming we have a KV: const kvResult
18:21 = await kv.get("post") (I
18:22 don't actually have a KV in here, so
18:23 ignore the fact that it's going to type error).
18:27 If kvResult, return kvResult; otherwise
18:33 we set the result and then we return it.
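Spelled out as runnable code, with a Map standing in for the hosted KV store; everything here is a sketch of the idea, not Vercel's actual implementation:

```typescript
// A stand-in KV and a stand-in slow query, to show the shape of the cache.
const kv = new Map<string, unknown>();
let dbCalls = 0;

async function slowBlogPostQuery() {
  dbCalls++; // pretend this is the expensive multi-second database call
  return { title: "hello", comments: [] };
}

async function diyCachedBlogPost() {
  const cached = kv.get("post");
  if (cached) return cached;        // cache hit: skip the database entirely
  const result = await slowBlogPostQuery();
  kv.set("post", result);           // store it for every later request
  return result;
}

// Invalidation is just deleting the key, like revalidateTag conceptually does:
function invalidatePost() {
  kv.delete("post");
}
```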
18:35 This is effectively what Vercel's cache is
18:36 doing; they have some niceties to make it
18:38 easier to interact with and invalidate
18:41 things on. I've DIYed things like this so
18:42 often in my life; Vercel gave us some
18:45 syntax sugar for it, but you can DIY this
18:47 yourself if you want to. I could rewrite
18:49 the unstable_cache function and just
18:51 throw things in a KV if I wanted to, but this
18:55 is using a store in the cloud to cache
18:57 the result of what this function returns,
18:59 so you don't have to call it again if
19:00 you already have the result. As you see
19:02 here, if we have the result, we just
19:04 return it from the KV; otherwise, we run
19:06 the other code again. I think that helps
19:08 clarify that one. All that said, if you
19:10 know anything about blog posts, you might
19:12 be suspicious of this example in the
19:13 first place, because you shouldn't have
19:16 to make an API call to load a blog post;
19:17 you should be able to just open the page
19:19 and have the blog post. And here's
19:21 another one of those common failures I
19:22 see; you might have even noticed it
19:24 earlier if you were paying close enough
19:27 attention: see this export const dynamic
19:29 = "force-dynamic" call here?
19:31 This forces the page that I'm on to be
19:34 generated every time a user goes to it.
19:36 This page doesn't have any user-specific
19:39 data. We have this API call, but this one
19:42 doesn't use any user-specific data. We
19:45 have this void getLatest.prefetch call,
19:47 which allows for data to be cached when
19:49 things load on the client side; we don't
19:51 even need that, though, we can kill it.
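The shape of the fix, sketched as a page file; the component content is made up, and the whole fix is deleting the export:

```tsx
// page.tsx (sketch). Deleting this one line is the fix:
// export const dynamic = "force-dynamic";

// With no dynamic APIs and no user-specific data fetching left,
// Next.js can prerender this route as static HTML at build time.
export default function HomePage() {
  return <main>Nothing here is user-specific</main>;
}
```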
19:54 Nothing on this page is user-specific, so
19:56 loading this page shouldn't require any
19:59 compute at all, but because we set it to
20:02 be dynamic, it will, and this whole page
20:05 is going to require compute to run on
20:07 your server every time someone goes to
20:09 it. If you have pages that are mostly
20:12 static, like a terms of service page, a
20:15 blog, docs, all of those things, it's
20:17 important to make sure the pages being
20:20 generated are static. Thankfully, Vercel
20:22 makes this relatively easy to check: if
20:25 you run a build, they will just show you
20:27 all of these details in the output, and
20:28 they don't just show it when you run the
20:31 build locally. I can also go to my Vercel
20:34 deployments and take a look. So we'll
20:37 hop into QuickPic, which is a service I
20:39 just put out, and in here we can take a
20:42 look at the deployment summary and see what
20:44 got deployed in what ways: we have the
20:46 static assets, the functions, and the ISR
20:48 functions, and it tells you which does
20:50 what. The more important thing, which is a
20:51 little easier in my opinion to
20:53 understand, is in the build output: it
20:55 shows you here each route and what it
20:59 means. The circle means static, the ƒ
21:01 means dynamic, and you want to make sure
21:04 all of your heavy things, like your pages
21:07 that should be static, are static, because you
21:09 want the user receiving pregenerated HTML;
21:11 you don't want to have a server spin up
21:14 to generate the same HTML for every user
21:17 when they go to every page. Back to image
21:18 optimization for a sec, 'cause I know I
21:20 showed you how to use it right, and as
21:22 long as you have fewer than 5,000 images,
21:24 honestly, you should probably use their
21:26 stuff; it is very good and very
21:28 convenient. But despite being pretty happy
21:30 with the experience of using the Next.js
21:32 Image component on Vercel, once you break
21:35 5,000 images, the price gets rough. That's why
21:37 the example loader configurations page
21:40 is pretty useful here. Frankly, I'm not
21:42 happy with either the pricing or the DX
21:44 around any of these other options, but
21:46 they are significantly cheaper if you
21:49 want to use them. Sometimes
21:50 they're more expensive, but for the most
21:51 part, all the options here are cheaper.
21:53 They have their own gotchas; I've been to
21:55 hell and back with Cloudflare's, to the
21:56 point where I'm putting out my own thing
21:57 in the near future. If you need to save
21:59 your money now, take a look through this
22:00 list and find the thing that fits your
22:03 needs the best, but in the future, image.
22:05 engineering is going to be a thing. I am
22:07 very excited about this project I've
22:08 been working on in the background for a
22:10 while; if you look closely enough at the
22:12 URLs on Pic Thing, you'll see that all of
22:15 the URLs on this page are being served
22:17 by image.engineering already. We're dog-
22:19 fooding it, we're really excited about
22:21 what it can do, and in the near future
22:23 you'll be able to use it too. So for
22:25 now, if you need to get something cheap
22:28 ASAP, go through the list here, or, if this
22:30 video has been out for long enough,
22:32 maybe just check the pinned comment; I'll
22:33 have something about image.engineering
22:35 if it's ready. But for now, use Vercel
22:37 until you break 5,000 images; if the bill
22:38 gets too bad, consider moving to
22:41 something like anything in this list, and
22:43 keep an eye out for when our really,
22:45 really fast and cheap solution is ready
22:47 to go, which will be effectively a drop-
22:49 in and have some really cool benefits as
22:53 well. So yeah. One last thing: there's a
22:56 tab in everyone's Vercel dashboard, for
22:58 everyone's Vercel deployments, that
23:01 seems very innocent: Analytics. You will
23:04 notice that I do not have it enabled;
23:06 there's a reason for that. These
23:08 analytics events are not product
23:09 analytics. If you're not familiar with
23:11 the distinction: product analytics are
23:13 how you track what a user does on your
23:15 site, so if you want to see which events
23:17 a specific user had, that's product
23:19 analytics; you track the journey of a user.
23:21 If you want to know which pages people
23:23 are going to, you want to have a count
23:24 of how many people go to a specific
23:27 page, that is web analytics. Web analytics
23:29 is like the old Google Analytics type
23:31 stuff; product analytics is things like
23:33 Amplitude and Mixpanel, the tools that let
23:35 you track what users are specifically
23:37 doing. My preference on how to set this
23:40 up is to use PostHog, and thankfully
23:41 they made a huge change to how they
23:43 handle anonymous users. They also made a
23:45 really useful change to their site, a
23:47 mode which hides all of the crap; it
23:49 makes it much nicer for videos, so thank
23:50 you to them for
23:53 that. But what we care about here is the
23:55 new pricing, where it is
24:00 0.005 cents per event, and that is the
24:01 most expensive tier, and the first
24:03 million are free. So you get a million
24:05 free events; the next million are at this
24:07 price, but if you're doing a million
24:08 events, you're probably doing two million
24:10 events, so this is the fairer number. We're
24:11 going to take this number and we're
24:14 going to compare it here: that is, at
24:18 100,000 events times this rate,
24:21 $3.43 versus 14 bucks. Pretty big deal
24:24 there. Interesting: apparently the web
24:27 analytics plus product has a cap on how
24:28 many events you can do a month, even on
24:30 the Pro plan. Enterprise can work
24:32 around it, but 20 million events is a
24:35 pretty hard cap; we could even get
24:38 close to that with UploadThing. So yeah,
24:40 not my favorite: certainly not at the $14
24:42 per 100,000 events pricing, and certainly
24:45 not for 50 bucks a month. Generally, I
24:49 recommend not using the Vercel analytics,
24:50 but if they do get cheaper in the future,
24:52 I'll be sure to let you guys know so you
24:54 can consider it. One last thing: if you
24:54 can consider it one last thing if you
24:57 are still concerned about the bill I
24:59 understand the thought about having some
25:01 massive multi thousand bill out of
25:04 nowhere is terrifying they have a
25:06 solution for that too spend management
25:09 you can set up a spend limit in your app
25:11 if you are concerned about the price
25:13 getting too
25:16 expensive you can go in to the spend
25:18 management tab in Billing and specify
25:20 that you only want to be able to spend
25:21 up to this much money and even manage
25:24 when you get notifications so if you are
25:25 concerned that usage will get to a point
25:27 where you have a really high Bill there
25:30 you go Bill handled it does mean your
25:32 service will go down so there's a catch
25:34 there but the reason this is happening
25:36 is either you implemented things really
25:39 wrong or your service went super viral
25:40 for what it is worth I have never
25:42 enabled this because the amount of
25:45 compute each requests cost for us is
25:47 hilariously low so even when we were
25:50 being dosed the worst Bill somebody
25:52 could generate was like 80 bucks after
25:54 spamming us for hours straight with
25:56 millions of computers because they found
25:58 one file on one of our apps that was
26:00 like 400 kilobytes so if you one things
26:02 well you almost certainly won't have
26:04 problems my napkin math suggested that
26:06 for us to have a $100,000 a month Bill
26:08 we'd have to have a billion users so
26:10 you're probably fine but if you are the
26:13 nervous type I understand go hit the
26:15 switch. I hope this was helpful. I know a
26:16 lot of y'all are scared about your Vercel
26:18 bills, but as long as you follow
26:20 these basic best practices, you can keep
26:23 them really low; our bill has been like
26:25 $10 a month for a while (not counting
26:28 seats), so it's not a big deal. I highly recommend
26:30 taking advantage of these things and
26:32 continuing to use things like Vercel. All
26:34 of these tips apply other places too;
26:35 it's not just Vercel, you can use these
26:37 same things to be more successful on
26:39 Netlify, Cloudflare, any other serverless
26:40 platform, and these things will also
26:43 speed up your apps if you're using a VPS. Build
26:44 your apps in a way that you understand,
26:46 and try your best to keep the complexity
26:49 down; in the end, the bill comes from the
26:51 things you shouldn't be doing. Until next time.