0:09 Yeah, hi everybody. This is the weirdest PromQL you'll ever see: PromQL for reporting, analytics, and business intelligence. Now, this is my first time at PromCon, and it's actually my first conference talk, so you'll have to forgive the somewhat baity title I put together when I was pitching this. We've seen some weird Prom today, right? And I was told about a legendary talk from five years ago that's gone down in history, so maybe this is only the second-weirdest Prom you'll ever see. Who knows. And yeah, here it is. If you want to go now, you can go away happy.
0:55 So yeah, my name is Sam Jewell. I'm a senior software engineer at Grafana Labs; I've been there three years now. When I'm not working, I'm working hard, or maybe not so hard, looking after my two boys. Here we're dressed up as daffodils, dressed up in yellow flowers.

1:16 When I am working, most recently I've worked on cost attribution. This is helping customers to split up their Grafana Labs bill, typically by label. For large customers this is pretty important: it means they can break down that huge bill and see who is sending what. It's also important for teams, because teams can then see their own contribution to that enormous company spend and start to take ownership of it. So it's pretty valuable, you know, in dollars. We dogfood this at Grafana, and in the last month or two we've cut metrics that were worth $25,000 every month. And we're just getting started; we plan to save even more over the coming months.

2:06 We're using Prometheus and Grafana to do this, which is interesting, because it's not an observability use case. We're storing the data in Prometheus and visualizing it with Grafana, and that's why I'm here talking today: to share how we're using Prometheus, and to say that you too can use Prometheus for non-observability use cases.
2:35 You could track things like DORA metrics, site activity or transactions, alerts or incidents, or, like us, you could track usage of metered cloud services and save money. And we heard yesterday from Swiss Re: they described one of their functions as being a data shop, with other teams consuming some of their data.
3:10 So why use Prometheus, and why not SQL, some SQL data warehouse? Well, you can avoid adding an additional data pipeline or database if you happen to have your data in Prometheus already. You can avoid exporting that data, which means you can avoid things like delays or stale data in some cases. And we can lean on the ecosystem as well: it makes it very easy to build dashboards or to alert on that data. So there are opportunities here. My goals for the talk are to equip you to do a bit more with Prometheus, to hopefully teach you something new about PromQL, and to have some fun doing it, if we can.
3:58 So, specifically, I've been looking a lot at usage data. That's things like the count of Prometheus time series, or bytes of telemetry data ingested: bytes of logs, traces, and so on. At Grafana we also track our consumption of cloud compute resources, how much CPU and memory we're using across all our Kubernetes machines, to attribute those costs and to manage them.

4:31 When we look at that usage data: Prometheus is powerful, and there's a reason we love it. It's super high frequency in the time dimension, so we can see spikes, zoom in on them, see exactly when they occurred, correlate, figure out what caused a spike, and address it. The Prometheus label model also allows us as high a cardinality as we want, or higher, sometimes. So we can break down that usage in the space dimension as well, track it down to an instance or a team, and address our usage spikes in real time, or close to it.

5:13 But my goal is something pretty different to that. It's to grok those costs, and to view the data over a much longer time period: to aggregate over time. Some examples of times when you might want to do that. At the end of the month you get sent your bill by your cloud provider for your cloud compute, and you might want to compare that to your own data and audit the bill they've sent you, then challenge the bill if it's wrong, or break down the bill even further than the provider allows you to with their own data. We might also want to grok these costs by day: see daily or weekly how these things are trending, and where the changes are coming from.

6:08 This one, I think, is quite an interesting view. It's a monthly view, and it's cumulative as well, and it allows you to compare across months: where the peaks are, whether this month was more than last month, which of the teams or which of the instances are responsible. You can also see some gradients, some rates: where it's picking up and where it's slowing down. So that's quite a rich view on the data.

6:40 And then over even longer time horizons: if you buy some credits up front at the beginning of the year and you're burning through them as the year goes on, you want to know, first, that you're not going to fail to spend them all and lose them once the 12 months run out, and equally that you're not going to burn through them all in six months and have to buy another contract. So yeah, you want to keep an eye on that burn rate.
7:19 All right, so let's start writing some PromQL and try to build some of these visualizations. We'll start with this daily one. Now, the easiest way is to start from a counter metric. Here I've got traces_size_total; that's in bytes, and you can see, if I take the sum of the rate, that in August we're seeing around 85,000 bytes coming in every second. The reason we've started with a counter is that this is the easiest case: you can just use increase, pass it a time period of one day, and it'll tell you how much that bytes counter increased over the day. So on the 17th of August you can see 7.4 billion bytes were sent, or ingested, over the last day. But this is continuous data, and it's actually more data than we want if we just want to grok this quickly, so we'll change the min step and switch to bars. And there we are: we've got a daily view. That's super nice; we can really quickly and easily understand it and share it around the business.
8:33 And if you want to do that in Prometheus itself, you can change the resolution input, which takes seconds, so just convert your day into seconds there.
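As a sketch, the daily query described so far might look like this (the metric name follows the talk's traces_size_total example; treat the exact names as illustrative):

```promql
# Bytes ingested in the one-day window ending at each evaluation
# point; render as bars with a one-day min step / resolution
sum(increase(traces_size_total[1d]))
```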
8:44 But there's something not quite right here. On the 16th of August, if you hover, you can see the exact time for that data point is midnight in the morning, and what it's done is calculate the increase in that counter over the past 24 hours. So all of those traces were actually recorded in the previous 24 hours, meaning the 15th of the month, not the 16th. We'd like to fix that, fix the x labels, so we're just going to add one more thing at this point, which is to offset by one day. You can see the bars move over, and they're now labeled correctly: labeled for when that usage happened.
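A sketch of that shift (the negative sign is my reading of the fix, so the bar lands on the day the usage occurred; negative offsets may need to be enabled on older Prometheus versions):

```promql
# offset -1d slides the one-day window forward, so the point
# stamped midnight on the 15th covers the 15th itself
sum(increase(traces_size_total[1d] offset -1d))
```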
9:29 All right, let's move on to something a bit more tricky. At Grafana Labs we often find that we're working with gauge metrics, not counter metrics. Specifically rates, rate metrics, because we use recording rules to aggregate and relabel a lot of this data; there are too many labels and it's too high cardinality. So we end up with metrics that are more like this: bytes received per second. In this case we want to get to the same daily data, but because it's a rate, we're going to have to integrate the area under the curve.

10:14 Now, interestingly, there's an open pull request: a colleague of mine at Grafana Labs proposed this around July and August, so I'm interested to see where this goes. I really want to see this come in. But in the meantime, there are some tricks to getting this to work properly.
10:36 If we look at the docs, we can track down sum_over_time, which takes a range vector and returns the sum of all values in the specified interval. Let's have a look at how that pans out in practice. If we take this example from 16:00 to 16:01 and pass one minute into sum_over_time, it's going to do what we just said: sum all the values in the specified interval. In this case the values are 5, 5, 10, and 10, so we get 30. Okay, that's what it's giving us for the bytes received.
11:14 But the trouble is that bytes received per second is exactly that: per second. What it's telling us is that five bytes were received in that one second, five bytes in that second, ten in that second, and ten in that second. So we've summed up the bytes received over four seconds, and not over a minute. We'll fix that by multiplying by the time period: we'll multiply by 15. This isn't in the docs; I've opened a pull request, so we'll see what happens there. But in order to get a true integration, taking account of all that time, we have to multiply by the time period, or scrape interval.
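Putting that together, with bytes_received_per_second as a stand-in metric name and 15s as the assumed gap between data points:

```promql
# Each sample is a per-second rate; multiplying the sum of the
# samples by their 15s spacing approximates the area under the
# curve, i.e. total bytes over the day
sum_over_time(bytes_received_per_second[1d]) * 15
```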
11:53 Now, this isn't completely robust yet. If a scrape is missed, or a recording rule fails, you might lose a data point; you might lose two. Or the scrape interval changes, or your frequency of data changes. We can fix that by adding a subquery: when you execute a subquery, you can guarantee that it evaluates at specific intervals in time, and then we cover the whole time range again. Great, so we can run that, see the curve again, and present it as days.
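A sketch of the subquery form, under the same assumed names:

```promql
# The [1d:15s] subquery pins evaluation to a fixed 15s step, so a
# one-day window always contributes the same number of points,
# regardless of missed scrapes or a changed scrape interval
sum_over_time(bytes_received_per_second[1d:15s]) * 15
```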
12:29 Awesome, we got there. So we'll do another one: let's have a look at this monthly view now. You'd hope that you could just pull out where you had inserted one day and stick in one month. That doesn't work; we get a parsing error, "bad number or duration". Presumably this is because months are different lengths, right? What would one month be translated into? So Prometheus doesn't accept month time durations. In some cases, perhaps, you could get away with 30 days, but it's nice to have data for calendar months, because if you look at it two weeks later, a month later, even six months later, it won't have changed; it won't have moved around.
13:29 So how are we going to do that? Well, there are some functions, some of which you've seen already today, actually: month, day_of_month, days_in_month, and others like day_of_week, for example. There are loads in the docs. I was just going to show what these actually do. month will give you the value of the month index: here you can see it takes 7 during July and 8 during August. You can then apply a condition if you want to: you could say the month has to equal 8, and you'll see no data for July and for September. So this might have some potential for us. I thought I'd also quickly show day_of_month, which climbs from a value of 1 on the 1st to 31 on the 31st. Again, you could apply a condition, and use that to lift data just at the start of the month, or just at the end of the month, for example.
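For reference, the conditions being described look roughly like this:

```promql
# month() returns the calendar month (UTC) at each evaluation
# timestamp: 7 during July, 8 during August, and so on
month()

# Keep data only while the month is August
month() == 8

# day_of_month() climbs from 1 to 28/29/30/31; keep only the 1st
day_of_month() == 1
```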
14:32 But yeah, I'm interested in trying to isolate the August data to begin with, and I've got this synthetic metric. [...] The idea is to take the intersection of this synthetic metric with my actual usage metric, and to use the synthetic metric as a filter, basically. Intersection would be the and set operator, but the set operators don't allow the group_left or group_right modifiers we'll want for the label handling here. People have talked about joins earlier today as well. So what we're going to try instead is multiplying these together. If we're going to multiply them, we want to scale this: we want it to just have a value of one, really. We don't want it to have a value of eight, and we don't want to divide by eight and hardcode some value there, because it'll be wrong for the other months.
15:34 So the way we're going to do this is to introduce absent. absent always has a value of one, which is nice in that regard. When we put month == 8 into absent, you can see: okay, the metric we had before was absent, and that's why we get the value one in July and in September. Then we can reverse the condition. If we do that, we're saying: okay, what about when the month was not equal to eight, where is that absent? It's absent in August. Fantastic.
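So the August-only filter sketched here is:

```promql
# month() != 8 has a value in every month except August (the
# comparison filters August out), so absent() of it is 1, with no
# labels, only during August
absent(month() != 8)
```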
16:03 So now let's try to multiply this by our usage metric. First of all, that's the usage metric; fine. And if we try to just multiply, we get no data. If we check the docs, we see that a binary operator between two instant vectors is applied to each entry in the left-hand side and its matching element in the right-hand side. And what is a matching element? For one-to-one vector matches, two entries match if they have the exact same set of labels. So presumably we must not have the exact same set of labels. Let's have a look: our real metric on the left has labels like cluster_id or id, and on the right we have no labels. That's why it's not working.
16:53 To begin with, let's simplify a bit what we've got on the left. I'll just do a sum by id, to reduce the number of labels I'm working with. That takes me down to one, but I don't want to throw them all away. I want to keep this one because, as I said earlier, we want that data in the cardinality direction, the space direction: we want to know which teams, which pods, which instances. Then, once we've done that, we'll multiply, and this time we'll use ignoring. And there it is: we've managed to filter a usage metric to a specific month. It's working. There's one more thing I want to do before we move on: here the id label is not preserved, and you can see the legend just shows the query string. By adding group_left to the join logic, we can preserve the id and get it to come out in the result as well.
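A sketch of the resulting join (metric and label names are illustrative):

```promql
# ignoring(id) leaves empty label sets on both sides, so they
# match; group_left lets many usage series (one per id) match the
# single filter series and carries the id label into the result
  sum by (id) (bytes_received_per_second)
* ignoring (id) group_left
  absent(month() != 8)
```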
17:55 Okay, so this is still a rate, still bytes per second. We need to integrate underneath the curve again to get this cumulative usage and calculate the bytes received. So we'll try what we did before: sum_over_time, this time using 31 days as the period, and multiplying by 60. This time 60 is the scrape interval, the period between data points. This doesn't work yet; it wants a range vector. So, to give it a range vector, we'll do the same thing we did before: we'll turn this into a subquery, and we'll make sure that the subquery evaluation interval of one minute matches the 60, so that we get the right numbers.

18:49 And there we go: we've got an accumulating usage during the month of August. We can see what the total was at the end of August, but also how it was changing during the month. And this works mid-month too, which is nice: you load this up on the 16th of August and you can see where you are.
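So the cumulative August query might look like this (same illustrative names as before; the one-minute subquery step matches the multiplication by 60):

```promql
# Integrate the month-filtered rate over a 31-day lookback;
# the [31d:1m] subquery evaluates every 60s, hence the * 60
sum_over_time(
  (
      sum by (id) (bytes_received_per_second)
    * ignoring (id) group_left
      absent(month() != 8)
  )[31d:1m]
) * 60
```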
19:08 But what's going on in September? Well, let's zoom out and have a look. If you look at September, you'll see the same usage pattern in reverse; it's been flipped upside down. We were accumulating usage during August, and we're de-accumulating that same usage in September. Why is that happening? It's because when we sum over time, the time window we're using is 31 days long. So even on the 30th of September, we look back 31 days and see that last day of August still being counted toward the usage calculation. We're not interested in that September pattern at all: when we get to the end of August, we basically pay the bill and start from zero again. So let's try to throw that away. This is where it gets a bit more fun, or hacky, you know.
20:02 But let's push forwards. We'll just apply the exact same filter again, and cut the data off at the end of August. And there we have it: an accumulating usage for August.
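Roughly, that means wrapping the cumulative query in the same month filter once more:

```promql
# The outer filter drops every point whose own timestamp falls
# outside August, so the 31-day lookback can no longer leak the
# August total into September
  (
    sum_over_time(
      (
          sum by (id) (bytes_received_per_second)
        * ignoring (id) group_left
          absent(month() != 8)
      )[31d:1m]
    ) * 60
  )
* ignoring (id) group_left
  absent(month() != 8)
```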
20:22 Now we can start adding some more months, so let's try to add July. To add July, we'll use the or operator; this is the union. We've got one query at the top and one query at the bottom, and in the bottom one I've swapped out the number eight for a seven, so now I want month seven. When we union those two together, the labels are going to be identical, so it's going to stick them together in time.
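Schematically (shown here on the simple filtered rate; in the talk each arm is the full cumulative expression):

```promql
# Both arms carry identical label sets and are disjoint in time,
# so `or` stitches July and August into one series
    sum by (id) (bytes_received_per_second)
  * ignoring (id) group_left
    absent(month() != 8)
or
    sum by (id) (bytes_received_per_second)
  * ignoring (id) group_left
    absent(month() != 7)
```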
20:56 So there we are: we've got July. Now, I did promise weird PromQL, and I've got a kind of code-golf version for all months. If you want 12 months, you could do something like this: modulo, divide by three, and compare it to 0, 1, or 2. I'm not very proud of this, although it is what's running in the code back home; I'm going to change it when I get back to my desk. Why three: the reason you need three rather than two is because February is too short, and the other short months.

21:33 I'm slightly happier with this one. I think it's slightly more predictable and possible to reason about: copy it 12 times over. I came from a background in Ruby, and there's a great conference speaker called Sandi Metz. She wrote a book called 99 Bottles of OOP that I absolutely love, and she talks about getting to "Shameless Green": Shameless Green is the version that passes the tests, and it's shameless. [...] Anyway, it works: you can see June has appeared and September has appeared, and that's fantastic.

22:15 And just to put the icing on the cake: this is scoped to a single id, here id "bar". If we unscope from a single id and match a wider set of ids instead, now we can see that rich view of the data we saw before. So yeah, this is super interesting and potentially really [...]
22:44 Oh, I can see I've got a little demo, I think, so you can see that this isn't just a picture. [...] Right, I was just going to zoom out a bit and interact a little, and you can see that it [...] and we can show and hide, focus on a single team, and share that URL with our colleagues. [...]
23:56 So, I showed this at the start as well. Unfortunately, I haven't really got time now to cover this one, so I thought I'd say: this could be your homework task, if you want to go away and have a go, or you can reach out to me afterwards if you want to. Anyway, that's it. I hope I've equipped you to do a little bit more with Prometheus, and maybe taught you something new about PromQL, and that we've had a bit of fun doing it. That's it. Thank you, everybody, for listening, and thanks to the people who have [...]
24:37 [Host] That wasn't too weird after all. Or was it? [...]

24:49 Q: Hi, thank you for the talk. Just a quick question: have you considered how you are going to deal [...]
25:05 A: I think... I mean, I guess I have some kind of an answer. The beauty of using that month operator is that it knows when the month begins and when the month ends. That's why I'm using it, right, instead of using 30 days. So yeah, I'd hope that this would help me get there, and that I wouldn't have to think about it.
25:36 Q: Can you explain again why you had to use ignoring instead of on with empty parentheses for the join to work?

A: It might be that I could have used on and then empty brackets.

Q: Yeah, that's the form I'm used to seeing, so I'm wondering if there's a specific reason why it had to be different.

A: No. So, typically, you've got a set of labels; you might have, say, three labels. If you want to join on one of them, then you can either say on that one, or ignoring the other two, and both do exactly the same thing.
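For example, with hypothetical labels id, cluster, and team on the left-hand side, these two spellings of the same join are equivalent:

```promql
# Match on id only ...
left_metric * on (id) group_left right_metric

# ... or, equivalently, ignore everything that is not id
left_metric * ignoring (cluster, team) group_left right_metric
```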
26:30 Q: Yeah, so thanks for the talk, very interesting. I have a comment and a question. The comment is that it looks like this ignoring/absent code could be replaced by scalar. Did you consider the scalar function?

A: By what?

Q: By the scalar function. That's the first part, just a comment. And the question: did you consider time zones? Because, as far as I know, the month function returns its result in the UTC time zone. I'd recommend adding time zone support to this query.
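The suggested scalar() variant would look something like this sketch (same illustrative names as before). One behavioral difference to note: when the absent() vector is empty, scalar() returns NaN, so out-of-month points become NaN rather than disappearing:

```promql
# scalar() converts the single-element filter vector into a plain
# number, sidestepping label matching entirely
sum by (id) (bytes_received_per_second) * scalar(absent(month() != 8))
```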
27:15 A: That's a really good question, yeah. Through my work on this (and someone asked about leap years, too) I'm often hitting time zone issues, and they manifest themselves most when daylight saving occurs, actually. I'll get January, February, March, and then March again, and then April, May, June, or something; or I'll get a gap. So I've discovered and reported quite a few bugs, or at least one, probably two or three, in Grafana, around time zones and the way they're handled.

27:58 But basically, we build dashboards for our customers that show their bills, in a billing dashboard in Grafana, and we're also building Grafana app plugins that show how their bills break down by team. The bills are calculated in the UTC time zone, so as long as we are able to always work in UTC, we can show them their actual bill and label it with the right month. And it's not relevant to them to see their bill in their local time zone, because it won't actually match up with what we're billing them. So mostly, yeah, we just work in UTC for this.
28:40 [Host] UTC for the win. A quick announcement: the first two lightning talk speakers, Kieran and Christian, please come up to the front so we can get started right after this.

28:53 Q: Hello, thank you for the talk, it was really amazing. Just a, maybe, dumb question: if it's only for dashboarding, wouldn't it work to have a dashboard variable using the days_in_month function, and to use the value returned by that expression in the query?

A: The days in which month, though? Surely you're looking at multiple months at once on some versions of the dashboard.

Q: Oh yeah, I mean, for example, creating a variable using the days_in_month function, providing maybe an up time series or something like that just to get the month it's going to be evaluated in, and then using the return from that function as the range selector. Does that work, or not really?

A: Yeah, I think those kinds of solutions can work. We've done some pretty gnarly things as well with dashboard variables. And yeah, if you had a dashboard that's viewing one specific month, for example, then yeah, for sure.

Q: Okay, thank you.
30:06 [Host] I have one question, if I may, on your left.

Q: Hello, amazing talk. I'm just curious: you do incredible analytic queries using PromQL, and typically, you know, data analysts prefer maybe SQL and other languages. So I'm curious what you've been missing in PromQL that you could do more easily in SQL. What could we add?

30:35 A: Yeah, we had a hackathon project... So, before I joined Grafana, I'd never worked in observability. I didn't know what a time series database was, or what logs were, basically. I'd been using a tool called Looker, which got acquired by Google; it's a BI tool, and I loved it as a dashboarding tool. I was like, oh, I love this dashboarding. Little did I know I'd end up working on Grafana, on dashboarding. Anyway. What there was in the SQL world: you could have materialized views. You're writing these subqueries and subqueries and subqueries, but often, if you've got a subquery, you want to give it a name and make it a source of truth for your business.

[Host] Like a recording rule?

A: Exactly, exactly. But there were different versions of this: you could have materialized views, or you could have views that were not materialized. Recording rules are like materialized views, but we don't [...] I think VictoriaMetrics' MetricsQL has variables, maybe, for a similar kind of purpose. So yeah, there's some room to [...]
32:16 [Host] [...] Okay, then thank you again for the weirdest talk ever.