0:01 Literally for the first time in history,
0:03 if you have 20 bucks a month, you're
0:05 able to do things that only gigantic
0:07 companies would be able to do just by
using ChatGPT. All of this, just for
0:11 context, would have never been possible
pre-AI because this is where true
0:15 intelligence [music] is being layered on
0:16 top of the data that we're giving it.
0:19 >> I think so many people fail at trying to
0:22 implement AI because they
>> I'm Alex Lieberman, joined by my
0:26 co-founder Arman Hzarani. We're the
0:29 co-founders of 10X. We help companies go
from AI apps to AI native, build custom solutions, and do enablement and trainings to train people on the
0:37 technology that we built for them.
0:39 You're probably wondering, why is he
0:41 wearing this hat? I'll tell you why. I'm
0:43 in New York City. It's like 30°. I don't
0:45 want to freeze my you know what off. But
0:47 this isn't about me. It's about you. You
0:49 have given us your time watching this
0:51 video and we wanted to make it as easy
0:52 as possible for you. So, we turned this
0:54 whole episode into a playbook on our
0:56 website. Just click on the link in the
0:58 description and you can get the whole
1:00 playbook with prompts, with steps, all
1:02 the details you need to apply this to
1:05 your work. Check it out. The first thing
1:07 that I'll say is, and this is something
1:10 we say all the time at 10x, AI is an
incredible technology, but it is just that: it is a technology. And when we
1:16 think about technology, we think of
1:20 technology as a tool that helps to solve
human beings' problems. And that's also
1:25 why we don't think AI is always the
1:27 right solution. We think AI is a hammer.
1:28 When you have a nail, it's great. When
1:31 you have a screw, go get a screwdriver.
And so the entire way I want to
1:36 frame this conversation is around what
1:38 is the problem we're facing at 10x? What
1:40 is our hypothesis? And then how are we
building solutions to solve that
1:46 problem. So the problem is very simple.
1:48 We are in a client services business at
1:52 10X. We have several customers and next
1:54 year we will have even more than
several. And as Arman and I grow our organization, we want to have a
2:04 consistent pulse on the happiness of our
2:06 customers. We want to have a pulse on
how our customer success team and our
2:11 strategists are working with customers.
But the question is, how do you do
2:15 that as you scale to hypothetically
2:18 hundreds of clients? And our hypothesis
2:21 is that AI makes it more possible than
2:24 ever before to maintain that pulse and
2:26 deliver truly six-star service to
2:29 customers as you scale. And so this
workflow is not only going to provide you kind of a framework for how we
2:37 approach building solutions in general,
2:40 but also how you can deliver great
2:42 customer success and keep a pulse on it
2:46 as you scale your organization.
Okay. So, I want to refer back: we did an episode with Wade Foster,
2:58 the CEO of Zapier, several weeks ago,
3:01 and he used this visual and if you
3:03 joined that episode, you have seen this
3:04 visual, but I think it's really
important to reiterate. So basically, what this is showing is the spectrum of processes in a business. And
3:17 the idea is on the far left you have
3:20 what he calls determinism. And
3:23 determinism is the traditional way that
3:25 technology has worked to enable
processes. Very simply: if X happens, do Y. If A happens, do B. It is
3:34 deterministic. It's finite. It is black
3:36 and white. What happens on the fully
3:38 opposite side of the spectrum is
3:41 inference. And inference is just another
3:45 way of describing the power of AI and
what AI is capable of. And the beauty of AI is I could go tell ChatGPT, hey, I want you to build a
travel itinerary for me, and my rough preferences are: I like adventures, I like staying in four-star hotels, and
4:04 I want it to be a place that I've never
4:06 been before and here are the places I've
been. Just go build something.
4:10 Deterministic technology could have
4:12 never done that. AI is capable of doing
4:14 that because it is more intelligent and
the technology is probabilistic.
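The deterministic-versus-inference split can be sketched in a few lines of Python. This is a hedged illustration, not code from the episode: the rule table, function names, and the OpenAI call details are all assumptions.

```python
# Deterministic: the same input always maps to the same output.
# "If X happens, do Y" -- a fixed rule table.
def route_request(kind: str) -> str:
    rules = {"refund": "finance_queue", "bug": "support_queue"}
    return rules.get(kind, "general_queue")

# Probabilistic (inference): hand the model an open-ended goal and let it
# fill in the gaps. Hypothetical sketch; requires the `openai` package and
# an API key, and the model name is illustrative.
def build_itinerary(preferences: str) -> str:
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Build a travel itinerary. Preferences: {preferences}",
        }],
    )
    return response.choices[0].message.content
```

The first function returns the same answer every time; the second can return a different itinerary on every call, which is exactly the trade-off the spectrum describes.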
The reason I think this diagram is really helpful is that while everyone has said 2025 was the year of AI agents, I would actually argue that
4:30 most of the solutions that companies
4:32 should be thinking about building as it
4:34 relates to AI right now are either
4:36 something you would call an AI workflow
4:39 or an agentic workflow. And what that
4:41 basically means is if you have a
4:43 process, let's say you have a 10-step
4:45 process, the majority of that 10-step
4:48 process is still going to use technology
4:50 and automations in the way that they've
4:51 always been used, which is in a
4:53 deterministic fashion. If this happens,
4:57 do this. And the idea is that as AI gets
4:59 better, more and more of this
5:01 intelligence, more and more of this new
5:03 technology will be able to be sprinkled
5:06 into steps of the process where it makes
5:07 sense to use a technology that for all
5:10 intents and purposes behaves like a
5:12 junior employee who is really motivated,
5:14 really smart, but can be forgetful and
5:17 go off the rails sometimes. So that is
5:18 why I think it's really important to
5:20 understand this graphic because I do not
5:22 think it's realistic for people to think
5:24 that you're just going to build AI
5:25 products, you're going to tell it to do
something, you're going to go grab
5:30 coffee, grab lunch, come back to it, and
5:32 it's done. Humans are still very much in
5:34 the loop of this entire thing. Arman,
5:37 anything you would add?
5:39 Yeah. I mean, [clears throat]
I think that if you ever catch yourself thinking about AI in this net-new magical way, if you ever start talking about it in all the marketing terms that Sam Altman and Dario use, it's very important to come back to earth. I think so many people fail at trying to implement AI because they immediately go all the way to the agent side here and they try to one-shot everything. And I
6:14 think that it's really really important
to note that if I told you that you can get ten 5% improvements across your business by automating one step of the process using AI, that historically would be game-changing.
6:31 But people always try to like automate
the entire thing. And when AI can't
6:35 do it in one shot, they think that it's
6:37 a failure. And so that's the first
6:39 thing. The other thing to note is I
6:42 think one helpful mental model that I
6:46 always find myself going back to is if I
6:50 want to delegate some of my work to AI,
6:53 how would I delegate this to a human,
right? And how junior is this human? How much experience does this human have? What context does this human have? And using that as a
7:05 framing for the thought process around
how to delegate to an AI, whether it's an AI workflow, an agentic workflow, or a full agent. I think that's a really
7:14 helpful way to think about it.
>> Yep. Absolutely. And in a few
minutes, as I go through this workflow of measuring customer health, we're going to talk about how we gradually make it more powerful. As
7:29 we go through those steps, I think
7:31 thinking about the junior employee
7:33 analogy is gonna be really helpful. Um,
7:35 so let's keep going. Also, Arman, I can
7:37 only see my slides, so if there's
7:38 anything in the chat that people are
7:39 asking. Yeah,
7:41 >> let me quickly answer Josh Dance's
7:42 question. I think it's a really good
7:43 one. Like, what is the difference
between an agentic workflow and a full agent, right?
A full agent is: I go into Claude Code, okay? And I tell Claude Code, I want you
8:01 to build Facebook.
8:04 Okay, that's it. I just go in and I try
to one-shot the entire task.
8:09 Historically, building Facebook is
8:12 actually many many steps, right? But I
can just trust AI to do the entire thing: come up with its own plan, follow its own steps, and figure it out, right? That would be a full
8:26 agent. I would argue that we're not
8:29 there yet, right? So, what you can do is
8:30 you can come up with what are the steps
8:33 that I know this agent is going to need
8:34 to follow. I know that the first thing
it's going to need to do is research what it even means to build Facebook, right? So, go do
8:42 research and build a PRD, like a product
8:45 requirements doc. That's step one. Then
8:49 step two will always be take that PRD
8:51 and design the technical architecture
8:55 for building Facebook. Step three is to
8:57 go through step by step of that
8:59 technical architecture doc in order to
9:01 complete it. Right? And so that would be
9:03 like a loop over those things. And so
9:04 the difference between these two things
9:07 is for the agentic workflow I am telling
the system: for step one, I always want you to do this. Within that step it is agentic, so the AI is able to go and do its own thing, but I will take the output of that step and give it to step two, and step two will always have the same objective, and so on. With an agent, again, it is creating
9:28 its own plan. It is doing its own thing.
9:30 Again I think a helpful way to think
9:32 about this is like an employee. So with
9:35 a very senior employee you can tell them
9:39 hey let's say I'm a VC. I want you to go
9:42 deploy $2 million of capital, right? In
9:44 order to deploy $2 million of capital as
9:46 a venture capitalist, you have to talk
9:48 to hundreds of companies. You have to
come up with a thesis around the market.
You have to make a decision on where
9:53 you want to invest, right? So, there's
9:54 many, many steps. But with a really
9:56 senior partner at a VC firm, you can do
9:59 that. But with more junior VCs, you're
10:01 going to tell them, I need you to talk
to a hundred companies every month. I need you to write
10:07 up memos for each of them. I need you to
10:09 write me a thesis and I need you to come
10:12 to me with an exact document doing the
10:13 diligence that we need for these
10:14 companies and then together we'll make a
10:16 decision. And the difference between
10:17 those things is I have a lot of
10:19 structure with the junior person. I have
10:20 very little structure with the senior
10:22 person. That's how I think about the
10:23 difference between agentic workflows and agents.
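Arman's distinction can be sketched as code. This is a hedged illustration (the `call_llm` helper is a hypothetical stand-in, not a real API): the workflow fixes the step order, while each step's work is delegated to the model; a full agent would plan the steps itself.

```python
# Hypothetical stand-in for any LLM API call; returns a placeholder string.
def call_llm(instruction: str, context: str = "") -> str:
    return f"[model output for: {instruction}]"

def agentic_workflow(goal: str) -> str:
    # Step 1 is always: research the goal and write a PRD.
    prd = call_llm(f"Research '{goal}' and write a product requirements doc.")
    # Step 2 always takes step 1's output and designs the architecture.
    arch = call_llm("Design the technical architecture.", context=prd)
    # Step 3 always implements the architecture doc step by step.
    return call_llm("Implement the architecture step by step.", context=arch)

# A full agent would instead receive only the goal ("build Facebook") and
# invent this plan itself; here the scaffold is deterministic and only the
# inside of each step is agentic.
```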
>> Love that. Cool. Let's keep it going.
10:27 >> Yep.
10:30 >> Okay. [laughter] Oh, is that a picture
of me? This, everyone, is my co-founder Arman. He's currently on safari in Kenya. No, so
10:42 Arman always says this really good line
10:45 which I think is such a good framing for
10:46 not just how we're going to go through
10:48 this workflow but how I think about
10:50 building any sort of AI products or just
10:52 honestly any products or solutions in
10:54 general which is Arman. Do you know what
10:56 I'm about to say?
10:58 >> Yes, for sure. You you can say it.
11:01 >> How do you eat an elephant?
11:03 >> I don't know. One bite at a time.
11:06 >> Yes. And I think when people are
11:10 thinking about building new AI workflows
11:12 or processes or products, they think
11:16 about like the pie in the sky dream of
what they want to build. So if we
11:23 even just talk about what is the perfect
11:27 pie in the sky dream for what a customer
health and happiness system would look like at 10X, here's what I would imagine. We have an application that
11:44 ingests any information that relates to
11:46 us interacting with our customers. So,
it pulls transcripts from Notion, because that's where we transcribe our calls. It pulls all
tickets from Linear, which is where we
11:57 project manage the software we're
12:00 building for clients. It pulls messages
12:03 from Slack. It pulls emails from Gmail.
12:06 And then in a beautiful dashboard, it
12:09 visualizes our interactions like average
response time. It buckets clients: a red light where certain clients are named that are in the red, like they're high risk, a yellow light, a green light. And we have this beautiful
dashboard that visualizes all of our internal data. And then even more
valuable than this, the dashboard can then be talked to, and I can say to the dashboard: hey, based on what you're saying about client red, what is the next action we should take? And actually, can you take that action for us? That's the
pie in the sky. And honestly, what
12:46 I'm going to show you by the end isn't
12:48 that far from there. And that is
12:49 ultimately our goal for where we want to
12:52 get to with 10x. But that is the
12:54 elephant. And the only way to get to the
12:56 elephant is take one bite at a time. So
Arman always talks about this when we talk about product at 10x: how do you scope down to the most important, most atomic unit of what you're trying to
13:08 build. How do you start there and then
13:09 build up from there? And that is why
13:11 we're going to eat the elephant one bite
13:14 at a time. So this is bite number one.
13:18 That's that's someone biting. So what is
the first question that I'm trying to answer for our
13:26 customers at 10X? Very simply, we create
13:28 software for our customers, either AI
13:30 software or traditional software. And so
13:32 my number one question as I'm thinking
about customer success, and whether our account managers are successfully managing our clients, is: are we shipping software like we promised? And so this
13:47 gets into how I think about building out
13:49 this AI solution that I'm talking about
13:52 and where I want to start. The first
13:54 level of building a solution is what I
13:57 call reactive AI. And reactive AI very
13:59 simply allows you to talk to your data.
14:02 So before doing anything else, I want to
14:04 scope down to the simplest way for me to
14:06 talk to our data and specifically our
14:09 customer interaction or engagement data
14:12 so I can easily understand what is the
14:14 state of the software that we're
14:15 shipping for our clients. So I'm going
14:17 to quickly demo that and then we're
14:20 going to build up from there. And what
14:22 I'm going to play out for you, and this
14:25 is what Arman was referring to before,
the analogy, is there are basically
14:30 five steps that I think about taking in
14:32 any sort of process build, product
14:35 build, whether it's AI or just
14:37 traditional technology. The first is how
14:39 do we connect to the right sources?
14:41 Meaning, how do we connect to the right
14:42 information so we pulling in the right
14:45 data to learn the things we need to
14:48 learn about our customer interactions.
14:50 Second, how do we create a great prompt?
14:53 And what a great prompt allows us to do
14:56 is to get insights from that data.
14:58 Third, test the workflow we're building.
15:00 Again, the smallest version of the
15:02 workflow. Fourth, iterate. Because one
15:04 of the expectations I always try to set
15:06 with people is you are not going to
one-shot whatever workflow or product you're building; you're not going to one-shot it. It is going to take
iteration. The more complex or the
15:17 bigger the thing you're building, the
15:19 more iterations it will take to get it
15:20 to work right, which is why you want to
15:23 scope down to the simplest use case
15:25 first. Once you iterate and get it to a
15:26 great place, that is when you can add
15:28 functionality. And think about this
again, going back to Arman, think about
15:32 this as a junior employee. The first
15:33 thing you want to do with a junior
15:35 employee is give them the right
15:36 information. Give them the context that
they need to operate within the rails
15:41 that you've put them inside of. Then
15:43 create a great prompt. What's a great
prompt? It is the very explicit instructions you have given to a junior employee. Then test the AI, test the human: they go off, they do the work, they come back, the work isn't exactly done properly, and that's where you iterate. Iteration, said differently, for a human is giving feedback. You go through this loop, and once the feedback has clearly worked and they're doing the work well, you add functionality, or in terms of a human, you add responsibility, because you built
>> No, I'm just laughing at the "said differently." [laughter]
>> That's an Alexism. Okay. So,
16:25 what we're going to do is I'm going to
16:27 show you how we start with this the
16:29 smallest unit here which is talking to
16:31 our data and specifically for us that is
talking to Linear to understand: are we shipping software at the speed that we want to make clients happy? So, let
me get out of here. The one thing that I do want to add as you're following along is we have a
16:44 few questions in the chat of like is
16:46 this a product that 10X is building? Is
16:48 this just a bunch of Zapier
integrations? Like, we are live building this for our own company. Everything
16:58 that Alex is showing you we are building
16:59 for ourselves. We already have a lot of
17:01 it built for ourselves, but we're
17:03 walking you through how we think about
17:05 building this because right now we've
17:07 done it for client success and client
support and all that, but this is how we do it for every part of our company.
17:14 And the goal is that by the end of this,
17:15 not only will you be able to literally
17:18 copy and paste this tool for yourselves,
and I know that there are a bunch of companies that have this exact product, which we're building on top of Zapier for ourselves; they have this, but they charge a bunch of money. You guys will be able to basically just copy and paste this yourselves. That's
17:30 the first thing that I think is really
17:34 great. But also um you'll be able to
17:36 identify other opportunities in your
company and likewise build solutions for them.
17:44 >> Yep. Absolutely. Okay. So the first
17:45 thing I want to do again is talk to my
data. And for me, the most important data that we can have access to is effectively our project management board, which, because we're building software, is Linear. I want to be able to ask Linear questions, and I really would love to get a report that
18:04 just tells me how are we moving along
18:07 with every client and what are risks
that you would dig into to learn more
18:11 about. And just think about this in
18:13 context. Let's assume in the future we
18:16 have a hundred clients and we have a
hundred different Linear boards. It is
18:20 not going to be realistic for Arman or I
18:23 to get in the weeds or look at every
18:25 individual client's linear board. But we
18:27 want to make sure we are pushing forward
18:29 the work we're doing with clients just
18:31 as effectively as when we had one
18:33 customer. And so where I always start
again is I connect to the right source. And so in Claude, let me just find it here, and ChatGPT has the same thing: these companies have connectors. I laugh because Arman doesn't like these things
18:55 because connectors are basically just
18:58 MCP. MCP from Arman's point of view is
19:00 just a glorified version of APIs. But
19:02 all this to say that there are these
connections that ChatGPT and Claude
19:05 have created to talk to your
19:07 applications. Linear is one of them.
Linear is also a connector in ChatGPT.
19:12 So what I'm going through is
19:14 interchangeable. So once I've made the
19:17 connection to linear that means now I
19:20 can access data that we have around
19:21 project management with our customers.
19:24 Then the next thing that I always do is
I create a great prompt. People have all these formulas for what a great prompt is. My general formula is
19:35 this. If I'm talking to a junior
19:38 employee, how do I
19:40 increase the odds of them comprehending
19:42 what I'm saying so that there isn't
error because things got lost in translation? And how do I treat writing a prompt in the same way? So, what I
19:49 basically said here is I want you to
19:51 create a prompt that helps me understand
19:52 how much software we're shipping for
19:54 clients and how many story points we've
19:56 completed this month for each client. I
19:58 want the output to be anonymized since
19:59 I'm showing this to a group of people.
Create a prompt that I can feed to
Sonnet 4.5 that, using the Linear
20:05 connector will allow me to understand
20:06 the state of each customer's linear
20:08 board, how fast or much we're shipping,
20:10 and how many story points we've
completed this month. And just for
20:13 context, because people may not know
20:15 what story points are. Story points are
20:18 just a way we measure how much output we
20:19 are creating for a client, how much
20:21 software we're actually building for a
client. And then I said the Linear
20:24 boards I'll want to monitor are
attached. This is really small, but I think it's one of the underrated things that LLMs have made super easy: there's no easy way for me in Linear to copy the names of all of our different boards and paste them in. And so
20:42 now I just take screenshots and LLMs are
20:44 incredible at taking screenshots and
turning them into text. So I just took
20:48 screenshots of all of our boards,
20:50 attached it into Claude and it turned
20:53 that into a text list. So that is the
20:56 prompt that I create and then I take the
20:58 output of that prompt and I just in a
21:02 different chat feed it back to the LLM.
21:05 So let me just go here. So basically
what the output was, and I'm not scrolling up because it actually has the names of our clients, but basically it gave this really thoughtful prompt telling Claude how to generate a report by accessing our Linear data. So what are
21:24 the output requirements? Anonymization,
21:27 report format. Um based on the number of
story points we create for a client, mark them as green (high velocity), yellow (medium velocity), or red (low velocity). After that, add the
21:39 average story points per client, top
21:41 three performing clients. Additional
21:43 insights, flag any clients with zero
activity this month. Note if any clients have large backlogs. Identify any
patterns. And then what that actually results in: this is the output of the prompt that I gave. So this is basically our Linear customer board.
22:05 We're talking with the data by customer.
It shares how many issues we have completed, each task that we've done for each client, client by client, what is
22:15 the activity level, what is the current
22:17 project that we're working on, what are
22:20 the key themes of each project, and then
it also will share what potential issues we should be flagging. And so what I can see here is there are certain companies where they have not reviewed
22:39 our work that we've done for them in a
22:40 long time. And so thinking about how do
22:42 you actually turn this into action? What
22:43 I'm trying to understand here is where
are the bottlenecks? Where are we
22:47 getting slowed down? And now that I have
22:50 this at scale, now I can zoom into who
22:52 are the two to four clients where things
22:54 are getting stuck in a certain part of
22:57 the process. Now I can go into Slack,
22:59 ask the specific technical strategist,
23:01 hey, what's happening here? And I've
23:03 been able to focus my time on the things
23:05 that actually matter because creating
23:07 this integration allowed me to scale to
23:10 hundreds of clients but focus down on
23:11 who are the few that I actually need to
23:14 care about right now.
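The bucketing logic the report prompt describes could look like this. A hedged sketch: the thresholds and client names are illustrative assumptions, not 10X's actual numbers.

```python
# Classify each client by story points completed this month.
def velocity_bucket(story_points: int) -> str:
    if story_points >= 20:
        return "green (high velocity)"
    if story_points >= 8:
        return "yellow (medium velocity)"
    return "red (low velocity)"

def health_summary(points_by_client: dict) -> dict:
    return {
        # Red / yellow / green per client, as the prompt asks for.
        "buckets": {c: velocity_bucket(p) for c, p in points_by_client.items()},
        # Flag any clients with zero activity this month.
        "zero_activity": [c for c, p in points_by_client.items() if p == 0],
        # Average story points per client.
        "avg_story_points": sum(points_by_client.values()) / len(points_by_client),
    }
```

The point of encoding it this way is the scaling argument above: the summary reads the same whether it covers three clients or three hundred, so attention goes only to the red bucket.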
So that is the first example of talking to your data, and again, creating a connection with whatever is your source of truth for customer interactions. And the one thing I'll say
going back to the slide I had about start small, test, iterate, and then
23:38 increase complexity. The way I would
23:40 increase complexity here is we started
with Linear because, in my mind, Linear is the source of truth for how much work we're actually doing. But
23:48 then the next way to add complexity here
23:51 is not just to do things like make the
23:53 AI more proactive, have it take action,
23:56 but also make it multi-threaded. And
23:58 when I say multi-threaded, get other
24:01 data inputs that work into understanding
our customer health. So not just Linear,
24:05 but what is our average response time in
24:07 Slack with our customers? How many Slack
24:09 messages have we had with them? Take a
look at our Notion meeting transcripts
24:12 with them. Are there any signals you got
24:14 from there? And that's what we're going
24:15 to go into in a minute. Um, I'm going to
24:17 pause there, see if there are any
24:19 questions before we kind of dial up what
24:21 this workflow looks like as we start
24:24 introducing other variables. One thing
24:25 that I want to highlight here as well as
24:26 we're waiting for questions to come in
in the chat is [clears throat] that we get a lot of questions like, oh, I'm using ChatGPT. What's next?
24:37 Right? Like I'm using Claude. What's
next? Arman, you're pretty frozen right now.
>> Oh.
24:50 Let me hotspot.
>> Well, it's all good. We'll keep going, and Arman will work on his Wi-Fi.
24:57 But um so any questions on this
25:00 integration between Claude and Linear
before we kind of ramp up to making the workflow not only multi-threaded but
25:07 also more proactive and actually be able
25:17 Uh Josh said where do you go to view the
25:21 report? So in this example I am going to
25:23 claude to view the report and typically
25:27 again like claw claude and GPT I use
25:28 them interchangeably right now. They're
25:30 kind of my daily drivers. So I just
25:34 always have it open. In GPT there is the
ability, and this takes things further. Let me share my deck
25:40 again because I think this is an
Okay, so going back to the different levels of creating AI products or processes: level one is reactive AI, which is what we just built. You talk to your AI and you ask it for insights, but you are pushing the AI to do something. Level two is proactive AI, and
26:09 that is the idea that AI works in the
background based on a schedule or some trigger that you've dictated to it. And so I don't know if they have
26:17 this with Claude. It may have been added
26:20 in with skills, but with GPT there's a
26:22 way to add in recurring jobs. So if we
26:24 ran the same exact flow where we're
connected with Linear and we have this report run in Linear, we can run it as a
26:32 daily job where every day a chat is
26:35 created in GPT that delivers this report
26:38 based on the last day or week's activity
within Linear. So to answer your
26:42 question, that's how it would currently
26:44 be done. If you use other tools like
26:45 Zapier, which we're going to go to in a
26:47 minute, then you can have the output
26:50 happen via email, via Slack. You have
more flexibility than building this workflow I just shared within GPT or Claude. To Mark's question: think of Linear as like Airtable
27:03 or Monday.com. It's a project management
27:05 tool, but specifically focused on
27:08 engineers and product managers.
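The recurring-job idea reduces to a trigger you've dictated plus the report generation. A hedged, standard-library sketch: the schedule (Mondays at 09:00) and the `generate_health_report` stand-in are illustrative assumptions, not the actual GPT scheduled-task feature.

```python
import datetime

# Stand-in for running the Linear-connector report prompt through the LLM.
def generate_health_report() -> str:
    return "Weekly customer health report"

# The trigger: fire every Monday at 09:00.
def should_run(now: datetime.datetime) -> bool:
    return now.weekday() == 0 and (now.hour, now.minute) == (9, 0)

# A scheduler loop would poll should_run() once a minute and, when it
# returns True, generate the report and deliver it (chat, Slack, email).
```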
27:10 >> Arman, you were going to add something before.
27:12 >> Yeah. Um, so everyone can hear me,
right? I'm clear again.
27:14 >> Yeah, you're good.
27:16 >> Okay. So, one thing that I just want to
highlight is we get questions a lot from clients, companies, everybody. They basically say, okay, I'm using ChatGPT, I'm using Claude. What's next?
Like, I'm doing this and I'm really good at it. Like, what is the
27:32 next level? And I think that there
27:34 absolutely are next levels, but there's
27:37 always more opportunity to get out of
ChatGPT and Claude. And I think one
27:46 traditionally what Alex just showed
27:50 would really only be done by like a
full-time person. Think about before AI, which is literally hard for me to wrap my head around, but before AI,
28:02 a person would have to go through this
28:04 like linear system, which is basically
28:06 like it's a it's a giant project
28:07 management system. They would have to go
28:09 through the project management system
28:11 and they would need to go through client
by client, column by column, task by
28:16 task and they would need to copy and
28:18 paste. Okay, this was done, this was not
28:20 done. Okay, for each client, how are
they feeling? Let me look at the Slack.
This would be a day-long job. It would be
28:29 incredibly expensive. And so the only
28:31 companies that would be able to actually
28:34 afford having a message like this sent
28:35 to the co-founders of the business every
day, you'd have to be gigantic. But literally for the first time in history, a company like 10X, a company of any size, if you have 20 bucks a month, you can afford to have
28:49 this message sent to you. And I think
28:52 that is what is incredible here is that
28:54 you're able to do things that
28:57 traditionally only gigantic companies
28:59 would be able to do just by using
ChatGPT. And so I always think that
29:03 there's more opportunity. And then we'll
29:06 see in level two that it's even better.
29:08 So Alex, I'll throw it back to you.
29:10 >> Yeah. So level two, we're going to ramp
29:12 this up a lot. And so we're going to do
two things. One is we're going to make
29:18 this proactive so that this customer
29:21 health report um is generated with a
29:24 level of frequency that we want. From my
29:26 perspective, Arman and I would want this
29:28 weekly to really keep a good uh pulse on
29:31 the business on a weekly basis and we're
29:33 going to make it multi-threaded. So when
29:34 I think about it, going back to what I
29:36 was saying before, there are three
29:39 or four data sources that together give
29:42 us a great picture of what is the
29:44 directional health of a customer given
29:46 what they're saying, how they're saying
29:48 it, and how we are pushing forward the
29:50 projects that we're working on for them.
29:53 And so my next goal here with proactive
29:55 AI is to have something running in the
29:57 background with a level of frequency and
30:00 pulling in all of the data sources that
30:03 tell us more about our customers. So let
30:06 me keep this going.
30:09 Um, so we had the first bite of the
30:11 elephant. Are we shipping software
30:14 like we promised? Through Claude:
30:16 setting up the Linear connector, getting
30:19 a great prompt asking it for a great
30:22 report in that report having it point
30:24 out what are specific clients where
30:28 there is a backlog of PRs in review so
30:30 that I can reach out, or Arman can
30:31 reach out, to our technical strategist and
30:34 say, hey, why is so-and-so client
30:36 pushing forward the work that we're
30:38 giving them and uncover some bottleneck
30:41 there. Next step is give me this insight
30:46 without being asked and
30:48 that is where we get into proactive AI.
30:50 And so for this I'm using Zapier and
30:52 basically, when I use Zapier, you can
30:55 assume I am building something
30:59 that is code on the back end but all the
31:00 code has been abstracted away for
31:02 someone like me that is not an engineer.
31:05 So if you have access to engineers, you
31:08 could build what I am building or you
31:11 could use Zapier and do it yourself as a
31:13 non-technical person. And I'm sure Arman
31:15 could give you several ideas for how, as an
31:16 engineer, you would build this. So
31:19 let me walk you through my process for
31:21 making this multi-threaded proactive AI
31:23 with Zapier.
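For anyone with engineering access, the scheduled report Alex builds in Zapier could be sketched in plain Python. This is only a shape sketch under stated assumptions: the data-fetching, the LLM call, and the `send` destination are hypothetical placeholders, not the actual 10X setup.

```python
# Sketch of the proactive weekly customer-health report described above.
# In the episode this is built no-code in Zapier; here the per-client
# signals are assumed to have already been fetched from Slack, Linear,
# and Notion into simple dicts.

def build_health_prompt(clients):
    """Combine per-client signals into one prompt for the model."""
    lines = [
        "Create a weekly client health report. For each client include:",
        "health score, churn risk (high/medium/low), key signals, key quotes.",
        "",
    ]
    for c in clients:
        lines.append(
            f"Client: {c['name']} | Slack messages: {c['slack_messages']} | "
            f"story points done: {c['story_points']} | "
            f"call notes: {c['call_notes']}"
        )
    return "\n".join(lines)

def run_weekly_report(clients, send):
    # A scheduler (e.g. cron, Mondays 8:00 a.m.) would call this;
    # `send` would post the finished report to Slack or email.
    prompt = build_health_prompt(clients)
    # report = call_llm(prompt)  # any LLM API would slot in here
    send(prompt)

if __name__ == "__main__":
    sample = [{"name": "Client A", "slack_messages": 14,
               "story_points": 32, "call_notes": "asked about eBay integration"}]
    run_weekly_report(sample, print)
```

The design choice is the same one Alex makes: all the intelligence lives in the prompt, so swapping data sources or output channels doesn't change the structure.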
31:27 So let me share this first.
31:30 Um, stop sharing presentation.
31:32 Nope.
31:35 >> As you're pulling this up, I
31:37 want to mention a question that we've
31:38 seen a few times in the
31:40 chat. Folks are asking things like,
31:42 "Will this work with Jira? Will this
31:45 work with Asana?" Um, these ideas that
31:48 we're presenting here are foundational
31:52 to AI development in business. So like
31:55 level one, you can ask AI questions
31:58 about your data. Level two, Alex is
31:59 going to walk through how AI will
32:01 proactively tell you things about your
32:04 data. These levels, these systems,
32:07 these frameworks work regardless of
32:10 where your data is, what your data is,
32:11 right? Like right now we're just talking
32:14 about this one specific use case, but
32:15 even this specific use
32:17 case can work with
32:20 Linear, Asana, Jira, and so on. But they
32:22 can also work with your ERP and CRM and
32:24 they can also work with your Slack and
32:26 they can also work with your email and
32:29 your calendar. And so really, I
32:31 think it's important to dig into an
32:33 example to see the power and the
32:37 fire, but definitely zoom out
32:40 and think about where
32:42 else this can apply, and the answer is
32:43 probably yes.
32:46 >> Yep. Absolutely. So this again where I
32:49 start with everything is with giving the
32:51 right information which I already set up
32:53 in in Zapier which you'll see in a
32:56 second and creating a great prompt. And
32:59 generally like you can have Zapier and
33:01 we'll show it in a second create the
33:03 prompt for you. I've always just found
33:05 that I feel the most comfortable with like a
33:10 GPT-5.1 or a Sonnet 4.5 creating the prompt
33:12 because oftentimes like in Zapier or any
33:14 of these tools they are just using those
33:16 models to create the prompt as it is. So
33:20 basically, here I fed it exactly what I
33:21 wanted created, what I described to you
33:23 all: think hard and create an amazing
33:25 prompt I can feed to Zapier. I wanted to
33:26 ingest a bunch of signals from client
33:28 interactions and use that to create a
33:29 weekly comprehensive client health
33:32 report that provides overall client
33:33 health as well as deep insights client
33:35 by client. I want you to take liberties
33:36 to make this as specific, deep, and
33:38 actionable as possible. But there are
33:39 things I think should be included by
33:41 client, customer health score, key
33:43 signals, key quotes, areas for concern,
33:45 potential issues, number of Slack
33:47 interactions, average Slack response
33:49 time, number of story points completed,
33:52 blah blah blah. I fired this
33:53 prompt off, and then I was like, "Oh,
33:56 damn. I also wanted Notion." So then I
33:58 updated the prompt while it was
34:00 mid-thinking and said, "Oh, I also
34:01 wanted to pull insights not just from
34:03 Slack and Linear, but also call
34:05 transcripts from Notion." And so then
34:08 what it ended up giving me was this just
34:11 like really in-depth prompt for scoring
34:14 clients based on the inputs. And then
34:16 the output format is the overall
34:18 summary. So average client health score,
34:20 count of clients by status, short
34:22 narrative summary, key emerging risks,
34:24 key positive trends. Again, all of this
34:26 just for context would have never been
34:28 possible pre-AI, because this is where
34:30 true intelligence is being layered on
34:32 top of the data that we're giving it.
34:35 And then we have a client-by-client
34:36 breakdown. So for each client, health
34:39 score, status light, renewal/churn
34:41 risk, expansion potential, one-line
34:44 summary, key signals this week, key
34:46 quotes, Slack activity, Linear
34:49 progress, call insights for every single
34:54 client. So now let me show
34:57 how that actually shows up in
35:00 Zapier. So let me share this step. So
35:03 we're in Zapier now. There's the agent
35:06 builder which is uh functionality in
35:10 Zapier. And what I basically did is I
35:15 fed the prompt to uh Zapier in agent
35:17 builder. And as you can see the trigger
35:19 says on demand right now. I just did
35:23 that so we could run this um agent if we
35:26 wanted to right now. But obviously you
35:27 can also do it as you can see you can
35:29 schedule it in Zapier. You could have it
35:33 run every week. I would
35:35 want it Mondays at
35:38 8:00 a.m. so that Arman and I can get it
35:40 when we get to the office first day of
35:42 the week. And now that is the recurring
35:45 trigger. So I'm not going to run it now
35:46 because it's going to take time, but I'm
35:47 going to show you what the output looks
35:51 like. So let me go over here. Um, so
35:54 what I basically did is we also set it
35:56 up so that it
36:00 would send us emails and send our core
36:04 team a Slack message with what weekly
36:08 customer sentiment
36:11 looks like. And so this is basically
36:13 what we're sent. We have the executive
36:16 summary which has top three risks. So
36:18 one client who's working on field rep
36:19 testing, another one who's working on an
36:21 eBay integration, another one who's
36:23 dealing with performance delays, the top
36:26 three opportunities. So what are actual
36:27 opportunities for how we can wow
36:30 clients over the next week. What we then
36:32 have is a client-by-client breakdown. So
36:35 client A and this is ordered by risk. So
36:37 health score 6 out of 10, churn risk
36:41 medium, trending down. So that is not a
36:42 good sign. So like this is something
36:43 that Arman and I would probably pull the
36:46 technical strategist in to go through
36:48 this breakdown and be like tell us where
36:50 the concern that's created in this
36:52 report is wrong. Where is it right? What
36:54 are the actions we're taking to have
36:56 this trend back up and get this from a
36:59 red to a yellow. And so then it also
37:01 shares immediate actions required daily
37:03 check-ins until the field rep testing is
37:06 done. Escalate location issues to
37:08 senior engineering. Prepare contingency
37:10 plan if testing is delayed. And this
37:13 goes through with every single client
37:16 ingesting Slack, ingesting linear, and
37:18 ingesting our Notion call transcripts
37:21 all working together to form this full picture
37:23 of how we're doing with clients. Uh,
37:25 Arman, anything you would add? Yeah, I
37:28 mean, again, I'm going to
37:31 sound like a broken record. I still
37:34 think it's insane that, historically,
37:36 for the president (Alex and I talk about
37:37 this all the time), there's this
37:39 thing called the presidential daily
37:42 brief, right? Every single day the
37:45 president wakes up and they get a huge
37:47 like stack of papers and it is
37:48 structured exactly how the president
37:50 wants with all the top news, all the
37:51 things that they need to know. So they
37:53 wake up, they read this presidential
37:55 daily brief. This has been a thing since...
37:58 I forget which president started it. Once
37:59 I heard about that, I was like, that
38:02 would be incredible to get, right?
38:04 and there are a ton of newsletters right
38:06 like we all know them like Morning Brew
38:09 and all the other ones um and you can
38:10 wake up and you can read those but what
38:12 if you could have your own right and I
38:14 think that it's incredible that
38:16 basically what Alex just built is that
38:19 for customers, right? For customer
38:22 success. And Deuce has a question
38:25 here about what's included: that's Linear
38:26 and call transcripts, but can you
38:28 include email and CRM? The answer is yes.
38:30 Like uh Daniel put a list of all the
38:33 apps from Zapier. You can absolutely add
38:34 those other data sources as well. And
38:36 then Doug asked how is it assigning
38:38 churn risk and a health score. Did you
38:39 tell it what the formula would be? And I
38:42 believe that, Alex, you did
38:46 have information on
38:48 how to assign that client risk. Yep. And
38:50 the entire structure of the response
38:52 like everything is in that prompt. So
38:55 you can make this completely custom to
38:57 what you want it to be for customer
39:00 success, for your sales pipeline, for
39:02 every part of your company. You can have
39:03 this happen.
39:06 >> Yeah. So what I would say for the churn
39:08 risk question because this is a really
39:11 uh good one. In a perfect world, the way
39:13 we will ultimately use this and
39:16 associate churn risk is to do a look
39:20 back on all previous clients who churned
39:22 and what were the signals that we saw
39:26 from that client: whether it was
39:29 things they said on calls, what the
39:31 average number of touchpoints were over
39:35 Slack, what Linear
39:38 momentum looked like in terms of number of
39:40 issues resolved or number of story
39:42 points completed. And that will become
39:45 like basically those traits will become
39:47 the bar of what creates churn risk for
39:50 us. What I did here is I basically uh
39:54 gave it conservative estimates of what I
39:56 would consider high, middle, and low
39:58 risk. But I also said use your best
40:01 judgment. So I said like if we are if we
40:05 in the last week have done more than uh
40:08 50 story points worth of work that is
40:11 low churn risk. If we've done between
40:14 25 and 50, that's medium churn risk. And
40:18 if we've done um less than 25, that's
40:19 high churn risk. And I basically did
40:22 that for churn. I did that for Slack.
40:25 And I did that for call transcripts. And
40:28 like I said, it was directional. But you
40:30 would also be surprised how good these
40:33 models are at just like directionally
40:36 figuring out what signals lead to an
40:38 increased risk of churn. But what I did
40:40 make sure of here is, again, for
40:41 me, what's most important is that
40:43 it's directional. I would rather there
40:47 be false positives than false negatives.
40:50 And so I would rather call out too many
40:52 clients that are at high risk of churn and
40:54 we go and check in on them than not call
40:56 out enough. And so that was how I
40:59 structured that prompt. Um someone else
41:03 had a question about this. Um
41:05 did you tell... Yeah. So, on the whole
41:08 scoring thing, I would also say, again,
41:12 if I wanted to also specify it further I
41:15 would go back to typically chat GPT or
41:17 claude and I would say these are the
41:19 this is the type of business we have.
41:22 This is what we deliver them. Uh these
41:24 are the systems by which we engage with
41:26 our clients. Can you create a list of
41:28 signals that put someone into a high
41:31 middle or low churn risk bucket? And I
41:33 would use that in the instructions for
41:35 Zapier. And as we get more actual
41:37 information over time, I would just
41:39 finagle the instructions to be more
41:41 accurate to the real data that we have.
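The story-point thresholds Alex describes (more than 50 per week is low churn risk, 25 to 50 is medium, under 25 is high) translate directly into a scoring rule. A minimal sketch in Python; the function names, the handling of exact boundary values, and the worst-signal-wins combination rule are assumptions beyond what was stated:

```python
# Churn-risk bucketing from weekly story points, per the thresholds
# described on the show; treating exactly 25 or 50 as medium is an
# assumption, as is applying the same pattern to other signals.

def churn_risk_from_story_points(points_this_week):
    if points_this_week > 50:
        return "low"      # lots of work shipped: healthy engagement
    if points_this_week >= 25:
        return "medium"   # between 25 and 50 story points
    return "high"         # under 25: flag the client for a check-in

def overall_risk(signal_risks):
    # Err toward false positives: the worst individual signal wins,
    # matching the preference to over-flag rather than under-flag.
    order = {"low": 0, "medium": 1, "high": 2}
    return max(signal_risks, key=order.__getitem__)

print(churn_risk_from_story_points(60))         # low
print(overall_risk(["low", "medium", "high"]))  # high
```

As real churn data accumulates, these hard-coded cutoffs are exactly the part you would replace with thresholds learned from the look-back on past churned clients.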
41:43 >> Yeah. And one thing that
41:46 I think is interesting is, like, I would
41:50 recommend testing more guidance and less
41:51 guidance. And so I would actually
41:53 recommend testing and Alex, I don't know
41:54 if we've done this, but but we should do
41:56 it as well.
41:58 almost giving no guidance on
42:00 what defines churn risk but saying I
42:01 want you to be a little extra
42:04 conservative because I think that AI
42:06 tools part of what's incredible about
42:08 them I always call them warm-blooded
42:08 right? Like...
42:09 >> Yep.
42:11 >> AI is warm-blooded, and
42:13 within that warm-bloodedness is the fact
42:15 that
42:17 sometimes AI will tell you things you're
42:18 like, wow, I didn't catch that. Like, I
42:21 would not have considered this signal.
42:23 I didn't consider the fact
42:25 that we're not responding within 30
42:27 minutes to our clients to be a churn
42:29 risk, but the AI is, and maybe that's
42:31 actually a good thing to flag, right?
42:33 And so I think that there's there's
42:35 value in that. And so like Alex said, I
42:37 think testing more guidance, less
42:39 guidance, different types of guidance,
42:40 different types of conservatism within
42:44 that guidance, I think, is a part
42:45 of the process.
42:49 >> Yep. Absolutely. So I want to bring this
42:51 to like the final uh part of the step.
42:53 So you saw the last output which was
42:55 basically that is the customer health
42:58 report uh that organized clients by
43:01 churn risk based on what it deems to be
43:05 different risk factors: not enough
43:07 product being
43:09 pushed based on our Linear board. Not
43:11 enough interaction or fast enough
43:13 interaction based on Slack. Uh any
43:16 negative signals or positive signals in
43:18 our notion meeting transcripts. The
43:20 final piece of this and let me just pull
43:23 up the slide again is so so we've had
43:26 two bites of the elephant. The third
43:28 bite, which is what
43:31 Arman and I would ask ourselves, is
43:34 like okay
43:36 we have a sense of: are we
43:38 shipping software like we promised? By
43:39 having this integration between Linear
43:41 and Claude and asking the data
43:44 questions that helped us understand like
43:47 oh there's a backlog of PRs for review
43:49 that a client hasn't reviewed great that
43:51 leads us to a conversation then we were
43:53 like we want a higher fidelity and
43:55 proactive AI so this job runs on a
43:58 weekly basis on Mondays. Um, and it
44:00 ingests information from four different
44:01 sources and works it together in a
44:03 report which you saw what the final
44:05 report looks like. And to someone's
44:07 question, you can always improve the
44:09 inputs of what is high, middle, or low
44:11 churn risk based on you getting
44:13 information in the business and feeding
44:14 those insights back into the
44:17 instructions for the agent. The final is
44:19 take action. So don't just give me
44:21 insights, but take action on those
44:23 insights. And so I'll just quickly show what
44:26 >> where did you get these photos?
44:30 >> Um I looked up open mouth on Google. I
44:32 asked myself what would make people
44:34 smile in their office chair and what
44:36 relates back to taking one bite
44:38 at a time of the elephant.
44:39 >> Love [laughter] it.
44:42 >> Yep. Yeah. My brain is crazy. Um so then
44:44 the final piece here is active AI. And
44:47 so what we wanted to do is we had this
44:49 report. This report tells us what are
44:52 the action items that we want the AI to
44:54 take. Well, how can it help us with
44:56 those action items? And so, just to
44:59 quickly share my screen again, let me uh
45:03 go back to Zapier,
45:08 please hold.
45:23 Okay, pulling that up. Um, if anyone
45:24 has questions, please continue to put
45:26 them into the chat. Uh, it's it's really
45:28 helpful for us to guide the conversation
45:30 in the way that in the way that you all
45:32 want and um to to make sure this is
45:34 super useful for everybody.
45:36 >> Yeah. So, what I'm going to do is I'm
45:37 actually just going to show you the
45:40 build of this uh in Zapier so that you
45:43 can uh you can do this yourself. So, let
45:46 me just share my screen of Zapier and
45:48 we're going to
45:51 do this. So basically the way I wanted
45:55 to set this up is every time the weekly
45:59 sentiment report runs. So every time
46:02 just, again, this report runs
46:06 that was created from that Zapier
46:08 agent that we built, which
46:11 ingests all the information, creates a
46:16 breakdown etc. I want us to get drafts
46:18 of emails that our customer success
46:21 people can send to clients based on the
46:24 action items that it's detailed. So
46:27 I'm going to show basically what
46:29 the end instructions it created are
46:31 for the agent, but I actually want us to
46:32 just go through the flow quickly
46:34 together. So basically it created
46:36 instructions of when the weekly customer
46:38 sentiment insight agent completes its
46:40 analysis, retrieve the generated action
46:42 items and recommendations from its
46:44 output. Call the weekly customer
46:46 sentiment insights agent to get the la
46:48 the latest customer sentiment data and
46:49 action items. Parse through the action
46:51 items and recommendations provided by
46:53 the analysis. For each action item,
46:54 create a personalized email draft that
46:56 account managers can send to clients. It
46:59 creates the draft in Gmail. Structure
47:01 each email draft to include professional
47:02 subject line, personalized greeting,
47:04 context about the sentiment insight that
47:06 triggered the recommendation,
47:07 specific action or solution being
47:09 proposed, a clear call to action for
47:11 next steps, professional closing, and
47:14 then uh save all email drafts in a
47:16 format that account managers can easily
47:18 access and customize before sending. The
47:20 final goal is transform customer
47:22 sentiment insights into actionable,
47:23 ready-to-send email drafts that account
47:25 managers can use to proactively address
47:27 client needs and improve customer
47:30 relationships. And so very simply how I
47:33 created this is I basically said here um
47:36 I have a weekly, what is it called,
47:41 Weekly customer sentiment insights agent
47:46 on Zapier. Every time
47:50 that agent runs, I want a new agent to
47:53 draft emails
47:56 that directly
48:00 relate to the action items that you
48:04 called out in the report. And I want
48:07 these drafts to be actionable, thoughtful,
48:13 and specific/customized so that a
48:19 technical strategist at my company can
48:23 send it off to the customer with minimal
48:25 edits. And then you start building. And
48:29 what's nice about these um
48:32 these tools like Zapier um or Lindy or
48:34 n8n or Gumloop now is they have these
48:36 agent builders. So it's basically like
48:39 you're in a ChatGPT- or Claude-esque
48:41 experience but it is specified to their
48:43 platform. So it's actually calling
48:45 tools. And so what it basically did is
48:47 it built the instructions here and then
48:49 it's going to ultimately put the tools
48:52 that it needs access to in the
48:54 instructions. And those tools are going
48:56 to be it needs access to Gmail. So it's
48:58 going to ask me for access to Gmail. It
48:59 also is going to ask for access to the
49:02 weekly sentiment insights report. So the
49:03 other agent that I created in Zapier
49:04 because that's where it's going to
49:06 ingest context from to draft these
49:09 emails. And it will set up as you can
49:10 see it's setting up a web hook which is
49:13 basically it is creating a trigger where
49:17 when the weekly sentiment insights agent
49:20 fires off it is going to trigger this
49:22 agent. So as you can see the tools this
49:24 agent can use are the
49:27 sentiment-driven email drafts: when the
49:29 weekly customer sentiment
49:31 insights agent completes its analysis
49:33 and generates a report,
49:35 automatically trigger this workflow. So
49:37 basically, this starts when the
49:40 other agent finishes and then it
49:43 delivers emails as drafts in our inbox
49:45 related to those action items. Any
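For engineers building this chained-trigger pattern without Zapier, the core idea is simple: the report agent finishes, posts its action items to a webhook, and a second step turns each item into a draft email with the structure described (subject line, greeting, context, proposed action, call to action, closing). A minimal sketch; every field name and helper here is hypothetical, and the real version would create the drafts via the Gmail API:

```python
# Sketch of the second agent: turn the report's action items into
# email drafts. In production this function body would sit behind a
# webhook endpoint fired when the report agent completes; here it
# just returns plain dicts an account manager could review.

def draft_email(action_item):
    return {
        "subject": f"Next steps for {action_item['client']}",
        "body": "\n".join([
            f"Hi {action_item['contact']},",                 # greeting
            f"Context: {action_item['insight']}",            # sentiment insight
            f"Proposed action: {action_item['action']}",     # recommendation
            "Could we grab 15 minutes this week to align?",  # call to action
            "Best,\nYour account team",                      # closing
        ]),
    }

def handle_report(report):
    """Webhook handler body: one draft per action item in the report."""
    return [draft_email(item) for item in report["action_items"]]
```

Keeping the drafting step as its own function mirrors the two-agent split Alex chose: the report generator and the email drafter can be changed, or break, independently.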
49:53 We were just getting advice to use
49:55 yap-to-text.
49:57 >> Yep, we we agree. We actually took a
50:01 poll of the team on uh under underhyped
50:03 and overhyped things in AI right now.
50:05 And one of our team members said that
50:08 yap-to-text is very underhyped. Um, and
50:13 even uh Ryan Carson uh who we had on a a
50:16 previous episode of Human in the Loop,
50:18 uh he basically yapped to text
50:21 everything as an engineer. Um which I
50:22 agree with all this, but I would say in
50:24 an office environment that we're in, it
50:25 would be pure chaos if everyone was
50:28 yapping to text.
50:30 Um Daniel Ree, do your clients have KPIs
50:32 for AI use? What metrics are they
50:35 looking at to judge or evaluate success?
50:37 Arman, any thoughts here? Yeah, I mean I
50:40 think that like there
50:43 business is business, right? Like, a
50:44 successful business makes more money
50:48 than it spends, very simply. Um, people
50:50 want to be happy. People want to work
50:51 less and make more money. Like there are
50:53 facts of nature that will always be the
50:55 case. And I don't think that AI changes
50:57 that. And so I think that companies
50:59 still have their own success metrics
51:01 that they've historically had. And
51:04 whether you use EOS or um OKRs or
51:06 whatever, like a company has their
51:09 goals. If we're successful as 10X
51:12 in helping companies adopt AI, but also
51:14 other people and their own companies
51:15 when they're trying to adopt AI, they
51:17 will know when they're successful
51:20 because that initiative will drive the
51:22 company closer to the goals.
51:24 >> And they'll know they're not successful
51:26 >> if they're just like saying the word AI
51:28 a lot and it doesn't impact the goals.
51:31 Um, two quick things. One is I want to
51:33 answer Cat's question and then Arman,
51:34 something for you to think about as I
51:36 answer it is if I was giving you this
51:39 prompt, if I gave you this prompt to
51:41 like build a customer health and happiness
51:46 monitor for the business,
51:48 how would you go about doing it roughly
51:50 speaking given you're an engineer and
51:52 you know how to do this technically?
51:54 What would be like how would you think
51:55 about building this and would it be
51:56 different from how I thought about
51:57 setting this up? So, just something to
52:00 think about. Kat, to your question, why
52:02 wouldn't you want to create one agent to
52:04 create the report and email it? Is it
52:06 just to try and keep workflows as simple
52:10 as possible? The answer here is in
52:13 Zapier when you chat with the agent. So
52:16 like when I chat with the agent here,
52:20 just to give you a quick example, um
52:22 let's just share my screen one more time
52:25 quickly. Uh, here. So when you chat with
52:28 the agent, you can make changes, and
52:31 when it makes changes, it makes changes
52:35 to the entire instruction uh of of that
52:37 agent. So like it changes the system
52:39 prompt for that agent. And so my whole
52:42 >> just to sorry just to jump in like this
52:45 agent that you're chatting with is an
52:47 agent building agent, right?
52:50 >> Correct. Exactly. So this is like,
52:52 basically, historically with
52:55 Zapier, you can go in, you can like
52:56 click click click to build these
52:59 workflows, but then they introduce their
53:01 own Zapier agent that allows you to
53:02 build and edit the workflows. And so
53:04 that's the one that that Alex is is
53:06 messing around with right now.
53:08 >> Exactly. And I don't know if I'm
53:11 doing this subliminally because
53:13 Arman has always thought about doing
53:15 this like with chats in whether it's in
53:17 different terminals with Claude Code or
53:19 different chats with GPT but it's like
53:22 whenever I'm working on uh a net new
53:26 thing I don't want to pollute the
53:27 previous thing that I've worked on. And
53:29 so my fear is, if I worked on this
53:33 email generator within the the weekly
53:35 customer sentiment insights one and I
53:37 asked the agent building agent to make a
53:40 change. My fear is it makes a change
53:43 to the initial agent that not only
53:45 screws up the email generator but it
53:48 also screws up the the report generator.
53:50 And so now I have nothing that works
53:52 versus containing kind of the poison to
53:55 this second flow. >> Yeah.
53:58 Um, so to answer your question, like
53:59 let's say I was to build this from
54:02 scratch. If I'm being super honest, I
54:05 would just use something like Zapier,
54:06 right? Um, and earlier we had a question
54:08 about like why Zapier and there's
54:10 Zapier, there's n8n, there's Make,
54:13 there's all these different things. Um, currently
54:16 my read is that Zapier is the most
54:19 robust. It's just been around
54:20 for way longer. So the connections are
54:24 great. Um, their team is building
54:27 with AI-first in mind. We've
54:28 just used all the products, and we
54:30 like it the best. We think it's the most
54:32 robust um for this use case. This is
54:34 what I would do. What we're thinking
54:36 about internally and again these are
54:38 real use cases like Alex and I on Monday
54:41 will get that message in Slack. Like
54:44 this is actually what we use internally.
54:46 But we're also thinking about, like,
54:48 what does it look like to build an
54:51 internal operating system for 10x that
54:54 not only surfaces these insights but
54:57 even more right and so I think if we
54:59 were to make it more robust what you
55:01 would do is you would think about it the
55:03 same exact way that you did,
55:04 right where it's like what is all the
55:06 data what are all the different triggers
55:08 what like what is the information that
55:09 we need and then what do you want to do
55:11 with it well you want to be able to talk
55:12 to it you want to be able to ask
55:13 questions and you want to be able to get
55:15 some insights, and those insights are
55:16 going to pull from certain data and
55:17 they're going to be structured in a
55:20 certain way. And so it would
55:21 literally be structured the same exact
55:22 way. We would think about it the same
55:25 exact way. What what custom building
55:27 would give you is just a little bit more,
55:30 or a lot more, customization, but
55:32 it also comes with
55:37 some negatives, right? It's
55:38 just a lot more work up front, you
55:39 know, honestly to
55:41 >> And there's maintenance you
55:44 have to do like Zapier like they are
55:45 doing all the maintenance on the back
55:46 end. So yeah, there's a ton of
55:49 trade-offs. The the last question
55:51 because I know we're at time is just to
55:52 answer Mark your question, can you show
55:54 us the production zap that you made and
55:57 walk us through the actions? Um yes, the
55:59 the cool thing about Zapier is after you
56:02 build an agent or an automation uh you
56:04 can actually share it uh for people to
56:07 like just customize on top of. So when
56:11 we send the post-recording email,
56:14 we'll include links to both the report
56:16 generator and to the email generator
56:18 that connects to the report so that you
56:19 can again it's going to take some
56:22 finagling to connect the right data
56:24 sources for you and your customers, but
56:26 it's at least probably 75% of the work
56:30 is there for you to build on top of.
56:32 Sweet. Um I think that is it. I want to
56:34 be respectful of everyone's time. Thank
56:36 you all as always for joining uh the
56:39 show and we will catch you next week on
56:41 Human in the Loop. And feel free to
56:43 email us if you have any questions at
56:46 all: alex@10x.co and arman@10x.co.
56:48 Thanks everyone. Have a good night.