0:00 [Music]
0:10 Well, thank you, everyone. Welcome. How's
0:12 everybody doing? Are we in that moment
0:15 right after lunch where we're like, okay,
0:17 we've had lunch, now we can relax?
0:20 We're going to try to keep the
0:22 energy up, guys, because this is a great
0:23 session. So, first of all, welcome to
0:25 Next. Hope you're having a great time
0:27 here. My name is Ilana Kines and I lead
0:29 the customer engineering team for North
0:31 America startups at Google Cloud. We
0:34 have a startup hub downstairs on the
0:36 expo floor. If you have any questions
0:37 about startups, please come talk to us
0:39 and meet us there — we'd be thrilled to
0:42 have you. But today, this
0:44 session and these amazing founders are
0:46 going to help us understand some very
0:48 specific applications, use cases, and
0:50 innovations that they are building
0:52 using Vertex AI. So for those of you in
0:54 the audience who know the product or
0:56 are interested in learning more about
0:57 it, this is a great session for you.
0:59 We're going to hear strategies,
1:00 lessons learned, and some specific
1:03 capabilities that these founders have
1:04 used. So with that said, let me set the
1:07 stage and welcome our founders. So I'm
1:09 very happy and very grateful that you
1:11 are all here. I'm going to start with
1:13 Tete Tetia. He is the CEO and founder of
1:16 a company called Prompt AI, and they
1:18 provide a visual intelligence platform.
1:21 Tete has been in this space for many,
1:23 many years, and what they do is take all
1:25 the information that they capture
1:26 through visual devices and then act upon
1:29 it and the insights they can
1:31 gather. So thank you, Tete, for being here
1:33 with us today. Thanks for having me
1:34 here. Of course. And then Akhil Gupta —
1:37 and I should say, Akhil, welcome, and thank
1:39 you so much for making such a long flight.
1:41 He comes from India — a great founder
1:43 with a market-leading application
1:45 there. His company is called
1:47 nobroker.com, and what they do is
1:49 disrupt the real estate industry,
1:52 basically providing a way to remove
1:55 the middleman from real estate
1:57 transactions, do automated
1:59 property matching, and detect fraud as
2:01 well. So we'll hear more about that in a
2:03 second. Thank you so much for making the
2:04 trip and being here today. Wonderful
2:06 being here. Thank you. Thank you. And
2:08 then, last but not least — hi, Ahmed,
2:10 welcome. Also a CEO and founder,
2:13 of a company called Resemble AI. So
2:15 many of you may be familiar with them.
2:17 They do fantastic innovation in
2:20 generating and automating voices
2:23 with different models. So think about
2:25 all the fantastic possibilities around
2:27 cloning your voice. Has anyone tried that?
2:29 So please do check out this company.
2:31 Congratulations also on your launch of
2:33 Rapid Voice Cloning 2.0, in February
2:36 of this year, I think. Yeah. Thank you.
2:38 Appreciate it. Happy to be here. Great.
2:40 Thank you so much. Well, with that said,
2:42 let's dive deep into this. So why don't
2:44 we start with you? I think a lot of people
2:48 in the audience may be thinking: okay,
2:49 you're all in different industries. So
2:51 what drove your decision to use, for
2:53 instance, Vertex AI to build your
2:56 applications, to build your solutions?
2:57 Because I'm sure there were things
2:59 out there that you were looking at.
3:00 So what drove your decision to
3:02 use Vertex? So we started Resemble
3:07 five and a half years ago at this point,
3:09 and five and a half years ago there
3:10 weren't many things actually out there —
3:12 Vertex AI didn't exist at
3:14 all at that point. But we were
3:17 developing our own models for
3:20 generating voices, and we
3:24 tried a lot of things to really get
3:25 scale in terms of the amount
3:27 of compute we needed, because we had
3:29 this interesting problem where every
3:31 customer, every user, would come in and
3:33 build their own voice
3:34 model, right? So we had this weird
3:37 one-user-has-many-models
3:39 relationship, and what we really
3:41 needed to do was find a way
3:43 to scale up compute really quickly.
3:45 You know, we tried other services as
3:48 well for this solution, but what we
3:51 landed on at that time was ML Engine —
3:55 which is now called Vertex AI.
3:58 Effectively, we found it to be super
4:00 scalable in terms of our
4:02 ability to train models very quickly, and
4:05 there were layers that we
4:09 ended up building on top of
4:12 ML Engine at the time which are
4:15 now just incorporated right into Vertex.
4:17 So, you know, we've seen the
4:18 evolution and the growth of the
4:21 product, and obviously compute has
4:24 become a huge concern — scalability and
4:27 availability of the compute have
4:28 become concerns as well. So for us,
4:30 just having that compute available and
4:32 having horizontal scalability for
4:34 training purposes is what really got us
4:36 into the product in the first place.
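As an aside for readers: the one-user-has-many-models pattern described here amounts to fanning out many small, per-user training jobs rather than one big one. A minimal sketch of that dispatch logic follows — all names, fields, and machine types are illustrative, and in practice `submit` would wrap something like a Vertex AI custom training job rather than the stub shown:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrainingJobSpec:
    """Describes one training job for one user's voice dataset (hypothetical fields)."""
    job_name: str
    dataset_uri: str
    machine_type: str

def make_job_spec(user_id: str, dataset_uri: str) -> TrainingJobSpec:
    # Every user trains a private voice model, so jobs are keyed per user.
    return TrainingJobSpec(
        job_name=f"voice-model-{user_id}",
        dataset_uri=dataset_uri,
        machine_type="n1-standard-8",  # placeholder; real choice depends on model size
    )

def fan_out(specs: List[TrainingJobSpec],
            submit: Callable[[TrainingJobSpec], str]) -> List[str]:
    """Submit every spec through `submit` (e.g. a wrapper around a managed
    training service) and return the job identifiers. Horizontal scaling here
    means 'more jobs in parallel', not one bigger job."""
    return [submit(spec) for spec in specs]
```

The point of this shape is that compute demand grows linearly with users, which is exactly what a managed training backend with elastic capacity absorbs well.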
4:38 That is great to hear, and I know
4:40 scalability is a big thing across
4:42 the board for all the founders here.
4:43 So let me ask Akhil or Tete —
4:47 has that been a similar
4:49 decision point for you in using Vertex
4:51 AI? Yes, that's one of the
4:53 considerations that has really helped
4:55 NoBroker, ConvoZen, and the
4:58 products that we have, because
5:01 it makes things easier. If you go back 5
5:04 or 10 years — and we are an 11-year-old
5:06 company; he said he didn't
5:08 know about Vertex five and a half years
5:10 back, and 11 years back there was nothing —
5:12 if you had to do something, whether
5:15 leveraging AI/ML to
5:18 build models or create something, it used
5:19 to take a lot of time, and that time has
5:22 drastically come down. You can
5:24 now do it in a few minutes to a few hours,
5:27 and there are a lot of foundation models
5:29 available with
5:31 Vertex which can easily, seamlessly do
5:33 the basic tasks for you. So that has
5:35 definitely improved. And that's a
5:37 great point, right? Because we have
5:38 scalability, and we have different models
5:40 as well coming into the picture,
5:42 and then you can decide among them. So
5:44 that's great. And Tete, do you have a
5:46 similar experience on the models or
5:48 the scalability side? Absolutely. I
5:50 mean, we are in the visual, physical
5:52 AI space, and in there, there's no
5:55 uniform model — nothing like, you know,
5:57 Gemini for voice or for text, for
6:00 example. So we have to train and deploy
6:03 a lot of different models for different
6:05 purposes, and I think that's why Vertex
6:08 AI is really, really helpful:
6:10 every couple of weeks we
6:13 might have to address a new
6:16 application, a new use case, and that might
6:18 mean we have to tweak the existing
6:20 models or train a new one
6:22 specifically for it. So we find
6:25 Vertex AI to be really, really useful
6:27 for scalability. Nice. So I love this
6:30 fact, and that all
6:32 of these are good points for us to know
6:34 about Vertex AI. Let me look at
6:36 the other side, too: when
6:38 you were going through your journeys on
6:40 Vertex and learning what you could do,
6:42 going from nothing before, to
6:44 ML Engine, and then Vertex AI — any
6:48 specific lessons learned that you
6:50 yourselves or your teams went
6:52 through that may have been a little bit
6:54 hard at the beginning, became much
6:56 easier, or were pivotal
6:58 moments in that journey?
7:01 Sure. Yeah. So basically, if you
7:03 look at NoBroker, we are the world's
7:06 largest broker-free real estate
7:08 platform. We save close to a billion
7:11 dollars of brokerage in India every year,
7:13 and it's been 11 years. We close more
7:16 than 70,000 to 80,000 properties every
7:19 month — so I'm talking about 2,000
7:21 properties being closed on the platform,
7:23 or 200 to 300 being closed
7:27 as we speak. Now, that needs a lot of
7:30 smartness on the platform, and we
7:33 don't have any field
7:35 force on the ground. So identifying
7:38 brokers happens on the basis of signals —
7:41 brokers are called agents here,
7:43 or property consultants, and there's a
7:45 very different phenomenon in how
7:47 these people behave and work — so we
7:50 identify their insights, their
7:53 signals, when people are uploading
7:56 pictures. What we do is tell owners:
7:57 please upload your pictures
7:59 via WhatsApp. And when somebody sends you
8:02 photographs, for silly reasons they'll
8:04 just select 10 or 15 photographs
8:07 together, and one of the photographs can
8:09 be a "good morning" or a "happy birthday"
8:12 image — and you really don't want those
8:14 images to be available on your platform,
8:16 correct? So there has to be scrutiny
8:18 which happens. So I'm talking about
8:21 2016–17: we had built a model with
8:24 Google — at that time, obviously, Vertex
8:26 was not there — with object
8:27 identification, where we started
8:30 identifying which objects were
8:31 in the images. With that, we used to
8:33 classify whether an image was of a
8:35 house, dining hall, kitchen, or bedroom,
8:38 and then we used to accept or reject the
8:40 images. So, multiple use cases — and to do
8:44 this particular use case, it took us a few
8:46 months at that point in time —
8:49 building the model, training it, tuning
8:51 it. And there was no "AI"; it was machine
8:53 learning at that point. The
8:54 beauty of our industry is that
8:56 every two or three years you'll have a
8:58 new buzzword and people start using that
9:00 particular thing — we were building
9:01 machine learning models. But now, with
9:04 Gemini and with Vertex and the models
9:07 which are available, that has
9:09 become so seamless for us. We had built
9:11 our own model. We had deployed it. We
9:13 were incurring a cost on it.
9:15 Now it's a SaaS model: we just
9:17 pass in the image and we ask
9:19 the model whether this is an image of a
9:21 property or not. And not only does it tell
9:23 me whether it's an image of a property — it
9:25 can also beautify the image and give it
9:27 back to me. So those are the seamless
9:28 things which have definitely
9:30 helped us with Vertex AI. Thank
9:33 you. I think the lessons that
9:36 we've learned over time with Vertex have
9:38 been how well it fits into the
9:41 ecosystem of other
9:45 solutions that exist on Google Cloud.
9:47 Vertex itself has
9:49 broadened quite significantly over time.
9:51 Obviously, we were using it to train
9:53 models initially — so, being able to train
9:55 a model and evaluate how
9:57 successful it is. If, for
9:59 example, like Akhil mentioned, for object
10:01 detection — if you were training
10:02 models right now, having the ability to
10:04 evaluate and figure out how accurate
10:05 your model is, and combining it
10:08 with other tools to make that
10:09 work, whether they're Google-related or
10:11 not. We use a product called
10:12 Weights & Biases to make sure
10:15 that the model we're
10:17 creating is evaluating
10:19 correctly, is actually performing
10:21 correctly, and has no regressions. But
10:23 also, we have this
10:25 continuous stream of
10:27 foundation models that we're creating,
10:29 and being able to scale on Vertex
10:30 AI from there has been phenomenal. And
10:33 in terms of data storage, you have a
10:35 whole array of options — anything
10:38 from Hyperdisk to plain Google
10:40 Cloud Storage — and it all
10:42 integrates together into this one
10:45 product. I think a lot of the
10:47 development team is very happy to
10:49 work with a bunch of Google tools where
10:51 everything exists in one product
10:52 or another. Which is actually a great
10:55 thing, I guess, for the audience to know,
10:57 right? Part of the power of Vertex AI
10:59 is that integrated vertical
11:02 stack and solutioning, making it
11:03 easier for your teams and your
11:05 developers to actually go from
11:07 prototyping and experimentation on new
11:08 things to actually deploying into
11:10 production for your customers. So, Tete,
11:13 how have these particular
11:16 functionalities of Vertex helped
11:18 in your case — using the models
11:20 that are provided, the APIs,
11:23 all the integration capabilities?
11:25 Absolutely. So Prompt AI is still
11:28 in its early days — we've been
11:30 operating for about 18 months — and that's
11:32 been an interesting experience, so
11:34 I want to share from that
11:35 perspective. When you're an early-stage
11:37 company, many times you have to do a lot
11:39 of trial and error, iterating a lot
11:42 through these use cases and processes.
11:45 I did my PhD at UC
11:49 Berkeley before, and this is very
11:50 different from doing academic research,
11:52 where you want to get everything right;
11:53 many times in a startup you really
11:55 need to get the direction right first,
11:57 do these iterations, and try to be as
11:59 efficient and fast as possible. So
12:02 sometimes we make engineering compromises,
12:03 and sometimes we look for off-the-shelf
12:06 solutions first before we decide to
12:08 delve into it ourselves. And I think Vertex AI
12:11 has been tremendously helpful in that
12:14 sense, because we can put
12:17 together a solution very quickly — and
12:19 this was unimaginable a couple of
12:22 years back. Now we can just put
12:24 things together and be very fast in
12:27 actually going to market: deploy
12:30 to either a group of test
12:33 users or our entire user base, and
12:35 iterate from there. And if we
12:37 realize, okay, we've got to do model training,
12:39 we'll do that afterwards — and we can
12:41 still deploy those models on Vertex, for example.
12:43 And Gemini, for example, has
12:46 been really, really transformative, because
12:49 now you have this
12:51 general, almost like a computer, that
12:54 understands instructions, that
12:56 understands natural language, so we
13:00 are able to build very high-level
13:03 applications very quickly using these
13:05 APIs, which is great. And I think
13:08 you all pointed to one thing,
13:10 which is that your teams like Vertex —
13:12 they like developing with the
13:13 platform. But let's get a little more
13:15 specific on that. How long — if you can
13:18 give us a sense, especially for the audience —
13:23 does it take your teams to
13:25 go from, let's say, an experimentation
13:27 phase, or a trial phase, for new
13:30 parts of your products and solutions, to
13:32 having the model trained and
13:33 everything ready to go, and then
13:35 deploying? So can you give us a sense of
13:35 what that journey looks like? How
13:37 long does it take? Oh, I can share an
13:39 interesting story, actually. We
13:41 were building this pet feature —
13:44 visual AI helps you with
13:47 anything: kids, pets, of course. I want
13:50 to see my dog! Absolutely.
13:52 Yeah, we've got to recognize them —
13:53 understand whether this is your dog or the
13:55 neighbor's dog, and whether your dog is
13:56 doing anything it's
13:59 not supposed to do, right? Or anything
14:01 that's just interesting and fun. So we
14:03 were implementing this feature, and our
14:05 design and product officer — he moves
14:08 pretty fast, but he was taking his time:
14:11 oh, the engineering team
14:13 is going to take a while to
14:15 implement this thing, especially since
14:17 we were implementing it as a
14:18 full-scale feature that was going to be
14:21 pushed out to everybody. And he
14:26 estimated it was going to take us six
14:28 weeks to get it done — and we got it done
14:31 within three. Three weeks. Three weeks.
14:33 Oh, so that's a 50% cut from the original
14:36 time frame that you expected.
14:37 Absolutely. And he usually
14:41 says I tend to bend space-time in
14:43 the company, because whenever they say
14:46 it's going to take eight weeks, I say,
14:47 how about four? Let's work that
14:48 out. How about four? And this time it
14:51 was genuinely surprising that we got
14:53 it done within such a short period of
14:55 time. That is pretty amazing —
14:57 great productivity gains for
14:59 your development team. Absolutely.
15:01 And we met the deadline. You just
15:03 need to be careful, right? Next
15:04 time he'll tell you two weeks,
15:06 expecting it to be ready in one. I'm
15:08 always greedy. There you go. All right.
15:11 So — any
15:15 similar experiences? Yeah, I
15:16 can go first. So we have a
15:19 model that we've deployed
15:21 into production that can detect
15:23 deepfakes: it can detect whether images, audio, and
15:25 video are AI-generated
15:27 or not — whether that's from, you know,
15:30 Google, OpenAI, or
15:32 open-source models, etc. It doesn't matter
15:33 who's producing them. A key part of that is
15:36 the curation of synthetic
15:38 data. We're
15:42 continuously upgrading these models,
15:44 and this is almost automated
15:46 functionality now. The idea is that
15:48 we have this crawler that goes out
15:51 and observes different GitHub
15:53 repositories, Hugging Face, etc., and
15:55 it's trying to figure out if there are
15:56 new commits or new models
15:58 being published. It scrapes
16:00 data, puts it into a Cloud Storage
16:02 bucket, and puts it into an Excel
16:06 sheet or Google Sheets — at this point
16:07 probably the most untechnical part of
16:09 this entire process. We'll remove that
16:10 Excel — we're recording. And effectively
16:12 what ends up happening is: as soon as
16:15 some QA person says, oh, this is
16:17 actually valid — here's a data
16:20 set, or here's a model, that our model has
16:21 not seen before — and the regression test
16:24 shows that the current model has low
16:25 coverage of it, it'll immediately
16:28 trigger model training on Vertex
16:30 through the training platform — a
16:32 "custom job," as they call it —
16:34 and effectively train a model
16:36 immediately to get that coverage. For
16:37 example, in the last week and a
16:42 half, there have been
16:44 three models that have come out:
16:44 Gemini 2.5 now supports image
16:46 generation, OpenAI supports image
16:48 generation through ChatGPT, and Midjourney
16:51 came out with V7. And for all of those models —
16:54 even though Midjourney came out on
16:55 Monday or Tuesday — if you go upload a
16:58 picture from that product today, it'll
16:59 tell you that's fake. And the reason
17:01 for that is that it quickly gathers
17:03 data, does a regression test, and
17:05 immediately fires off a custom
17:07 training job to train that model
17:09 and get that coverage. So it's
17:12 built in a way that puts
17:14 all the pieces together. Well, it's
17:15 saving you, but also saving all your
17:17 users, a lot of time, right? Yeah. The
17:20 users expect that if there's
17:22 a new model that comes out — an
17:25 image generator, video, etc. — our
17:28 users expect coverage almost
17:29 immediately, right? Otherwise, if you
17:30 have a firewall or a spam filter
17:32 that can only catch spam from a month
17:35 ago, it's not very useful, because
17:37 the attacks are improving almost
17:39 every day or every week. That's true.
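The retraining loop described here — scrape samples from a newly published generator, run a regression test, and fire a training job only when coverage drops — can be sketched as below. The threshold, names, and stub `launch_job` are illustrative; in production the launch step would submit something like a Vertex AI custom training job:

```python
from typing import Callable, Dict, List

# Fraction of known-synthetic samples the current detector must catch
# before we skip retraining (illustrative value).
COVERAGE_THRESHOLD = 0.95

def regression_coverage(predict_is_fake: Callable[[str], bool],
                        samples: List[str]) -> float:
    """Share of known-fake samples that the current model flags as fake."""
    if not samples:
        return 1.0
    return sum(1 for s in samples if predict_is_fake(s)) / len(samples)

def maybe_retrain(predict_is_fake: Callable[[str], bool],
                  samples_by_source: Dict[str, List[str]],
                  launch_job: Callable[[str], object]) -> List[str]:
    """For each newly scraped generator, launch a training job only if the
    current detector under-covers its samples. Returns the sources retrained."""
    launched = []
    for source, samples in samples_by_source.items():
        if regression_coverage(predict_is_fake, samples) < COVERAGE_THRESHOLD:
            launch_job(source)  # in production: submit a custom training job
            launched.append(source)
    return launched
```

The design point is the gate: models whose output the detector already catches cost nothing, so the pipeline's training spend tracks only genuinely new generators.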
17:41 And what an amazing use of the
17:43 product, too, right? Detecting
17:45 deepfakes and all that. So that's great —
17:47 using it for productivity gains, but
17:49 also for time to service, time to market.
17:51 That's pretty important,
17:53 pretty good for you too. And with all
17:55 those properties, Akhil, and
17:57 all the services that you provide, how
17:59 does that work for you in terms of
18:01 time? Hello — yeah.
18:06 So if you look at NoBroker, we don't
18:09 only help people find houses or buy
18:12 houses. We work across all facets of the
18:16 property: you may want to
18:18 get moving services, you may want
18:22 your rental agreement, sale deed, or
18:24 property deeds, you want your house
18:26 cleaned, you want your house painting
18:28 done. And if you look at all these
18:31 services, they have a touch point, and
18:34 they need somebody to go visit your
18:36 house — maybe to see how much of the area
18:38 has to be painted so that I can give you
18:40 the quotation for that particular thing,
18:42 or, when you are moving, how big
18:45 your house is. Because
18:48 typically — and I'm sure this happens
18:50 across the globe — whenever somebody asks
18:52 how much stuff you have to move, you
18:55 will always say, "I have little." But when
18:57 the truck comes, which is supposed to
18:59 take the luggage — and with people who
19:01 are married, you'll suddenly find so many
19:03 lofts which have stuff coming
19:06 out — typically it overflows. So for
19:09 that, we used Gemini, and there are some
19:12 beautiful applications of what we have
19:13 done. Now what we tell our customers is:
19:16 take the NoBroker app, and if you
19:18 are moving, just roam around the house
19:21 with the video on. And when you are
19:24 roaming around, just open your wardrobes;
19:26 if you have beds with
19:27 storage, just show us how
19:30 much stuff is there; let us know
19:32 if that fridge has to be moved, this
19:34 sofa has to be moved, the TV has to be moved.
19:36 And then we calculate the cubic
19:39 capacity — what is needed to move this
19:41 particular house — and what will be the
19:44 cost of moving that particular house.
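In outline, this walkthrough-video estimate reduces to one multimodal prompt plus some arithmetic on the reply. A hypothetical sketch — the prompt wording, JSON schema, and flat rate are all invented for illustration, and the actual model call is omitted:

```python
import json

# Hypothetical prompt sent along with the customer's walkthrough video.
VOLUME_PROMPT = (
    "List every item to be moved in this home walkthrough video and estimate "
    "its packed volume in cubic feet. Reply as a JSON array of "
    '{"item": str, "cubic_feet": number} objects.'
)

def total_volume(model_reply: str) -> float:
    """Sum the per-item volume estimates from the model's JSON reply."""
    items = json.loads(model_reply)
    return sum(item["cubic_feet"] for item in items)

def moving_quote(cubic_feet: float, rate_per_cubic_foot: float = 1.5) -> float:
    """Toy pricing: volume times a flat rate. A real quote would also factor
    in distance, labor, floor level, and so on."""
    return round(cubic_feet * rate_per_cubic_foot, 2)
```

Asking for a constrained JSON shape is what turns a free-form video description into something a pricing engine can consume directly.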
19:46 Imagine: earlier we were doing a
19:50 guesstimate, which 60% of the time did
19:53 not work well because of the hidden
19:55 stuff that's there in the form of your
19:58 kids' toys or maybe the old clothes
19:59 that you have. All of those things we are
20:01 able to account for now. So that's one, and
20:03 it's phenomenal. Then the second one is
20:06 your lease agreements. Again —
20:08 typically in India this happens
20:10 after 11 or 12 months — you used to have to
20:14 enter all the details on a form. Now what
20:16 we do is just tell them: whatever lease
20:18 agreement you have, in whatever format,
20:20 whatever language it has been written in,
20:22 just upload it. We scrape it, we use
20:26 OCR, we get all the details, and we ask for
20:29 three or four pieces of information — what's the
20:30 new rent, what's the new deposit. Just fill
20:32 in those details, click confirm, and boom:
20:35 your rental agreement is ready. So all those
20:37 things which were taking days, and
20:39 which earlier needed human
20:41 intervention, all of it
20:43 we are able to do with AI now, which is
20:45 great. So, not just eliminating the
20:47 middleman, but all those potential
20:49 services — someone going to check on
20:51 what space is required, the service,
20:53 how much it's going to cost, and all
20:55 that. So, that's pretty impressive.
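The lease-renewal flow just described — extract the old agreement's fields via OCR, then overlay the three or four details the user confirms — can be sketched as below. The prompt and field names are invented for illustration, and the extraction step itself (OCR or a multimodal model call) is left out:

```python
# Hypothetical extraction prompt for the uploaded agreement, in any language.
EXTRACTION_PROMPT = (
    "From this lease agreement, return JSON with keys: "
    "landlord, tenant, address, rent, deposit."
)

# Fields we refuse to guess; if extraction misses them, ask the user.
REQUIRED_FIELDS = {"landlord", "tenant", "address"}

def renew_agreement(extracted: dict, updates: dict) -> dict:
    """New agreement = last year's extracted fields, with the user-confirmed
    values (new rent, new deposit, ...) taking precedence."""
    missing = REQUIRED_FIELDS - extracted.keys()
    if missing:
        raise ValueError(f"extraction incomplete, ask the user: {sorted(missing)}")
    return {**extracted, **updates}
```

Keeping a hard required-fields check is the safety valve: the model fills the tedious parts, but the renewal never silently ships with a missing party or address.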
20:56 Thank you, Akhil, for sharing that.
20:59 And actually, I could use some of those
21:00 services too.
21:02 Not in the US yet. Not yet. Not yet.
21:05 All right. So now let's
21:08 think about the future. You've been working
21:09 with Vertex already for
21:11 a while, and the product has evolved,
21:13 right? It was non-existent, then it was
21:15 ML Engine, then it's Vertex AI today. And there
21:18 have been a lot of announcements at
21:20 Next this week about Vertex —
21:22 some of them related to agents and
21:24 agent building, and some of them
21:26 related to new models. So where I would
21:28 like to take the conversation now
21:30 is: how are you looking at the future for
21:33 your companies, and how can some of these
21:35 announcements, these new
21:37 developments and advancements,
21:39 help you power that next layer
21:42 of innovation that you are thinking
21:44 about for your companies? If you can
21:46 tell us a little bit about that,
21:48 it will also give us a glimpse of
21:50 where your industries are going, too.
21:53 Tete? Yeah, I can go first. So
21:58 imagine a future where these
22:00 spaces are watched by AI, so that we
22:03 don't have to spend hours watching these
22:05 videos — with this information coming
22:07 back to a centralized place, and us
22:11 just able to ask questions about what
22:14 had happened, and get the insights of
22:17 what had happened. And that means, for
22:19 example — well, that means the first
22:21 step is to understand the environment, right?
22:23 Visual understanding. And after that, it
22:26 has to be agentic, because it needs to
22:30 connect the things that had happened
22:33 to intentions — to what we as
22:38 operators or users, homeowners, business
22:41 owners, would like to see. And
22:44 these things are all different.
22:45 Sometimes they are personal: what I want
22:47 for my home might be very different from
22:49 what you want for your home. And
22:52 what a retail shop owner is
22:54 trying to get might be very
22:57 different from a hotel owner, for
22:58 example. And these models — these
23:02 systems, I would say — have to be able
23:04 to understand intentions and work in a
23:07 way that serves what different people
23:10 want, very differently. And I think
23:13 Vertex AI can play a very
23:17 important role in that — for
23:20 example, reasoning capacity for these
23:22 models. Now they have to think,
23:25 step by step, laying out what they
23:27 have to do; now they have to go to
23:30 the respective parts of the
23:33 system — whether it's storage, where they
23:35 might have to check some data, or
23:37 they might have to go
23:38 into the database and come up
23:41 with a SQL query, some keyword, to
23:44 search for some information. And on top
23:46 of that, they need to synthesize this
23:48 information and then decide what to do
23:50 next — or stop and present that
23:52 information to people, sometimes even
23:54 through a voice interface. So we're really
23:57 getting to the stage where computers
23:58 are getting really
23:59 sophisticated, and we
24:03 just have AI to automate
24:08 the tasks that are either boring or
24:11 just very heavy for human beings to
24:13 do. Yeah. And I love the fact that you
24:15 mentioned — with all these things,
24:16 you started by saying
24:18 it's agentic, right? A lot of these
24:20 process flows are going to be built
24:22 on top of that, and Vertex can give you
24:24 capabilities — the Agent SDK, the
24:26 Agent Builder, all those
24:28 parts of the product. So how is that
24:31 helping you, or potentially helping you?
24:33 Let me go to you, Akhil — are you
24:37 planning on using those? Are you already
24:39 on that agent-building journey? How
24:41 agentic will those solutions be for
24:43 you? Any specific things that you
24:46 can share with us? So, at the scale
24:46 can share with us? So uh so at the scale
24:49 of no broker where we have like close to
24:51 5,000 employees working for us and most
24:54 of them uh a big chunk of them or a
24:57 majority of them work in our customer
24:59 service department where they have to
25:01 touch base with the customer answer
25:03 their queries understand what they need
25:05 like I was talking about packers and
25:06 movers I was talking about cleaning
25:08 painting and all those things so then
25:10 but for a customerf facing company the
25:13 SOP is that you should have a consist
25:16 consistent great quality service which
25:19 is unbiased by the mood of your agent.
25:22 Correct? It should it should it should
25:23 not happen that I had a fight with my
25:26 wife tonight and or early in the morning
25:29 and I'm disgrunted on my customer and
25:32 I'm not happy to help that particular
25:34 customer and that had always been in my
25:36 mind that as we grow big how are we
25:39 going to solve that particular thing. So
25:40 we started building models very early.
25:43 So now what we do we have built a
25:44 platform called convoen.ai which is like
25:47 zen out of customer conversations. C uh
25:50 customers can be conversing with you on
25:52 a chat chatbot emails SMS WhatsApp and
25:57 on your call center. We take all those
25:59 conversations and India the beauty is uh
26:04 we talk in multiple languages. So we
26:06 have like 14 15 languages which are
26:08 actively used otherwise we have hundreds
26:10 of languages and people switch
26:12 languages. Uh so they'll be speaking in
26:14 English and suddenly Hindiad that's what
26:17 I did and it it comes very very
26:19 naturally to us. So none of the models
26:21 were able to solve that particular
26:22 problem. So we created our own ST models
26:25 and now once we had that particular
26:27 thing we were able to create agents like
26:30 agent assist where there is a virtual
26:33 agent who is sitting on top of our
26:35 platform and one of my call center
26:37 executive and he's he or she is talking
26:39 to the customers it can tell you what
26:41 exactly is the history of that customer
26:43 that okay she came to no broker platform
26:46 3 months back this is what she had or
26:49 maybe she has active service going on
26:51 she had sent you an email she's not
26:53 happy about something which is going on
26:55 this is what you need to tell so
26:57 basically uh the things like okay sir
27:00 can I put you on hold and then I'm going
27:02 back I'm going to search with my manager
27:04 all those things immediately goes off
27:06 now when you talk to the customer you
27:07 say okay hi this is what is happening I
27:10 see that you have a packer remover
27:11 movement and our partner has not reached
27:13 I have already put uh uh put a touch
27:16 with my partner and he or she may be
27:18 reaching uh in another 30 minutes so
27:20 that that levels up your experience uh
27:23 to a different level. Then after we did
27:26 that, we realized that there are a lot of
27:28 tasks which don't even need a human, because
27:31 I feel as humans we should do things
27:34 that are non-mundane: we should
27:36 be thinking, we should be creating new
27:39 stuff, we should be doing something smart.
27:41 So then we created our own virtual
27:43 agents, you could say humanoids, which
27:46 can talk in Indian languages, and
27:48 that's what I was discussing with Zohaib also:
27:50 it can make a call to you, and it
27:53 will feel as if a human is speaking to
27:55 you. If, let's say, you have a property
27:57 visit scheduled, it will just call you
27:58 and say, "hi, I see you have a property
28:01 visit scheduled; are you coming or
28:03 not?" And then somebody says, "oh no,
28:05 there is traffic," and it replies, "I also see that
28:07 there's traffic, so that means
28:09 you'll be delayed by 45 minutes,
28:10 that is what Google Maps is showing;
28:12 let me just reschedule the appointment
28:14 for you, and I'll also inform the person
28:16 in the field who was supposed to be with
28:17 you on that particular visit." So things
28:19 like that we have started automating, and
28:21 that's where the agentic
28:24 theme has started coming into our
28:26 platform. And because it was so beautiful,
28:29 we have started selling it as a
28:31 product to other companies also. There
28:33 you go, so another business revenue
28:35 stream there; that's good.
28:36 Congratulations on that, Akil, and thank
28:38 you for sharing it. So what I'm hearing
28:40 also is that it's not just the internal
28:42 experience that gets better with all
28:44 these new advancements, but also the
28:47 experience for your customers, of course,
28:48 right? So not just for the internal
28:49 developers that are using the platform,
28:51 but also the end result. No
28:53 company can be successful until your
28:55 customer is happy. Absolutely. And I
28:58 love hearing that it's actually good for
29:00 you to use our technology on both sides.
29:02 So thank you for that. Now, Zohaib,
29:06 let me ask you, because in your space
29:08 specifically there's a lot of
29:09 innovation going on, with models out
29:12 there from you and from other companies;
29:14 there's a lot of competition and a
29:15 lot of innovation that we're bringing to
29:17 the table. So how are you navigating
29:18 through that, and how do you see
29:20 the future for Resemble AI
29:22 looking, with your technology, with the
29:25 help of Google, but also with the things
29:26 out there that are
29:28 coming out? Yeah. So I'll answer this in
29:30 two ways. We're kind of lucky that
29:34 we develop models and our customers go
29:35 use those models in applications. So we
29:37 have a lot of insight and oversight into
29:38 what applications are very
29:42 useful and where they're
29:43 creating an impact. So we see
29:45 everything from call automation, and
29:47 I think these two gentlemen have
29:48 talked a lot about different automations
29:50 and different agents that are really
29:51 applicable, and everyone here is probably
29:54 tired of hearing about voice AI for the last
29:56 two days. So one of the things
29:59 that's probably the most impactful at
30:01 Resemble, and I've
30:04 actually worked with a circle of
30:06 other founders to implement this
30:07 inside of companies, and we're
30:09 really bullish on this:
30:12 I'm a firm believer that every
30:15 company should have one dedicated person,
30:19 ideally a team, but if you're a
30:21 startup, one dedicated person, just
30:23 exploring different agents and how they
30:26 could be applicable as employees in your
30:28 company. And that has
30:31 tremendous benefits for the company, and
30:34 it's now way easier than ever, right?
30:34 So, you can actually get employees that
30:36 could do programming; there
30:38 is literally software out there
30:39 if you wanted to get something
30:40 off the shelf. There's Devin,
30:42 and there are plenty of others. There are
30:44 customer success products out there.
30:47 The real power here is that
30:51 every company, like every human, is
30:53 slightly different from one
30:54 another, but the building blocks of
30:57 using Gemini, using OpenAI, using
30:59 different models to achieve different
31:01 tasks, is a matter of plumbing them
31:03 together, and then the core really
31:05 becomes how it works in your workflow.
31:07 And I'm really
31:09 bullish on this: the most valuable
31:12 agentic AI
31:14 company is probably Slack right now, and
31:16 the reason is that every AI
31:18 agent that your company will interact
31:20 with is like an employee within Slack,
31:22 so why would that be any different? So
31:25 having these agents be deployable
31:27 is like having staff with
31:31 10-second SLAs. No human staff member
31:34 can give you a 10-second SLA, but an
31:36 AI agent can. And there's a lot of
31:39 great stuff happening within
31:42 these companies, including Resemble. You
31:43 know, we're deploying bots that are
31:45 effectively helping customer success,
31:47 internal and external; there are different
31:49 ones. We're hooking them up to
31:51 different products. We have a bot
31:53 that sits on our
31:55 documentation page and helps people build
31:57 integrations, because at a certain point
31:59 we're not going to write and maintain
32:00 SDKs for every single language; it's
32:02 too much work for us. But what we can
32:04 do is effectively have
32:07 a chatbot that's solely
32:09 geared to understand our SDK and our
32:11 documentation, and then the user can go
32:13 in and say, "oh, I need to plug this into
32:15 Genesis, or I need to plug this into
32:16 Unity; how do I do that?" And of
32:19 course, we're not going to write a guide
32:20 for every single integration, but this
32:22 thing can do it on the
32:23 spot, on the fly. So creating these
32:25 agents, particularly
32:28 internally, which is where I
32:29 have my focus right now, can pay
32:31 a lot of dividends and helps your
32:33 company learn extremely quickly.
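The documentation bot he describes follows the standard retrieval-grounded pattern: fetch the doc snippets most relevant to a question, then let a model answer from them. As a minimal sketch (Resemble hasn't described their implementation; the snippets below are invented, and simple word overlap stands in for the embedding search a real system would use), the retrieval step looks like:

```python
import re

def tokens(text):
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, snippets, k=1):
    """Return the k snippets sharing the most words with the query;
    these would then be pasted into the model's prompt as context."""
    return sorted(snippets,
                  key=lambda s: len(tokens(query) & tokens(s)),
                  reverse=True)[:k]

# Hypothetical doc snippets, purely for illustration.
DOCS = [
    "To stream audio into Unity, open a WebSocket to the synthesis endpoint.",
    "Authentication uses an API key passed in the request header.",
]
print(retrieve("I need to plug this into Unity. How do I do that?", DOCS))
```

Grounding the model on retrieved snippets is what lets one bot cover integrations nobody wrote a guide for: the guide is assembled on the fly from whatever documentation does exist.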
32:35 I'm not sure what the audience makeup
32:36 is, but if you own a company, you should
32:37 probably be doing this. If you don't,
32:38 then you should probably go to your boss
32:40 or manager and say there should be
32:41 a team or a group of people dedicated
32:43 just to experimenting with agents
32:45 internally, making those workflows
32:46 better. I love that idea, and let's
32:50 actually quiz the audience. Just by
32:52 show of hands, how many of you are
32:55 maybe already doing that: creating agents
32:56 internally, going to your managers and
32:59 saying, hey, we need to do this for
33:01 some of those employee tasks and
33:03 functions that are very repetitive, or
33:04 that require intelligence but could be better
33:06 done with AI right now? Show of
33:09 hands. There you go. We're at like 30% of
33:12 the room. 40. Yeah. Just about.
33:14 There's a while to go. It's a lot to go.
33:16 Yeah. Exactly. But we're just starting
33:18 on that journey. So I think it's coming.
33:20 It's coming. All right. Now,
33:22 that sounds great, and thank you for
33:24 sharing that. All right. So since we
33:26 have, of course, founders in the
33:28 room, one of the things that I'm
33:30 sure they are probably thinking about
33:31 is your companies. You are at
33:33 different stages, right? Tete's
33:35 company is earlier on; you guys have been
33:37 around for a few years already. So what's
33:40 next for your company? What are you
33:42 excited about for your
33:43 next milestone?
33:45 Let me go first. If you talk
33:48 about NoBroker, we are an 11-year-old
33:49 company, the only proptech unicorn in
33:51 India, but even at this stage we
33:55 are present in just six cities in India.
33:57 So we have a lot of ground to
34:00 cover, and with AI and with the kind of
34:03 automation we are able to do,
34:05 and the rate at which technology
34:08 is changing, I think companies will
34:11 become global. So it will be
34:12 solutions which you'll be able to
34:14 create from one country that will work
34:17 across the globe, and that is something
34:19 people keep asking me: when
34:21 exactly are you going to come to
34:23 different countries? Because in the US
34:26 also, you see that the
34:28 intermediation cost is extremely high,
34:30 with a lot of things happening on
34:32 the legal side. So there's
34:35 opportunity for us there also, but
34:38 right now we are focusing on India, a
34:40 very big opportunity with NoBroker, the
34:43 NoBroker services we have, and
34:45 ConvoZen, which is the customer
34:48 intelligence AI product we have
34:50 built. So we'll focus on that. Great, so
34:52 hopefully we'll see you soon in the
34:54 US, and then we'll be happy to move you.
34:57 There you go. All right, thank you. Tete,
35:00 what's next for you? Absolutely. So Prompt AI
35:02 was founded by a group of PhD students
35:05 and a professor from Berkeley. All
35:07 of us have been working on computer
35:09 vision; for myself personally it's
35:11 been a decade, and one of my
35:15 colleagues has been working on it for
35:17 more than three decades, since the early days
35:19 when he was at MIT. So we just
35:23 had this frustration back in the day:
35:26 well, we've been developing so many
35:28 different algorithms and research works,
35:30 but how come these cameras are still
35:33 dumb cameras? How come they are just
35:35 recording, and I have to go back
35:37 and use a tiny little
35:38 slider to look for what I'm trying
35:41 to find? They can't really talk to
35:44 each other, they
35:46 don't really understand any sort of
35:48 information. How come we've been
35:50 doing so many years
35:53 of work in computer vision and they
35:55 still can't tell you whether your cat
35:56 has jumped onto the couch or not? It's
35:58 not supposed to be that hard. Okay.
36:01 So that's why we started, and now we're
36:04 getting closer and closer. I think
36:06 we're at the dawn of visual physical
36:08 AI. You hear the term
36:10 physical AI all the time, right? Robots
36:12 and drones and autonomous agents
36:15 everywhere. But then you think about it:
36:17 who's going to watch them, right?
36:19 You've got to deploy these cameras
36:21 everywhere so that you can make sure
36:23 they're not malfunctioning
36:26 or doing bad things. And I think
36:29 our goal is to have
36:34 computers be able to do anything that
36:37 only requires a pair of eyes. If we just
36:40 need a human being to sit there and
36:42 watch, please do that with a computer,
36:45 because we humans can do
36:48 much more interesting things and we can
36:50 spend our time more efficiently. We can
36:53 spend the time with family,
36:56 with friends, and focus on the work
36:58 that actually requires our attention,
37:01 rather than on these tiny little
37:03 things. So that's why I'm really
37:05 excited about the future. I think these
37:07 technologies can transform how people
37:09 interact with their home, with
37:11 their pets, and their environment, and also
37:13 how businesses function. It's going
37:15 to make us more secure, feel more safe,
37:19 and more connected. Great. Thank you.
37:21 And I guess what I'm hearing from you
37:22 also is that there are, of course, the software
37:24 solutions you provide today, and
37:26 maybe the hardware pieces are coming
37:28 at some point too. Yeah, absolutely. And
37:30 I think a lot of this hardware has
37:32 been really commoditized. Like 20 years
37:35 ago, I remember a nice
37:37 camera would cost at least hundreds
37:40 of dollars, if not thousands of
37:42 dollars, and now they cost 20 bucks. You
37:45 can buy them from almost anywhere,
37:47 and it's not hard to manufacture
37:50 them either. So the reason that many
37:54 people are still not buying them is
37:56 that they really don't find a use case
37:58 for them. It's like, I buy a camera, I
37:59 put it there, I forget about it, and I
38:00 pay for cloud storage for it. And
38:04 now finally people are able to get some
38:06 use out of them, and I think it's just
38:08 going to drive this very positive cycle
38:10 where people keep buying more cameras,
38:12 and as a result we discover more use
38:15 cases, we try to automate them, people
38:17 become happier, and they buy more cameras.
38:19 True, and at that point they will need
38:21 your visual intelligence platform too.
38:23 Absolutely. Which is great. All right,
38:25 thank you for that. So, Zohaib, let me
38:27 close this section with you, in
38:29 terms of what's next for the company
38:31 and what you are excited about. Yeah,
38:33 there's a lot to be excited about.
38:35 Just to give you context, 4 years
38:38 ago now: 2021, we're in 2025. Yeah,
38:41 that's four years ago. Time is a
38:43 blur. Four years ago, one of
38:46 our customers
38:48 actually published a show on Netflix
38:50 called The Andy Warhol Diaries.
38:53 The entire narration in the Andy Warhol
38:55 Diaries, and Andy Warhol of course passed
38:57 away back in 1987,
39:01 every narration from him in that
39:03 documentary was completely AI generated.
39:06 I call this the pre-ChatGPT era, and
39:10 that gave us an idea of mainstream
39:13 use of generative AI; that show, that
39:15 documentary series, was nominated
39:17 for four Emmys. And that got
39:21 me thinking a lot about,
39:23 okay, this piece of technology,
39:25 in 2021, four years ago, is able to
39:28 reproduce something that a normal
39:29 consumer watching TV cannot tell
39:31 is AI or not anymore. And if you
39:34 fast forward to today, you have this in
39:36 pretty much all the modalities. You can
39:37 obviously go and create gorgeous videos
39:40 with OpenAI, or Google with
39:42 Veo 2 now, etc. But you can also go to
39:46 open source and do them. And I don't
39:47 think open source is slowing down; I
39:49 think open source is keeping up with
39:50 the pace of where the frontier models
39:53 are. So the thought really comes in, when
39:55 we're creating these platforms,
39:56 especially as Resemble is creating these
39:57 models and allowing millions of
39:59 users to use and create models
40:01 themselves: how do we do it in a safe
40:02 manner? How do we make sure people can't
40:05 scrape a video of
40:08 Akil off of YouTube and effectively just
40:10 clone his voice, take his face, create a
40:12 version of him? That could
40:14 be extremely dangerous. Nobody will do
40:16 that. Somebody might do that. You know,
40:18 we've had people on YouTube who have
40:20 said, "I found my voice being used
40:22 by a different channel." We've had
40:24 people complain: I never
40:26 was in this ad, I never promoted this,
40:29 etc. You have politicians, obviously.
40:31 And the thing that we are really
40:35 bullish on now, and that we really want
40:36 to have an impact on, to be honest (we hope
40:38 the company plays some part in this),
40:40 is the deployment of responsible and
40:42 safe AI. And that doesn't just mean
40:44 guardrails. You know, I'm in the
40:46 Bay Area, Tete's in the Bay Area, and
40:48 the way we think in the Bay Area, to be
40:50 honest, is that technology is the answer to
40:53 problems as well, right? Problems created by technology can
40:54 be solved by technology, and not
40:56 necessarily by policies, right? And so
40:58 we've been building models
41:00 around watermarking. We've been building
41:02 models around detecting deepfakes. We
41:04 open-source models around speaker
41:06 identification and person
41:07 identification. But all of those are
41:09 kind of coming together, and we're trying
41:10 to really get our arms around this
41:12 foreseeable problem where, you know, early
41:15 in January this year, 55% of the internet,
41:18 according to a lot of
41:19 researchers, was being created with
41:22 generative AI, and the
41:24 projection was that by the end of 2026,
41:26 90% of it would be created by generative
41:28 AI. I think by the end of 2025, with the
41:32 image and video models that are
41:33 coming out and that are widely accessible on
41:35 your phones at this point, 90% is a
41:37 pretty conservative
41:39 estimate for AI being used in content being
41:42 produced. So that really opens the
41:45 door for malicious users on the other
41:47 end who can also use that content. And
41:49 what we want to do is give
41:50 tools and models to people and
41:53 companies that are deploying these
41:54 models, to offer ways to
41:56 encourage the responsible use and
41:58 prevent the
42:00 malicious use of those models.
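The watermarking idea mentioned above, marking generated audio so it can later be identified, can be illustrated with a toy spread-spectrum scheme: add a low-amplitude pseudorandom signature derived from a secret key, then detect it later by correlation. This is only an illustration of the principle, not Resemble's actual method; production watermarkers work perceptually and survive compression, which this toy does not attempt.

```python
import random

def _signature(key, n):
    """Deterministic ±1 sequence derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(samples, key, strength=0.05):
    """Add a faint key-derived signature to the audio samples."""
    sig = _signature(key, len(samples))
    return [s + strength * w for s, w in zip(samples, sig)]

def detect(samples, key, threshold=0.02):
    """Correlate the audio with the key's signature; watermarked
    audio correlates near `strength`, clean audio near zero."""
    sig = _signature(key, len(samples))
    corr = sum(s * w for s, w in zip(samples, sig)) / len(samples)
    return corr > threshold
```

Anyone holding the key can check whether a clip came from the generator; making detection work without leaking the key, and surviving re-encoding or editing, is what makes real schemes hard.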
42:02 So I think that's where a lot of my
42:04 attention and focus is going, because I
42:06 think generative AI is
42:07 out of the box. These models are
42:09 going to improve; I have no doubt that by the
42:10 end of the year they'll be faster,
42:12 better, higher fidelity. That's
42:14 a given at this point. So the
42:16 question is, well,
42:18 what's the counter to
42:20 what we're about to see
42:22 happen in the world? True. And I'm so
42:24 glad that you mentioned that, because the
42:26 general concept, really, when we think
42:28 about guardrails, is regulations
42:30 and policies and what we can do, but you
42:32 mentioned something very specific, which
42:34 is, well, technology can also regulate
42:36 technology. That's an interesting
42:38 concept, and I think a lot of companies
42:40 are actually looking into it, because
42:42 policies and regulations will not be able to
42:44 advance as fast as we need them to, to catch
42:46 up with what's happening in technology.
42:48 So that's an interesting concept that
42:50 you brought up today. So thank you
42:51 for that. And actually, we're getting
42:54 close to closing the session. So I'm
42:57 going to do kind of a rapid-fire
42:59 one-liner. What would be your advice for
43:01 founders in the room who would like to
43:03 use Vertex AI in their solutions today?
43:07 What advice can you give them?
43:08 Just a one-liner, very quick. Let's start
43:11 with Tete, please. Yeah.
43:14 So speed is really important for
43:18 us startups. You've got to try your
43:21 best for that. Thank you. Yeah, I think
43:24 the same. So basically, the rate at
43:27 which you can innovate with Vertex
43:30 and everything else is extremely
43:34 fast. As I was mentioning, things that 10
43:37 years back were taking
43:38 months, and a few years back were taking
43:40 days, now take hours. So if you
43:43 think about a problem which you want to
43:44 solve, you should be able to do a PoC of
43:47 that particular thing extremely fast, to
43:49 know whether it's going to work or not,
43:51 and that can define how fast you want
43:53 to work on a problem. Thank you for
43:56 that. I'd say just go to
43:58 aistudio.google.com and click all the
44:00 buttons, and you'll learn everything
44:01 really quickly. That's a good one too.
44:04 Well, thank you so much for that.
44:06 Hopefully this session has been
44:08 useful for you. I have to say thank
44:11 you to all the founders, obviously, and to
44:12 everybody else in the room, and
44:14 (this is my commercial) if you are
44:16 not familiar with the Google for Startups
44:18 Cloud Program, please come talk to us at the
44:20 startups hub in the expo hall. Thank
44:23 you for being here today, thank
44:24 you for investing your time
44:26 with us today, and thank you for
44:28 evaluating or using our technology
44:29 already. Thank you, and have a great rest
44:31 of your day at the
44:33 event. Thank you.
44:38 [Music]