0:05 I'm really excited for this next section. We'll be doing a fireside chat with Andrew Ng, and Andrew probably doesn't need any introduction to most folks here; I'm guessing a lot of people have taken some of his classes on Coursera or DeepLearning.AI. But Andrew has also been a big part of the LangChain story. I met Andrew a little over two years ago at a conference, when we started talking about LangChain, and he graciously invited us to do a course on LangChain with DeepLearning.AI. I think it must have been the second or third one they ever did, and I know a lot of people here probably watched that course or got started on LangChain because of it. So Andrew has been a huge part of the LangChain journey, and I'm super excited to welcome him on stage for a fireside chat. So, let's welcome Andrew in.
0:54 [Music] [Applause]

0:59 Thanks for being here. By the way, Harrison was really kind. I think Harrison and his team have taught six short courses so far on DeepLearning.AI, and by our metrics, net promoter score and so on, Harrison's courses are among our most highly rated. So, go take all of Harrison's courses. I think the recent LangGraph one had the clearest explanation I have seen myself of a bunch of agent concepts. So they've definitely helped make our own courses and explanations better. Thank you guys for that as well.
1:36 You've obviously touched and thought about so many things in this industry, but one of your takes that I cite a lot, and that people have probably heard me talk about, is your take on discussing the agenticness of an application, as opposed to whether something is an agent. And so, as we're here now at an agent conference (maybe we should rename it an agentic conference), would you mind clarifying that? I think it was almost a year and a half or two years ago that you said it, so I'm curious whether things have changed in your mind since then.
2:08 I remember Harrison and I both spoke at a conference over a year ago, and at that time I think both of us were trying to convince other people that agents are a thing and we should pay attention to them. That was before, maybe midsummer last year, a bunch of marketers got hold of the agentic term and started sticking that sticker on everything until it lost meaning. But to Harrison's question: about a year and a half ago, I saw that a lot of people were arguing over whether something is an agent or not. There were different arguments: is it truly autonomous, or is it not an agent? And I felt it was fine to have that argument, but that we would succeed better as a community if we just said there are degrees to which something is agentic. If you want to build an agentic system with a little bit of autonomy or a lot of autonomy, that's all fine; no need to spend time arguing about whether this is truly an agent. Let's just call all of these things agentic systems with different degrees of autonomy. And I think that hopefully reduced the amount of time people wasted arguing over whether something is an agent, so we could just call them all agentic and then get on with it. So I think it's actually worked out.
3:16 Where on that spectrum, from a little autonomy to a lot of autonomy, do you see people building these days?

Yeah. So my team routinely uses LangGraph for our hardest problems, the ones with complex flows and so on. But I'm also seeing tons of business opportunities that frankly are fairly linear workflows, or linear with just occasional side branches. In a lot of businesses there are opportunities where, right now, we have people looking at a form on a website, doing a web search, checking a database to see if there's a compliance issue or whether it's someone we shouldn't sell certain stuff to, or taking something, copy-pasting it, maybe doing another web search, and pasting it into a different form. So in business processes there are actually a lot of fairly linear workflows, or linear with very small loops and occasional branches, where a branch usually connotes a failure and the workflow rejects the item. So I see a lot of opportunity there. But one challenge I see businesses have is that it's still pretty difficult to look at some work being done in your business and figure out how to turn it into an agentic workflow: what is the granularity with which you should break the work down into micro-tasks? And then, after you build your initial prototype, if it doesn't work well enough, which of these steps do you work on to improve the performance? That whole bag of skills, how to look at a bunch of stuff people are doing, break it into sequential steps with a small number of branches, and put in place evals, is still far too rare, I think. And then, of course, the much more complex agentic workflows, which I think you've heard a bunch about, with very complex loops, are very valuable as well. But in terms of sheer number of opportunities, I still see a lot of value in the many simpler workflows that are still being built out.
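To make that concrete, here's a minimal sketch of such a mostly linear workflow using LangGraph's StateGraph API, with one branch that connotes failure (the compliance rejection). The node names and stubbed logic are illustrative assumptions, not anything specified in the talk.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    form_text: str
    search_results: str
    compliant: bool
    reply: str

def read_form(state: State) -> dict:
    # stand-in for parsing the form on the website
    return {"form_text": state["form_text"].strip()}

def web_search(state: State) -> dict:
    # stand-in for a real web search or database lookup
    return {"search_results": f"results for {state['form_text']}"}

def compliance_check(state: State) -> dict:
    return {"compliant": "restricted" not in state["search_results"]}

def fill_form(state: State) -> dict:
    return {"reply": "submitted"}

def reject(state: State) -> dict:
    return {"reply": "rejected: compliance issue"}

g = StateGraph(State)
for fn in (read_form, web_search, compliance_check, fill_form, reject):
    g.add_node(fn.__name__, fn)
g.add_edge(START, "read_form")          # the linear spine
g.add_edge("read_form", "web_search")
g.add_edge("web_search", "compliance_check")
# the one branch, usually connoting a failure
g.add_conditional_edges(
    "compliance_check",
    lambda s: "fill_form" if s["compliant"] else "reject",
)
g.add_edge("fill_form", END)
g.add_edge("reject", END)

app = g.compile()
print(app.invoke({"form_text": "order for Acme Corp"}))
```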
5:06 Let's talk about some of those skills. You've been doing a lot of courses at DeepLearning.AI in pursuit of helping people build agents. What are some of the skills that you think agent builders all across the spectrum should master to get started?
5:23 Boy, that's a good question, and I wish I knew the right answer. I've been thinking a lot about this recently. A lot of the challenge is: if you have a business process workflow, you often have people in compliance, legal, HR, whatever, doing these steps. How do you put in place the plumbing, either through a LangGraph-type integration (and we'll see if MCP helps with some of that too), to ingest the data? And then how do you prompt, or process, and do the multiple steps in order to build this end-to-end system? One thing I see a lot is the importance of putting in place the right eval framework, not only to understand the performance of the overall system, but to trace the individual steps, so you can hone in on the one step that is broken, the one prompt that is broken, and work on that. I find that a lot of teams probably wait longer than they should just using human evals, where every time you change something, you sit there and look at a bunch of outputs yourself. I see most teams are probably slower than is ideal at putting systematic evals in place.
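As an illustration of tracing individual steps, here is a minimal, framework-free sketch: a hypothetical three-step pipeline where each step records its input, output, and timing, so you can hone in on the one broken step rather than eyeballing only the final answer. Dedicated tooling such as LangSmith does this properly; this just shows the idea.

```python
import functools
import json
import time

TRACE: list[dict] = []

def traced(fn):
    """Record each step's input, output, and latency."""
    @functools.wraps(fn)
    def wrapper(data):
        t0 = time.time()
        out = fn(data)
        TRACE.append({"step": fn.__name__, "in": repr(data),
                      "out": repr(out), "secs": round(time.time() - t0, 3)})
        return out
    return wrapper

# Three hypothetical pipeline steps, standing in for real LLM calls.
@traced
def extract_fields(text: str) -> dict:
    return {"name": text.split()[0]}

@traced
def web_search(fields: dict) -> str:
    return f"results for {fields['name']}"

@traced
def draft_reply(results: str) -> str:
    return f"Based on {results}, ..."

draft_reply(web_search(extract_fields("Acme Corp application")))
print(json.dumps(TRACE, indent=2))  # inspect each step, not just the end
```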
6:28 But I find that having the right instincts for what to do next in a project is still really difficult. The teams that are still learning these skills will often go down blind alleys, where you spend a few months trying to improve one component. The more experienced team will say: you know what, I don't think this can ever be made to work, so let's just find a different way around this problem. I wish I knew more efficient ways to teach this kind of almost tactile knowledge. Often you're there, you look at the output, look at the trace, look at the LangSmith output, and you just have to make a decision, in minutes or hours, about what to do next. And that's still very difficult.
7:12 And is this kind of tactile knowledge mostly around LLMs and their limitations, or more around the product framing of things, and that skill of taking a job and breaking it down, which is something we're all still getting accustomed to?
7:27 I think it's all of the above, actually. I feel like over the last couple of years, AI tool companies have created an amazing set of AI tools, and this includes tools like LangGraph, but also ideas: how do you think about RAG, how do you think about building chatbots, the many different ways of approaching memory, how do you build evals, how do you build guardrails, and so on. I feel like there's this wide, sprawling array of really exciting tools. One picture I often have in my head is: if all you have are purple Lego bricks, you can't build that much interesting stuff. And I think of these tools as being akin to Lego bricks. The more tools you have, it's as if you don't just have purple Lego bricks, but a red one and a black one and a yellow one and a green one, and as you get more differently colored and shaped Lego bricks, you can very quickly assemble them into really cool things. So I think of a lot of these tools, like the ones I was rattling off, as different types of Lego bricks. When you're trying to build something, sometimes you need that right squiggly, weird-shaped Lego brick, and some people know it and can plug it in and just get the job done. But if you've never built evals of a certain type, you could end up spending, whatever, three extra months doing something that someone who has done it before could resolve quickly by saying: oh, we should just build evals this way, use an LLM as a judge, and go through that process to get it done much faster.

8:56 So one of the unfortunate things about AI is that it's not just one tool. When I'm coding, I use a whole bunch of different stuff, and I'm not a master of everything myself, but I've learned enough tools to assemble them quickly. And I think having that practice with different tools also helps with much faster decision-making. Oh, and one other thing: it also changes. For example, because LLMs have been getting longer and longer context windows, a lot of the best practices for RAG from a year and a half ago are much less relevant today. I remember Harrison was really early to a lot of these things, like the early LangChain RAG frameworks, recursive summarization, and all that. As LLM context windows got longer, now we just dump a lot more stuff into context. It's not that RAG has gone away, but the hyperparameter tuning has gotten way easier; there's a huge range of hyperparameters that work just fine. So as LLMs keep progressing, the instincts we built two years ago may or may not be relevant anymore today.
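A small sketch of that shift, under assumed numbers: with a long-context model, a first pass can often just concatenate everything into the prompt, falling back to top-k retrieval only when the corpus exceeds the budget. The character budget and the naive keyword scorer here are illustrative assumptions, not recommendations from the talk.

```python
def keyword_score(doc: str, query: str) -> int:
    """Naive relevance: how many query words appear in the doc."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def build_context(docs: list[str], query: str,
                  budget_chars: int = 400_000, top_k: int = 5) -> str:
    everything = "\n\n".join(docs)
    if len(everything) <= budget_chars:
        return everything  # long context: just dump it all in
    # corpus too big: fall back to classic top-k retrieval
    ranked = sorted(docs, key=lambda d: keyword_score(d, query), reverse=True)
    return "\n\n".join(ranked[:top_k])
```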
10:02 You mentioned a lot of things that I want to talk about. So, what are some of the Lego bricks that are maybe underrated right now that you would recommend, things people aren't talking about? Evals, for example: we had three people talk about evals, and I think that's top of people's minds. But what are some things that most people maybe haven't thought of, or haven't heard of yet, that you would recommend they look into?
10:22 Good question. I don't know... Maybe this: even though people talk about evals, for some reason people don't do them.

10:29 Why don't you think they do it?

10:33 I think it's because people often, and I saw a post about this on a blog about writing evals, think of writing evals as this huge thing that you have to do. I think of evals as something I'm going to throw together really quickly, in 20 minutes, and it's not that good, but it starts to complement my human eyeball evals. What often happens is I'll build a system, and there's one problem where I keep getting a regression. I thought I made it work, then it breaks. I thought I made it work, then it breaks. Well, darn it, this is getting annoying. Then I code up a very simple eval, maybe with five input examples and some very simple LLM-as-judge, to check for just this one regression: did this one thing break? I'm not swapping out human evals for automated evals; I'm still looking at the output myself. But when I change something, I run this eval to take that one burden off me, so I don't have to think about it. And then what happens, just like the way we write English, is that once you have some slightly helpful but clearly very broken, imperfect eval, you'll start to go: you know what, I can improve my eval to make it better, and improve it again to make it better. Just as, when we build a lot of applications, we build some very quick and dirty thing that doesn't work and incrementally make it better, for a lot of the evals I build, I build really awful evals that barely help. And then, when you look at what an eval does, you go: you know what, this eval is broken, I can fix it. And you incrementally make it better. So that's one thing.
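Here is a minimal sketch of that kind of 20-minute regression eval, assuming an OpenAI-style client. The run_agent function, the example inputs, and the specific behavior being judged are hypothetical stand-ins for your own system.

```python
from openai import OpenAI

client = OpenAI()

# A handful of inputs that have triggered the regression before.
EXAMPLES = [
    "Cancel my subscription but keep my account.",
    "I want to stop paying but not lose my data.",
    # ... a few more
]

def run_agent(user_input: str) -> str:
    raise NotImplementedError  # your system under test goes here

def judge(user_input: str, answer: str) -> bool:
    """Very simple LLM-as-judge for the one behavior that keeps breaking."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Does this answer cancel the subscription WITHOUT deleting "
                "the account? Reply YES or NO.\n\n"
                f"Input: {user_input}\nAnswer: {answer}"
            ),
        }],
    )
    return "YES" in resp.choices[0].message.content.upper()

if __name__ == "__main__":
    passed = sum(judge(x, run_agent(x)) for x in EXAMPLES)
    print(f"{passed}/{len(EXAMPLES)} passed")
```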
12:03 I'll also mention one thing that people have talked a lot about but that I think is still underrated: the voice stack. It's one of the things I'm actually very excited about. A lot of my friends are very excited about voice applications, and I see a bunch of large enterprises really excited about voice applications: very large enterprises, very large use cases. For some reason, while there are some developers in this community doing voice, the amount of developer attention on voice stack applications (there is some; it's not that people have ignored it) feels much smaller than the large-enterprise importance I see, as well as the applications coming down the pipe. And not all of this is the real-time voice API; it's not all speech-to-speech, native audio-in, audio-out models. I find those models very hard to control. But when we use more of an agentic voice stack workflow, which we find much more controllable, boy, AI Fund is working with a ton of teams on voice stack stuff, some of which hopefully will be announced in the near future. I'm seeing a lot of very exciting things.

13:08 And then, on other underrated things: one other one, which maybe is not underrated, but more businesses should do it. I think many of you have seen that developers who use AI assistance in their coding are so much faster than developers who don't. It's been interesting to see how many companies' CIOs and CTOs still have policies that don't let engineers use AI-assisted coding, maybe sometimes for good reasons, but I think we have to get past that, because frankly, my teams and I just hate to ever have to code again without AI assistance. Some businesses still need to get through that. And I think underrated is the idea that everyone should learn to code. One fun fact about AI Fund: everyone at AI Fund, including the person who runs our front desk as receptionist, and my CFO, and the general counsel, everyone at AI Fund actually knows how to code. It's not that I want them to be software engineers; they're not. But in their respective job functions, many of them, by learning a little bit about how to code, are better able to tell a computer what they want it to do. And so it's actually driving meaningful productivity improvements across all of these job functions that are not software engineering. So that's been exciting as well.
14:27 Talking about AI coding, what tools are you using for that personally?

14:34 So, we're working on some things that we've not yet announced.

14:39 Oh, exciting. Yeah.

14:42 So maybe... I do use Cursor, Windsurf, and some other things.

14:50 All right, we'll come back to that later.
14:53 Talking about voice: if people here want to get into voice, and they're familiar with building agents with LLMs, how similar is it? Are there a lot of ideas that are transferable, or what's new? What will they have to learn?
15:06 Yeah. So, it turns out there are a lot of applications where I think voice is important; it creates certain interactions that work much better. It turns out, from an application perspective, that an input text prompt is kind of intimidating. For a lot of applications, if we go to a user and say, tell me what you think, here's a text box, write a bunch of text for me, that's actually very intimidating for users. And one of the issues is that people can use backspace, so people are just slower to respond via text. Whereas with voice, time rolls forward: you just have to keep talking. You can change your mind; you can actually say, oh, I changed my mind, forget that earlier thing, and the models are actually pretty good at dealing with that. So I find there are a lot of applications where the user friction to getting someone to just use the thing is lower: we say, tell me what you think, and they respond in voice.

16:02 In terms of voice, the one biggest difference, in terms of engineering requirements, is latency. If someone says something, you really want to respond in, I don't know, sub one second; less than 500 milliseconds is great, but really, ideally, sub one second. And a lot of agentic workflows will run for many seconds. So when we worked with RealAvatar to build an avatar of me (this is on a web page; you can talk to an avatar of me if you want), our initial version had something like five to nine seconds of latency, and it was just a bad user experience: you say something, nine seconds of silence, then my avatar responds. So we wound up building things like what we call a pre-response. Just as, if you ask me a question, I might go, "Huh, that's interesting," or, "Let me think about that," we prompted an LLM to basically do that, to hide the latency. And it actually seems to work great. And there are all these other little tricks as well. It turns out that if you're building a voice customer-service chatbot, playing the background noise of a customer contact center instead of dead silence makes people much more accepting of the latency. So I find there are a lot of these things that are different from a pure text-based LLM. But in applications where a voice-based modality lets a user be comfortable and just start talking, I think it sometimes really reduces the friction to getting some information out of them in a way that feels safe. When we talk, we don't feel like we need to deliver perfection as much as when we write, so it's somehow easier for people to just start blurting out their ideas and change their minds and go back and forth, and that lets us get the information from them that we need to help the user move forward.

17:48 So, huh, that's interesting.
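Here's a minimal sketch of that pre-response trick using plain asyncio: kick off the slow agentic workflow, speak a short filler immediately, then deliver the real answer when it's ready. The speak and run_agent functions are hypothetical stand-ins for a TTS stream and a multi-second pipeline.

```python
import asyncio
import random

FILLERS = ["Hmm, that's interesting.", "Let me think about that."]

async def speak(text: str) -> None:
    print(f"[tts] {text}")  # a real system would stream audio here

async def run_agent(user_utterance: str) -> str:
    await asyncio.sleep(5)  # stand-in for a 5-9 second agentic workflow
    return f"Here's my considered answer to: {user_utterance!r}"

async def respond(user_utterance: str) -> None:
    # Start the slow workflow and the filler concurrently, so the user
    # hears something well under a second after they stop talking.
    answer_task = asyncio.create_task(run_agent(user_utterance))
    await speak(random.choice(FILLERS))
    await speak(await answer_task)

asyncio.run(respond("What should I build with agents?"))
```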
17:52 Yeah. One of the new things that's out there, and you mentioned it briefly, is MCP. How are you seeing it transform how people are building apps, what types of apps they're building, or what's generally happening in the ecosystem?
18:06 Yeah, I think it's really exciting. Just this morning we released a short course on MCP with Anthropic. I had actually seen a lot of stuff on the interweb about MCP that I thought was quite confusing, so when we got together with Anthropic, we said: let's create a really good short course on MCP that explains it clearly. I think MCP is fantastic. It filled a very clear market gap, and the fact that OpenAI adopted it also speaks to its importance, I think. I think the MCP standard will continue to evolve. Many of you know what MCP is: it makes it much easier for agents, primarily, but frankly other types of software too, to plug into different types of data. When I'm using LLMs myself, or when I'm building applications, frankly, a lot of us spend so much time on the plumbing. For those of you from large enterprises as well: the AI models, especially the reasoning models, are pretty darn intelligent, and they can do a lot of stuff when given the right context. So I find that I, and my team, spend a lot of time working on the plumbing, on the data integrations, to get the context into the LLM, to make it do something that often is pretty sensible once it has the right input context. MCP, I think, is a fantastic way to try to standardize the interface to a lot of tools, or API calls, as well as data sources.

19:33 It feels a little bit like the wild west, though. A lot of MCP servers you find on the internet do not work, and the authentication systems, even for the very large companies with MCP servers, are a little bit clunky; it's not clear whether the authentication token totally works or when it expires. There's a lot of that going on. I think the MCP protocol itself is also early. Right now, MCP gives a long, flat list of the resources available; eventually, I think we'll need some more hierarchical discovery. Imagine you want to build something... I don't even know if there will ever be an MCP interface to LangGraph, but LangGraph has so many API calls that you just can't have a long list of everything under the sun for an agent to sort out. So I think we'll need some sort of hierarchical discovery mechanism. MCP is a really fantastic first step, and I definitely encourage you to learn about it; it will probably make your life easier if you find good MCP server implementations to help with some of the data integrations. And I think this idea will be important: when you have N models, or N agents, and M data sources, it should not be N times M effort to do all the integrations; it should be N plus M. I think MCP is a fantastic first step toward that type of data integration. It will need to evolve, but it's a fantastic first step.
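For reference, here's a minimal sketch of what an MCP server can look like, assuming the official Python SDK's FastMCP helper (pip install mcp). The compliance-check tool and policy resource are hypothetical stand-ins, echoing the compliance example from earlier in the conversation.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("compliance-checker")

@mcp.tool()
def check_customer(name: str) -> str:
    """Check whether we are allowed to sell to this customer."""
    blocked = {"acme corp"}  # stand-in for a real compliance database
    return "blocked" if name.lower() in blocked else "ok"

@mcp.resource("policy://export-rules")
def export_rules() -> str:
    """Expose the export-compliance policy text as context."""
    return "Do not sell restricted items to entities on the blocked list."

if __name__ == "__main__":
    mcp.run()  # serves over stdio, so any MCP client can connect
```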
20:53 Another type of protocol that's seen less buzz than MCP is some of the agent-to-agent stuff. And I remember, when we were at a conference a year or so ago, I think you were talking about multi-agent systems, which this would kind of enable. So how do you see some of the multi-agent or agent-to-agent stuff evolving?
21:12 Yeah. So I think agentic AI is still so early that most of us, including me, struggle to even make our own code work. So making my agent work with someone else's agent feels like a two-miracle requirement. I see that when one team is building a multi-agent system, that often works, because we build a bunch of agents, they work with each other, we understand the protocols, and so on. But right now, at least at this moment in time, and maybe I'm off, I think we're a little bit early on the number of examples where one team's agent, or collection of agents, successfully engages a totally different team's agent or collection of agents. I'm sure we'll get there, but I'm not personally seeing real, huge success stories of that yet. I'm not sure if y'all are seeing them.

22:02 No, I agree. I think it's super early; if MCP is early, I think the agent-to-agent stuff is even earlier. Another thing that's top of people's minds right now is vibe coding and all of that. You touched on it a little earlier with how people are using these AI coding assistants, but how do you think about vibe coding? Is it a different skill than before? What purpose does it serve in the world?
22:28 Yeah. So, I think many of us now code while barely looking at the code, and I think that's a fantastic thing to be doing. I think it's unfortunate that it's called vibe coding, because the name is misleading a lot of people into thinking you just go with the vibes: accept this, reject that. Frankly, when I'm coding for a day with vibe coding, or whatever, with AI coding assistants, I'm exhausted by the end of the day; it's a deeply intellectual exercise. So I think the name is unfortunate, but the phenomenon is real, and it's been taking off, and that's great.

22:59 Over the last year, a few people have been advising others not to learn to code, on the basis that AI will automate coding. I think we'll look back on that as some of the worst career advice ever given, because over the last many decades, as coding became easier, more people started to code. It turns out, when we went from punch cards to keyboards and terminals, or, and I actually found some very old articles on this, when programming went from assembly language to, literally, COBOL, there were people arguing back then: yep, we have COBOL, it's so easy, we don't need programmers anymore. And obviously, when it became easier, more people learned to code. So with AI coding assistance, a lot more people should code. It turns out one of the most important skills of the future, for developers and non-developers alike, is the ability to tell a computer exactly what you want so that it will do it for you. And understanding at some level how a computer works, which all of you do, I know, lets you prompt or instruct a computer much more precisely, which is why I still advise everyone to learn one programming language; learn Python or something. And then, maybe some of you know this: I personally am a much stronger Python developer than, say, a JavaScript developer, but with AI-assisted coding, I now write a lot more JavaScript and TypeScript code than I ever used to. Even so, when debugging JavaScript code that something else wrote for me, that I didn't write with my own fingers, really understanding what the error cases are and what things mean has been really important for me to debug my JavaScript code.
24:48 So, if you don't like the name vibe coding, do you have a better name in mind?

24:51 Oh, that's a good question. I should think about that.

We'll get back to you on that. That's a good question. One of the things you announced recently is a new fund for AI Fund, so congrats on that.

Thank you.

25:03 For people in the audience who are maybe thinking of starting a startup, or looking into that, what advice would you have for them?
25:09 So, AI Fund is a venture studio: we build companies, and we exclusively invest in companies that we co-founded. Looking back on AI Fund's lessons learned, I would say the number one predictor of a startup's success is speed. I know we're in Silicon Valley, but I see a lot of people who have never yet seen the speed with which a skilled team can execute. If you've never seen it before, and I know many of you have, it's just so much faster than anything that slower businesses know how to do.

25:47 And I think the number two predictor, also very important, is technical knowledge. It turns out, if we look at the skills needed to build a startup, there are some things like how you market, how you sell, how you price; all of that is important, but that knowledge has been around, so it's a little bit more widespread. The knowledge that's really rare is how the technology actually works, because the technology has been evolving so quickly. I have deep respect for the go-to-market people: pricing is hard, marketing is hard, positioning is hard. But that knowledge is more diffused, and the rarest resource is someone who really understands how the technology works. So at AI Fund, we really like working with deeply technical people who have good instincts, who understand: do this, don't do that. That lets you go twice as fast. And then a lot of the business stuff, that knowledge is very important, but it's usually easier to figure out.

26:40 All right, that's great advice for starting something. We're going to wrap this up and go to a break now, but before we do, please join me in giving Andrew a big hand. Thank you.