The adoption of AI, particularly large language models (LLMs) and agents, represents a fundamental behavioral shift rather than a traditional technology upgrade. The primary obstacle to AI adoption is not the technology itself, but the human tendency to treat it as a familiar tool to replace existing processes, rather than a catalyst for reimagining workflows and ways of working.
My dad, who is 85 now, an old Irish guy. I gave him a new iPhone back in probably 2011 or something, right? He used to have an old flip phone. He's like, "You know what this thing has?" I'm like, "Yeah." He's like, "It has a flashlight. Did you know this thing has a flashlight?" And I'm like, "I did. I did know it had a fla... wait, what?" I told him it was replacing his flip phone. An iPhone doesn't replace your flip phone. An iPhone replaces how you bank and how you order cars and how you do your social network, everything. If you're on the fence about AI
adoption, you aren't alone. It's easy to hear all of the hype, but when it comes to implementation,
the reality starts to set in. So, let's get your systems up and running. Our guest today
is bringing people into the AI future, as well as shaping the minds that will redefine
business. Conor Grennan, welcome to AI in Action. Thanks for having me. Well, first, what is the Chief AI Architect at NYU? Yeah, that's a great question. It's a new role, because it's a very new field. The Chief AI Architect at NYU Stern is really responsible for elevating and upskilling the entire institution. In the institution we obviously have students, we have faculty, we have administration, and we really want everybody thinking about this in a different way. It's one thing if you just have a new technology or a new system, something pretty straightforward. You can just have IT handle that. This is very, very different. So the role I have there is really trying to figure out how to best upskill that entire institution. How did you get this job? Yeah, look, there are a lot of really
cool stories I could tell you about that, but let's just go with the truth for now. The truth is that I actually have no technical background, which is kind of unbelievable for this sort of role. But to bore you with a quick story: in really early 2023, ChatGPT having come out in very late 2022, I discovered this thing and went down a very deep rabbit hole of saying, "Hey, what is this thing?" and then realizing, wait, there's no bottom to this. And the really crazy thing I discovered as a non-technical person was asking what I needed to know to really make this sing, to make it operate well, to be an expert in it. So I started taking classes in machine learning, AI, and everything, and soon realized that I didn't actually need any of that to be really, really good at it. So I very quickly built out a framework, because my theory of work was that this was going to transform everything. This is very early 2023. I built out a framework to teach students and fellow administrators and faculty, and in doing that I realized, wait a minute, I actually don't know how this is going to apply to different industries, because I just don't know. It's such a new
technology. So I would go out to leadership teams around the city and say, hey, how are you using this? And the answer that kept coming back was, oh, we're not really using it. And I kept saying, you should check it out, it's kind of powerful. And they would say, well, can you teach us how to do it, and then we'll tell you what the use cases are for asset management or hedge funds or healthcare or whatever it would be. So I would do that. I was like, great. I would go out and teach these leadership teams frameworks, and after about an hour they would say, this is going to transform everything. And I would walk out thinking, Conor, you are the greatest teacher in history. And then I would go back maybe three or four weeks later and say, okay, what have you got? How are you using this? I really want to teach our MBA students. And two answers would come back. This won't surprise you, David, but it may surprise other people. The first answer was, oh, we haven't really gotten back to it. And I was like, what is there to get back to? It's incredibly easy to use. All you do is talk to it. So that was one answer. The other answer, from the minority of people, would be, okay, you're right, it's amazing. And I was like, amazing, what are the use cases you're using? And they would say, oh, the use cases you gave us. And I was like, but what the hell do I know about healthcare? I just Googled those use cases. What are you doing? And they're like, no, those are great. All that is to say that in the very earliest days, what I started to realize, and started to build out and research at NYU, was that this is different. I couldn't figure out why, but that led us down this incredible path where we started to realize, through a lot of research and just trying this with a lot of different organizations, that this was actually much more of a behavioral shift than something you had to learn. And that's how I got off and running on it. Yeah. And that translated into your
own consulting business. Yeah, so that's what I do a lot of now. I'm out working with a lot of large organizations, really across the board, doing a ton of training for everybody. We've worked with NASA and McKinsey and PwC, and we're about to work with a lot of companies that I can't name, in law firms and retail and healthcare and everything. The crazy thing is that this sort of system applies across every single industry, because it has nothing to do with the industry. It has everything to do with how we think and how we relate to this technology. So it's not the use case, it's just the technology. And really,
when it comes down to AI transformation, I don't want to put words in your mouth, but it's behavioral. The adoption of this technology is, a lot of the time, the obstacle to getting it up and running in industries. I think that's it, 100%. I think what we've found is that it's so radically different, and
I think people have a hard time getting their head around it. We were talking about this a little bit off the air, David, but one of the crazy things is that usually with technology there's a learning curve associated with it. If you're learning French, or Excel, or calculus, whatever it is, there's a moment where you have to go figure out Excel and all the things that come along with it, or whatever you have to learn in another language. And what we've been finding here is that this actually has nothing to do with a learning curve. It's like wanting to get into better shape: we generally know that we should probably eat less and exercise, right? In theory, I'm talking in theory here, there's nothing to learn. We already have the knowledge. The challenge is purely behavioral. Can we actually change our behavior? And the problem is that this looks like a normal technology, but a lot goes wrong with it, and not because of the technology. The technology, as you and I have talked about, is very, very powerful right now. You know that better than anybody. So what is the problem? The problem is the adoption. The problem is that our very limited human brain believes this is something we have to learn. So we're literally using the wrong part of our brain for this, instead of figuring out what will drive a behavioral shift. And that's our specialty, not just a TED talk saying, isn't that cool, it's a behavioral shift, but: how do we actually make that behavioral shift? I think the big problem is that people are treating it as just another technology to learn. And as you and I were saying, there's nothing really to learn. You just have to shift how you work. So, when a client reaches out to you,
the first challenge is explaining to them that this is a behavioral shift. 100%. And it's one of those things that is easy to say, but then how do you actually make that shift with clients? One of the first things clients say is, hey, can you show us how this will work in asset management or retail or whatever. And in fairness, I don't want to out myself here, but I always say yes, of course, we'll talk about that, and by the way, it's true, we will. But the reality is, and I think you know this especially in what you do, David, when I walk in I'm like, you're going to tell me how this operates in your business, because you have to be the industry experts. You understand your business. All we're doing is helping you shift. In other words, with a usual kind of technology you'd ask, how does this work in healthcare, or advanced sciences, or drug discovery? There usually is a way a technology works in those different fields, and it works differently in each. That's not the case with this. In fact, we go so far as to say we don't teach in use cases. There are a lot of things we don't do that are the typical ways of doing this, because we just find they don't work. So, you said before,
when we were talking off the air, that this isn't a digital transformation. What do you mean by that? Yeah, it's one of the first things I say to the organizations I go in and talk to, and the reason is that I think we assume it's just a typical digital transformation. Think about a digital transformation, and I don't know who does it better than you all, you've done a million of these, you're the OG of digital transformation, I don't know if you actually invented the term. In its very simplest elements, a digital transformation is taking an old technology and replacing it with a new technology. And if you think about how that's done, again in very simplistic terms, you usually give everybody the new technology, hey, here's our new CRM system or whatever, and you have to force people to use it. So you give everybody training on it, you have mentors who tell people how to do it, you give people use cases, you try to explain why it's going to be better, the vendor comes in, all that kind of stuff. And then, importantly, you have to burn down the old system, because everybody says, ah, the old CRM is so much better. No, man, you cannot do that. The point of a digital transformation is that you're switching out one thing for another. And what we found is that that's actually pretty easy on the brain, because the brain does templates really well. It does pattern prediction. So one of the things I always find interesting to show is a slide where ChatGPT reached a million users in a matter of days, and Netflix took this long, and Spotify took that long. The interesting thing about a slide like that is that with all those other things, your brain knows what each one is replacing, so you're just switching out Blockbuster for Netflix, or the old hotel system for Airbnb, whatever it is. You can take that all the way down to the hammer replacing the rock, and the rock replacing your hand. Essentially, that's what a digital transformation is: switching something out. Your brain has no problem with that. Okay, I used to do it this way, now I do it this way. But what if you came along and said, this is totally different, this changes the way you think about everything? In fact, I would argue: what does a large language model replace? When I ask that in big rooms, people often answer, as you can imagine, Google. Kind of, but not really. And the second answer is always: your brain. I'm like, no, what? No, man. It doesn't replace your brain. But you know what I mean, that's the instinct. So I think that's the difference between a digital transformation and
what we have here. What's interesting about this for me: we often work with customers who want to automate their workflows. They have these huge workflows and they say, oh my god, agentic AI, we could take this component, this component, this component, and replace them with agents, and the whole workflow would be exactly the same, except now agents are doing it. And this really hits on the point that you have to ask whether that workflow even makes sense in this new reality. We have a tool that can do much more, honestly kind of anything you want, and you can replace your entire workflow. So it wouldn't even be a transformation. Well, a transformation, I would say, is taking A and replacing it with B. This is a whole new way of working. My whole point about the Google search engine and everything else is that when you think you know what it's replacing, your brain locks that in, and you lock in the new way of doing things. It's like Henry Ford and the faster horse: well, hold on, what if there was a totally different way of doing things? That's what's so interesting to me, how your brain locks in. When we think about a digital transformation, and about what something replaces, the problem with giving people agents is that they're very powerful, but they also mimic what we do already. And I think what people are doing, and this is what you and I were talking about, I've heard you talk about this before and I've always loved how you put it, is that people are very limited in how they think, and they're just using that AI agent to replace some other workflow. So it's: great, now I can spend an hour less, rather than totally rethinking it. And the way I think
about it is to give you an analogy. If your brain doesn't know what a large language model replaces, it's going to come up with something pretty simplistic. So, one analogy: I gave my dad, who's 85 now, an old Irish guy, a new iPhone back in probably 2011 or something. He used to have an old flip phone, and I was like, "Dad, here's your new phone." He's like, "Oh, it's kind of hard to hold." I'm like, "Yeah, I know, but just try it." And he's like, "Okay." And I decided I was going to let him figure this thing out on his own. So about a week went by before he called, and I was like, "What do you think?" He's like, "Conor, you're totally right. This new iPhone is unbelievable." I'm like, "Right." He's like, "You know what this thing has?" I'm like, "Yeah." He's like, "It has a flashlight. Did you know this thing has a flashlight?" And I'm like, "I did. I did know it had a fla... wait, what?" So, point being, why did that happen? It's because I told him it was replacing his flip phone. An iPhone doesn't replace your flip phone. An iPhone replaces how you bank and how you order cars and how you do your social network, everything. So I think what's happening with large language models has nothing to do with the large language model and everything to do with our brain, which assumes it's replacing something. And I think we're seeing the same thing, and you know this better than anyone, probably, with agents. Don't you think people are just using them to replace something rather than innovating? A big part of this behavioral shift, I think, is to say: be creative. Look at this tool that you have.
Think about how you could reform everything you're doing with it. Mhm. That's exactly right, and we see that over and over again. One of the reasons we don't teach use cases is that people don't extrapolate from use cases. If you're good at using a large language model, and I'm guessing a lot of people listening to this show are, the problem isn't whether you know how to use it. You know how to use it. The problem is: can you get others to use it? That, to us, is the holy grail. So when we think about what slows AI adoption, and I would even say brings it to a halt, we see four things. And by the way, we used to do a lot of these ourselves. To run through them real fast: the first is the lighthouse case. Look what Walmart's doing, look what IKEA is doing. That's not actionable. Forget how much ROI they're making; it doesn't matter if you don't know how to do it yourself. The second is the tool rollout. We see this all the time: 10,000 Copilot licenses go onto computers around the organization, and it seems great to the people who already use Copilot. They're like, this has changed my life, now it's going to change everybody's life. It would be like putting a treadmill in every house in America and thinking you're going to cure heart disease. It doesn't work that way. It doesn't matter how many features your treadmill has. I do have a treadmill. It's downstairs, and I bought this thing imagining how I was going to look, and this summer my wife's going to be like, "Who's that guy?" And yet all the features do nothing unless I actually get on the treadmill. And the reason I get off that treadmill has nothing to do with whether I know how to run on a treadmill. A toddler could figure out a treadmill. It has to do with my limbic system prioritizing quick rewards and conserving energy. But that aside, rolling out tools, treadmills or large language models, doesn't work, because it's not something people will naturally just learn and then use to replace something. So that's the second thing. The third thing is the use case problem.
Again, people teach in use cases all the time. You and I see this all the time, and as we see all the time, people don't extrapolate. The reason, I believe, is that it's just too broad a general-purpose technology. The analogy I use a lot is electricity. Nobody wakes up in the morning and asks, what are some use cases for electricity today? How can I make my life more productive using electricity? No. The room is dark, so you use electricity; you have to open the garage door, whatever it is. But with a general-purpose technology that broad, everybody gets stuck on the light bulb. Yeah, the light bulb, and you can use the light bulb to do all the same stuff, but now it's better lit. That's the problem. And then the last thing, and this is what I was going for here, is the AI champion. By the way, we used to teach this AI champion model all the time: get the people who are really good at it, and they'll teach everybody else. It doesn't work. And this is the whole problem, because again, it doesn't matter if you know how to do it. Does your team know how to use it? And not even know how to use it, but are they using it? With the AI champion, the analogy I use is this: say you wanted to get your whole office healthy, you wanted everybody to do yoga every morning, and you got one of your team members to teach everybody yoga. That person could teach everybody the moves and everything else. At the end of the month, everybody would know all the moves and why it's healthy. But what they would not do is get up every morning and do it, because one is a behavioral shift and the other is a learning curve. So that's why
these things, I think, stall in real enterprises. So when you go to a client, are you looking for the AI champion, the person you think is going to be able to effect change in the organization and drive that behavioral shift? How do you find who that person is? I think you have to find that person in order to bring everybody else along, but it almost has to happen after the fact. For example, I'll go and do workshops with senior leadership teams. One thing that happens a lot is that the senior leadership team will say something like, hey, can you come in and give us specific demos, because I always do demos. Can you give us demos tailored to private equity or something like that? And I'll be like, sure. So I'll come in and do the demo. Now, here's where it backfires, and here's where the AI champion has to come in. It'll backfire with the skeptics in the room, because you always have skeptics in the room on a senior leadership team; you run into these folks all the time. The challenge is that those skeptics will look at what I've done and say, "But that's not a good answer." And now I'm stuck, because I don't know if it's a good answer or not; I'm not in private equity. So what I do instead, and this is how I really recommend people run workshops, one of my trade secrets, is to get somebody on board beforehand. Say, "Hey, I'm going to turn to you, John, and ask you to tell me how you are using this." That's the AI champion who has really absorbed it, and then we do it live together. But what they can't do is walk out of that room and just get everybody else to use it, because the problem isn't whether people have the tool or understand it or anything like
amazing. First of your trade secret is going to come with me back to my seriously because we we
work a lot like that. Exactly. Like we sit in a room with a lot of people and we say okay what
are your problems? we're the experts in whatever this technology is and we're we're the ones who
have to be kind of creative but we're you know we're coming in from IBM and work with a customer
internally how do you how do you foster that creativity within the organization I think that
you have to train a wildly different way I think that everything we've been doing and I believe
this very deeply I think has been wrong and I think it's proven to be wrong by the fact that
we're almost 3 years into this whole journey and still adoption as you've seen as I've seen is very
low right I mean like there's some organizations that are you know really killing it and doing some
great work but it tends to be sort of isolated at least how I've seen it right so given all that
like how do we make that transformation so what I do in organizations is first of all point out that
it is a behavioral transformation but if I just did that it would just be a TED talk right I mean
like that's not actually that interesting what's actually interesting is we have been using Google
for example right like for since the dawn of dawn of time it feels like right the dinosaurs were
using Google and how do we use Google right so we look at this thing it's a search engine we give
it a command we get response, we walk away. Now, the problem is when your brain actually sees this,
your brain actually looks at this. The way neural pathways work, your brain actually thinks it's
looking at a search engine in the same way that if you see like a baby, you wouldn't accidentally
talk to it, you know, like a college professor, right? Like your brain works on visual cues. It
just does things automatically. You can drive in the rain probably, right? And talk, have a
conversation because of muscle memory. So, that's what our brain is doing totally subconsciously.
Now, if it looked like C3PO or if it looked like a person or something like that, you would never
do that. You would never sort of say you know, to a person like, "Hey, give me the top 10 things to
do in Costa Rica." And the person would be like, "Sure, where do you want to go? I've been there."
No, no, I just just give me the top 10. You know what I mean? Like, you never do that. You'd have
a conversation. The problem is your brain doesn't see this thing as a conversational entity. It sees
it as a command response, walk away. And there's no way to fix that in your brain unless you start
to build out paradigms and frameworks. Dude, I had to do the same thing. Before LLMs, before generative AI, before all these powerful tools, any time I would deal with a web chat I would think, okay, what is the decision tree, what are the keywords I have to hit? I just need to get to this answer. So I would talk to it in single words, trying to get down the chatbot's decision tree to where I wanted to go. And now, when we're building this out, it's a behavioral thing: I can communicate really broadly, because you have this agentic routing. I'm obviously very verbose, so when we're building these applications I would test them with huge, verbose, meandering statements trying to get to the end result I want, and amazingly the LLMs and agents are able to get us there. Again, you would not anticipate that, because you're so used to NLP, so used to decision trees, so used to that stuff. And that's something you have to teach people.
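To make that contrast concrete, here is a minimal sketch, entirely illustrative and not from the episode, of the difference between the old keyword decision tree and LLM-based intent routing. The `fake_llm` function is a stand-in for a real model call; every name in it is hypothetical.

```python
# Old web-chat pattern: only exact keywords advance the decision tree.
def decision_tree_bot(message: str) -> str:
    tree = {
        "billing": "Billing topics: refunds, invoices.",
        "refunds": "Refunds take 5-7 business days.",
    }
    return tree.get(message.lower(), "Sorry, I didn't understand. Try 'billing'.")

def fake_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a single intent label."""
    text = prompt.lower()
    return "refund_status" if "refund" in text or "money back" in text else "other"

# New pattern: the model interprets free-form, verbose language into an intent,
# and the intent is routed to a handler.
def agentic_router(user_message: str) -> str:
    intent = fake_llm(f"Classify the user's intent:\n{user_message}")
    handlers = {"refund_status": lambda: "Refunds take 5-7 business days."}
    return handlers.get(intent, lambda: "Routing to a human agent.")()

# The same meandering request fails the tree but routes correctly via the LLM.
msg = "So I ordered a thing last week and honestly I just want my money back"
print(decision_tree_bot(msg))  # falls through: no keyword match
print(agentic_router(msg))     # routed to the refund handler
```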
Do you teach people about using these powerful tools without leaning on them as a crutch? Because as an engineer, as an architect of this kind of stuff, it feels like I use a tool that helps me write better code, and I start using it, and then I start using it a little bit more, and I have to scale back, because all of a sudden I'm relying on this tool almost totally to do a job that I am supposed to be doing. When you talk about behavioral shift, is there anything like that that comes up? There is. And I'm, by the way, dying to get your
answer on this too, because what I find is that if I'm speaking to a big room or a team, I almost inevitably get asked: well, isn't this cheating? And I'm like, it depends. If this were 8th grade and they were judging you on whether you could write it, then yes, it's cheating. But if your employer just wants a better output, I mean, you use Google and you use friends and you use colleagues. It's a resource, that's all. I work a lot with law firms, and what happens in law firms all the time is they'll say, "Well, it's a disaster, because the junior lawyers are giving us things that are inaccurate." I'm like, "Then fire that lawyer." I don't mean that literally, but you know what I mean: the point is that everybody's responsible for their output. Now, to your question, I'm struggling too, because I'm a writer by background, and I'm finding myself now instinctively checking my writing with one of these large language models. So what does that mean? What it means, very importantly, and this is the huge difference with students, why I don't think students should always be using AI even though I'm an advocate, is that students and young people still have to learn critical thinking skills. I already kind of know what good quality looks like. What is the foundation that companies need to have before they start scaling their agentic AI workflows inside their organization?
Yeah, it's a good question, because I'm guessing that with previous technologies, which you'd be more familiar with, this looked very different. You have your IT solutions, you have your human solutions, all these kinds of things. And I feel like this is much more of a human solution than an IT solution. But the number one thing, the foundation, is whether your senior leadership team is totally on board, and not just on board, but really understands how to use it. With my company, AI Mindset, we do enterprise partnerships. We used to just train teams: oh, can you come in and train our 200-person digital marketing team or something like that? And it was great, and it would work, because I think this is a great employee-engagement strategy. Nobody takes Excel home and makes their life better, necessarily, but what you do with this can help you in your future career, your kids' future career, help you become a better gardener and spouse and cook and everything else. So number one, this is for you; but number two, this should make your work easier and better, and it should bring you more joy at work. But we were finding something really interesting when we would train teams. We kind of just thought, oh, this will be great, this digital marketing team will take off. What would happen instead is they would essentially take their 8-hour day and make it into a 6-hour day, because they could. That was number one. And the other thing that would happen, and we saw this in some private banks and wealth management firms and some big places we worked with, was that shorter-tenured people started to outperform senior-tenured people, just because they were using this tool and the senior people weren't. So all of a sudden you had 23-year-olds who seemed to be outperforming the people with the actual knowledge inside your organization, which is a disaster, as you were saying, because you need that brain. You need to augment something. That person can say in the short term, hey, look how well I'm doing, and their product looks amazing, but if you start testing them on it, they don't actually know what they're talking about. In both those cases it wasn't systematically hitting revenue. So when we
think about the foundational thing, we now work first and exclusively with leadership teams if we're going to work with an organization. We have to, because the number one thing, and McKinsey data has verified this, though we saw it anecdotally for a long time, is: is your CEO using this? And I don't mean, do they know what it is. I'm talking about: when they spill coffee, are they taking a photo of the rug and asking, oh no, how do I get this out? That level of use. So they have to know it, but they also have to set new benchmarks for the organization. They have to know what a new 8-hour day looks like. They have to know what a new structure of an organization should be. And then, really to your point, one other layer is talent evaluation. They really need to know, because if you just see people spitting out AI stuff and it looks really good, that's disastrous. You need people who know what they're doing to be using AI. So what we work with organizations on is: start with your job descriptions, with who you're hiring, because that's the low-hanging fruit of restructuring, in a way. Don't bring anybody on board if the AI agents being produced here can do 60% of that work. I think that's really critical. But all of that, bundled together, is: the senior leadership team needs to understand deeply why and how this is going to change every single role in the organization. So making the leadership the AI advocates within your organization seems
like it would just supercharge moving forward with AI adoption. I don't know if that's counterintuitive or not, because a lot of times technology bubbles up from the dev level all the way up to management. I'm like, listen, my manager, you've got to try this tool out, we've got to talk about this, this is amazing, this is going to change everything. But my manager's okay with it; we're basically in an AI tech company. I wonder, for other organizations, is having that advocacy come from the top really where you think we should start? That's what I've seen, and I haven't seen anything else work. The reason is that, first of all, companies have to start moving from encouragement to expectation of use. Because think about it: if you had an accounting department and three of the people in it were like, "Yeah, I'm kind of done with Excel. I'm just going back to pen and paper," you'd be like, "No, what? No, man. You've got to use Excel. It's not optional." But the problem with this is that it's very hard to track whether people are using it or not. So all we've done so far is encouragement: come on, guys, this is going to be great for you, look at all these use cases, give us your use cases. This is why I don't do use cases: even when you go deep and teach everybody use cases, the problem is the human psyche. Here's how we see it. Most of America, maybe the globe, I don't know, is going from point A, starting their day, to point B, the end of the day. And a lot of the world just wants to get to point B. They want to go home at night. They don't need to learn a new technology. They don't care about technology. They don't like it. They've never liked it. Whatever. And you can say, yes, but look at all these things you can do throughout the day, like power-ups you can grab along the road. And what we've realized is that that doesn't drive adoption in an organization. It takes the people who are already excited about it and helps them. But that's not how you drive adoption. And
as you were talking about earlier, before we got on the air, this really differentiates an entire company when you do it. So the way we've been thinking about it of late, and this is what we do at AI Mindset, is: instead of thinking about this as encouraging people with use cases, what if you bake it into processes that cannot be avoided? If you imagine that road from point A to point B, imagine putting a roundabout in the middle that people have to drive through. Imagine that's a meeting or whatever, and imagine you then make every meeting, and we have ways of doing this that I think are really effective, stipulate that AI has to be used, and in a very specific way. I don't think you'll ever have a meeting of five people, and I'm sure you've discovered this as well, where somebody busts out a large language model and says, "Hey, here's some ideas," and anybody says, "Get that out of here." They say, "That's pretty good. What else could it do?" So you have to start moving it in. And I think that's the huge thing about leadership moving this needle, instead of it coming from the bottom. Yes, it comes from the bottom too; yes, there are a ton of organic ideas, all that kind of stuff. I think the way it comes from the bottom is when we start talking about the future of work. The history of work has always been defined by the creators of the technology. If you create a camera or a microphone, you say, okay, here are the people who build this and here's how you use it; those are the people who define it. But I don't know about you, that's not how I'm seeing it. I'm wondering how this works with agents specifically, because I think even if you go to the IBMs of the world, or the OpenAIs or the Amazons or the Googles or the Microsofts, they don't know how this technology should be used. Leadership doesn't know how it should be used. Instead, I think the only thing you can do, and this is what we do, is upskill an entire organization. We have a digital course and all that kind of stuff, so we do this at scale, tens of thousands of people, and you just let everybody go. Again, not teaching them how to do it, but changing behavior. Because then, from what we've seen anyway, leadership is just sort of watching, and over in the corner of building six there's a team of three doing the work of 14, and it's very Heart of Darkness: how is that person bringing back this many tusks, or whatever it is? What are they doing over there? And you go over, and they'll tell you what their process is, and then you try to imitate that process. But I'm wondering how that would work with agents as well. Do you know what I mean? That process of, will some people figure it out? How do you guys think about it?
guys think about it? So we internally we have something called the watsonX challenge. Oh cool,
right? We have like 25,000 teams of people. Oh my gosh. No, it's crazy. That's insane. and they
and like you're incentivized to come up with an idea, build out the agent, and then you're ranked
and then you go go through levels. And that was a way like to get my manager to be like, "Hey,
let's write an agent together or like to talk to somebody like who who's, you know, business
technology. They're not like technical, but they're like they start to think about how these
agents can affect the way they do their work." And all of a sudden they're like, "Wait, can we build
this out?" And we can, of course. And that's how you in our experience at least in IBM is like you
you just have people you get people to the the the three people who are coming back with the
most tusk from building six. That's how you find it in IBM because you have so many people working
with it and that and it's been huge. I mean agents changed everything for me. Agents are they're able
to call tools. They're able to do like things that an LLM wasn't able to like functional things. And
now you're using the LM think is as its strongest which is just a way to interpret what you're
saying and turn that into action. Then there's the idea of orchestrating many agents to do a task. Think about cooking: say you wanted the entire workflow to be agents. You have the shopping list; before you even get to the shopping list, you have to come up with the recipe, and then all this other stuff, and you'd have those as individual agents that are orchestrated. That, I think, is the future: simple agents that each do a particular task and are accessible to a supervisor that's able to understand what you're trying to do and route you to the things you want. I think that's really where we're going with agents, the orchestration of them, this massive number of very simple agents. That is going to be a really interesting way of communicating with this user interface: you have the anticipation that the thing you're interacting with has all of these competencies. It's able to do shopping really well, it's able to do recipe building, it's able to understand cooking, all these things. And I think you have to get people to understand that. You're not just talking to ChatGPT; you're talking to a million different sub-agents.
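As a rough illustration of that supervisor pattern, here is a hedged sketch in plain Python. Nothing here is IBM's or watsonx's actual code; the agents and the keyword-based router are stand-ins for what would be LLM-backed components in a real system.

```python
from typing import Callable, Dict

# Each "agent" does one simple task. In practice these would call tools or models.
def recipe_agent(request: str) -> str:
    return "Recipe: pasta with tomato sauce (4 servings)."

def shopping_agent(request: str) -> str:
    return "Shopping list: pasta, tomatoes, garlic, olive oil, basil."

def cooking_agent(request: str) -> str:
    return "Step 1: boil water. Step 2: simmer the sauce. Step 3: combine."

AGENTS: Dict[str, Callable[[str], str]] = {
    "recipe": recipe_agent,
    "shopping": shopping_agent,
    "cooking": cooking_agent,
}

def supervisor(request: str) -> str:
    """Interprets the user's goal and routes it to the right simple agent.
    A real supervisor would ask an LLM to pick the agent; this keyword
    check is a stand-in for that call."""
    text = request.lower()
    for name, agent in AGENTS.items():
        if name in text or (name == "shopping" and "buy" in text):
            return agent(request)
    return "No agent matched; falling back to a general model."

print(supervisor("What should I buy for dinner tonight?"))  # -> shopping agent
print(supervisor("Walk me through cooking it."))            # -> cooking agent
```

The user talks to one interface, but the competencies live in the sub-agents behind it, which is the anticipation described above.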
And the reason I love that too is that to really understand it, you have to understand what the brain feels like. Not our brain; I mean the brain of the large language model. When we talk about agents, I think there's still this assumption that they're just an automatic machine, like an automatic mail sorter or a calculator, which does the extremely predictable same thing every single time. And the beauty of agents is that, no, they can actually do more. They can process how you talk to them and everything like that. The exciting thing to me is watching the evolution of agents: as the underlying language models get smarter and smarter, they make fewer mistakes, but that's only part of it; they can also just intuit what you need better. Even if that starts with a very small thing, the orchestration of a lot of those things lets people start thinking not just in terms of replacing one task, but about how a task could be totally reimagined.
Are there any red flags that companies should look out for before they take the AI plunge? Yeah, I think expectation is the big red flag. As you guys know maybe better than anybody else, if your data isn't in good shape, then there's not a ton you can do there. Actually, that's not true, there's a ton you can do, but you're not going to be able to pull the right data at exactly the right time. And we see that a lot. I would even bring that down to the microscopic level. There's this whole term, context engineering, which I know you know well. It's less about, oh, what's the great prompt that you put in, and more about, where is this pulling the data from? I'm not saying go out and reinvent your whole data system, although if you have the resources, by all means, I think that's best practice, to be totally honest. It really is a phenomenal investment, even though, yes, it costs a lot, and you guys do this really well. But there has to be an expectation, which is: if your data is unlabeled and so on, not garbage data, but unlabeled, the problem is that large language models are now so good at pulling from unstructured data that it almost fools us into thinking, well, if it can do this, it can do that. Even when I'm demoing things on stage, various tools and such, I sometimes just can't get it to pull the right piece of information. And that is really, really hard. If you can't do that predictably, you're really hamstringing yourself. There are a lot of things people could do, but then they all have to be fantastic individually. You're going to have too many people in your company who need to do something automated; they're not going to be your big thinkers, and by the way, those people are probably not listening to this podcast, in fairness. But without that, you really have to have your data in phenomenal shape. The big red flag is thinking that if we just slap an awesome large language model on top of this, it's going to sort it all out, because it's gotten so good at that. But the mistakes, if you need to be precise, and a lot of organizations really need precision, are going to blow up in their face. This is a problem people are running into that they have to really, really consider. So the data problem, I think, is foundational for every enterprise before they move and start taking on this kind of work.
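One way to picture the context-engineering point above: if your data carries labels such as a date and an approval flag, you can control deterministically what reaches the model, whatever the prompt says. A minimal, hypothetical sketch, with all field names invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    text: str
    source: str
    as_of: date      # when the data was produced
    approved: bool   # marks "the better version sitting in your data"

CORPUS = [
    Doc("Q3 pricing sheet", "finance/pricing_v2.xlsx", date(2025, 7, 1), True),
    Doc("Old pricing draft", "finance/pricing_v1.xlsx", date(2022, 10, 5), False),
]

def build_context(newer_than: date) -> list[Doc]:
    """Pull only approved, sufficiently fresh documents into the prompt.
    Unlabeled data can't be filtered like this, which is the red flag."""
    return [d for d in CORPUS if d.approved and d.as_of >= newer_than]

docs = build_context(newer_than=date(2024, 1, 1))
prompt = "Answer from these sources:\n" + "\n".join(
    f"[{d.source}, as of {d.as_of}] {d.text}" for d in docs
)
print(prompt)  # the stale 2022 draft never reaches the model
```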
It's so right. And I actually love the way you framed that around the orchestration layer as well. So, the masterclass: I'm doing a new masterclass right now, we just finished filming it, which is awesome, and it's coming out now. One of the things I was doing in it was talking about trust and hallucinations. When we think about hallucinations, we think they're just, when did the Golden Gate Bridge move to Connecticut? Ah, 1952 or whatever. That's not even the biggest problem. The biggest problem, I think, is what you're articulating: you can usually spot a crazy hallucination. What's harder to spot is, first of all, the sycophancy problem, which is when you ask, is this a good idea, and it says, oh yeah, it's a great idea. That's the second part of trust. And the third part of trust is: what if the data isn't the right data? In other words, if you say, "Hey, I want to create a new HR tech company," whatever, it doesn't matter, "help me do some market research," these large language models will pull something that looks amazing until you ask, "When is that data from?" And it's like, "Well, this is from October 2022." You're like, "Okay, let me stop you right there." So the third part of this trust thing, the hallucination problem, is not just whether it's giving you the right data. Is it timely? Is it the right data? Is it the better version of that thing you created that's sitting in your own data? These things don't know, because they don't think that way. So I love it when you think about orchestration, context, and everything else. When we can crack that code, that's it. And I don't know how; that's your job, good luck with that. But it's how it knows what to pull and when to pull it.
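That "when is that data from?" check can itself be automated. Here is a small, hypothetical sketch of a freshness guardrail that inspects the timestamps of cited sources before an answer ships, so the question gets asked by the system rather than by a skeptical human. The structures and the one-year threshold are made up for illustration.

```python
from datetime import date

def check_freshness(citations: list[dict], max_age_days: int = 365) -> list[str]:
    """Return a warning for any cited source older than the allowed age."""
    today = date.today()
    warnings = []
    for c in citations:
        age = (today - c["as_of"]).days
        if age > max_age_days:
            warnings.append(
                f"{c['source']} is {age} days old (as of {c['as_of']}); "
                "verify before relying on it."
            )
    return warnings

# Example: citations attached to a model's market-research answer.
answer_citations = [
    {"source": "market_report.pdf", "as_of": date(2022, 10, 1)},
    {"source": "q2_update.pdf", "as_of": date(2025, 5, 15)},
]
for w in check_freshness(answer_citations):
    print("WARNING:", w)  # flags the stale 2022 report
```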
Conor, thank you so much for being on the podcast. It was really a tremendous pleasure. For me too. To all of you, thank you for watching. We have more AI in Action available on YouTube. Make sure to check out more of the best names in tech. We'll see you soon.