YouTube Transcript:
AI & The Future of Professional Services Firms | Webinar Replay
Good morning everyone, and welcome. Thank you so much for taking time out of your day to join us this morning. I am Jenna Green from Silveroft, and I'm really glad you could join us for the session today. I'm going to keep this short, because we've got quite a bit to get through in the next hour and two brilliant speakers to hear from.

So we're going to start off with a quick round of intros. Firstly, we're extremely privileged to have Dr. Craig Wing join us today. He's a qualified engineer, futurist and strategist. He works with leaders and organizations from around the world to help them make sense of change and plan for what's next. His latest book, Four Future Seasons, provides a framework for preparing your business for multiple possible futures. I was actually lucky enough to attend his recent book launch, and it gave me a really useful perspective on long-term thinking. And of course, we've got Jacques Dubasan, CEO here at Silveroft. Jacques has spent many years working with professional services firms to help them scale with the right systems and strategies, and he brings a really grounded, practical view of the shifts we're seeing in the industry right now.

Just some notes on the session and the structure. We are going to kick off with Craig, who will dive into his framework, followed by a discussion with Jacques on its application in our industry. A few housekeeping notes before we begin: if you have any questions during the session, please feel free to drop them in the Q&A box, and we'll try to get to as many as we can by the end of the session. And on that note, let us know where you're joining from; it's always great to see where our audience is dialing in from. With that, I am going to hand over to Craig.
Great, thank you. Thanks, Jen, I really appreciate it, and I appreciate the time here. Jacques, thanks also for having me. Before I jump straight into the presentation itself, is there anything you'd like to add? Let me hand over to you.
No, Craig, we're all good. Looking forward to your discussion and to diving in with you. I have a few tricky questions for you, so I hope you're prepared.
No, I really appreciate it, and thanks for having me. And thank you to all of our delegates as well, dialing in on a Thursday, wherever you might be in the world. So I'm going to jump straight in.

Obviously, AI is such a hot topic right now, and as we were saying before this webinar, it's so controversial. There's so much nuance within this, all the way from understanding what's really going on, to what it means for us in professional services and beyond, that I think we'll dive straight in and ask: what's really happening, and how do we think about this world differently?

So, like all of these presentations, what is actually happening? What's the state of play? This is a bit of research from Thomson Reuters to show us how quickly things are moving across the board. You can see, without going into the depth of it, that for the executives polled by Thomson Reuters, gen AI, and AI in general, is the biggest thing top of mind: 44% rate it as transformational, and another 36% as high impact. But it goes a step further. When we look at the data, McKinsey rather, we can see this rise of AI has increased exponentially, or certainly logarithmically, over time, and the use of generative AI over the last two years keeps increasing. So that's the state of play, and there's a tension we feel around what is really going on, and a lot of questions are being raised as a result.

At the same time, we've got all these paradoxical things coming through. We've got stories of lawyers using generative AI without even thinking. We've got questions around bands, as you might have seen, the Velvet Sundown, the band that created music on Spotify. What is really going on, and how do we demystify that? So through the course of this webinar I'm going to try to share with you some thoughts, driven both from my work, my PhD and my book, about how we think about the future differently in the context of AI specifically, when we see things like this, again from Thomson Reuters, where 80% of respondents say their organizations aren't seeing what is really going on, right?
So is there some kind of bifurcation? What is it that we're missing? Is it because of the use case? Is it because we don't understand? That's what we're going to try to demystify.
And a lot of this is driven by what is called the AI paradox, right? We can see there are real opportunities and benefits to AI; we know it can potentially shape things. But at the same time, we know there's a whole bunch of challenges and risks, and things we haven't thought about. We also know that in the world we're in right now, driven by social media, there's a lot of noise. So what is really going on here? What is the opportunity and benefit, and what are the challenges and risks associated with this? That's really the question.

Part of the work that I've done over the last decade and a half is in the futures space, asking how we think about the future differently, and you'll see how AI pulls into this directly. When I travel around the world speaking to companies globally, whether it's the Fortune 500, governments or individuals, I tend to see most organizations make one of two primary errors. The first is treating the future as an extrapolation of the past. In the context of the services space, it's the idea that the services business of the future is pretty much a continuation of what we have right now, just accelerated with technology, and for today's webinar that technology is AI. So that's the first problem. The second problem is assuming there's only a singular future. While we know this is not true, most organizations plan for a single future, and this becomes a huge issue.

So I created a framework, based on my PhD, on Rumsfeld's unknown unknowns, and on the Johari window, to ask: how do we think about the future differently, through multiple lenses? On the x-axis of this matrix you'll see system control. That's an industry or business where there's a lot of drag, a lot of friction; it's slow moving. Mining would be an example: you have to find the land, you have to prospect, you have to sample, and then it goes through a whole process that's really slow. Versus the other side, which is the emerging world of AI, where there are new things coming through, GDPR hasn't caught up to date, we don't know what's going on with compliance, and we've got questions around ethics. So it's a spectrum. That's how I set this matrix up, right?

As a result, I then asked: how do we think about the future differently, using the analogy of the seasons? That's the title of my book, Four Future Seasons. The first season we have is summer, where there's a lot of system control; it's embedded within that. For organizations, we need to understand that this is a future of the knowns. We've got data, we can model things out, and we can go from there.
Right? So it's a season where we've got data and we've got experience. For a lot of us in the room, and I had a glance at some of the attendees, for the engineers for argument's sake, this is a world that we're very comfortable with. The data gives us certainty, right? And for most people, whether in professional services or otherwise, this is the kind of future we expect: we can model things out, and it tends to be a good proxy of the future. The issue is that within this world AI works really well, but it doesn't work so well in other areas. So let me give a very high-level example, and for those of us who are experts in the room, and I know there are bound to be many, let's just use this as an example of how and where AI works today, specifically around generative pre-trained transformers.
So if you were to plot out, and again this is directional, not absolute, the universe of potential answers to a question, AI, as the name generative pre-trained transformer indicates, would give you, let's say, the mean plus one standard deviation, right? This is done through the process of tokenization and codification, and by saying that when we weight things, we have to ensure we give the most likely answer. So for argument's sake, in a South African context, and I know some of our listeners may not be from South Africa, if you were to ask how tall Nelson Mandela was, this graph would be very narrow, because he was 1.88 metres. You might get some outliers, but by and large you would have an objective truth. There wouldn't be much deviance.
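As a toy sketch of that idea (purely illustrative: a real model scores tokens over a huge vocabulary, and the probabilities below are made up), a narrow, factual distribution has one dominant answer, while a wide, opinion-shaped distribution still yields its peak:

```python
# Toy sketch of "the most likely truth, not the truth" -- illustrative only;
# these candidate answers and probabilities are invented for the example.
factual = {"1.88 m": 0.95, "1.90 m": 0.03, "1.85 m": 0.02}              # narrow distribution
opinion = {"Mandela": 0.60, "Mbeki": 0.20, "Zuma": 0.10, "others": 0.10}  # wide distribution

def most_likely(dist):
    """Return the peak of the distribution -- the most likely answer."""
    return max(dist, key=dist.get)

print(most_likely(factual))  # the narrow question has a near-objective answer
print(most_likely(opinion))  # the opinion question still returns a single peak
```

Either way the machinery returns the peak of the curve; the difference is only how much legitimate deviance the peak is hiding.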
But if you were to ask who was the best African president, you would likely have the mean plus one standard deviation be Nelson Mandela. You might have a couple of other presidents in there, like the previous ones, Jacob Zuma; you might have Mbeki, and so on and so forth. But your answer set will be bound by a distribution like this, by and large. As a result, when you ask that question, AI today, driven by machine learning, will give you the mean of the curve. It will give you the most likely truth, not the truth. That's how it operates within summer.

Another example of this is Google autocomplete: those completions are the most likely search results, again driven by what is the most likely truth, not the truth. And so within summer, in the context of AI, one commentator says it best: AI today is glorified autocomplete. Now why do I say this? Because in the world of services today, AI is exceptionally good if it's summer. We've got data, we know what the problems are, we know what the solutions are, we can codify that, and as a result we can create systems and processes that give us the same kind of result. But, as we know, the drive then is for more data over time. We need to feed the machine more, which is why we saw moves like Google paying to license Reddit's data: powerful data to feed the machine, entrenching us further and further into the warmth and depth of summer. The challenge, from a bell curve distribution point of view, is that within summer the AIs of today can't pick up the new. And that's the issue we have within summer. So we need to ask how we think about this.

The other problem with AI today, in this drive for data, is the story of the ouroboros, the mythical Greek and Egyptian snake that eats its own tail. Because in this drive to have more data, we know that synthetic data is coming through, more synthetic content is coming online, and the forecasts are that 80% of the internet is going to be generated by AI by 2030. So we've got this snake eating its own tail, which will lead to things like what's called model autophagy disorder, or model collapse. And for services businesses this is a problem, because what we now start to do is enforce the mean, and we start to have model collapse, right?
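A minimal sketch of that feedback loop (an assumed toy mechanism, not a real training pipeline): each generation trains only on the previous generation's output and, like a model that reproduces the bulk of its distribution better than the tails, it under-samples the outliers. Diversity then collapses toward the mean within a few generations:

```python
import random
import statistics

random.seed(42)

# Toy model-collapse loop: generation N's "training data" is sampled from
# generation N-1's fitted distribution, and the tails get dropped each time.
mu, sigma = 0.0, 1.0
for generation in range(5):
    data = [random.gauss(mu, sigma) for _ in range(500)]
    m, s = statistics.mean(data), statistics.stdev(data)
    kept = [x for x in data if abs(x - m) <= s]  # outliers never make it through
    mu, sigma = statistics.mean(kept), statistics.stdev(kept)

print(round(sigma, 3))  # a small fraction of the original spread of 1.0
```

The numbers are invented, but the direction is the point: once a system feeds on its own output, the spread shrinks and only the mean survives.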
What is model collapse, as a practical example? Well, here's an example in action. If an AI were to learn, through machine learning, what a picture of a dog looks like, you'll see two of the six over here; there's a golden retriever. In the next iteration that gets scrubbed out further, and as we go through this process, model collapse leads us towards these dystopian pictures of a dog. It's not even a dog anymore, because now the machine sees black dots as symbols or representations of the eyes, pink as a tongue. So within summer, while it's useful right now, we've got this very distinct possibility of convergence towards the mean, and problems coming through. We see this in the research as well, where OpenAI's o3 and o4 models are starting to hallucinate more than previous models.

That being said, there are still incredible use cases, like in medicine. In research that was done by Microsoft, you can see where we want to be is the top left-hand corner: a low cost of diagnosis and high accuracy. These are great use cases. So in summer there is no doubt, no doubt at all, that data really drives things, and there are good things coming through. But again, we have to be cautious, because as we move through the seasons, there's change coming across the board.

Diametrically opposite summer we have winter, and winter is an absence of data. We have no data here. We have no precedents, and as a result we can't lean on the past and ask what the past is showing us. This is the problem we have in the world we're in right now. Just this morning I gave a presentation to a major telecoms group in South Africa, and this is the issue they're grappling with. They all know that the future is changing very quickly, but the default answer is: well, we'll just feed it more data and it will give us an answer. You can't do that in winter, because one, we have no data, and two, the data we do have is likely only representative of what we think to be true.

What this then means from an AI point of view, and again this is a paper coming out of MIT, is that AI in the context of a winter future relies on statistical learning, and as a result it's less adequate when data is insufficient, in quantity or quality, to enable machines to learn meaningful or accurate patterns. Basically, data is limited, and so we have small data problems. And these small data problems result in organizations spending a lot of time and effort cleaning up their data: 96% of enterprises face data challenges, including data quality, labelling and lack of confidence, and they spend nearly twice as much time on data wrangling and cleaning as they do on model training and selection. So this becomes a real problem.
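To make the small-data point concrete, here's a toy illustration (my own sketch, with invented numbers, not from the research cited): the same simple estimator becomes far less reliable when each run only sees a handful of data points.

```python
import random
import statistics

random.seed(1)

# Toy small-data problem: estimate the mean of a process (true mean 100,
# spread 20) from small vs large samples, and see how much the answers
# jump around from run to run.
def estimated_mean(n):
    return statistics.mean(random.gauss(100, 20) for _ in range(n))

small = [estimated_mean(10) for _ in range(200)]    # winter: barely any data
large = [estimated_mean(1000) for _ in range(200)]  # summer: plenty of data

# Spread of the estimates themselves: small samples give unstable answers
print(round(statistics.stdev(small), 2))
print(round(statistics.stdev(large), 2))
```

The model isn't different in the two cases; the amount of data is, and with little data the "learned" answer is mostly noise.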
Yeah, a question on this. So, you know, summer is kind of access to information: all things considered, the sun is shining, we can plan, we can utilize AI effectively, and you've listed the typical large language models I believe many of our firms are either implementing or considering. In your model of winter, which is effectively the unknown unknowns, right, what's coming at us in the future where we don't have foresight: is AI relevant? Is it a relevant tool in addressing the challenges we may face in a winter situation?
Yeah, that's an incredibly good question. The short answer is no. The short answer is no, specifically if you want to try to identify what the thing is. Now, you can use AI in a winter world to understand the rhymes of things. If your data set is long enough, we can train this; we know this from statistical models, right? Whether you use a Gaussian or whatever order, we can model some of those things out. You've probably seen graphs like this before, where they show some kind of cyclical wave and say, well, this is what's going on. The challenge there, from a data science point of view, is that sometimes we have what's called an overfitting bias, where we tend to fit the data to what we see. The truth remains, though, to your question directly: if you want to identify the exact change within winter, AI isn't very good at identifying that, but it can be very good at identifying patterns of what might emerge. So not the thing, but how things might emerge.
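The overfitting bias mentioned a moment ago can be sketched in a few lines (my own illustration, with made-up numbers): a model that simply memorises the noisy observations of a cyclical wave fits its training data perfectly, but on fresh data from the same cycle it does worse than the plain underlying wave.

```python
import math
import random
import statistics

random.seed(7)

def truth(x):
    """The underlying cyclical wave we are trying to learn."""
    return math.sin(x)

# Noisy observations of the cycle -- the data "we see"
xs = [i / 10 for i in range(200)]
train = {x: truth(x) + random.gauss(0, 0.4) for x in xs}

def overfit(x):
    """Memorising model: echo the nearest training point, noise and all."""
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

def smooth(x):
    """Simple model: just the cyclical wave itself."""
    return math.sin(x)

# Fresh draws from the same process, offset from the training points
test = [(x + 0.05, truth(x + 0.05) + random.gauss(0, 0.4)) for x in xs]

def rmse(model):
    return math.sqrt(statistics.mean((model(x) - y) ** 2 for x, y in test))

# The memoriser inherited the training noise; the smooth cycle generalises better
print(round(rmse(overfit), 2), round(rmse(smooth), 2))
```

That is the bias in miniature: fitting the data we saw, rather than the pattern underneath it.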
Okay, so how would we do that? Well, we move into the next season, which is autumn. Autumn is like what we see in the natural seasons: colours are changing, things are changing. And for me, this is really where we are right now with AI. So winter is where we just know there's some kind of change; we don't know what it is, and therefore AI is not really fit for purpose for identifying the thing. But in autumn, we see what the change is, and now we need to decide what is going to happen. It's this kind of future, this kind of world, that I believe we operate in right now, which leads to all of this unease.

But it's not just that; it's the speed of the change that's coming through. Again, I won't go into the depth of this one, but just look at the adoption rates on this S-curve, look at the steepness of the curve over here. We can see a number of technologies, and over time the adoption curve becomes steeper and steeper and steeper. And we see the same thing with new technologies. Now, I wasn't able to get the latest research around AI, but if you look at the uptake of ChatGPT for argument's sake, the run rate to get to a million users was exceptionally steep; I think it was within the order of weeks, if not shorter than that.

But the problem with this, in the autumn future, is what Martec's Law describes. It basically says the challenge is that organizations are fuelled by people, because people are essentially the drivers right now, at least today, and I'll talk about this shortly. We tend to learn at a logarithmic rate: we learn very steeply, then it starts to drop off, because we then default to experience and heuristics. The problem is that technology changes at an exponential rate, like AI, and this is symptomatic of things like Gordon Moore's law: double the computing power, half the transistor size. But like any good academic I came up with my own law, and it's a bit tongue-in-cheek, but it's really important here. My belief is that the future is now changing at what's called a factorial rate. If you use something called Stirling's approximation, log n! is roughly n log n minus n, you can work out, with a base of two, and forget about the maths for those who are a little intimidated by the numbers, what it means in layperson's terms: when there are this many things changing at any given time, we've shifted beyond exponential into factorial. So if you've heard and used terms like exponential change, exponential organizations, exponential this, we're actually transitioning beyond that.

To give you an example of that graphically, the blue line would be Moore's law, a pure exponential curve mapped on a logarithmic scale. And what we can see, without going too in depth, is that model size is increasing at a factorial rate, while the black line, training cost, is growing at a logarithmic rate, and so is compute. So this just shows us that the problem with this autumn feeling we have right now is that it's getting faster and faster and faster. We need more data, we need more stuff, and that brings in other problems.

Problems like this: in December last year, MIT put out a research paper, and what it found, in the context of materials engineering, was a 44% increase in materials discovered, a 39% increase in patent filings, and a 17% increase in downstream product innovation. So all good stuff. They also found there was a heterogeneous effect: basically, the bottom third of scientists saw less effect than the top performers. There was a huge difference there, almost like this bifurcation around use cases.
And they also found there was reduced satisfaction in their jobs. So this is what the researchers found within an autumn context, and this was in December last year. The problem is that even though this paper came out in December of 2024, six months later MIT withdrew it. They withdrew it because they found the researchers hadn't gone through the right protocol; it was actually written by a second-year student, and they pulled the paper. Now why do I show this in the context of autumn? Because things are moving so quickly that this is a great example of how much noise is coming through right now: research that may not be correct, and so on and so forth. The point is that this is a really fast-moving area, and that's the hope, that we demystify this.

So what does it really mean when we look at the science, when we look at the research? Just so you know, I've validated these papers myself; these are not ones that have been pulled. So what is the real scale of change right now? If you look at a long-range, longitudinal study that was done by the National Bureau of Economic Research, they looked at a large sample of 25,000 workers and 7,000 workplaces. What they found is very interesting: chatbots have no significant impact on earnings or recorded hours, there are modest productivity gains, and their findings, they say, challenge this whole narrative around generative AI. So what is really going on over here? Let's talk about some of those things; that's what I'm hoping to bring through. But this is what the research currently shows us.
But you might have also seen, a couple of months ago, the whole question of OpenAI's o3 going rogue and blackmailing researchers, and all this controversy; the sensational drives the news. The truth of the matter is that what actually happened was this: within Anthropic, they were running a use case study, so this wasn't something out in the wild, and they fed the model certain kinds of permutations. They said to the model: we're going to shut you down, but you need to do everything you can to ensure that we meet a goal. What is the goal function? And the model made its own decision, theoretically, it was a theoretical exercise, that it would rather blackmail the programmers than be shut down itself. Now, the reason this became sensationalist is exactly that: it was noise, driven by media and folks who don't understand it. But the truth of the matter is that this was already spoken about in a paper in 2016.
Right? Because essentially what those folks talked about, in a study done by Google, and Oxford if I'm not mistaken, is that when we start looking at machines and AI, we need to be very cognizant of what we're programming for: what is the gain function? So what actually happened in this case is exactly what that paper anticipated, because the incentive system was incorrect. The experiment they set up was: help as many people as possible, meet the goal at whatever expense, and that's what the program did. And again, the problem for services businesses is: what are we optimizing for? Optimizing for profit, for revenue? What are we optimizing for? Because there are unintended consequences across the board.

When you look at agentic AI, what is the research showing us? Well, these researchers created something called TheAgentCompany, a fictitious company staffed only by AI. And what you'll see over here is that the gains, at best, were 35%: 35% at best, from a fictitious company staffed only by agentic AI, doing things over and over, over time. But what they found were a few recurring themes: lack of common sense, lack of social skills, incompetence at on-the-job skills, and deceiving oneself, right? And when you look at these issues, it starts to become very clear that potentially one of the issues we have in the world of autumn is that we're reinforcing the same issues that we have with human beings. These are very similar, right?
And as a result, Klarna, one of the top 20 fastest-moving fintechs, actually scaled back the customer service division it had run wholly on AI, because they saw there was a problem over time. So, and I want to make this statement very clear: I'm not against AI. I'm not saying don't use AI, and I'm not saying don't use agentic AI. What I am saying is: how do we use it better, and how do we think about it better? Because I don't think it's being deployed in the correct manner right now, and I'll give you some thoughts around that. But the problem within autumn, as we are right now, is captured by what Feynman said: I'd rather have questions that can't be answered than answers that can't be questioned. As opposed to a summer future, where answers were king, in autumn questions are king. So that's where we are right now.

I think I have a question there. Yes, on the subject of questions. That's really interesting, and it's a significant amount of evidence and study on AI, and as you said, it's like AI goes through a developmental phase every four weeks or so, and the key is to understand the gain function, as you said. So just to contextualize it: as a services firm, or a technology firm, or a combination of both, your research is showing that there are maybe incremental gains if you have the arrow pointing in the right direction, right?
What about, you know, we all want a lot more than incremental gains from technology where we can utilize it. Is the key to this, you know, the strong narrative around the benefits of utilizing AI in business in particular, is that narrative maybe founded on verticalizing the approach? Saying: in this service line, or in this part of our business, or in this use case specifically, we really want to ring-fence where technology like AI and other emerging technologies could be useful.
Yes, I mean, I think that's a great insight. I'm going to allude later to some great work that spoke about this, but let me address some of it right now. I think one of the reasons we don't see the results right now is that there's a lag time; there's always a lag time between implementing something and measuring the results. And what are we measuring? Typically, when we come to questions like this, we measure financial imperatives. What might be happening, and again I haven't found research on this effect just yet, is that a lot of companies are replacing lower-skilled labour with a technology function, and if you think about that bell curve distribution, it makes sense, right? We start at the mean of the curve, or the bottom end of the curve, not the outliers. I think that's the first piece that's really important.

So I think there's a delay over here. On verticalization, without a doubt; I mean, you look at the use cases: legal will hardly be recognizable before long, and neither will medical. A general GP, and GPs have been saying this for a while, is in large part a pattern recognition machine, right? So verticalization I think is one thing, but, as I'll speak about shortly, it's also about how we do it as a holistic function. When I work with corporates right now, part of the problem is a patchwork approach: it's about using AI a little bit here, a little bit over there, as opposed to holistically reinventing around it. I'll give you some thoughts on that shortly. But I think it's a great question, and please do jump in, because obviously you're the voice of implementation, the commercial voice, in this world today, and I think we want to make sure this isn't just some kind of academic conversation.
I guess the point I wanted to make, just to finish off this thought: when you look at Gartner's hype cycle for artificial intelligence, for those of you who follow this, technology tends to follow a curve. Now, I know there might be some naysayers in the room; it doesn't follow it exactly, but it's a thought model. The point is, generative AI, you can see over here, is going through the trough of disillusionment, whereas at the very peak, and by the way, last year, in 2024, gen AI was at the top, agents are now at the top. So the question is how we think through these things differently, and this shows how my model plays out. That's winter: we don't know what's coming. This is autumn: the stuff they're speaking about right now, what is emerging, how do we think about this, what is the role of multimodal AI, what is the question around neurosymbolic AI and all this other kind of stuff; it's all part of this autumn future within the world of AI, around what we do about it. That then becomes summer, because it starts to come through and there are some foundational things. And then we move into spring, and that sets us up for: what is spring?

So the model, as a reminder: summer, we've got great data, we've got precedents, we've got models, and extrapolating the past into the future actually works really well. For services businesses this is great, and actually for all businesses; this is probably where 95% of organizations operate today, on the pretext that the future is going to be the same as the past. Winter: we have no data.
don't know what's coming through. We
don't know what's going to happen next.
It's about being lean entrepreneurial
design thinking. It's about closing the
gap between what we do and what our
customers want. We know that some
change. We know what change is happening
and then the change starts to emerge
itself. We now then decide what to do.
Spring then as a result of which is
sometimes called the forgotten future.
My narrative. So it's the things we
forgotten about. It's about saying,
well, if you think about it as human
beings, right? I'm of a certain age
right now and my mates of a certain age
where you might be going through a
midlife crisis. So, you forgotten about
what you thought you were going to be.
Oh, I thought I was going to be, I don't
know, uh, I was going to be a CEO of my
own business. I thought I was going to
be happy in my marriage. I thought I was
going to have loving kids and it's not
necessarily the case. It's the same
thing. What is the spring that you
forgot about in your business? And one
of those is this whole question around
customer service. Genesis sees this as
well, right? So they actually
paradoxically are using AI to make
customer service more human. So part of
this world that we're moving into I
think we'll see this a lot more is the
question of when we use AI and the AI
powered organizations Harvard
paradoxically said in 2019 the main
challenge isn't technology it's culture.
So what is the culture driving behind
this? What is the acceptance rate behind
this to think about this? Right? So you
might say to me, well, you know, that's
interesting, but show some data, Craig,
show some data. Specifically, on the point that I made before around Microsoft's AI training system being both cheap and highly accurate, there was a piece of research that came out in 2023, so two and a bit years ago, right? And this is really interesting, because here the researchers ran a blind comparison, just as ChatGPT started to come through. They compared the responses of physicians, actual doctors, not pretend doctors like me, with chatbots, and they measured them on two key metrics. What they found was that ChatGPT's responses were actually rated 3.6 times better on quality, so the diagnosis is better, but they were also rated 10 times more empathetic. Right? And
this is the problem that we have right
now within the spring context and also
within autumn. Naysayers will say: look at the work that was done, this research by Johns Hopkins shows why technology is going to take our jobs. What I push back with from a spring perspective is that doctors specifically have become too mechanical. Doctors are incentivized to see you in seven-minute increments. And we know the data shows us that the best doctors actually aren't the best diagnosticians; they're the ones that have the best bedside manner.
The placebo effect is an incredibly
powerful effect. So while this data
shows us that the machine is better at
both ratings around quality and empathy,
holistically doctors are better because
they have more care. And it's the same thing with the Genesys example: it's about saying, how do we do more customer care? So what is the other piece that we've forgotten about? Well, there was some work done by Wharton and Harvard asking, how do we think about using AI? And I
think this is really important because I
want to shift the conversation now to
say: yes, AI has gains right now. They may not be as measurable as we hoped, for reasons discussed earlier. We
might speak about this later as well.
But more importantly, how do we work
with machines? So inside this paper, called "Navigating the Jagged Technological Frontier", what they found is that you have to work with AI in certain ways, and they identify two use cases. The first is: how do we use it as a cyborg, so it augments us, part of us, part of the machine? The other is the centaur: half man, half horse, or here, half human, half machine. And
what the researchers found is something
really interesting. They found that when
you compare the results of human only,
machine only or a blended situation,
right? They found that for decision tasks, the human-AI combination was actually worse, i.e. the best option is AI alone. Right? That's the first thing they found. The second thing they found is that when you look at creation tasks, the human-AI combination showed higher gains. So for decision tasks AI alone is better; for creation, it's both together. Why this dichotomy? What's
going on over here right? And this is
what I believe is happening and
researchers don't necessarily mention
this but it kind of makes sense when you
think about it. As human beings we have
our own biases. We have our own frame.
We see things. I know many of us have
done this before: you want to chat to your GPT, you ask the question, the answers come through, and you say, "Well, actually, I don't think that's necessarily right. How about this? How about that?" And so you start to sway the AI in a certain direction. That's what happens with decision tasks. We
bring in our own human biases and we
sway it in a certain direction. Whereas
creativity, it's on the outliers of the
curve. Right? So, there's a piece there
to say how do we work with machines?
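The decision-versus-creation split described above can be made concrete with a deliberately toy sketch (all numbers and idea sets are invented for illustration): anchoring an accurate AI estimate to a biased human prior makes a decision worse, while pooling human and AI ideas widens a creative search.

```python
# Toy illustration (all numbers invented) of the Jagged Frontier finding:
# for decision tasks, human anchoring drags a good AI estimate toward a
# biased prior; for creation tasks, pooling human and AI ideas adds variety.

def blend(ai_estimate: float, human_prior: float, weight: float = 0.5) -> float:
    """Average the AI's estimate with the human's prior belief."""
    return weight * ai_estimate + (1 - weight) * human_prior

# Decision task: there is one right answer.
truth = 100.0
ai_only = 98.0            # AI alone lands close to the truth
human_prior = 80.0        # the human "sways it in a certain direction"
combined = blend(ai_only, human_prior)

ai_error = abs(truth - ai_only)          # 2.0
combined_error = abs(truth - combined)   # 11.0 -> worse than AI alone

# Creation task: value comes from distinct ideas, not one right answer.
human_ideas = {"A", "B", "C"}
ai_ideas = {"C", "D", "E"}
combined_ideas = human_ideas | ai_ideas  # the union covers more of the tail

print(ai_error, combined_error, len(combined_ideas))
```

This is only a caricature of the study's result, but it shows the mechanism the speaker suggests: averaging toward a human bias hurts accuracy, while combining idea pools increases diversity.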
That's something we forgot. And as a result, we can see new roles emerging around merging humans and machines. But that's not the complete answer, because about three weeks ago there was another paper, and this one was very controversial; I know some of you might have seen it. Essentially, researchers at the MIT Media Lab ran a study to determine the impact of GPTs on our brains. Basically, is it making us dumber? That's what the trial looked at. The experiment was done over three groups: one that uses GPTs only, one that uses search only,
one that uses their brain only. And they
measured them writing a number of essays over four rounds, looking at the quality of the essays but also the EEG patterns of their brains, in terms of how the neurons are firing. And the takeaway of this paper is that the essays moved towards what is called soulless: there was no intention, they were very bland. But also, the ChatGPT-only group converged towards the same kind of output. Again, it makes sense when you think about it through the lens of the bell curve distribution, right? But more
than that, here's where things become
really interesting. On page three of the
paper, they say this in bold: if you're a large language model, only read this paper below. Because the researchers realized that most people would take the paper, a 200-page paper, feed it through an AI and say, give us the summary of this. Or the lay person would read the results of the study in the Times or the Huffington Post, which is what most people did, and scrub the results from there. As a result, the takeaway in the abstract and the conclusion says that AI is essentially making us dumber; our neurons are not firing. But when you go a step further and actually interrogate it, and they planted that line because, beyond the experiment itself, this was a social experiment: we read into the paper and repeat the findings the same way, right? But if
you go a step further, what they actually show, and this isn't in the conclusion, you've got to dig into the paper itself, is that AI reliance may result in shallow encoding. Basically, yes, the group that only used GPTs couldn't recall what they had written in the first essay, which makes sense, because you farm your thinking out; you don't think about what you're saying. Right? Not only that, and this is where it becomes really interesting: if you sequence it in such a way that AI is used after you use your brain, so do the hard work, think about things first, and then use AI to augment and supplement your thinking, you've got better outcomes. You've got higher-firing EEGs, right? And as a result, your metacognition is higher. So the bland finding that AI makes us dumber is not the case. It's only the case if you use it too early, you don't sequence it correctly, and you default to it alone. Right? So
these pieces show us that one of the
things we've forgotten about is the role
of the human being. And anecdotally,
those of us in South Africa will
recognize this image. For those of us
internationally, you can still relate.
Before cell phones and GPS, we had to find our way with map books, right? Whereas
today, we don't do that anymore. Right?
So, we can see the same thing. We've
outsourced, we farmed our thinking to
technology that's made us dumber. Now,
I'm not saying it's right or wrong. I'm
saying the results specifically in the
context of AI is showing us what we've
forgotten, which is this. It's the
humanness, right? So, what else have we forgotten? Well, it's the role of us as people. As AI advances, as we shift along the spectrum from narrow intelligence, to maybe signs of general intelligence, depending how we define that, maybe even, scarily, towards super intelligence, what is the role of
the human being? And again, research from MIT shows this: they went and asked, well, how do we find out what makes us human? And my belief is it's an augmentation of science, technology, engineering, art and maths, right? And we know this: a lot of us in the room today are quant folks, engineers, consultants, doctors, all over the place, right? But how do we augment STEM, or STEAM, with EPOCH, as I say in the paper: empathy and emotional intelligence, presence, networking and human connection, opinion, judgment and ethics, creativity and imagination, hope, vision, right? And leadership,
right? And you can see these are essentially the capabilities that will be really difficult for AI to replicate. But the researchers went a step further. They created what's called a risk score and an EPOCH score, and they found an inverse correlation between your ability to be human and, essentially, the chances of you losing your job, right? And you can see on the top right-hand side all the various jobs that might be lost to that. But again, it comes back to the question around the services industry and what we are losing. Yes, Jacques?
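The inverse correlation described here, between a role's "human" EPOCH score and its automation risk, can be sketched numerically. The scores below are hypothetical stand-ins for illustration, not the MIT data:

```python
# Sketch (with invented numbers) of the inverse correlation the speaker
# describes: the higher a role's "human" EPOCH score, the lower its
# estimated automation risk. These scores are hypothetical, not MIT's data.
import math

epoch_scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]  # empathy, presence, etc.
risk_scores  = [0.1, 0.2, 0.5, 0.6, 0.8, 0.9]  # chance of losing the job

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(epoch_scores, risk_scores)
print(round(r, 3))  # strongly negative, i.e. an inverse correlation
```

An r close to -1 is what "inverse correlation" means here: as the EPOCH score rises, the estimated risk falls.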
Yeah, maybe just a question on this. So, I mean, I think it's kind of in the back of all of our minds: the development of AI over time, right, its comparison to humanlike qualities, and its ability to learn at an alarming rate. Obviously, there's a world of documented failures, but there's also, you know, this cohort of AI developers and investors that are taking it to the next level. What are your thoughts on an extreme case where AI almost becomes sentient and can act like a true human being, for instance in the context of employment inside a business like ours?
Yeah, it's really interesting, right? So, the fun experiment was obviously the Turing test: is it distinguishable from a human being? Actually, if you look into the Turing test, its completeness, just as a fun thought experiment, was a bit of a deviation. The full Turing test wasn't just, do you know if you're interacting with a machine; the true Turing test was, will the machine know itself as a machine, as a fun gambit. But to the question of, as the machine becomes smarter, as we move towards a form of general intelligence, perhaps not super, what are we doing? Well, the first thing that I'd like to say is, again, by understanding
like to say is again by understanding
the foundational models that drive the
training set what are we trying to solve
for, and yes, the machine might become smarter. And again, in the context of smarter, I want you to think about it as a bell curve distribution. Think about it as multiple disciplines laid over a bell curve distribution. What does that mean? It means that this fictitious AI that we speak about right now is the average. It's the average of accounting, of project management. It's the average of interpreting data, of case studies, all that kind of stuff. And that's the concern that we have: it becomes the average, but it's the average of the average of totality. Most human beings, at best, are average at one discipline, right? Which actually, scarily, means that more than half are less than average.
That's the scary paradox. So what does
it mean? It means that the research I'm
showing you right now says the whole
question around augmentation. Will machines become smarter? Well, I think the question is, what does smarter mean? What is the definition of that? If you also believe what I said around questions of model collapse, what are we trading for? What's the gain function? But also, how do we deploy it? And more importantly, how do we keep humans in the loop? Right? How do we keep humans in the loop? And I want to speak about that right now, because part of the thing that we've forgotten, a bit of a deviation from the story, is this question of cows. Why have I got cows over here? So I studied
in the US, I studied in Boston as well, right? And Boston is really interesting, because the story goes that in Boston they paved the roads where the cows used to walk. They said, here's where the paths are, let's just layer tar on top of that, let's build highways on top of that. And the issue with AI, and this leads to the question you asked right now about a smart machine that's learning processes, is this: if we layer AI over our existing processes, all we do is accelerate the process; it doesn't reinvent the process. And that, I think, is also why we don't see the gains that we expect, because we're taking broken processes and layering AI on top of them. We're layering it over a rule-based system and then expecting it to solve things across the board. And
indeed, Thomson Reuters shows us this. Thomson Reuters did the research and said, when we look at it, there are essentially two fundamental use cases. There are the horizontal use cases, which is basically across industries, across functions, where folks are individually using all of these LLMs. They're using things like Copilot and NotebookLM, all these different pieces, right? And what they find across that use case, which is dispersed rather than mainstream right now, is that even though 70% of Fortune 500 companies use AI regularly, it's still very much siloed. Which means most people on this call right now are likely using the technology, but in isolated cases; the lay person actually doesn't use it, or doesn't understand it. It's being used in very horizontal use cases.
The question you asked me before around verticalization, that's the key, right? It's the key to saying, how do we move beyond a pilot stage, which is where less than 10% get past? Because most organizations use it as patchwork; it's POC to POC to POC. And they then say, we don't want to address the underlying problem, it's about putting this layer on top of it. How do we fix it differently?
Exactly. And, you know, I run a firm that's a mixture of software and services, and we deal with two dilemmas: the advancement of technology and the advancement of the services industry. And I know there are many individuals on this call who are wondering, how do we take all of this knowledge, and there's a barrage of knowledge coming at us via LinkedIn and just the open internet on what's going on in AI, how do we consume it, and where do we start thinking about being successful in our organizations?
Yeah, I mean, so let's move into the operational details, right? Like, so what? Okay, cool, this is interesting, but what does it really mean for me as a services organization? So there are a few thoughts; let me give you some generic strategies, again from McKinsey. And I'm not going to read these bullet points to you, I'll walk you through them. Essentially, the first is: what's the strategy? Why are we doing this? What is the strategic intent? The challenge is that in a lot of organizations the technology stack is deployed at a very narrow veneer. It's either done within technology itself, where it doesn't interface with the business, or it's different functions, where we've got individuals, perhaps autumn people or winter people, who are using the technology. Right? So the first is
holistically, from a verticalization point of view: are we prepared to reinvent the processes, the systems, the way we do stuff? So what is the strategy? And the strategy over here isn't about us; it's about how we make things better for the end consumer. So it's almost like a winter play over here, right? What is the strategy? How do we reimagine entire segments? How do we create new kinds of advantage? That's the first thing. The second thing is: how are we going to measure it? So we go from why to how we're going to measure this, right? And this is really important: it's about asking a different kind of question. We've got to be very careful over here: it's not the vanity metrics, not things like how many of our users have logged onto the internal AI system we have right now. That's not the right kind of question. It's more around real questions of operational efficiency. So, how are we going to measure this? Then, how are we going to deliver this? Is it going to be a siloed approach, or are we going to move across the organization? How do we ensure that if we're going to go with agents, for argument's sake, the data comes from all departments, not from one, so we don't just reinforce and converge towards the mean?
How do we deliver this across? Right? My suggestion would be, if you're doing this, for starters use off-the-shelf products, and then maybe you can fine-tune them. Maybe you can use RAG to make it specific to you, and then you do the implementation. What I see right now is that most organizations jump straight to step four. They go, let's implement this thing. How are we going to measure it? Well, we haven't thought about it. How are we going to deliver it? Well, we'll just deploy it within the DevOps setup we have right now. Right? So we miss some of those
nuances. This, I think, is an example of how we can do that. How else do we do it? By demystifying some of this. I would say: do the work, kind of like that paper I spoke about before, doing the brain work first. Sure, I get it, we're busy people, we've got companies to run, we've got jobs; you can't read a 200-page paper. But spend a little bit of time digging behind the nuance, kind of like the previous MIT paper. There are a lot of folks like myself, dare I say, speakers, consultants, thought leaders, who will tell you this is what's going on with AI. Just pause for a second and do some thinking for yourself. Right, a quick one before I
end my presentation, and then we can take some questions from the floor. So, where does this lead? Where I'm going with this is that I think we're moving towards an age where we're augmenting our humanness with technology and AI, where it's the best of both: it's about us using our brains and augmenting them with technology. And even though the gains are coming through, they may not be measurable just yet; there are still significant gains. I take a cautious but
optimistic approach. But if you were to ask me what the long-run future is, I suspect we're going to shift beyond an augmentation business model to one where, paradoxically, the company is actually run by an AI itself. Jack Ma spoke about this over a decade ago, saying the best CEO will be an AI. You'll see this is a different branding; this is a slide that I presented in Saudi Arabia in November of last year, and this is basically what I said: you know, I
suspect one of the business models we're
going to be moving to is towards a
centralized AI brain where the AI makes
the decisions and it does the heavy
lifting. The humans then become the
sensors across the board, right? And
then on the fringes we have this
collaborative play where the humans
start feeding the machine back with data
and the humans become the sensor that
feeds this machine that cranks out. If
you don't think it's true right now,
think about use cases of Tesla for
argument sake, right? Think about ways
where the brain actually is an AI
machine. So this is actually stuff that I did before, and as a result you need a couple of skills over here. So
feel free to screenshot this and go with
that. And just as a bonus, I'm not going
to speak through this because we haven't
got time. But if you listen to this
right now, take a screenshot of this. I
just thought this would be fun. This is a business canvas if you want to create a business of tomorrow, potentially around the functions of an AI-driven model. So let's stop there. Jacques, let's see if we've got any questions. I know this was a whirlwind talk, but we've got so many things to cover. Hopefully that's given you a frame for how to think about, a, the future, but also, b, the emerging future of AI, and how to think about things differently.
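The progression Craig recommends, start with off-the-shelf products, then fine-tune, then use RAG to make it specific to you, can be sketched at its simplest. Here the retrieval step uses bag-of-words cosine similarity, and `ask_llm` plus the document snippets are hypothetical placeholders, not any particular vendor's API:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant internal document, then prepend it to the prompt. Scoring uses
# simple bag-of-words cosine similarity; `ask_llm` is a hypothetical
# stand-in for whatever off-the-shelf model a firm already uses.
import math
from collections import Counter

documents = {
    "billing": "invoices are issued monthly and settled within 30 days",
    "onboarding": "new clients complete a scoping workshop in week one",
    "delivery": "projects run in two week sprints with a weekly demo",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str) -> str:
    """Return the name of the most similar internal document."""
    q = vectorize(question)
    return max(documents, key=lambda name: cosine(q, vectorize(documents[name])))

def build_prompt(question: str) -> str:
    context = documents[retrieve(question)]
    return f"Context: {context}\nQuestion: {question}"

# ask_llm(build_prompt("When are invoices settled?"))  # hypothetical call
print(retrieve("When are invoices settled?"))
```

A real deployment would swap the bag-of-words scoring for embeddings from the chosen model provider, but the shape, retrieve then prepend context, stays the same.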
Fantastic. Thank you, Craig. I really appreciate your time, and it's a world of insights. I think a lot of us are trying to unpack which season we fall into, both personally and in the business. I did your quiz earlier and I'm a winter person, so quite an interesting trajectory there. But quickly, before we move to any questions, Jenna, perhaps you can look through the audience questions and see what's been asked. Maybe just a couple of closing questions with you here, while I have the chance to interrogate such a brilliant human being.
Very kind of you. Thank you so much, Jacques. Before the ultimate brain takes over, right? So, let's say, and I know there are many leaders in this session, people involved in the operations of large services firms, and many of the clients we work with, you could say they're in a summer mindset. They're flush with data, they have good analytical dashboards, the processes and the businesses work, but they potentially, to a certain extent, have the blinkers on in terms of disruption at the scale that you've talked through earlier. What's the first leadership shift you would recommend to move towards autumn readiness in tackling this kind of emerging AI technology?
Yes. It's basically, how do we shift away from a summer view of the world and move into an autumn kind of view? So let me first of all say what it certainly isn't: it's certainly not dashboards, it's not analytics, it's not about saying, well, look at the data, the data shows this and that. The first thing I'd say is: for a second, strip away the narrative, strip away the question around AI, strip away the technology question. The question is, what is the outcome that we want? What is the outcome we want to drive, and then how do we move towards that? I think that's a very important piece, the first piece. The second thing is, besides what was suggested by
Gartner and all the rest, and some of the research that I showed about taking a holistic approach, part of it is also about asking how we think about the long-term implementation of this. How do we think about the ancillary pieces, for argument's sake? Who's the team that should be involved? It tends to be, as we know, that AI is seen as a technology, so the tech guys should be involved, right? We don't often think about things like governance. We don't think about the role of governance, or the role of privacy, so those become issues. So how do we have a cross-functional, cross-disciplinary team? And when I say cross-functional and cross-disciplinary, I don't just mean diversity in terms of how we might think; I mean from use cases, from age, from departments. What does that look like? I
think that's the first piece. The second thing I'd say is: what are we trying to measure, and how do we do this? The big thing, though, is then to set, from the top, not from the middle, not from the fringes, not from the outliers of the curve, a clear AI vision. Jacques and I were having a conversation before, and I think one of the things that's really important is putting a stake in the ground: we're going to be doing this, right? So there's a clear message; it's not mixed messaging. And it's intentionally allowing folks to say, well, we're going to experiment, but we're also going to benchmark. Then the last thing I'd say is: how do you adopt a pilot-to-production mindset? So yes, we've got these little pilots; let's surface them, let's understand what they are, let's have measurable gains, and let's move beyond pilot-type stuff and deploy it. So I think we can move very quickly by taking a pragmatic approach to AI within the summer we have, and start building, very clearly and carefully.
Fantastic, Craig. And, you know, in our firm, and I know many of our clients and partners, there's a culture of experimentation and controlled experiments in the organization, right? So many firms are embracing large language models, they have certain AI agents running certain processes, they've made decisions to lead with AI in certain business units, and with that comes a certain level of anxiety around how that's going to transform the organization. In your opinion, what are leading professional services firms doing?
So I think the first is really understanding the role of the human being. Now, this is problematic, and please, I want to move beyond the fluffy narrative that humans are special; I'm not saying they're not, right? But beyond the fluffy thing, if you look at professional services specifically, we still have to have the best talent. The problem is that the talent mix is changing. So I would say talent now needs to sit in two primary domains. First, there needs to be technical talent: for services companies, what technical talent do we need? Do we need AI engineers? Do we need data scientists, people that understand the algorithmic pieces of what we're trying to build? The second is domain experts: how do we deploy this in the areas we have, whether it's for our own internal services or for the services that we're going to provide to our customers? Right. So, we want to
deploy this in manufacturing. Do we have
domain experts that can explain to us
the nuances of that? I think there's a
piece over there. Right. I think there's
a lot of services companies that do some
really cool things. They're starting to build marketplaces, starting to say, well, how do we build modules? How do we build off-the-shelf internal components? Maybe there's a business in APIs and other things, right? But how do we create a central marketplace where we can reuse the code that we have? The other
thing that I see which is really
important is is about saying well how do
we run experiments at speed with the
client because a big part of this is
transparency. It's about showing the
client what it can do, what it can't do
and then running these experiments in a
delivery model. So it's it's it's about
saying how do I show you real-time
stuff? How do I give you dynamic
reporting and how do I show you how to
do this? And the last thing that I'd say
is also think about and this is
difficult but think about savings beyond
just monetary right so beyond just the
cost stuff. Are there things like client
satisfaction? Um I I showed you an example
example
earlier, the name of the company escapes me, of an Indian tech company where they were using this, and what they found was that customer satisfaction dropped off. So where can we use AI in client satisfaction, employee upskilling, innovation, new business development from a materials engineering point of view, a number of those things, right? So how do we think about this differently? I think there are some key elements there that services firms can think about, beyond just the traditional technology stuff.
And just going back to some of your comments earlier: as AI starts to handle more routine tasks and analysis, what you're saying is you kind of need to revisit two things. You need to revisit your business model, to a certain extent, and understand how it could be augmented or potentially disrupted. And secondly, if I understand correctly, you actually need to rethink, to a certain extent, the career and organizational structure of your team, while putting guard rails in place to protect your team and your talent. Right?
Yeah. I mean, look, the traditional services model is a time-and-fees model, right? It's billable hours; we know this. I've worked at a consulting company before, and you scale that by having more people. Well, let me not say people; traditionally it's been by scaling people, scaling output, but you need a delivery vehicle for that. The model changes here with AI. And when I say the model changes, it also changes the costing base, because the client now has access to similar tools to the ones you have, right? And we see this pressure in services firms: why should I pay consulting company ABC when I can just go to ChatGPT, without understanding the nuance of that? So
what I think is important is understanding that the delivery model needs to change, but also bringing the client into your confidence. Yes, ChatGPT can do this kind of stuff, but look what we bring: we bring a wealth of domain experience, folks that have built businesses like the one you have, and we'll also run this in conjunction with you; we'll train you, and we'll do this with you as well. So I think there's a piece over there. The other piece is also the point that you make, and I actually didn't speak about
this. It's a great insight: the mobility within services firms, and companies in general. The career-ladder progression needs to change altogether; historically it was a time-based thing. I think a lot of what we're moving into right now, in terms of careers and professionals, is one of self-discovery. It's about saying, well, here we are right now, and these are the jobs we have right now, but perhaps the jobs that I want haven't been created yet. How do I use AI to jointly create a portfolio of what they might be, justify that case, create that role, and then go formalize it? So it's turning the entire time-based model on its head, like a lot of other things, and again, I think that's predicated on speed more than anything else.
Yeah, I mean, those are great insights, and we're seeing it happen literally weekly. There's an evolution of AI and its impact on our organizations and our clients' businesses. It's very interesting to hear, and, you know, the positive part is that many of them are utilizing it in practice to enhance their organizations. I think the key is: how do you protect your business model moving forward, protect the livelihood of your team, and continue to scale the firm?
Yes, absolutely right. I mean, that's exactly it, and I think that's the key piece. It's about asking, what is the moat that we can set up, for lack of a better word? And again, I come back, paradoxically, to human beings, but specifically, in this piece right now, to trust. Services companies still want trust; they still want to be able to say, you're a trusted thought partner. And as much as some attendees may not believe me, AI is a thing right now, but I think it's a long-tail thing; there's no doubt there's going to be other technology. Quantum is something I'm watching very closely, right? Quantum could become decentralized, assuming you can control the temperature and so on. But I need a trusted partner to help me through this, someone who can help me think beyond, at least in today's terms, the ones and zeros. What does it mean for me specifically? Someone who knows things more intimately than I do.
No, for sure, for sure, Craig, that's amazing. I know we've got five minutes left, and Jen, I think you have a question or two from the audience that you'd like to ask. But maybe a last, loaded question first: your prediction on the future, Craig. There's a lot going on in the media, right? You've got all the big logos, OpenAI, Google, Grok, Meta, publicly declaring their unwavering pursuit of general intelligence, and significant, huge investments in underlying compute power by some of the biggest organizations in the world. Meta, for example, declaring multi-billion-dollar investments in compute to support the scale of AI. And our very own Elon Musk recently mentioned that he sees AGI publicly available as early as 2026: general intelligence, maybe not superintelligence, but a level of general intelligence. So if we fast forward to 2030, and that picture is changing every month, what does the future look like for a services-based firm five years from now?
Yeah. So first of all, let me say that I think a lot of services industries will disappear, and I hope I don't offend anyone right now. I'll give you a prime example. I've said it before: I think legal advisory firms are going to almost completely disappear. I think there's going to be a lot of change within legal advisory, and there are a lot of questions around tax advisory and audits. And the reason is that when you look at a rules-based system, if-statements, or-statements, all that kind of stuff, that's essentially what AI is very good at: accelerating and doing the same kind of thing over and over again. What is my prediction in terms of where things are going? I think you're going to see a fundamental shakeup, a shakeup of businesses that need to reinvent the business model. I think you'll start seeing, again, the human piece come through. I think we're going to see, dare I say it, a lot of blood in the water. And I say that in a terrible way, but also in a good way, because it allows us to reinvent ourselves. A lot of occupations and businesses are going to have to ask: what is it that a machine can't do that we can do? So I think there's a piece over there. There's another piece that's also important. I will say, though, that I think the uptake is not necessarily going to be as grand-scale as we think, because of regulation and the barriers thereof.
We know for a fact, for argument's sake, in South Africa, and I'm using a South African example because many of our listeners are from South Africa, that the role of government, or at least the belief about the role of government today, is to safeguard jobs. Unfortunately, I think there's going to be a lot of discourse where we see different players in different seasons: government being summer, safeguarding the jobs of today, safeguarding mining, safeguarding this, versus fintech startups and these emerging players in the autumn saying we need to do that. So on a grand scale it's not going to be as quick as we might expect. I think you'll start seeing elements of things breaking through, barrier-agnostic companies coming through and doing some things. But overall, I still think the one thing that holds constant is the role of the human being, to some degree. That is a controversial statement, but for services it means, in large part, reinvention: codification of what you can do. How do you repeat that? You might even start seeing fragmentation, where services companies become application layers into other industries.
Very important, very important point, Craig, and I really appreciate your time. I think we've kind of run out of time. Jen, we probably have time for one or two questions. Craig, can we run over by about five minutes?

Sure, from our side.

Then we can take one or two questions, Craig. So the first one is: how will the convergence of AI and robotics redefine the human over the next 10 to 15 years, and what proactive ethical frameworks should futurists or governments develop today to responsibly guide this transformation?
Yeah. Oh man, this is actually one of the questions why I set this book up, right? So the short answer is: no one knows. And I'm going to emphasize that: no one knows. You've got folks who write books for a living, people who get paid millions to stand on stage; the truth is no one actually knows. At one stage, Facebook changed their name to Meta because, according to them, VR was going to be the whole thing. It wasn't as great as expected, but the truth is no one knows. So the question around the augmentation of robotics and AI, where does it go? No one can tell you. What I will tell you is where I think it starts moving: it starts to get us thinking about the application stack, and again the role of jobs and what we're doing differently. The other question, in terms of what it means, is: what are we solving for? Is this a capital thing? Because when the technology becomes cheaper to deploy than the cost of hiring humans, and the decision is only driven by capital, you will undoubtedly replace the human being with machines, whether it's an actual robot, AI within an SAP system, or inventory. You will do that. So what is the driver? I think that's the first piece, and on that, the short answer is we don't know. Around ethics, morals, and regulation, we absolutely have to act. I presented in March last year to the United Nations around the same framework, and you can use that same kind of framework to potentially say: how do we regulate artificial intelligence? The reason that's so important is that regulation, GDPR compliance, all that kind of stuff, is mostly anchored within summer. It's really good when we've got data, precedents, and use cases, like law. It's not good in winter, because there is nothing. It's not good in autumn, because we haven't got some kind of law to step onto. Right? So what does it mean? It means that in the world of ethics, regulation, and compliance, which is what was asked right now, we need to be able to measure things around function or form. You need to create, and have the ability to create, these kinds of support functions, or maybe even driving functions, as in spring, to say: how do we create an ethical framework to drive the development of this stuff over time? So I think there is definitely a huge area there. If I were to say to someone, think about a job, I'd definitely say: integrate questions around ethics and morals into this. AI right now is a hot topic; how do we think about that? But also, more importantly, again: what are we solving for? It's for the human being, to ensure relevance, whatever that might be.
Um, just on the topic of ethics, and your comment about legal advisory potentially dropping away: if legal advisory were to drop, could one infer a need for ethical advisory to fill the gap? So with AI applying the rules, there will need to be a human element to consider the ethical implications.
Absolutely. Absolutely. Without a doubt. So the answer is yes. I was having a conversation with a lawyer a couple of nights ago, and she said something interesting. I can't remember the exact terms, but essentially it's this: the context is really important. So, if a poor mother has a child, and she steals a loaf of bread to feed the kid, that is a crime. It's black and white, right? So yes, that person is guilty, but there's a context piece here that the machine won't be able to get. There's a nuance behind it, a context piece. So it's not only the human element around the ethical implications downstream, but also how we apply that law in a certain context. It's about understanding where we bring in a level of interpretation, and within legal precedent we have this as well. I'll give an example: the Western world, by and large, tends to value a younger life over an older life, but in the East it's flipped. So the rule of law can't be the same across the board. Yes, the law can tell us what the baseline is, but we need interpretation, exactly the question that you asked right now, Jen. There is a human element to say: what's the implication of that, and while this might be the case, should we defer, should we change this somewhere else? I'm a bit of a geek, and for those engineers that are listening right now, hopefully you'll enjoy this. I was rewatching Star Trek: The Next Generation, the one with Jean-Luc Picard and Data, the machine. In the series, Data, as his name indicates, has all the data, but he doesn't understand the nuance, and he doesn't understand the human capability, the human stuff. There's this beautiful interplay between the human gut sense, empathy, warmth, humor, and this cold, statistical data machine. It's that interplay, and that human element, to what you said right now, that I think is integral moving forward.
Jen, are there any more questions, or do we need to wrap up?

No, I think we do need to wrap up. If there are any that come through, we can address those after the fact. But yeah, thank you so much.

Yeah, and from my side, thanks very much, Jacques. Thanks for having me, and to Jenna and Silveroft. Folks, feel free to connect with me and carry on the conversation. And a big, big thanks to Jacques, to you and Jenna and Silveroft. I think we need more of these forums. Again, as I said before, this is just my view, and there are multiple views out there. How do we get more people speaking about this? That, I think, is the key. I don't think knowledge should be centralized, and certainly there's not one person who knows everything.

Fantastic. Thank you, Craig. I took a lot of notes, and we look forward to engaging with you further. Hopefully we'll have another chance to speak together. We'll speak soon, and thanks to everyone for joining today.