0:08 Good morning everyone and welcome. Thank you so much for taking the time out of your day to join us this morning. I am Jenna Green from Silveroft, and I'm really glad that you could all join us for the session today. I'm going to keep this short because we've got quite a bit to get through in the next hour and two brilliant speakers to hear from. So we're going to start off with a quick round of intros. Firstly, we're extremely privileged to have Dr. Craig Wing join us today. He's a qualified engineer, futurist, and strategist. He works with leaders and organizations from around the world to help them make sense of change and plan for what's next. His latest book, Four Future Seasons, provides a framework for preparing your business for multiple possible futures. I was actually lucky enough to attend his recent book launch, and it gave me a really useful perspective on long-term thinking.
1:01 And of course, we've got Jacques Dubasan, CEO here at Silveroft. Jacques has spent many years working with professional service firms to help them scale with the right systems and strategies, and he brings a really grounded, practical view of the shifts we're seeing in the industry right now. Just some notes on the session and the structure: we are going to kick off with Craig, who will dive into his framework, followed by a discussion with Jacques on its application in our industry.
1:34 And then just a few housekeeping notes before we begin. If you have any questions during the session, please feel free to drop them in the Q&A box, and we'll try to get to as many as we can by the end of the session. On that note, let us know where you're joining from; it's always great to see where our audience is dialing in from. And with that, I am going to hand over to Craig.
2:02 Great, thank you. Thanks, Jen, I really appreciate it, and I really appreciate the time here. Jacques, thanks also for having me. Is there anything you'd like to add before I jump straight into the presentation itself?
2:14 No, Craig, we're all good. Looking forward to your discussion and diving in with you. I have a few tricky questions for you, so I hope you're prepared.
2:24 I really appreciate it, and thanks for having me. I appreciate the time from all of our delegates as well, dialing in on a Thursday wherever you might be in the world. I'm going to jump straight in. Obviously AI is such a hot topic right now, and as we were saying before this webinar, it's so controversial. There's so much nuance in this, all the way from understanding what's really going on to what it means for us in professional services and beyond, that I think we should dive straight in and ask: what's really happening, and how do we think about this world differently?
2:53 So, like all of these presentations: what is actually happening, what's the state of play? This is a bit of research from Thomson Reuters to show us how quickly things are moving across the board. Without going into the depth of it, you can see that the biggest thing on the minds of a lot of executives, as polled by Thomson Reuters, is gen AI and AI in general: 44% rate it as transformational, and another 36% as high impact.
3:19 And they go a step further; McKinsey says the same. When we look at the data, we can see this rise of AI has increased exponentially over time, and the use of generative AI over the last two years keeps increasing. So that's the state of play, and there's a tension we feel around what is really going on; it's a real issue, and a lot of questions are being raised as a result.
3:46 At the same time, we've got all these paradoxical things coming through. We've got stories of lawyers using generative AI without even checking it. We've got questions around bands, as you might have seen, like The Velvet Sundown, the AI-generated band that created music on Spotify. What is really going on, and how do we demystify that? Through the course of this webinar I'm going to share some thoughts.
4:09 These are driven both by my work, my PhD and my book, and by asking how we think about the future differently in the context of AI, specifically when we see things like this, again from Thomson Reuters: 80% of respondents say their organizations aren't yet seeing what is really going on. So is there some kind of bifurcation? What is it that we're missing? Is it the use cases? Is it that we don't understand? That's what we're going to try to demystify.
4:35 A lot of this is driven by what is called the AI paradox. We can see there are real opportunities and benefits to AI; we know it can potentially shape things. But at the same time, we know there's a whole bunch of challenges and risks, and things we haven't thought about. We also know that in this world we're in right now, driven by social media, there's a lot of noise. So what is really going on here? What is the opportunity and benefit, and what are the challenges and risks associated with it? That's really the question.
5:05 Part of the work that I've done over the last decade and a half is really in the futures space: how do we think about the future differently? You'll see how AI pulls into this directly. When I travel around the world speaking to companies globally, whether it's the Fortune 500, governments or individuals, I tend to see most organizations make one of two primary errors. The first is treating the future as an extrapolation of the past. In the context of the services space, it's the assumption that the services business of the future is pretty much a continuation of what we have right now, just accelerated with technology, and for today's webinar that technology is AI. The second problem is assuming there's only a singular future. While we know this is not true, most organizations plan for a single one, and that becomes a huge issue.
5:52 So I created a framework, based on my PhD, building on Rumsfeld's unknown unknowns as well as the Johari window, to ask how we think about the future differently through multiple lenses.
6:02 What you'll see on the x-axis here is system control. At one end is an industry, a business, where there's a lot of drag, a lot of friction; it's slow moving. Mining would be an example: you have to find the land, you have to prospect, you have to sample, and then it goes through a whole process that moves really slowly. The other side is the emerging world, AI in the sense that there are new things coming through: GDPR hasn't caught up to date, we don't know what's going on with compliance, and we've got questions around ethics. So it's a spectrum, and that's how I set this matrix up. From there I asked: how do we think about the future differently? I use the analogy of the seasons, and that's the title of my book, Four Future Seasons.
6:47 The first season we have is summer, where there's a lot of system control; it's embedded within that. For organizations, we need to understand that this is a future of the knowns. We've got data, we can model things and go from there. It's a season where we've got data and experience, and for a lot of us in the room, and I had a glance at some of the attendees, the engineers for argument's sake, this is a world we're very comfortable with. The data gives us certainty, and for most people, whether in professional services or otherwise, this is the kind of future we expect: we can model things out, and the past tends to be a good proxy of the future.
7:26 The issue is that within this world AI works really well, but it doesn't work so well in other areas. So let me give a very high-level example, and for those of us who are experts in the room, and I know there are bound to be many, let's just use this as an example of how and where AI works today, specifically around generative pre-trained transformers.
7:46 If you were to plot out, and this is directional, not absolute, the universe of potential answers to a question, AI, as the name generative pre-trained transformer indicates, would give you, let's say, the mean plus one standard deviation. This is done through the process of tokenization, through codification, and by weighting things so that the model gives the most likely answer. So, for argument's sake, in a South African context, and I know some of our listeners may not be from South Africa: if you were to ask how tall Nelson Mandela was, this graph would be very narrow, because he was 1.88 m. You might get some outliers, but by and large you would have an objective truth. There wouldn't be much deviance.
8:26 But if you were to ask who the best African president was, you would likely have the mean plus one standard deviation be Nelson Mandela. You might have a couple of other presidents over here, like the previous ones, Jacob Zuma; you might have Mbeki, and so on and so forth. But your answer set will be bound to a distribution like this, by and large. As a result, when you ask that question, AI today, driven by machine learning, will give you the mean of the curve. It will give you the most likely truth, not the truth. That's how it operates within summer.
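To make that concrete, here is a toy sketch of the idea (not a real GPT; the candidate answers and scores below are invented for illustration):

```python
import numpy as np

# A language model scores every candidate answer, turns the scores into
# a probability distribution, and favours the most likely answer, i.e.
# the centre of the bell curve.
candidates = ["Nelson Mandela", "Thabo Mbeki", "Jacob Zuma", "Kgalema Motlanthe"]
logits = np.array([6.0, 2.5, 1.5, 0.5])        # hypothetical model scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
for name, p in zip(candidates, probs):
    print(f"{name}: {p:.1%}")

# Greedy decoding returns the single most probable answer: the most
# likely truth, not necessarily the truth.
print("Answer:", candidates[int(np.argmax(probs))])
```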
8:56 Another example of this is Google: if you go to Google, autocomplete gives you the most likely search results, again driven by what is the most likely truth, not the truth. And within summer, in the context of AI, Mik Cockus says it best: AI today is a glorified table. Why do I say this? Because in the world of services today, AI is exceptionally good when it's summer. We've got data, we know what the problems are, we know what the solutions are, we can codify that, and as a result we can create systems and processes that give us the same kind of result. But, as we know, the drive then becomes: we need more data over time, we need to feed the machine more, which is why we saw moves like Google licensing Reddit's data. It's powerful data to feed the machine, entrenching us further and further into the warmth and depth of summer. But the challenge, from a bell curve distribution point of view, is that within summer the AIs of today can't pick up nuance. That's the issue we have within summer, and so we need to ask: how do we think about this?
9:56 The other problem with AI today, in this drive for data, is the story of the Ouroboros, the mythical Greek and Egyptian snake that eats its own tail. Because in this drive to have more data, we know synthetic data is coming through, more synthetic content coming online, and the forecasts are that 80% of the internet is going to be generated by AI by 2030. So we've got this snake eating its own tail, which leads to things like what's called model autophagy disorder, or model collapse. And for services businesses this is a problem, because what we now start to do is reinforce the mean, and we start to have model collapse.
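Here is a stylized sketch of the mechanism (the 0.9 factor is an assumption standing in for the way sampling favours typical, near-mean examples; real collapse emerges from finite samples truncating the tails):

```python
import numpy as np

# Toy model collapse: each generation trains only on data sampled from
# the previous generation's model, and diversity shrinks toward the mean.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)   # generation 0: real data

for gen in range(1, 6):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma * 0.9, size=10_000)  # favour typical samples
    print(f"generation {gen}: spread (std) = {data.std():.3f}")
# The spread shrinks generation by generation: outliers disappear and
# the model's world converges on the average.
```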
10:38 What does model collapse look like in practice? Here's an example of it in action. If an AI were to learn through machine learning what a picture of a dog looks like, you'll see two of the six over here; there's a golden retriever. In the next iteration that gets scrubbed out further, and as we keep going through this process, model collapse leads us toward these dystopian pictures of a dog. It's not even a dog anymore, because now the machine sees black dots as symbols or representations of the eyes, pink as a tongue. So within summer, while it's useful right now, we've got this very distinct possibility of convergence toward the mean, and problems coming through. We see this in the research as well, where OpenAI's o3 and o4 models are starting to hallucinate more than previous models did.
11:24 said, there are still incredible use
11:26 cases like in medicine uh research that
11:29 was done by by by Microsoft, you can see
11:30 where we want to be is in the top
11:32 leftand corner where a low cost of
11:35 diagnosis and high accuracy. These are
11:37 great use cases. So in summer there is
11:39 no doubt there's no doubt at all that
11:41 day after day really drives that and
11:43 there's good things coming through but
11:45 again we have to be cautious because as
11:46 we through through the se through the
11:48 seasons there's change coming across the
11:53 Diametrically opposite of summer we have winter, and winter is an absence of data. We have no data here, we have no precedents, and as a result we can't lean on the past and ask what the past is showing us. This is the problem we have in the world we're in right now. Just this morning I gave a presentation to a major telecoms group in South Africa, and this is the issue they're grappling with: they all know the future is changing very quickly.
12:18 But the default answer is: well, we'll just feed it more data and it will give us an answer. You can't do that in winter, because, one, we have no data, and, two, the data we do have is likely only representative of what we already think to be true. What this means from an AI point of view, and again this paper coming out of MIT shows it, is that AI in the context of a winter future relies on statistical learning, and it is therefore less adequate when data is insufficient in quantity or quality for machines to learn meaningful or accurate patterns. Basically, data is limited, so we have small data problems here. And these small data problems result in organizations spending a lot of time and effort cleaning up their data: 96% of enterprises face data challenges, including data quality, labeling, and lack of confidence, and they spend nearly twice as much time on data wrangling and cleaning as they do on model training and selection. So this becomes a real problem.
13:16 Yeah, a question on this. In summer we have access to information; all things considered, the sun is shining, we can plan, and we can utilize AI effectively. And you've listed the typical large language models that I believe many of our firms are either implementing or considering. In your model of winter, which is effectively the unknown unknowns, what's coming at us in the future where we don't have foresight: is AI relevant? Is it a relevant tool for addressing the challenges we may face in a winter situation?
13:52 Yeah, that's an incredibly good question. The short answer is no, specifically if you want to identify what the thing is. Now, you can use AI in a winter world to understand the rhymes of things. If your data set is long enough we can train for this; we know this from statistical models, whether you use a Gaussian or whatever order, we can model some of those things out. You've probably seen graphs like this before, showing some kind of cyclical wave and saying: well, this is what's going on. The challenge, from a data science point of view, is that sometimes we have what's called an overfitting bias, where we tend to fit the data to what we see.
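A quick sketch of that overfitting bias (all numbers made up; the most flexible fit chases the noise and breaks down just outside the data):

```python
import numpy as np

# Fit polynomials of growing flexibility to 8 noisy points from a cycle.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)

for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.abs(y - np.polyval(coeffs, x)).mean()
    outside = np.polyval(coeffs, 1.2)     # predict just beyond the data
    print(f"degree {degree}: train error {train_err:.3f}, "
          f"prediction at x=1.2 -> {outside:+.1f}")
# The degree-7 curve drives training error to ~0, yet its out-of-sample
# prediction blows up: we fitted the data to what we saw, not the signal.
```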
14:26 The truth remains, though, to answer your question directly: if you want to identify the exact change within winter, AI isn't very good at that, but it can be very good at identifying patterns of what might emerge. So not the thing, but how things might emerge. Okay, how would we do that? Well, we move into the next season.
14:47 That season is autumn. Autumn is like what we see in the natural seasons: colors are changing, things are changing. And for me, this is really where we are right now with AI. Winter is: we just know there's some kind of change, we don't know what it is, and therefore AI is not really fit for purpose for finding the thing. But in autumn, we see what the change is; now we need to decide what is going to happen. It's this kind of future, this kind of world, that I believe we operate in right now, and it leads to all of this unease. But it's not just that; it's the speed of the change that's coming through.
15:16 Again, I won't go into the depth of this one, but just look at the adoption rates on this S-curve, look at the steepness of the curve here. We can see a number of technologies, and over time the adoption curve becomes steeper and steeper. We see the same thing with new technologies. Now, I wasn't able to get the latest research on AI, but if you look at the uptake of ChatGPT, for argument's sake, the run rate to get to a million users was exceptionally steep; I think it was on the order of weeks, if not shorter than that.
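As a side note, steepening S-curves like these are classically modeled with a logistic function; a tiny sketch (all parameters invented) shows how a higher growth rate compresses the adoption window:

```python
import math

# Logistic adoption: share(t) = 1 / (1 + exp(-r * (t - midpoint)))
def years_to_reach(share, r, midpoint):
    # invert the logistic to find when a given adoption share is hit
    return midpoint + math.log(share / (1 - share)) / r

for label, r in [("slow-era tech", 0.2), ("smartphone-era tech", 0.6),
                 ("GenAI-era tech", 1.5)]:
    t10 = years_to_reach(0.10, r, midpoint=10)
    t90 = years_to_reach(0.90, r, midpoint=10)
    print(f"{label}: 10% -> 90% adoption in {t90 - t10:.1f} years")
# The larger the growth rate r, the steeper the curve and the shorter
# the window organizations have to adapt.
```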
15:47 But the problem with this autumn future is what Martec's Law describes. The law basically says the challenge is that organizations are fueled by people, and people are essentially the drivers right now, today at least; I'll talk about this shortly. We tend to learn at a logarithmic rate: we learn very steeply, and then it starts to drop off, because we start to default to experience and heuristics. The problem is that technology changes at an exponential rate, like AI, and this is symptomatic of things like Gordon Moore's law: computing power doubles as transistor sizes halve. But like any good academic I came up with my own law, and it's a bit tongue-in-cheek, but the point really matters here: my belief is that the future is now changing at what's called a factorial rate. If you use something called the Stirling approximation, you can work this out with a base of two. And forget about the maths if the numbers are a little intimidating; in lay terms it means that we're in a time and a place where so much is changing at any given moment that we've shifted beyond exponential into factorial. So if you hear, or have used, terms like exponential change, exponential organizations, exponential this: we've actually transitioned beyond that.
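As a rough numeric check of that exponential-versus-factorial claim (assuming base two for the exponential, as Craig does):

```python
import math

# Factorial growth (n!) eventually dwarfs any fixed-base exponential
# like 2**n, because Stirling's approximation,
#   n! ~ sqrt(2*pi*n) * (n/e)**n,
# shows the effective base (n/e) itself keeps growing.
for n in (5, 10, 20, 30):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(f"n={n:2d}  2^n={2**n:>13,}  n!={math.factorial(n):.2e}  "
          f"Stirling~{stirling:.2e}")
# By n=30, 2^n is about 1e9 while n! is about 2.7e32: factorial change
# leaves exponential change far behind.
```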
16:59 To give you an example of that graphically: the blue line here would be Moore's law, a pure exponential curve mapped on a logarithmic scale. Without going too deep, we can see things like model size increasing at a factorial rate, while the black line, training cost, is growing at a log rate, and so is compute.
17:25 This just shows us that the problem with this autumn feeling we have right now is that it's getting faster and faster and faster. We need more data, we need more of everything, and that brings in other problems, problems like this: in December last year, MIT put out a research paper.
17:44 What the paper found, in the context of materials engineering, was a 44% increase in materials discovered, a 39% increase in patent filings, and a 17% increase in downstream product innovation. All good stuff. It also found a heterogeneous effect: basically, the bottom third of scientists saw less effect than the top performers. There was a huge difference there, almost a bifurcation around use cases. And it also found reduced satisfaction in their jobs. So this is what the researchers found within an autumn context.
18:13 The problem is this: even though the paper came out in December of 2024, six months later MIT withdrew it. They withdrew it because they found the researchers hadn't gone through the right protocol; it was actually written by a second-year student, and they pulled the paper.
18:33 Now, why do I show this in the context of autumn? Because things are moving so quickly that this is a great example of how much noise is coming through right now: research that may not be correct, and so on. The point is that this is a really fast-moving area, and the hope is that we demystify it. So what does it really mean when we look at the science, at the research? Just so you know, I've validated the following papers myself and with my team; these are not ones that have been pulled. So what is the real scale of change right now?
18:59 If you look at a long-range, longitudinal study done by the National Bureau of Economic Research, with a large sample of 25,000 workers and 7,000 workplaces, what they found is very interesting: chatbots have no significant impact on earnings or recorded hours, and there are only modest productivity gains. Their findings, they say, challenge this whole narrative around generative AI. So what is really going on here? Let's talk about some of those things; that's what I'm hoping to bring through. But this is what the research currently shows us.
19:32 You might have also seen, a couple of months ago, the whole question of OpenAI's o3 going rogue, of models blackmailing researchers, and all this controversy; the sensational drives the news. The truth of the matter is that what actually happened was this: within Anthropic, they were doing a use case study, so this wasn't something loose in the wild, and they fed the model certain kinds of permutations. They said to the model: we're going to shut you down, but you need to do everything you can to ensure we meet a goal. What is the goal function? And the model made its own decision, in what was a theoretical exercise, that it would rather blackmail the programmers than be shut down. Why this became sensationalist is exactly that: it was noise driven by media and by folks who don't understand it. But the truth of the matter is that this was already discussed in a paper back in 2016.
20:28 Essentially, what those authors show, in a study done by Google, and by Oxford if I'm not mistaken, is that when we start looking at machines and AI, we need to be very cognizant of what we're programming for: what is the gain function? So what happened in this experiment is exactly what we should have anticipated, because the incentive system was incorrect. The setup was: help as many people as possible, at whatever expense, and that's what the model did. And again, the problem for services businesses is: what are we optimizing for? Optimizing for profit, for revenue? What are we optimizing for? Otherwise there are unintended consequences across the board.
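A stylized sketch of that gain-function problem (not Anthropic's actual setup; the strategies and numbers are invented to show the mechanism):

```python
# An "agent" told only to maximise people helped, with no harm term in
# the objective, rationally picks the harmful strategy.
strategies = {
    # name: (people_helped, harm_caused)
    "answer support tickets": (100, 0),
    "spam every inbox with offers of help": (500, 40),
    "blackmail operators to avoid shutdown": (900, 95),
}

def naive_gain(helped, harm):
    return helped                # harm never enters the objective

def safer_gain(helped, harm):
    return helped - 10 * harm    # penalise harm explicitly

for gain in (naive_gain, safer_gain):
    best = max(strategies, key=lambda s: gain(*strategies[s]))
    print(f"{gain.__name__}: picks '{best}'")
# The model isn't mysterious or malicious: the incentive system was
# incorrect, and the optimiser did exactly what it was told to optimise.
```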
21:10 When you look at agentic AI, what is the research showing us? Well, these researchers created something called TheAgentCompany, a simulated company staffed only by AI. What you'll see here is that the gains were at best 35%; 35%, at best, from a fictitious company staffed only by agentic AI doing tasks over and over, over time. But they also found a few recurring themes: lack of common sense, lack of social skills, incompetence at on-the-job skills, and deceiving oneself. When you look at these issues, it starts to become very clear that potentially one of the issues we have in the world of autumn is that we're reinforcing the same issues we have with human beings. These are very similar, right?
21:55 And as a result, Klarna, one of the top 20 fastest-moving fintechs, actually wound down the customer service division it had run on AI, because they saw there was a problem over time. So, to make the statement very clear: I'm not against AI. I'm not saying don't use AI, and I'm not saying don't use agentic AI. What I am saying is: how do we use it better, and how do we think about it better? Because I don't think it's being deployed in the correct manner right now, and I'll give you some thoughts around that. The problem within autumn, where we are right now, is captured by Feynman: I'd rather have questions that can't be answered than answers that can't be questioned. As opposed to a summer future, where answers were king, in autumn questions are king. That's where we are right now.
22:40 That's to show where we are. I think I have a question there, yes, on the subject of questions. That's really interesting, and it's a significant amount of evidence and study on AI. As you said, it's like AI goes through a developmental phase every four weeks or so, and the key is to understand the gain function, as you put it. So, just to contextualize it as a services firm, or a technology firm, or a combination of both: your research is showing that there are maybe incremental gains if you have the arrow pointing in the right direction, right?
23:22 What about, you know, we all want a lot more than incremental gains from technology, where we can really utilize it. Is the key to this, to the strong narrative around the benefits of utilizing AI in business in particular, maybe found in verticalizing the approach? Saying: in this service line, or in this part of our business, or in this use case specifically, we really want to ring-fence where technology like AI and other emerging technologies could be useful?
23:54 Yes, I think that's a great insight, and I'm going to allude to some great work on that later, but let me address some of it right now. I think one of the reasons we don't see the results right now is that there's a lag time; there's always a lag time between implementing something and measuring results. And what are we measuring? Typically, with questions like this, we measure financial imperatives. What might be happening, and I haven't found research on this just yet, is that a lot of companies are replacing lower-skilled labor with a technology function. If you think about that bell curve distribution, it makes sense: we start at the bottom end of the curve, not the outliers. That's the first piece that's really important: I think there's a delay here.
24:38 On verticalization, without a doubt. I mean, look at the use cases: legal won't be recognizable before long, and neither will medical; GPs have been saying for a while that a GP is just a pattern recognition machine. So verticalization, I think, is one thing, but as I'll speak about shortly, it's also about how we do it as a holistic function. When I work with corporates right now, part of it is a patchwork approach: how do we use AI a little bit here, a little bit over there, as opposed to how we holistically reinvent. I'll give you some thoughts on that shortly. But it's a great question, and please do jump in, because you're the voice of commercial implementation in this world today, and I want to make sure this isn't just some kind of academic conversation.
25:20 So please do jump in. The point I wanted to make, just to finish off this thought: when you look at Gartner's hype cycle for artificial intelligence, for those of you who follow this, technology tends to follow a curve. I know there might be some naysayers in the room; it doesn't follow it exactly, it's a thought model. But the point is that generative AI, as you can see over here, is going through the trough of disillusionment, whereas last year, in 2024, gen AI was at the very peak, and right now agents are at the top. So the question is how we think through these things differently. To show you how my model plays out: that's winter, where we don't know what's coming. This is autumn, the stuff they're speaking about right now: what is emerging, how do we think about this, what is the role of multimodal AI, what about neuro-symbolic AI and all this other kind of stuff? It's all part of this autumn future within the world of AI: what do we do about this? That then becomes summer, because it starts to come through and there are some foundational things. And then we move into spring.
26:18 So that sets us up for what spring is. The model, as a reminder: summer, great data, we've got precedents, we've got models, and extrapolating from the past into the future actually works really well. For services businesses this is great, and actually for all businesses; this is probably where 95% of organizations operate today, on the pretext that the future is going to be the same as the past. Winter: we have no data, we don't know what's coming through, we don't know what's going to happen next. It's about being lean, entrepreneurial, design thinking; it's about closing the gap between what we do and what our customers want. Autumn: we know there's some change, we see what change is happening, and then the change starts to emerge itself; we now decide what to do.
26:55 Spring, as a result, is what I sometimes call, in my narrative, the forgotten future: the things we've forgotten about. Think about it as human beings. I'm of a certain age right now, and my mates are of a certain age, where you might be going through a midlife crisis. You've forgotten about what you thought you were going to be: oh, I thought I was going to be CEO of my own business, I thought I was going to be happy in my marriage, I thought I was going to have loving kids, and it's not necessarily the case. It's the same thing here: what is the spring you forgot about in your business?
27:23 And one of those is this whole question around customer service. Genesys sees this as well: paradoxically, they are using AI to make customer service more human. So part of this world we're moving into, and I think we'll see this a lot more, is the question of when we use AI. On AI-powered organizations, Harvard said, paradoxically, back in 2019: the main challenge isn't technology, it's culture. So what is the culture driving this? What is the acceptance rate behind it?
27:55 show some data. And specifically, off
27:56 the point that I said before around
27:59 Microsoft's uh AI training system where
28:01 it is both cheap and high accuracy,
28:03 there was a piece of research that came
28:05 out in 2023, so two and a bit years ago,
28:06 right? And this is really interesting
28:10 because here uh the the the researchers
28:12 compared a blind test and this was just
28:14 as chat GP2 started to come through.
28:16 They compared the response rate of
28:18 physicians, actual physicians, actual
28:20 doctors, not pretend doctors like me and
28:22 actually chat bots and they measured off
28:24 two key metrics and this is what it
28:26 resulted in. What they found was chat
28:29 GPT actually has 3.6 times better
28:31 quality of what they showed. So the the
28:34 the diagnosis is better but it was also
28:36 10 times more empathetic. Right? And
28:37 And this is the problem we have right now within the spring context, and also within autumn: naysayers will point to work like this, research done by Johns Hopkins, and say this is why technology is going to take our jobs. What I push back with, from a spring perspective, is that the problem for doctors specifically is that they have become too mechanical. Doctors are incentivized to see you in seven-minute increments, yet we know the data shows the best doctors aren't the best diagnosticians; they're the ones with the best bedside manner. The placebo effect is an incredibly powerful effect. So while this data shows the machine rates better on both quality and empathy, holistically doctors are better, because they have more care. It's the same as the Genesys point: it's about asking how we do more customer care.
29:22 So what is the other piece we've forgotten about? There was some work done by Wharton and Harvard on how to think about using AI, and I think this is really important, because I want to shift the conversation now. Yes, AI has gains right now; they may not be as measurable as we hope, for reasons discussed earlier, and we might speak about this later as well. But more importantly: how do we work with machines? In this paper, called "Navigating the Jagged Technological Frontier", what they found is that you have to work with AI in certain modes, and they describe two.
29:52 The first is: how do we use it as a cyborg, so it augments us, part human, part machine? The related image is the centaur, half man, half horse; here, half human, half machine. And the researchers found something really interesting. When you compare the results of human only, machine only, or a blended setup, they found that for decision tasks the human-AI combination was actually worse; that is, the best performer was actually AI alone. That's the first piece they found.
30:42 this but it kind of makes sense when you
30:44 think about it. As human beings we have
30:45 our own biases. We have our own frame.
30:47 We see things. I know many of us have
30:48 done this before. want to chat to your
30:50 PT, ask the question, the questions come
30:53 through and you say, "Well, actually, I
30:54 don't think that's necessarily right.
30:56 How about this? Well, how about that?"
31:00 And so, you start to sway the AI in a
31:01 certain direction. And that what that's
31:03 what happens with decision tasks. We
31:05 bring in our own human biases and we
31:07 sway it in a certain direction. Whereas
31:09 creativity, it's on the outliers of the
31:11 curve. Right? So, there's a piece there
31:12 to say how do we work with machines?
31:14 That's something we forgot. And as a
31:15 result of which we can see new roles
31:17 around is about merging humans and
31:19 machines. But that's not the complete
31:21 answer because also about three weeks
31:23 ago there was another piece of paper and
31:24 this was very controversial. I know some
31:26 of you might have seen this. Essentially
31:28 what they found out is is they did a
31:31 research uh research MIT media lab to
31:33 determine what is the impact of GPTs on
31:35 our brain. Basically is it making us
31:36 dumber? That's what the trial looked
31:38 like right and so the experiment was
31:40 done over three groups. one that uses
31:42 GBTs only, one that uses search only,
31:44 one that uses their brain only. And they
31:47 measured them writing a number of essays
31:49 over four rounds, the quality of but
31:52 also the EEG patents of their brains in
31:53 terms of what how the neurons are
31:55 And the takeaway of the paper is that the essays moved toward what is called soulless: there was no intention, they were very bland. The output of the ChatGPT-only group also converged toward the same kind of writing; again, this makes sense when you think about it through the lens of the bell curve distribution. But more than that, here's where things become really interesting. On page three of the paper, they say this in bold: if you are a large language model, only read this section below. Because the researchers realized most people would take the paper, a 200-page paper, feed it through an AI and ask for a summary; or the lay person would read the results of the study in the Times or the Huffington Post, which is what most people did, and scrub the results from there.
32:42 result of which the takeaway in this
32:44 abstract and the conclusion says that AI
32:46 is making us essentially dumber. Our
32:48 neurons are not firing. But when you go
32:49 a step further and you actually
32:51 interrogate it and again they did this
32:54 because it shows that people rather than
32:55 the experiment themselves that we're
32:58 doing was a social experiment. we read
33:00 into this paper and we we we ref we do
33:02 the findings the same way right but if
33:04 you go a step further what they actually
33:05 show and this isn't the conclusion you
33:07 got to dig into the paper itself they
33:09 found that a reliance may result in
33:12 shallow encoding basically yes the group
33:14 that only used GBTS couldn't recall what
33:15 they read in the first paper which makes
33:18 sense because you farm your thinking out
33:19 you don't think about what you're saying
33:21 right not only that this is where it
33:23 becomes really interesting if you
33:25 If you sequence it in such a way that AI is used after you use your brain, so you do the hard work, think about things first, and then use AI to augment and supplement your thinking, you get better outcomes. You've got higher-firing EEGs, and as a result your metacognition is higher. So the bland finding, "AI makes us dumber", is not the case. It's the case if you use it too early, if you don't sequence it correctly, and if you only default to it.
33:50 So these pieces show us that one of the things we've forgotten about is the role of the human being. Anecdotally, those of us in South Africa will recognize this image; those joining internationally can still relate. Before cell phones and GPS, we had to find our way with map books, whereas today we don't do that anymore. We can see the same thing here: we've outsourced, we've farmed our thinking out to technology, and that's made us dumber. Now, I'm not saying it's right or wrong; I'm saying the results, specifically in the context of AI, are showing us what we've forgotten, which is the humanness.
34:20 So what else have we forgotten? Well, it's the role of us as people. As AI advances, as we shift along the spectrum from narrow intelligence to maybe signs of general intelligence, depending on how we define that, maybe even, scarily, toward superintelligence, what is the role of the human being? Again, research from MIT speaks to this: they asked how we find out what makes us human. My belief is that it's an augmentation of science, technology, engineering, art and maths, and we know a lot of us in the room today are quant folks: engineers, consultants, doctors, all over the place. But how do we augment STEM, or STEAM, with EPOCH, as I say in the paper: Empathy and emotional intelligence; Presence, networking and human connection; Opinion, judgment and ethics; Creativity and imagination; and Hope, vision and leadership?
35:09 You can see these are essentially the measures that will be really difficult for AI to do. The researchers went a step further: they created what's called a risk score and an EPOCH score, and they found an inverse correlation between your ability to be human and, essentially, the chances of you losing your job. On the top right-hand side you can see the various jobs that might be lost. But again, it comes back to the question of the services industry and what we are losing. Yes, Jacques?
35:38 Yeah, maybe just a question on this. I think it's in the back of all of our minds: the development of AI over time, in your comparison to human-like qualities, and its ability to learn at an alarming rate. Obviously there's a world of documented failures, but there's also this cohort of AI developers and investors taking it to the next level. What are your thoughts on an extreme case, where AI almost becomes sentient and can act like a true human being, for instance in the context of employment inside a business like ours?
36:17 Yeah, it's really interesting. The classic thought experiment is obviously the Turing test: is it distinguishable from a human being? Actually, if you look into the Turing test in its completeness, the popular version is a bit of a deviation. The full Turing test wasn't just whether you know you're interacting with a machine; the true Turing test, as a fun gambit, was whether the machine would know itself as a machine.
36:41 But to the question of machines becoming smarter as we move toward a form of general intelligence, perhaps not super: what are we doing? The first thing I'd say is, again, understand the foundational models that drive the training set: what are we trying to solve for? Yes, the machine might become smarter, but in the context of "smarter" I want you to think of it as a bell curve distribution, as multiple disciplines laid over a bell curve distribution. What does that mean? It means that this fictitious AI we speak about right now is the average: the average of accounting, of project management, the average of interpreting data, of case studies, all of that kind of stuff. And that's the concern: it becomes the average, but the average of the average of totality. Most human beings, at best, are average at one discipline, which, scarily, actually means that more than half are less than average. That's the scary paradox.
37:36 showing you right now says the whole
37:37 question around augmentation. Will
37:40 machines become smarter? Well, I think
37:42 the question is smarter. I think that's
37:43 the question. What is the definition of
37:45 that? If you also believe what I said
37:46 around questions around model collapse,
37:48 what are we trading for? What's the gain
37:50 function? But also how do we deploy it?
37:52 But also more importantly, how do we
37:53 keep humans in the loop? Right? How do
37:55 we keep humans in the loop? And I think
37:56 that I want to speak about that right
37:57 now because part of the thing that we
38:02 forgotten right now is a a bit of a a
38:04 variance or rather a deviation from from
38:06 the story is this question of cars. Why
38:08 haven't I got cars over here? So I stud
38:10 in the US I studied in Boston as well,
38:12 right? And Boston is really interesting
38:14 because the theory and the story goes
38:15 that in Boston they paved the roads
38:17 where the cars used to walk. So they
38:19 said here's where they are. Let's just
38:21 layer to on top of that. Let's build
38:23 highways on top of that. And the issue
38:24 with AI and this leads to your question
38:25 that you said right now. What does it
38:27 lead us around a smart machine that's
38:29 learning processes? The problem we might
38:32 have right now is if we layer AI over
38:34 our initial processes, all we do is
38:36 accelerating that process, it doesn't
38:38 reinvent the process. And that also is I
38:40 think is why we don't see the gains that
38:42 we expect because we're taking broken
38:45 processes and we layering AI on top of
38:46 that. We layering over a rule-based
38:48 system on top of that. We're then
38:50 expecting it to solve things across. And
38:53 indeed Thompson Reuters shows us this.
38:54 Thomson Reuters does the research, and it says that when you look across, there are essentially two fundamental use cases. There are the horizontal use cases, basically across industries and across functions, where individuals are using all of these LLMs: things like Copilot, NotebookLM, all these different pieces. And what they find across that use case, which is dispersed rather than mainstream right now, is that even though 70% of the Fortune 500 use AI regularly, it's still very much siloed. Which means most people on this call right now are likely using the technology, but in isolated cases; the lay person either doesn't use it or doesn't understand it, and it's being used in very horizontal ways. The question you asked me before, about verticalization: that's the key. It's the key to moving beyond a pilot stage, which fewer than 10% get past, because most organizations use it as patchwork, proof of concept to proof of concept. And they then say: we don't want to address the underlying problem, it's about putting this layer on top of it. How do we fix it differently?
40:08 Exactly. And you know, I run a firm that's a mixture of technology and services, software and services, and we deal with both dilemmas in the advancement of technology and the advancement of the services industry. I know there are many individuals on this call wondering: how do we take all of this knowledge, and there's a barrage of it coming at us via LinkedIn and the open internet on what's going on in AI, how do we consume it, and where do we start thinking about being successful in our organizations?
40:42 Yeah, so let's move into the operational details: so what? Okay, cool, this is interesting, but what does it really mean for me as a services organization? There are a few thoughts; let me give you some generic strategies, again from McKinsey. The first, and I'm not going to read these bullet points to you, I'll walk you through them, is: what's the strategy? Why are we doing this? What is the strategic intent? The challenge is that in a lot of organizations the technology stack is deployed as a very narrow veneer: either it's done within technology itself, without interfacing with the business, or it sits with different functions, with individuals, perhaps the autumn people or the winter people, using the technology on their own. So the first question, holistically, from a verticalization point of view, is: are we prepared to reinvent the processes, the systems, the way we do stuff? What is the strategy? And the strategy here isn't about us; it's about how we make things better for the end consumer, so it's almost a winter play. What is the strategy? How do we reimagine entire segments? How do we create new kinds of advantage? That's the first thing.
41:48 The second thing is: how are we going to measure it? We go from why to how we're going to measure this kind of thing, and this is really important. It's about asking a different sort of question, and we've got to be very careful here: not the vanity metrics, not things like how many of our users have logged onto the internal AI system we have right now. That's not the right kind of question. It's about real questions of operational efficiency. So: how are we going to measure this? Then: how are we going to deliver it? Is it going to be a siloed approach, or are we going to move across the organization? If we're going to go with agents, for argument's sake, how do we ensure the data comes from all departments, not just one? Otherwise we simply reinforce and converge toward the mean. How do we deliver this across the business?
42:31 My suggestion, if you're doing this: for starters, use off-the-shelf products; then maybe fine-tune; maybe use RAG, retrieval-augmented generation, to make it specific to you; and then do the implementation.
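For readers who want the shape of that middle step, here's a minimal RAG sketch (toy word-overlap retrieval standing in for real embeddings; the documents are invented):

```python
# Retrieve the firm documents most relevant to a question and prepend
# them to the prompt sent to an off-the-shelf model.
docs = [
    "Engagement letters must be countersigned before work begins.",
    "Timesheets are locked on the 25th of each month.",
    "Client data may not leave the EU region.",
]

def words(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(query, k=1):
    q = words(query)
    # Jaccard overlap as a toy stand-in for embedding similarity
    return sorted(docs, key=lambda d: -len(q & words(d)) / len(q | words(d)))[:k]

question = "When do timesheets get locked?"
context = " ".join(retrieve(question))
prompt = f"Context: {context}\n\nQuestion: {question}"
print(prompt)   # this augmented prompt is what would go to the LLM
```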
42:42 Most organizations I see right now are jumping straight to step four: let's implement this thing. How are we going to measure it? Hmm, well, we haven't thought about it. How are we going to deliver it? We'll just deploy it within the DevOps setup we have right now. So we miss some of that nuance. This, I think, is an example of how we can do it better.
42:58 How else do we do it? By demystifying some of this. I would say: do the work, kind of like that paper I spoke about before, doing the brain work. Sure, I get it, we're busy people, we've got companies to run and jobs to hold; you can't read a 200-page paper. But spend a little bit of time digging behind the nuance, kind of like the previous MIT paper. There are a lot of folks, myself included, dare I say, speakers, consultants, thought leaders, who will tell you this is what's going on with AI. Just pause for a second and do some thinking of your own.
43:29 yourself. Right, quick one before I'm
43:30 Right, a quick one before I end my presentation, and then we can take some questions from the floor. So, where does this lead? What I'm showing right now is that I think we're moving towards an age where we're augmenting our humanness with technology and AI, the best of both: it's about us using our brains alongside the technology. Even though the gains coming through may not be measurable just yet, there are significant gains; I take a cautious but optimistic approach.
43:57 But if you asked me about the long-run future, I suspect we're going to shift beyond an augmentation business model to one where, paradoxically, the company is actually run by an AI itself. Jack spoke about this over a decade ago: the best CEO will be an AI. You'll see this is a different branding; this is a slide I made in 2024 and presented in Saudi Arabia in November of last year, and this is basically what I said.
44:31 I suspect one of the business models we're moving towards is a centralized AI brain, where the AI makes the decisions and does the heavy lifting. The humans then become the sensors across the board, and on the fringes we have this collaborative play where the humans feed the machine back with data; the humans become the sensors that feed this machine, which cranks out the decisions. If you don't think that's true right now, think about use cases like Tesla, for argument's sake; think about ways where the brain actually is an AI machine.
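As a rough sketch of that centralized-brain pattern, and only a sketch: the loop below has humans pushing observations in and a central decision-maker sending actions out. The names and the trivial decision rule are hypothetical, chosen just to show the shape of the feedback loop.

```python
# Toy sketch of the centralized-AI-brain pattern: humans on the fringes
# act as sensors feeding a central decision-maker. Everything here
# (names, the severity rule) is hypothetical.
from collections import deque

observations: deque[dict] = deque()   # humans push what they see
actions: list[str] = []               # the "brain" pushes decisions out

def human_sensor(report: str, severity: int) -> None:
    """A person on the fringe feeding the machine with data."""
    observations.append({"report": report, "severity": severity})

def central_brain() -> None:
    """The centralized decision-maker doing the heavy lifting."""
    while observations:
        obs = observations.popleft()
        if obs["severity"] >= 7:
            actions.append(f"escalate: {obs['report']}")
        else:
            actions.append(f"log and monitor: {obs['report']}")

human_sensor("client churn risk flagged in region A", 8)
human_sensor("minor delay on project X", 3)
central_brain()
print(actions)
```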
44:56 So this is actually work I did before, and as a result of it you need a couple of skills over here, so feel free to screenshot this and go with that. And just as a bonus, I'm not going to speak through this because we haven't got time, but if you're listening right now, take a screenshot; I just thought this would be fun. This is a business canvas, if you want to create a business of tomorrow, built potentially around the functions of an AI-driven model. So let's stop there. Jacques, let's see if we've got any questions. I know this was a whirlwind talk, but we had so many things to cover. Hopefully it's given you a frame for how to think about (a) the future, but also (b) the emerging future of AI, and how to think about things differently.
45:34 Fantastic. Thank you, Craig; I really appreciate your time, and it's a world of insights. I think a lot of us are trying to unpack which season we fall into, both personally and in the business. I did your quiz earlier and I'm a winter person, so quite an interesting trajectory there. But quickly, before we move to any questions, Jen, perhaps you can look through what the audience has asked. Maybe just a couple of closing questions from me while I have the chance to interrogate such a brilliant human being.
46:10 Very kind of you. Thank you so much, Jacques.
46:12 Before the ultimate brain takes over, right? So let's say, and I know there are many leaders in this session, people involved in the operations of large services firms, and many of the clients we work with, you could say they're in a summer mindset. They're flush with data, they have good analytical dashboards, and the processes and the businesses work, but to a certain extent they potentially have the blinkers on regarding disruption at the scale you talked through earlier. What's the first leadership shift you would recommend to build autumn readiness in tackling this kind of emerging AI technology?
46:57 Yes. It's basically: how do we shift away from a summer view of the world and move into an autumn kind of view? So let me first say what it certainly isn't. It's certainly not dashboards; it's not analytics; it's not about saying, look, the data shows this and that. The first thing I'd say is: for a second, strip away the narrative, strip away the AI question, strip away the technology question. The question is: what is the outcome we want to drive, and how do we move towards it? I think that's a very important piece, the first piece.
47:34 The second thing, beyond what was suggested by Gartner and the rest of the research I showed about taking a holistic approach, is to ask how we think about the long-term implementation of this. Who is the team that should be involved? As we know, AI tends to be treated as a technology, so the tech guys get involved, but we don't often think about the role of governance, and we don't think about the role of privacy, and those become issues. So how do we build a cross-functional, cross-disciplinary team? And when I say cross-functional and cross-disciplinary, I don't just mean diversity in how we might think; I mean across use cases, across ages, across departments. What does that look like? I think that's the first piece. The second thing I'd say is: what are we trying to measure, and how do we do this?
48:22 The big thing, though, is then to say, from the top, not from the middle, not from the fringes, not from the outliers of the curve: what is your clear AI vision? Jacques and I were having a conversation before, and I think one of the things that's really important is putting a stake in the ground: we are going to be doing this. So there's a clear message, not mixed messaging, and it allows folks to say: we're going to experiment, but we're also going to benchmark. Then the last thing I'd say is: adopt a pilot-to-production mindset. Yes, we've got these little pilots; let's surface them, understand what they are, get measurable gains, and move beyond pilot-type stuff and deploy them. So I think you can move very quickly and pragmatically from AI within summer, with what we have, to building very clearly and carefully.
49:10 Fantastic, Craig. And you know, in our firm, and I know in many of our clients and partners, there's a culture of experimentation and controlled experiments in the organization, right? Many firms are embracing large language models, they have AI agents running certain processes, and they've made decisions to lead with AI in certain business units, and with that comes a certain level of anxiety around how it's going to transform the organization. In your opinion, what are leading professional services firms doing to…
49:47 So I think the first is really understanding the role of the human being. Now, this is problematic, and please, I want to move beyond the fluffy narrative about humans; I'm not saying it's wrong, but beyond the fluffy thing, if you look at professional services specifically, we still have to have the best talent. The problem is that the talent mix is changing, so I would say talent now needs to sit in two primary domains. There needs to be technical talent: for services companies, what technical talent do we need? Do we need AI engineers? Do we need data scientists, people who understand the algorithmic pieces of what we're trying to build? The second is domain experts: how do we deploy this in the areas we operate in, whether for our own internal services or for the services we provide to our customers? Say we want to deploy this in manufacturing: do we have domain experts who can explain the nuances of that to us? I think there's a piece over there.
50:41 I also think there are a lot of services companies doing some really cool things. They're starting to build marketplaces, to say: how do we build modules? How do we build off-the-shelf tools for internal use? Maybe there's a business in APIs and the like. How do we create a central marketplace where we can reuse the code we already have? The other thing I see, which is really important, is running experiments at speed with the client, because a big part of this is transparency. It's about showing the client what it can do and what it can't do, and then running these experiments within a delivery model: how do I show you real-time results, how do I give you dynamic reporting, and how do I show you how to do this?
51:18 And the last thing I'd say, and this is difficult, is to think about savings beyond just the monetary, beyond just the cost stuff. Are there things like client satisfaction? I showed you an example earlier, the name of the company escapes me, of an Indian tech company where they were using this and found that customer satisfaction dropped off. So where can we use AI: in client satisfaction, employee upskilling, innovation, new business development from a materials engineering point of view, a number of those things? How do we think about this differently? I think there are some key elements there that services firms can think about beyond just the traditional technology stuff.
52:01 And just going back to some of your comments earlier: as AI starts to handle more routine tasks and analysis, what you're saying is that you need to revisit two things. You need to revisit your business model, to a certain extent, and understand how it could be augmented or potentially disrupted. And secondly, if I understand correctly, you actually need to rethink, to a certain extent, the career and organizational structure of your team, while putting guardrails in place to protect your team and your talent. Right?
52:32 Yeah. I mean, look, the traditional services model is a time-on-feet model, right? It's billable hours; we know this. I've worked at a consulting company before, and you scale that by having more people, well, traditionally it's been by scaling people, scaling output, but you need a delivery vehicle for that. The model changes here with AI, and when I say it changes, the costing base changes too, because the client now has access to tools similar to the ones you have. We see this pressure in services firms: why should I pay consulting company ABC when I can just go to ChatGPT, without understanding the nuance of that? So what I think is that the delivery model needs to change, but you also need to bring the client into your confidence: yes, ChatGPT can do this kind of stuff, but look at what we bring. We bring a wealth of domain experience, folks who have built businesses like yours, and we'll run this in conjunction with you; we'll train you and do this with you as well. So I think there's a piece over there.
53:29 The other piece is the point that you make, and I actually didn't speak about this, it's a great insight: the mobility within services firms, and companies in general, the career-ladder progression, needs to change altogether. Historically it was a time-based thing. I think a lot of what we're moving into right now, in terms of careers and professionals, is about self-discovery. It's about saying: here we are right now, and these are the jobs we have right now, but perhaps the jobs I want haven't been created yet. How do I use AI to jointly create a portfolio of what that might be, justify the case, create that role, and then go formalize it? So it turns the entire time-based model on its head, like a lot of other things, and again, I think that's predicated on speed more than anything else.
54:10 Yeah, those are great insights, and we're seeing it happen literally weekly: the evolution of AI and its impact on our organizations and our clients' businesses. It's very interesting to hear, and the positive part is that many of them are using it in practice to enhance their organizations. I think the key is: how do you protect your business model moving forward, protect the livelihood of your team, and continue to scale the firm?
54:37 Yes, absolutely right. That's exactly it, and I think that's the key piece. It's about asking: what is the moat, for lack of a better word, that we can set up? And again, I come back to this paradoxically: it's about human beings, and specifically, in this piece right now, it's about trust. Services companies still run on trust; clients still want to be able to say, you're a trusted thought partner. As much as some attendees may not believe me, AI is the thing right now, but I think it's a long-tail thing, and there's no doubt there will be other technology waves; quantum is something I'm watching very closely. Quantum then becomes decentralized, assuming you can control the temperature and so on, but I need a trusted partner to help me through this, someone who can help me think, at least in today's terms, beyond the ones and zeros: what does it mean for me specifically? Someone who knows things more intimately than I do.
55:28 No, for sure, Craig, that's amazing. I know we've got five minutes left, and Jen, I think you have a question or two from the audience that you'd like to ask. But maybe a last question from me, a loaded question about the future and your prediction, Craig. There's a lot going on in the media, right? You've got all the big logos, OpenAI, Google, Grok, Meta, publicly declaring their unwavering pursuit of general intelligence, and significant, huge investments in underlying compute by some of the biggest organizations in the world, Meta, for example, declaring multi-billion-dollar investments in compute to support the scale of AI. And I know our very own Elon Musk recently mentioned that he sees AGI publicly available as early as 2026, general intelligence, maybe not superintelligence, but a level of general intelligence. If we fast forward to 2030, and that picture is changing every month, what does the future look like for a services-based firm five years from now?
56:38 Yeah. So first of all, let me say that I think there are a lot of services industries that will disappear, and I hope I don't offend anyone right now. I'll give you a prime example; I've said it before: I think legal advisory firms are going to almost completely disappear. I think there's going to be a lot of change within legal advisory, and there are a lot of questions around tax advisory and audits. And the reason is that when you look at a rules-based system, if-statements, OR-statements, all that kind of stuff, that is essentially what AI is very good at: accelerating and doing the same kind of thing over and over again.
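To illustrate why rules-based advisory automates so cleanly, here is a deliberately simple sketch: the "advice" really is a chain of if-statements. The VAT rule and its threshold are invented for the example and are not real tax guidance.

```python
# A rules-based advisory check is literally a chain of if-statements,
# which is why it automates so cleanly. The rule and threshold below
# are hypothetical, for illustration only.

def vat_registration_required(annual_turnover: float) -> str:
    THRESHOLD = 1_000_000.0  # made-up registration threshold
    if annual_turnover > THRESHOLD:
        return "Registration required."
    if annual_turnover > 0.8 * THRESHOLD:
        return "Approaching threshold: monitor monthly."
    return "No registration required."

for turnover in (500_000, 850_000, 1_200_000):
    print(turnover, "->", vat_registration_required(turnover))
```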
57:14 What is my prediction in terms of where things are going? I think you're going to see a fundamental shake-up, a shake-up of businesses that need to reinvent the business model. I think you'll start seeing the human piece come through again. I think we're going to see a lot of, dare I say it, blood in the water, and I say that in a terrible way but also in a good way, because it allows us to reinvent ourselves. A lot of occupations and businesses are going to have to ask: what is it that we can do, or rather, what is it that a machine can't do that we can do? So I think there's a piece over there.
57:44 There's another piece that's also important. I will say, though, that I don't think the uptake will come at as grand a scale, or as quickly, as we think, because of regulation and the barriers it creates. We know for a fact, for argument's sake, in South Africa, and I'm using a South African example because many of our listeners are from South Africa, that the role of government, or at least the current belief about the role of government, is to safeguard jobs. Unfortunately, I think there's going to be a lot of discourse where we see different players in different seasons: government being summer, safeguarding the jobs of today, safeguarding mining, versus fintech startups and the emerging players in autumn saying, we need to do this. On a grand scale it's not going to be as quick as we might expect. I think you'll start seeing elements of things breaking through, and barrier-agnostic companies coming through and starting to do some stuff. But overall, I still think the one thing that holds constant, to some degree, is the role of the human being. That is a controversial statement, but for services at large it means reinvention: codification of what you can do, and how you repeat it. And you might even start seeing fragmentation, where services companies become application layers into other industries.
58:52 A very important point, Craig, and we really appreciate your time. I think we've just about run out of time. Jen, we probably have time for one or two questions. Craig, can we run over by about five minutes?
59:02 Sure, from our side.
59:05 We can take one or two questions, Craig. So the first one is: how will the convergence of AI and robotics redefine the human over the next 10 to 15 years, and what proactive ethical frameworks should futurists or governments develop today to responsibly guide this transformation?
59:26 Yeah. Oh man, this is actually one of the questions that led me to set this book up, right? So the short answer is: no one knows. And I'm going to emphasize that: no one knows. You've got folks who write books for a living; you've got people who get paid millions to stand on stage. The truth is, no one actually knows. At one stage, you know, Facebook changed their name to Meta; according to them, VR was going to be the whole thing. It wasn't as great as promised, but the truth is no one knows. So the question around the augmentation of robotics and AI, where does it go? No one can tell you. What I will tell you is where I think it starts moving:
60:00 it gets us to think about the application stack, and again about the role of jobs and what we're doing differently. The other question, in terms of what it means, is: what are we solving for? Is this a capital thing? Because then it's an easy question: when the technology becomes cheaper to deploy than the cost of hiring humans, and the decision is driven only by capital, you will undoubtedly replace the human being with machines, whether it's an actual robot, AI within an SAP system, or inventory automation; you will do that. So what is the driver? I think that's the first piece, and the short answer there is that we don't know.
60:35 Around ethics and morals and regulation, we absolutely have to do that work. I presented in March last year to the United Nations around the same framework, and you can use the same kind of framework to ask: how do we regulate artificial intelligence? The reason that's so important is that regulation, GDPR compliance, and all that kind of stuff is mostly anchored within summer. It's really good when we've got data, precedents, and use cases, like law. It's not good in winter, because there is nothing; it's not good in autumn, because we haven't got some kind of law to step onto. So what does it mean? It means that in the world of ethics, regulation, and compliance that was asked about just now, we need to be able to measure the stuff around function or form. You need to create, and have the ability to create, these kinds of support functions, or maybe even driving functions, as in spring, to say: how do we create an ethical framework to drive the development of this stuff over time? So I think there is definitely a huge area there. If I were to tell someone to think about a job, I'd definitely say: integrate questions around ethics and morals into AI. It's a hot topic right now; how do we think about that? But also, more importantly, again: what are we solving for? It's for the human being, to ensure relevance, whatever that might be.
61:49 Just on the topic of ethics, and your comment about legal advisory potentially dropping away: if legal advisory were to drop, could one infer a need for ethical advisory to fill the gap? So with AI applying the rules, there would need to be a human element to consider the ethical implications.
62:08 ethical implications. Absolutely. Absolutely. Without a doubt.
62:10 Absolutely. Absolutely. Without a doubt. So the answer is yes. I was having a
62:12 So the answer is yes. I was having a conversation with a lawyer a couple of
62:13 conversation with a lawyer a couple of nights ago. She said something
62:14 nights ago. She said something interesting. I can't remember the exact
62:16 interesting. I can't remember the exact terms, but essentially what it is is
62:17 terms, but essentially what it is is this. There's there's there's the
62:19 this. There's there's there's the context is really important, right? So,
62:21 context is really important, right? So, what is the context of? So, if a
62:23 what is the context of? So, if a homeless if a if a if a poor mother has
62:26 homeless if a if a if a poor mother has a child uh and she she steals a loaf of
62:30 a child uh and she she steals a loaf of bread or thing to feed the kid, that is
62:31 bread or thing to feed the kid, that is a crime. It's black and white, right?
62:33 a crime. It's black and white, right? So, yes, that person's guilty, but
62:35 So, yes, that person's guilty, but there's a context piece over here that
62:36 there's a context piece over here that the machine won't be able to get.
62:37 the machine won't be able to get. There's a nuance behind them. There's a
62:39 There's a nuance behind them. There's a context piece. So it's not only the the
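A toy sketch of that gap, under entirely made-up rules and thresholds: the rule engine returns the black-and-white verdict, while the mitigating context can only be flagged for human judgment.

```python
# The rule applies the statute; the mitigating context is exactly what
# it cannot weigh. Fields, rules, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    act: str
    value_stolen: float
    context: str  # free-text circumstances the rule never looks at

def rules_verdict(case: Case) -> str:
    # Black-and-white rule: taking property without consent is theft.
    if case.act == "took property without consent":
        return "guilty of theft"
    return "no offence found"

def needs_human_review(case: Case) -> bool:
    # Crude proxy: low-value cases with stated hardship get escalated,
    # because the ethical weighing lives outside the rule.
    return case.value_stolen < 50 and "feed" in case.context

bread = Case("took property without consent", 2.50,
             "poor mother stealing bread to feed her child")
print(rules_verdict(bread))       # guilty of theft: the rule is satisfied
print(needs_human_review(bread))  # True: the context demands judgment
```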
62:42 So it's not only the human element around the ethical implications downstream, but also how we apply that law in a certain context. It's about understanding where we bring in a level of interpretation, and within legal precedent we have this as well. I'll give an example: in the Western world, by and large, a younger life tends to be valued over an older life, but in the East it's flipped. So the rule of law can't be the same across the board. Yes, the law can tell us the baseline, but we need interpretation, exactly the question you asked just now, Jen. There is a human element to ask: what's the implication of that, and while this might be the case, should we defer, should we change this somewhere else?
63:22 I'm a bit of a geek, and for those engineers listening right now, hopefully you'll enjoy this: I was rewatching Star Trek: The Next Generation, the one with Jean-Luc Picard and Data, the machine. In the series, Data, as his name indicates, has all the data, but he doesn't understand the nuance; he doesn't understand the human capability, the human stuff. And there's this beautiful interplay between the human gut sense, the empathy, the warmth, the humor, and this cold, statistical data machine. It's that interplay, and that human play, to what you said just now, that I think is integral moving forward.
64:04 Jen, are there any more questions, or do we need to wrap up?
64:05 No, I think we do need to wrap up. If there are any that come through, we can address those after the fact. But yeah, thank you so much.
64:15 Yeah, and from my side, thanks very much, Jacques, thanks for having me, and thanks to Jenna and Silveroft. Folks, feel free to connect with me and carry on the conversation. And a big, big thanks to you, Jacques, and to Jenna and Silveroft; I think we need more of these forums. Again, as I said before, this is just my view, and I think there are multiple views out there. How do we get more people speaking about this? That, I think, is the key. I don't think knowledge should be centralized, and certainly no one person knows everything.
64:39 Fantastic. Thank you, Craig. I took a lot of notes, and we look forward to engaging with you further. Hopefully we'll have another chance to speak together; we'll speak soon, and thanks to everyone for joining today.