The core theme is the strategic implementation of artificial intelligence (AI) and machine learning (ML) in institutional investing, with the dual goals of enhancing operational efficiency and improving decision-making, and with robust governance and continuous education as essential guardrails against the complexities and potential pitfalls.
In every organization, internally you've got people who don't really have any knowledge of AI and machine learning who are in positions of authority, and your board has various levels of sophistication and comfort with AI and machine learning. So you're stuck with: we need to innovate here and move with the times, but also we've got to bring people along, people who have other responsibilities as well. And I think you do have to move forward with governance in mind, because one thing that will kill innovation really quickly is just poor governance and poor oversight.
>> Hey everyone, I'm Angelo Calvello, host of The Institutional Edge, a podcast in partnership with Pensions & Investments. Thanks for joining us for another episode in our series on artificial intelligence and institutional investing. And I'm pleased to say that my guest today is Mark Steed, CIO of the Arizona Public Safety Personnel Retirement System. I'm excited to have Mark on the episode. Mark is one of the few asset owners with an academic background in predictive analytics and firsthand experience integrating AI into his plan's investment decision-making. In this episode, Mark discusses PSPRS's AI use cases, implementation strategy, and future wish list. Hey Mark, it's great to have you on today's show.
>> Yeah, I'm happy to be here. This is fun. Anytime we can talk about AI, it's a good time.
>> You know, I've got to say I'm grateful that our mutual friend Mark Bombgardner, who's also a friend of the show, recommended you as a guest. And I've got to say, after talking to you in our pre-call, I'm surprised our paths really hadn't crossed previously.
>> Yeah, me too. Yep.
>> And I'd say, for me, the surprise is you're one of the few senior asset owners who has firsthand knowledge of and firsthand experience with AI investment use cases. I would have thought our approach at Rosetta Analytics would have caused our paths to cross before, but hey, here we are. Let's jump right into the topic, and the topic today: asset allocators' AI use cases and wish lists. So let's start with the use cases, man. Give me your top use cases and their benefits. And if you could, kind of take your time and break it down for me.
>> So I think there are really two main use cases. One of them is just the ability to make things more efficient, so operational efficiency. The other main benefit is really decision-making. When I think about operational efficiency, that's just using AI, and to some extent machine learning, to streamline and automate a lot of the routine tasks that the investment office executes. So that's collecting documents from proprietary data sites, getting around the two-factor authentication, going in and retrieving documents, downloading those documents, then extracting data from those documents, because all of us have PDFs, and that's really what we're mostly worried about, just the PDFs. It's unstructured data. So then extracting relevant information from that data. Just to give you an example: I'm looking at a private equity fund. They give me access to their data room. I go in there and they've got a mountain of documents: PPMs, DDQs, spreadsheets, whatever. I can go in there, automatically download all those documents, and then within those documents start to extract useful information. How many partners, how many partners to each portfolio company, portfolio company operating metrics, things about the track record, fund sizes, vintages, things like that, who the compliance officer is, anything you might want to pull out and stick into a database that you can then use to build predictive models to help inform the front-end decision-making. So that's operational efficiency, and there are a lot of other applications: writing investment memos, automatically writing those investment memos once you have a due diligence packet. So, lots of ways to use the automation features that AI offers.
And then in terms of decision-making, there are kind of two components. One is the machine learning component, which isn't necessarily different from traditional statistical techniques in some sense. Most of the time you've got structured data, you know what your data set looks like, and you want to load it in and figure out, hey, what variables matter. There's an output variable I'm trying to predict, whether it's quartile performance or just absolute performance, and you have all these input variables, and the machine learning techniques can help with that. Machine learning technically is part of AI, and the reason we have machine learning versus, say, traditional linear regression, which is what most of us are accustomed to, is that a lot of the relationships are nonlinear. In fact, most of the data we have violates the assumptions of normality, which is sort of what you need to run a lot of the traditional regression models. And when you look at small sample sizes, and a lot of us have small sample sizes in the traditional sense of statistical measures, we don't have enough data to make high-confidence claims at, say, the 5% level or the 1% level, and a lot of the data is just not normally distributed, or it's not independent. So we have all those problems, and I think machine learning helps with that part. But then there's this other avenue of AI, which is deep learning, where you really are in the traditional black box. That's where you're just pointing it in the direction of information, whether that's in a document, or it's unstructured data, or it's a data set of some type, spreadsheets, whatever, and you're asking it to tell you what patterns matter. I think that's a tremendously powerful technique that might highlight some patterns that the traditional investment office wouldn't be aware of. So that's kind of where our brain is at. I know, a long answer to that first question, but I think those are the applications I see.
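As a minimal sketch of the nonlinear, small-sample modeling described here, the snippet below fits a tree ensemble to features pulled from due-diligence documents. The feature names, the CSV file, and the choice of scikit-learn's gradient boosting are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: predicting whether a fund lands in the top quartile from
# features extracted out of due-diligence documents. Feature names, data file,
# and the scikit-learn gradient boosting model are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical structured data extracted from PPMs/DDQs.
df = pd.read_csv("fund_features.csv")
features = ["num_partners", "fund_size_musd", "vintage_year",
            "prior_fund_net_irr", "carry_pct"]
X, y = df[features], df["top_quartile"]  # y: 1 if the fund ended up top quartile

# Tree ensembles handle nonlinear relationships and don't assume normality,
# but small samples still mean wide error bars -- hence cross-validation.
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
model.fit(X, y)
```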
>> Let's go back to the efficiency that's gained. What type of AI are you using there? And it's proprietary, Mark, you know, I mean...
>> No, no. So I can say, like, all of this is just in R&D for us.
We just now got authorization to put the LLMs on our local machines. Part of the problem historically has just been data security, and it still is, right? With data security, uploading stuff to ChatGPT and things like that, what do they do with it? But we just got the authorization to put the large language models on our local machines. So for us, what we're going to start to do is point them in the direction of the documents. You can use these robotic processes to go in and access the documents; you can write those scripts internally to go in, access documents, pull them down, and set them in a specific location on your S drive. Then what we're going to do next is use the LLMs, point those in the direction of our documents. We're probably going to start with either Gemma, which is Google's, or Llama, which is Meta's; they're both fairly robust. I'm just sharing a little bit of our research and development here. We're going to point one of those in the direction of our documents and see how it does, and start to train it on our own documents locally, because you don't need an internet connection and your data is staying local, to start to build and extract that documentation just for that proof of concept. Because right now it's just manual, or you can compartmentalize it in your DDQs and just make everything: hey, just tell us who this person is, or what this is and what this is, which is what we're currently doing, so that we can then move the information over to a database that people can search. But now we're going to start to automate that with the LLMs, and that'll just be the start of it, and then we'll start to train the LLMs on writing the investment memos and things like that.
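A minimal sketch of the local-extraction step being described, assuming an open-weight model (Llama or Gemma) served locally behind Ollama's REST API; the endpoint, model tag, field names, and file are illustrative assumptions rather than details confirmed in the episode.

```python
# Sketch: ask a locally served open-weight model to pull a few verifiable
# fields out of a due-diligence document. Nothing leaves the machine.
import json
import requests
from pypdf import PdfReader

# 1. Pull the raw text out of a PDF from the data-room download folder.
text = "\n".join(page.extract_text() or "" for page in PdfReader("ddq.pdf").pages)

# 2. Ask the local model for specific fields a human can then double-check.
prompt = (
    "From the document below, return JSON with keys compliance_officer, "
    "carried_interest_pct, fund_size, vintage_year. Use null if absent.\n\n"
    + text[:8000]  # stay well inside the model's context window
)
resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=300,
)
fields = json.loads(resp.json()["response"])  # a real run would validate this JSON
print(fields)  # a human verifies these against the source document
```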
>> So, gen AI is really going to be kind of a foundational approach.
>> Yeah, I hope so. And these are exercises where you can confirm and verify what the machine is doing. We're not fussing with the black-box element yet. We're just saying, hey, did you label the right person as the compliance officer? Did you identify carry as 20%? Because some groups call it carry, some will call it the performance bonus or whatever, so there are different nomenclatures for the same thing, and the nice thing about having the machine do a lot of that is that we can confirm whether it was accurate or not.
>> Yeah. So using this generative AI, you talked about document analysis. Are you also using it for the manager selection process? I mean, there's a lot of data you have to be pulling in, and I'm not sure if you use a consultant or not, but either way there's going to be a lot of data.
>> We do. Yeah, we do use a consultant, and we've always just run parallel processes. So we do our own work, and then hopefully they come to the same conclusion, and for the most part we haven't had any issues with that. On the screening side, I'd say yes and no at the same time. What we don't have, and I think this is a change that we're making, is the ability for managers to enter information without contacting a staff member, just entering information about their fund through, say, a web portal or some other location. And I imagine what will happen for us is, after we have enough observations, the machine learning and AI algorithms can start to say these factors matter, right? They're sort of like your hunches about what matters in terms of performance, but you look to the models to help inform that, and you need a number of observations before you're going to be confident that, hey, this actually might be accurate. Then what you do is put that on your web portal and say, hey, these are the five or six things that we think matter most. Some of them you might imagine: top performance in prior funds is maybe a good indication that it might continue, or maybe not, but things like that. And you put them on a portal so that managers can enter that information, and then you have a really good way to scan for new ideas. We don't have that portal set up yet. So right now, when we have a GP that's contacted us, we're taking their information; our due diligence is really front-loaded. We send a spreadsheet with all sorts of quantitative requests, they send that back to us, and then we're crunching it through our models. So we're not quite at the point yet where we've been able to filter based on three or four criteria that matter. I think we'll probably get there soon, but you just need a number of observations before you have any confidence that the model outputs are accurate. I think when we get that, we'll start to say, okay, here's the filter. It's an 80/20 kind of thing, so there might be some false negatives, but we're not too worried about that; those are the deals we don't do that turn out to be okay. We're only worried about the deals that we do, and we need to make sure that those ones are okay.
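A short sketch of the 80/20 screen being described: score inbound funds with an already-fitted model and advance only those above a probability threshold, accepting some false negatives. The threshold, columns, and the reuse of the model and feature list from the earlier sketch are illustrative assumptions.

```python
# Screen inbound funds with a fitted model; only the top slice advances.
import pandas as pd

inbound = pd.read_csv("portal_submissions.csv")  # hypothetical portal export
# `model` and `features` are the illustrative objects from the earlier sketch.
inbound["p_top_quartile"] = model.predict_proba(inbound[features])[:, 1]

SHORTLIST_THRESHOLD = 0.6  # tuned so roughly 20% of inbound funds advance
shortlist = inbound[inbound["p_top_quartile"] >= SHORTLIST_THRESHOLD]
print(shortlist[["fund_name", "p_top_quartile"]]
      .sort_values("p_top_quartile", ascending=False))
```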
>> It's kind of interesting, you know, a theme that's run throughout your comments is this idea of verification. And the word I'll use is explainability. You want to be able to understand the decisions of the models, the LLMs, or if you're using some kind of NLP, you want to understand where the decisions are coming from. Is that just a kind of bedrock for you, like a ground truth? You need to have that verification and explainability in the process.
>> I mean, I think you want it if you can get it. You want to be able to explain how things are going, just like we do with human reasoning, right? Ideally what you want is someone to explain their view cogently, with organized thinking, adhering to the best practices of logic. But sometimes that doesn't happen, sometimes you can't get there, and sometimes you're left with these arguments where you have to say, I don't know, it's just kind of my gut, I can't articulate it. And I think that's an advantage that we have at our organization: every recommendation that's relevant we record, and you have to be very specific about what it is you're recommending or what you think will happen. So if you're a PM recommending an investment in a company or a fund or whatever, you've got to give a real clear definition of success. For example: this fund will outperform the S&P 500 by at least 2% over the next 12 months, and I'm 75% confident. Then we benchmark people, and we ask them to explain why they think that way. So there's always this ability to go back and piece together their logic. And the reason you want to do that is because it's more likely to be repeatable and you're less likely to have surprises. But sometimes you just can't articulate it. There are times as a CIO where you just feel like, this is compelling, I can't quite articulate it, or maybe it violates our traditional rules, but I still feel exercised about it, that it's going to perform. So we write these things down so that over time we can look at these gut decisions, as well as the ones that weren't gut decisions, and start to say, well, actually they are pretty accurate, so what's going on here? Maybe we can pull this apart and understand it. And I think that's the same discipline we apply to the models, which is: we want to be able to explain what's going on here. Sometimes we can't. So we're going to benchmark these things, we're going to track them, just to see, hey, if it's saying we should be 80% confident here, is it actually right 80% of the times it says that? And that does help us. But it's just a way of managing surprises, and if you can explain it, potentially you have better control over it, and that's why we're interested in it. Although, again, just like with humans, there's an element where sometimes you just can't explain it. I expect that's going to be the case with models too.
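A minimal sketch of the forecast log and calibration check described here: every recommendation is recorded with a precise success definition and a stated confidence, and hit rates are later compared to stated confidence by bucket. The record fields and bucketing scheme are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    source: str                  # "PM", "model", "CIO gut call", ...
    claim: str                   # e.g. "Fund X beats the S&P 500 by >= 2% over 12 months"
    confidence: float            # stated probability, e.g. 0.75
    outcome: bool | None = None  # filled in once the horizon has passed

def calibration(forecasts: list[Forecast]) -> dict[float, tuple[int, float]]:
    """Group resolved forecasts by stated confidence (nearest 10%) and
    return {confidence_bucket: (count, observed hit rate)}."""
    buckets: dict[float, list[bool]] = {}
    for f in forecasts:
        if f.outcome is None:
            continue
        buckets.setdefault(round(f.confidence, 1), []).append(f.outcome)
    return {k: (len(v), sum(v) / len(v)) for k, v in sorted(buckets.items())}

# A well-calibrated forecaster's 0.7 bucket should show a hit rate near 0.70,
# e.g. roughly 7 hits out of 10 forecasts -- with small counts flagged as such.
```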
>> I know that firsthand, given the reinforcement learning model we were using was a dark black box. Man, it was a challenge.
>> Yeah. And ironically, I think some of the more accurate models are the hardest to explain. If you look at some of the neural networks, there are devices you can use, whether it's what's called feature attribution or these activation layers. With feature attribution, whether it's LIME or SHAP, you can kind of pull apart how much each of these features is contributing to the output, or with the activation layers you can actually see what patterns the neural network is keying in on; with shapes, for example, it might be keying in on outlines or things like that. So there are things you can do with some of these models; I don't want to say the entire thing is a black box. Decision trees and neural networks, I feel, are on the more explainable side of AI and machine learning, but some of the reinforcement learning, deep learning, gen AI, that stuff is really hard to explain at a deep level, just because, I mean, it's effectively like trying to explain the synapses in the human brain that are firing at any point in time. They're just so fast, and there are so many layers and parameters, it's really hard to explain. So I think what's important there is you come up with a governance framework as to how you're going to approach these models and these outputs, how you benchmark them, what decisions these models are actually making, and whether they're doing them without human oversight or not, which in our shop isn't happening.
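A minimal sketch of the feature-attribution idea mentioned above, using the SHAP library on a tree model; the model, data, and feature names are carried over from the earlier illustrative sketch rather than taken from the episode.

```python
import numpy as np
import shap

# Attribute each prediction to the input features ("feature attribution").
explainer = shap.TreeExplainer(model)    # suited to tree ensembles
shap_values = explainer.shap_values(X)   # per-fund, per-feature contributions

# Mean absolute contribution: how much does each feature move the output?
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:22s} {score:.3f}")
```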
>> Really, you're building a governance framework around the use of AI is what I hear you saying.
>> I mean, yeah, you have to.
>> And documentation. You've got documentation to support the structure.
>> Yeah.
>> But go ahead. I mean, "you have to." I don't know if a lot of people are thinking this way. Some people just use it. And of course, they use it within, you know, a compliance framework. You mentioned you're putting in your own data, but it's not a machine connected to the internet, etc. But talk about that as a governance framework.
>> Yeah. So I think this is a tough one, because in every organization, and you've experienced this, internally you've got people who don't really have any knowledge of AI and machine learning who are in positions of authority, so you have that dynamic going on at your organization, and then for people like me who report to a board, your board has various levels of sophistication and comfort with AI and machine learning. So you're stuck with this awkward position of, hey, we need to innovate here and move with the times, but also we've got to bring people along, people who have other responsibilities as well. So how do we bring these groups together? And I think you do have to move forward with governance in mind, because one thing that will kill innovation really quickly is just poor governance and poor oversight. So oversight, and I think it's kind of paradoxical, is as much part of the problem as it is part of the solution. It can be onerous and prevent any growth or innovation from occurring, and it can also be too lax, or it can be non-existent. So for us, we've started to say, look, every decision is written down and tracked. That's point number one, and that goes a long way in debiasing people in the discussion, because now you have kind of an objective score and it's not so subjective. You can start to say, hey, here are the machine's predictions at the 75% threshold, here they are at 70, at 65, and we start to track it. We can say, hey, how many forecasts does it have at 70%? Well, we've done 10 of them. Okay, small sample size, but maybe it's right; we'd expect seven out of the 10 to be right if it were appropriately calibrated. And that goes a long way. If after you've done, say, 50 of these you can say, look, it's about 70% accurate at 70% confidence, that gives you some level of comfort. The other thing you can do is start to give it simple tasks that, like I said earlier, you can just verify: is it identifying the right things in documents, running certain analyses, doing value attribution bridges and things like that, say in private equity, that you've done yourself, and just double-checking its work. That's one way to build it. But alongside those hard internal rules, hey, we're benchmarking decisions and things like that, I think you also have to educate your constituents, whether that's your board, your executive directors, or other people on your investment team, and create that fluency with the vernacular to start to get them comfortable with it. Because it's like going to another country where you're listening to people talk and you don't understand what they're saying; there's going to be a natural level of distrust. If you don't understand the language of AI and machine learning, there's just going to be a natural distrust. So I think that's another prong to the approach.
>> Do you use, like, workshops to build that educational level, or does that occur, for example, in a board meeting? I mean, how do you get them fluent?
>> Right now, it's mentioning it on occasion during the board meetings. Our board meetings are fairly svelte, you know, there's not a lot of Mickey Mousing going on; we're talking about performance and governance and so on. On occasion I'll do my best to drop in a nugget about how we're doing things and why we're doing it. And I suspect we've been waiting to get the LLM initiative launched, and once we get there I think we'll have a lot more to talk about. Probably what we'll do is some sort of semiannual education with the board, to formally set the stage for the ecosystem and the various aspects of AI and how it's impacting their day-to-day, in ways that they can understand, and then talk specifically about what we're doing with it and what decisions it's making. Because I've got some trustees who are really comfortable, who like the idea that we're moving in this direction and are comfortable with my background and experience in us doing that, and then others for whom "cautiously optimistic" would be a generous interpretation; they're a little more cynical. And you have articles, like the one yesterday in the Wall Street Journal, about AI overriding its code so you can't shut it down, and that kind of sets you back.
>> Yeah, exactly.
>> So I suspect, as we broadly roll it out, it'll be under very controlled circumstances, and then when we get to the predictive side of it, letting it make decisions, we'll run those in parallel with the human. And this is just the deep learning side of it; we're already using the machine learning algorithms and things like that, but those are, I think, easy to explain and interpret. It's the deep learning, where you're basically just giving it to a black box and saying, "Hey, here's a bunch of due diligence material. Tell me which fund is going to outperform the other funds," or "What variables matter most?", where I think you have to run those in parallel with the human decisions, and over a number of observations, to get any comfort.
>> Let me go back to use cases for a minute. You know, I hear from managers, and I also hear from allocators, that they're looking at sentiment analysis, trying to scrape the web and detect sentiment, because sentiment is pre-price, you know, it's there before prices manifest. What do you think about this? And whether you're using it or not, just intellectually, is that a tool that you want to focus on?
>> Well, I'm a little more cynical, because I don't know that I come from the perspective that sentiment is sort of pre-trade. I think you can argue that causation goes the other way: people make the trades, and then they've got to talk their book and create sentiment if it's not going the right direction. But anyway, let's just assume that sentiment is sort of pre-trade. I looked into this a number of years ago, probably 10 years ago. I mean, somebody has probably been doing this for 20 years, but it felt like it really entered the mainstream in predictive analytics circles about 10 years ago, and back then everyone was having trouble with, hey, if I'm looking at a review for a vacuum cleaner and it says this vacuum cleaner sucks, is the model interpreting that the right way, as a good review or a bad review? So there's that. But I think my bias would be to say I'm not so sure the sentiment is actually being reflected in the aggregate levels of flow. I think there are a lot of regulatory requirements that are also driving allocation decisions, in terms of what you have to buy or sell to rebalance to stay in compliance, and who's got to buy Treasuries because they have to have a certain amount of credits at this level. So I've always been a little dubious of sentiment analysis, because I just feel like there's too much erosion between the sentiment and what's actually happening in the trade. I mean, I think it's relevant, but I don't know how relevant.
>> You know, I agree with you. Historically it's been around for a little bit. What's kind of gotten my attention, just as an aside, is the amount of disinformation that is out there.
>> Right.
>> And it's very difficult to detect, you know, for a machine to detect the truthfulness. And it's also difficult for humans to do it, especially when you have to do it at a certain velocity. If you're looking at X feeds or Bluesky feeds, there's just so much disinformation. I think it erodes the benefits of any kind of possible sentiment analysis.
>> Yeah, I think that's right, for sure. I mean, the media with the highest periodicity, right? It seems like the services that are spitting out information the fastest are also the ones that have the most misinformation. So I think it does make that job pretty hard.
>> Just shifting gears for a second. We've talked a little bit now, you've got these use cases, and you talked about governance, making sure there's good governance and documentation around it. What are, like, the two or three other key features that you need as an asset owner to actually do this stuff? I mean, okay, governance, and I'm going to guess you're going to tell me talent, because you can't do this alone given your full-time job, but what are the few things you need to actually accomplish this?
>> Yeah, you do have the multiple-dependency problem, which makes it hard to get off the ground. So you do need the talent in house. We have two data scientists as part of our investment program. One of them was a younger investor turned data scientist, who years ago went back to school and got an advanced degree; he came from the risk standpoint, learned data science, and has been our data scientist for probably seven or eight years at this point. Another one came from food science, was just a data scientist naturally, and is now learning investments. I think it's important to have both of those represented on your team. And I think it's okay to have people on your team, at a certain point, who don't know what a stock or a bond is, because I think that's part of the advantage. And again, we haven't talked about this, but bias in these models is important. I think you're less likely to have bias, in the traditional sense of human bias, with some of the deep learning, but certainly with the traditional statistical techniques and machine learning there's bias in what information you give the models to look at in the first place. So you can't convince me that these models are unbiased. That's why I think it's important to have that discipline on the team, so you have people who have less of a traditional investment bias in terms of what they think should matter and are just looking at raw data.
You also need data, and that's another big problem that we all have. I mentioned the PDFs: most of what we have is unstructured data locked up in PDFs, and some human is going to have to go through and manually extract information and put it somewhere. So you have unstructured data, and you also just don't have a lot of it. The biggest problem for most of us is with the alternative investments; that's where the information is most difficult. A lot of it's in PDFs, a lot of it's locked down, because your partners aren't going to give you Word docs, or Excels that aren't locked down, so you can't change it after they give it to you. So you've got a lot of data problems on that side, and most of us don't have a large data set by any means. In terms of alternative investments, you're looking at a handful of portfolio companies and private equity funds that maybe update quarterly. But again, like I said at the start, in a statistical sense that's a pretty small sample size. It's not like you've got 50,000 or 100,000 people using credit cards and a huge data set you can extract insights from. So you have talent and data, and compute is probably the other one. Once you have the data, you've got to have some pretty serious horsepower. I mentioned our initiative with the LLMs: take Llama, the Meta open-source LLM, which you can run locally through something like Ollama. If you use the 70-billion-parameter version, a parameter being basically one of the model's learned weights, you probably need about 96 gigs of memory, so that's a pretty robust gaming PC, probably, that can handle it. So it's serious compute. And that has a context window of maybe 150,000, which is roughly how many tokens, a token being basically a word or a subword, it can ingest and synthesize at the same time. So if you think about a 10-Q or something, that's 100 to 200 pages, that's probably 50 to 75,000 tokens. So you would need the more robust version of the model and a pretty strong computer just to work through a 10-Q. If you're thinking about multiple documents, then obviously the requirements just extrapolate from there.
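As a rough back-of-envelope sketch of the sizing discussed above: the bytes-per-parameter figures, the overhead factor, and the words-per-page estimate below are generic rules of thumb, not numbers from the episode.

```python
# Rough sizing sketch for running a 70B-parameter open-weight model locally.
PARAMS = 70e9  # 70 billion parameters

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    # Add ~25% headroom for the KV cache, activations, and runtime overhead.
    print(f"{label:5s} weights ~{weights_gb:,.0f} GB, with overhead ~{weights_gb * 1.25:,.0f} GB")

# Token estimate for a 10-Q-style filing: pages x words/page x tokens/word.
pages, words_per_page, tokens_per_word = 150, 400, 1.3
print(f"~{pages * words_per_page * tokens_per_word:,.0f} tokens for a {pages}-page filing")
```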
>> But you're talking about doing this locally; you're not talking about doing it in the cloud for now. Am I correct?
>> For now, yeah.
>> Yeah. That must be a security issue, you know, going into the cloud, I assume.
>> Yeah. I think while we investigate the cloud security and get our arms around the enterprise solutions, what we're doing right now is basically R&D. Can we get proof of concept? It's not really going to cost us anything to do this; it's open source, put it on your machine. And we're actually working with one of our former PMs who retired last year, who's a computer scientist, and who said, hey, actually, I think we're on to something here, this is cool, I just want to do that full-time. So he's probably going to be consulting with us so we can work on these use cases with the LLMs, and if we get good proof of concept there, using the LLMs in the ways that I explained, then I think we'll go to the enterprise solutions and start to see if we can get our arms around the security there, because we probably will need more compute.
>> Yeah. I might have to ask then, what's next? I mean, you're kind of building this library, and it has different applications, but let's go to the wish list.
>> Yeah. Yeah.
>> Let's assume that you find satisfaction in these tests and these early implementations. What would you like to see AI do for you, given you're the CIO of a very large public plan?
>> I'm certainly interested in the efficiency side of it. I want my team removed from these high-volume but low-value-add tasks: fetching documents, extracting data from those documents that we just want to use for reporting purposes, and then a lot of those become variables that we feed into the predictive models. At the very least, the efficiency side is crucial, and I'm actually surprised, when I have conversations with colleagues, how many of them are very hesitant to use AI, because most of us have really spartan staffs. I think, again, there's just a bias against it, but it can save a ton of time, so I would think that institutional investors like us would be at the forefront of this, because we all have pretty small budgets. So I think where I see this going is: we'll ask a GP for a certain set of information, information we'll know is pretty highly relevant. It'll be a pared-down data request from us, based on the feedback from our models that will have analyzed all of the funds we've invested in to this point, and we'll ask a pared-down set of questions that matter. They'll answer that, so they're not going to be responding to numerous data requests from us. We'll ask for access to their data room, we'll get it, then the model will pull down all the documents, extract all the relevant data, and conduct all the analyses, and we're doing some of that already, and then it'll write the investment memo, and that will be reviewed by the PM and the investment team internally. Then we can just sit around and talk about what the information means, not have to go looking for it, fetching it, editing, things like that. So that's where I see us going.
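A minimal sketch of that end-to-end flow as a plain pipeline of steps, each reviewed by a human before anything moves forward. Every function here is a hypothetical stand-in, not PSPRS's actual tooling.

```python
from pathlib import Path

def fetch_data_room(dest: Path) -> list[Path]:
    """Stand-in for pulling every shared document down from the GP's data room."""
    return sorted(dest.glob("*.pdf"))

def extract_fields(doc: Path) -> dict:
    """Stand-in for pulling structured fields (carry, fund size, vintage, ...) from one document."""
    return {"document": doc.name}

def run_analyses(records: list[dict]) -> dict:
    """Stand-in for track-record and value-attribution analyses over the extracted records."""
    return {"documents_reviewed": len(records)}

def draft_memo(analysis: dict) -> str:
    """Stand-in for the first-draft investment memo the PM and team then review."""
    return f"Draft memo covering {analysis['documents_reviewed']} documents."

def diligence_pipeline(workdir: Path) -> str:
    records = [extract_fields(d) for d in fetch_data_room(workdir)]
    return draft_memo(run_analyses(records))  # humans review and make the decision

print(diligence_pipeline(Path("data_room_downloads")))
```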
>> You know, to me it sounds like you're building kind of a multi-agentic system in the future.
>> Yeah, very much could be. Yep.
>> Yeah. I mean, I could see where you've got one agent doing kind of the assembly of the information, another agent reading, and then there's a compliance and governance feature in there.
>> I mean, it's, you know, I wrote a piece on this. I think you wrote about the future of...
>> Investing. Well, yeah, I think it's the agent hospital, right? The white paper that you wrote about? And I could totally see that, where you have one agent that's doing data recon, you have another one cleaning data, you've got another one writing memos, and then you've even got a hierarchical system, right, where what they're doing then reports up to the humans.
>> Yeah. I mean, we're a ways off from that conceptually. I love talking about it, but there are so many barriers to get there.
>> Yeah. So, Mark, I want to wrap this up and say thank you, but I'm going to try to do a quick summary. And that is, first, your approach to this: your use cases are built around two things. One is, let's call it improved efficiency, and the second is improved decision-making.
>> And that's a fair summary. That's fair. I mean, those are the two big use cases.
>> And underneath it, you've talked about specifics as they relate to analyzing opportunities, structuring unstructured data in a way that can be read, and kind of getting to the point where you have an AI assistant. At this point, clearly, you've got a governance structure, you've got your own knowledge, you've got some data science talent around you; you're kind of native in this space, but your colleagues, as you point out, are not quite there yet, and I've always scratched my head wondering about that first piece. You've got a limited headcount, limited budget. Man, if you could use something within a very constrained environment, it'd be a good thing.
>> Yeah, that's right.
>> So...
>> That's right. Yep. Nailed it.
>> I'm glad I got it. But I've got to ask you my final question for all my guests: what's the worst investment pitch you ever heard?
>> Yeah. Look, my glib response to that is the worst investment pitch is one that's never given. If you have a chance to take a swing, take a swing. Now, that said, I was part of a pitch where somebody took me up on that offer, and it was kind of awkward. This was 15 years ago, and it was an investment into Mexico. The BRICs were kind of hot back then, so I was like, all right, let's hear this. And it wasn't so much an investment in Mexico as it was just a Formula 1 racetrack in Mexico, I think in Monterrey. And it was one of these where you kind of wanted to hear it out, because, well, okay, every idea sounds crazy until some of them are proven and actually aren't that crazy. But this one was crazy, and I just thought, guys, we're not... They had offered to put us in a Formula 1 race car to test it out. I don't know what that had to do with anything. We didn't take them up on that offer. So we said no, but they also then asked for the pitchbooks back, because they were beautiful, really high-gloss covers, you know, they had everything. I mean, they had just planned this out. And when we said no right then and there, and normally we go back and think things through, but I was convinced this was a no, we said, "Hey, it's not something we do." And they said, "Okay, well, thanks for your time, but by the way, can we have the books back? We only have three and you have our three." So I thought, well, you know, bless their hearts for trying.
>> It's a tight budget they're on. Yeah. I'm a little concerned about how much runway they have.
>> Yeah. Exactly.
>> Well, this is cool. Thank you very much again, Mark. I enjoyed it. I'm glad that our paths have crossed, and I certainly appreciate you sharing your knowledge and experience.
>> Yeah. Thanks, Angelo. I appreciate the invite. It was fun.
>> Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel; links are in the show notes. If you have any questions or comments on the episode, or have suggestions for future topics and guests, we'd love to hear from you. My contact information is also in the show notes. And if you haven't already done so, we'd really appreciate an honest review on iTunes. These reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrup family for providing us with music from the Super Trio. We'll see you next time. Namaste.
>> The information presented in this podcast is for educational and informational purposes only. The host, guest, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guest are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.