Artificial Intelligence and Aerospace Power: Industry Leadership Insights — Ep. 255
Welcome to the Aerospace Advantage
podcast brought to you by PenFed. I'm
your host, Heather "Lucky" Penney. Here on
the Aerospace Advantage, we speak with
leaders in the DoD, industry, and other
subject matter experts to explore the
intersection of strategy, operational
concepts, technology, and policy when it
comes to air and space power.
So, today we've got a special treat.
Senior leaders from the AI company
Primer. They're joining us to talk about
the industry perspective when it comes
to AI and aerospace power. We can dream
up cool applications all day long, but
it takes real engineers and companies to
deliver on those visions. Somebody's got
to build it, but more often than not
nowadays, industry is moving faster than
the Department of Defense. Civil
companies can move at the speed of
market, not defense bureaucracy. They're
not slowed down by the defense
acquisition process or told exactly what
and how to build it. Instead, they see a
need in the market. They see a
capability gap, a problem, and they have
to jump faster than their competition to
get there and deliver. So if you think
about it, capitalism shares a lot of the
innovation dynamics and competition that
the military faces. So with that, I'd
like to introduce Sean Moriarty and Leonard Law. Sean is the CEO of Primer, and his experience ranges from being the chief executive officer of companies such as Leaf Group and Ticketmaster to being the chairman of Metacloud, among so many others. Sean, it's great to have
you here.
Thanks so much for having us. We're
thrilled to be here and looking forward
to the conversation.
>> Thank you. And Leonard Law is the chief
product officer at Primer. Welcome to
the podcast, Leonard.
>> Thanks for having us.
>> So, gentlemen, to start things off,
please introduce Primer for our
listeners. The AI market is really
active right now. So, where does Primer
fit into the mix and what is your
differentiator? We'd like to help our
listeners get to know you better and how
you connect to the war fighter. And I
think a huge piece of this is
understanding the kind of AI that you
bring to bear.
>> Sure. So at Primer, we originally
incubated out of In-Q-Tel. Our bread and
butter is to make sense out of massive
amounts of unstructured data in near
real time or certainly as close as we
can get to that. And at our core, we've
got deep skills and experience in
natural language processing. And we
integrate models, both proprietary
models which we've developed and
publicly available LLMs to bring speed,
power, and accuracy to analysis for the
customer. We're an enterprise software
product shop at our core. And we work
backwards from mission need, which means
that when we build our solutions, we're
really thinking through that enduser
experience and mission set. And the goal
really is to allow people to be much
more effective in their jobs. And if you
think about the personas that we serve,
analysts, operators, targeters,
researchers, leaders, they're trying to
make sense of a world around them that
is moving very very rapidly. They're
inundated with data, whether it's
proprietary data sources or open-source, publicly available information. And they need to be able to make sense of that and make decisions they can have confidence in on that basis. And so we
build the tooling to allow people to
achieve that.
>> I think it's really important that you
emphasize that you're there to really
enhance the enduser experience and their
ability to make decisions. You're not trying to replace those end users, because what you bring to the table is the ability to access all this unstructured data, which we'll define for our listeners a little bit more in a moment, and to make sense of it so that we can improve our decision processes. That's fascinating. Leonard, what's your perspective?
>> Yeah, I think that's right. We're here
to supercharge those analysts. And as Sean said, our focus is really on building practical, trustworthy AI that helps provide speed, power, and accuracy to those mission-aligned workflows that our customers are trying to perform. And so we work very closely with them to understand their tradecraft and make sure that we are building AI that is aligned to their workflows.
>> Yeah, you're not replacing people,
you're making them better. And I think
that really gets to what the true value
proposition of AI is enhancing human
performance and enhancing mission
outcomes. And I think a lot of folks
when they think about AI, they get
confused. They think, "Oh, these things
are going to create robots or they're
going to replace humans. They're going
to replace human cognition." And they're
not coming at it the right way. I really
like your approach there.
>> One of the things that's really important to us is exactly that point, which is: if you look at the preconditions for someone to do their job, that work is often not value-add in a strategic sense. It's necessary.
So, oh my gosh, I've got all these
sources of data. I've got to figure out
how to structure it to correlate it to
look for patterns. And if you're doing
that via manual inspection, it takes
forever. And what we really want to be
able to do is take our software and
effectively do all of the heavy lifting
that is nonstrategic,
right? So that that person can actually
bring their trade craft, their
creativity, their knowledge, and their
imagination to bear from the very
beginning. One of the things we say for
the customer is we want you to be able
to start your day where you used to end it.
>> That is a great tagline, but it's also something that matters and is really meaningful for the people who are actually doing the job. You've mentioned unstructured data, as have I, a couple of different times. Can we just take a quick break and define what you mean by unstructured data, and why your ability to really integrate it and make meaning out of it is so important and part of your magic sauce?
>> Leonard?
>> Yeah. So when we talk about unstructured data, what we're talking about is the universe of human-readable document or narrative data. This can come in the form of human intelligence reports. This can come in the form of publicly available information like news and social media. This can come in the form of things like NOTAMs or after-action reports. And so we are working through all the vast quantities of data that are deluging our customers today and trying to help them find the insights buried within all that unstructured data. So we're not really working with things like databases and tables; we're focusing on those narrative structures that have a lot of hidden meaning that's often hard to uncover for our human analysts and operators.
>> And it's hard to uncover because they end up drowning in the data. The reports just get stacked so high, and while human cognition has the ability to make meaning out of the words and information it receives, it's just too much, right? And what a lot of other AI approaches require is for that data to be cleaned and curated into the same format before it can be used. That's one of the reasons this unstructured data matters: all the information you have is fairly weird. It's all a little bit different. It's cats and dogs. But your ability to still make meaning out of it is, I think, something very unique to what you bring to the table.
>> One of the things I'll call out, so
imagine a world where someone is getting
inputs from several thousand different
disparate sources, right? But they have
a very good understanding of what they're
actually looking for within that. We can
in a matter of seconds parse all of that for them and surface all of the documents that are relevant. We can produce a
summary of those documents. And that customer is always only one click away from the original source, so they understand provenance. They have the ability to visually inspect and assess that primary source as they go. So you're getting the benefit of the speed of the machine, you're getting that summary produced for you, but you're also never far from the document that informed it. You're always one click away.
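To make that pattern concrete, here is a minimal sketch in Python of the parse, surface, and summarize loop described above, with provenance carried alongside every summary. All names and data are hypothetical; Primer's actual pipeline is proprietary and far more capable.

    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        source_url: str   # provenance: the original, primary source
        text: str

    def surface_relevant(docs, query_terms):
        """Parse disparate inputs in bulk and keep only the relevant documents."""
        terms = {t.lower() for t in query_terms}
        return [d for d in docs if terms & set(d.text.lower().split())]

    def summarize(docs):
        """Stand-in for the generative step; a real system would call an LLM here."""
        gist = " ".join(d.text.split(".")[0] + "." for d in docs)
        sources = [(d.doc_id, d.source_url) for d in docs]  # provenance preserved
        return gist, sources

    docs = [
        Document("r1", "https://example.org/report-1",
                 "Drone activity increased near the border. Payload unknown."),
        Document("r2", "https://example.org/report-2",
                 "Harvest season begins early this year. No security impact."),
    ]
    gist, sources = summarize(surface_relevant(docs, ["drone", "payload"]))
    print(gist)      # the machine-speed summary
    print(sources)   # the user stays one click from each original document

The design point is the last two lines: the machine does the nonstrategic heavy lifting, but every line of output remains one click from its primary source.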
>> That's amazing because that really
allows us, as you mentioned, through the provenance, to dig a little bit deeper, get additional context, and, most importantly, avoid hallucinations.
>> On the hallucination part, we've done
a lot of work. We've got a proprietary
implementation built off of RAG, which we refer to as RAG-V, where effectively
what we're doing is running that
generative output back through the
system to test and assess claims. And
what that allows us to do for the
customer is to make sure that what we're
producing has a very high accuracy rate.
And anything that's produced that is
contested is going to actually go back
through that system. So it effectively
can be adjudicated. More importantly,
it's actually flagged for them. So they
actually can see where there's any sort
of conflict or where in fact the machine
may have produced something that's
inconsistent with a reference data set.
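As a rough sketch of that verify-and-flag loop (not Primer's actual RAG-V, whose internals are proprietary), the shape is: take the claims in the generated output, re-check each one against the retrieved references, and surface anything contested for adjudication. The supports and contradicts callables below are toy lexical stand-ins for model-based entailment checks.

    def verify_output(claims, reference_docs, supports, contradicts):
        """Run generated claims back through the reference corpus.

        `supports` and `contradicts` stand in for model-based entailment
        checks; here they are simple callables over (claim, document) pairs.
        """
        verified, flagged = [], []
        for claim in claims:
            pro = [d for d in reference_docs if supports(claim, d)]
            con = [d for d in reference_docs if contradicts(claim, d)]
            if pro and not con:
                verified.append(claim)
            else:
                # Contested or unsupported: surface to the user for adjudication.
                flagged.append({"claim": claim, "for": pro, "against": con})
        return verified, flagged

    docs = ["Report A: unit strength 500", "Report B: unit strength 300, moving north"]
    claims = ["unit strength is 500", "unit is moving north"]
    supports = lambda c, d: c.split()[-1] in d            # toy lexical check
    contradicts = lambda c, d: "500" in c and "300" in d  # toy conflict check
    ok, contested = verify_output(claims, docs, supports, contradicts)
    print(ok)         # ['unit is moving north']
    print(contested)  # the strength claim is flagged, with sources on both sides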
>> That's very interesting and very
important when we begin to talk about
intelligence and intelligence analysis.
Which brings me back to the genesis of Primer from In-Q-Tel. And Sean, you had mentioned this a little bit earlier, but In-Q-Tel is basically a nonprofit, venture-capital-style technology incubator for the intelligence community, or IC for those who like to sound like insiders. The concept behind In-Q-Tel, back in 1999, was that the CIA and the rest of the intelligence community realized Silicon Valley was moving really fast and basically had much better tools than the Intel community did. So, In-Q-Tel put the IC back in the lead when it came to advanced technology. And nowadays that speed and
service is core to your origin story and
core to your ethos. Do you want to talk
more about that, and how that's connected you not just broadly to the IC but
to the war fighter and some of the
projects that you've already fielded?
>> Yeah. So I think the IC astutely realized, to your point, the rapid pace of innovation in Silicon Valley, and that it was going to be important to seed these high-potential, best-of-breed companies that were working on problems very well suited to many of the challenges the IC was grappling with. And certainly, you know, if you think about the massive amount of unstructured data in the world, and then the explosion of that data with, for example, the rise of social media and news sources, and also the increasing knowledge that there was extraordinary value in OSINT, and, you know, you had an intelligence community for decades that was relying substantially on proprietary information, I think companies like
Primer are really important in solving
that problem and certainly you know the
analyst world has changed dramatically
because of that explosion of data. Our approach within the IC has been to really get an understanding, again, of
that end user and the challenges that
they're grappling with, and always
working backwards from user need. And
we're a product company not a service
company. We provide a very high level of
service including where appropriate
working side by side with the customer
in whatever environment they're
operating in. But the true north is to
be informed by customer need, but then to productize that: to build a platform and build extensible products that can
solve those needs as they evolve. And
that's critical because the world we're
in is changing almost by the year.
Whether that's the capabilities of large
language models or new needs that are
coming to the customer on the basis of
what's happening. If you go back, you
know, a decade ago, your social media
consumption and the way these platforms
were used was very different than it is
today. Now there are sources where
nation state actors or folks working on
their behalf can effectively use these
channels for messaging. They can use
these channels to effectively wage, you
know, asymmetric communications warfare.
And being able to understand those
things in real time is increasingly
critical. And so we really work to
provide the tooling so that people can
not only understand the world around
them, but who's seeking to influence
narratives across the globe and how that
impacts global security and national security.
>> Fascinating. Let's pivot from the IC and
talk about how you think your products
could be used for the Air Force and
Space Force.
>> Sure. So, if you think about the vast amount of data that is relevant to those mission sets, and also the fact that, you know, the Space Force is newly stood up, they're going to be grappling with massive amounts of information, whether that's related to their peer competitors or adversaries. If you think about one of the big issues, a use case we're working on right now is the counter-UAS mission set. You've got a lot of stuff in the air, and you need to understand what is where: everything from what a particular drone type in the air is, to what's its payload, how long it can remain in flight, and what the activity looks like not only over a timeline but against a map. You know, where are we seeing increasing or decreasing activity? What might that mean? Collating and synthesizing that data is a really tough task, but you can't achieve situational awareness without it, and we have the ability to provide tremendous insights very quickly in this emerging world.
>> So that's fascinating. Now, counter-UAS is a huge priority for the Air Force. What exactly are you doing in that domain?
>> We're helping to shift the paradigm in
the counter-UAS mission by moving us from a reactive posture to a more proactive, left-of-launch approach. We've
all seen how drone threats are evolving
rapidly, especially in conflict zones
like Ukraine and Israel. These regions
have become testing grounds for new
drone tactics, from swarm coordination and kamikaze UAVs to improved payloads
and evasion techniques. And
increasingly, we're seeing the use of
DIY builds and modified commercial
drones, which are cheap, adaptable, and
very hard to track. There's a wealth of
real world data out there that national
security and military stakeholders can
learn from if they have the right tools.
And that's where we come in. We fuse
open source intelligence like social
media, news, and technical forums with
proprietary reporting streams to provide
a full spectrum view of UAS threats as
they emerge. Our natural language
processing technology constantly ingests
and synthesizes unstructured data to
uncover emerging tactics, techniques, and procedures: spoofing tactics,
payload innovation, UAV platform
modifications, and evasion patterns.
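As a toy stand-in for that NLP step, the keyword rules below play the role that trained models play in a production system: tagging incoming open-source posts with the UAS tactics, techniques, and procedures they mention. The categories and keywords are illustrative only.

    # Keyword rules as a toy stand-in for trained NLP models: tag each
    # open-source post with the UAS TTP categories it mentions.
    TTP_KEYWORDS = {
        "swarm_coordination": ["swarm", "coordinated"],
        "spoofing": ["spoof", "gps jamming", "decoy"],
        "payload_innovation": ["payload", "munition"],
        "evasion": ["low altitude", "terrain masking", "evade"],
    }

    def tag_post(text: str) -> list:
        text = text.lower()
        return [ttp for ttp, words in TTP_KEYWORDS.items()
                if any(w in text for w in words)]

    posts = [
        "Forum chatter: modified quadcopter flying low altitude to evade radar",
        "News item: coordinated swarm of DIY drones with an improvised payload",
    ]
    for post in posts:
        print(tag_post(post), "<-", post)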
This isn't just about intelligence,
though. It's about gaining operational
advantage. By automating discovery and
minimizing manual review, we shift the
burden from data collection and manual
triage to actionable intelligence,
enabling C-UAS professionals to spend less
time sifting through information and
more time delivering insights that drive
interdiction and policy decisions.
We equip agencies to proactively
identify emerging behaviors, assess
capabilities like endurance and payload
class, and recommend effective counter
measures. Whether it's securing borders, protecting critical infrastructure, or staying ahead of drone-enabled smuggling and espionage, we help mission stakeholders stay left of launch and
stay ahead.
>> This capability sounds
really important for battle managers,
for mission commanders, for component
commanders and planners too who are
looking to optimize what they're going
to do to maximize their mission effect
in real time as well as for the next go.
So there are lots of different tasks that AI can do for different missions, and those algorithms have to be specialized for their specific tasks. So these tools
are fairly specialized and although one
can knit together different algorithms
to execute more complex tasks, so
they're stacking algorithms, we really
should avoid talking about AI as a
catch-all term. So what do you see as
your sweet spot within this AI world?
>> I think the sweet spot for us is, again, going back to understanding that customer need with respect to discovery and analysis, and building end-to-end tooling which is tradecraft-aware: how do people work every day, and what are their existing workflows? You know, I don't think anyone, whether it's an analyst, an operator, a targeter, or a researcher, needs another narrowcast tool in their portfolio. What they really need is software that can actually take them from beginning to end and/or plug into existing systems and platforms. Right?
If you think across, you know, the
Department of Defense or the IC, you
talk about deploying into legacy
environments and platforms that people
are working with every single day. And
you know, being able to deploy into existing platforms or stand up something new is an evaluation we make, really driven by
what is going to be the most powerful
implementation for the user to be more
effective in their job. I would call out, you know, again, that we are model-agnostic. We
recognize that depending on the use
cases, there are going to be various
models that are more or less suited to a
particular task. And that can be
informed by speed, precision, accuracy,
but also total cost of ownership. Could
also be the environment you're deploying
into, which is someone may be operating
in an environment that is compute poor,
and they need as much speed, power, and
accuracy as they can get in that
environment. You may be able to achieve that with a smaller model that's actually operating on a laptop or a laptop-equivalent device and form factor. And so, you know, that really informed our design principles, I would say: modularity, the ability to deploy in any environment and deliver maximal signal from noise within that environment, and giving the customer the ability to either work within their existing platform or use our software de novo on a standalone basis, and also to tailor the models that they're leveraging to the task they're trying to achieve.
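A minimal sketch of that selection logic, with a hypothetical model catalog and made-up numbers: choose the most accurate model that fits the environment's compute and cost envelope, falling through to a laptop-class model when the environment is compute-poor.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModelProfile:
        name: str
        accuracy: float           # benchmark score, 0 to 1 (made-up numbers)
        min_gpu_gb: int           # compute the model needs to run
        cost_per_1k_tokens: float

    CATALOG = [  # hypothetical catalog; real figures vary by model and vendor
        ModelProfile("large-hosted-llm", 0.92, 80, 0.0100),
        ModelProfile("mid-size-llm", 0.88, 24, 0.0020),
        ModelProfile("small-edge-model", 0.81, 0, 0.0001),  # laptop-class
    ]

    def pick_model(available_gpu_gb: int, max_cost: float) -> Optional[ModelProfile]:
        """Most accurate model that fits the compute and cost envelope."""
        feasible = [m for m in CATALOG
                    if m.min_gpu_gb <= available_gpu_gb
                    and m.cost_per_1k_tokens <= max_cost]
        return max(feasible, key=lambda m: m.accuracy) if feasible else None

    # A compute-poor, disconnected environment falls through to the edge model.
    print(pick_model(available_gpu_gb=0, max_cost=0.001).name)  # small-edge-model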
>> Perhaps to add a little bit of
specificity as well in terms of our
sweet spot in AI: you know, Primer has a legacy in this NLP space. And so we've long used proprietary, homegrown NLP models, but increasingly also some of the more advanced open-source and proprietary LLMs, to do exactly what Sean said, right, which is to help our users find and understand insights within the vast troves of data that are out there and ultimately use those for mission
effectiveness. And so that's really our
goal.
>> So, Leonard, it sounds like the
products that you're delivering are,
first of all, they're bespoke to what
that individual user is doing from the
beginning to end because you're looking
at their entire tradecraft and what
they do through the course of the day to
be able to achieve the particular
outcomes that they're interested in. But
what was fascinating about what Sean had said was that you're not just one more thing that a human is going to have to work with while balancing a bunch of different new AI tools; you can actually integrate and glue those tools together to streamline the workflows, improve mission outcomes, improve decisions, and accelerate those decisions as well. And that you actually
take into consideration the context and
environment that they're operating in.
That's uh that's really unique.
>> The capabilities we build out, for example, are deployable and capable of being leveraged via a very robust API. And so
again, you know, the ability to deploy
into an existing platform in the right
place within their workflows, not just
the environment, but the workflows
themselves, is critical. Right? Our
job is to make it easier for people to
get their job done. And in many cases,
that requires meeting them exactly where
they are. If they're spending their time
on a particular platform and they need
our capability within it, the best thing
for them often is for us to deploy
directly into that.
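As a purely illustrative example of that integration pattern (the endpoint, fields, and token below are hypothetical, not Primer's actual API), a host platform might push a document into the analysis service and pull a tied-to-source summary back over REST:

    import requests  # standard HTTP client; everything below is hypothetical

    BASE = "https://api.example-nlp-vendor.com/v1"  # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credential

    # Push a document from the host platform into the analysis service...
    doc = {
        "id": "rpt-001",
        "text": "Raw field report text goes here.",
        "source": "https://example.org/rpt-001",  # provenance travels with it
    }
    requests.post(f"{BASE}/documents", json=doc, headers=HEADERS, timeout=30)

    # ...and pull a summary back into the user's existing workflow.
    resp = requests.get(f"{BASE}/summaries",
                        params={"query": "drone activity"},
                        headers=HEADERS, timeout=30)
    print(resp.json())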
>> Yeah, I know the last thing anybody
wants to do is to have to learn one more
platform, one more tool and one more
password and login, right?
So, in this whole world of AI/ML, oftentimes when I hear people talking about using these tools, they say sense, make sense, act. The make-sense piece is huge. That's a major element of your value proposition, because situational awareness is everything. But I'm going to be a little old school. I'm going to go back to John Boyd's OODA loop and call this orientation. And the main reason why is that we still have to decide after we make sense. We don't want to act before we make a decision. Ready, fire, aim. So I'm going to be old school, as I said, and stick with the OODA loop paradigm, because we can't make good decisions if we don't have good make-sense orientation, or situational awareness, SA as fighter pilots would say. So how do you use AI to build good situational awareness so that the user can then make good decisions and therefore take good actions? And how should they assess the types of actions they will take? Because of course that's always going to have some level of trade-off. For example, in the tactical realm, we will sometimes suboptimize our maneuvering or our positioning or our energy management because we're thinking about preserving options for follow-on actions. So, I know the real magic happens in the actual math. I'm not asking you to divulge any real secrets here, but what do you think is foundational to building that orientation, or that sensemaking?
>> One of the things that I'd call out, and I'd like Leonard to elaborate a bit, is that because we're working with primary sources, the end user always has the ability to go back to the original source. Information quality matters an awful lot, right? They're going to be looking at something on the screen as a consequence of doing a search or filtering from search results, and then they have the ability to assess the quality of that information. We can also do some work to give them a sense for what we believe the quality of that information is, you know, based on historical source quality or the extent to which it is consistent with or in conflict with other sources. So, a lot of it is information provenance and quality assessment prior to making a decision. At this point, I'll kick it over to Leonard to elaborate a bit.
>> Yeah. So as Sean said, we are living in an information-rich and information-dense environment. And really what we're trying to do is help our users understand insights buried within that information, which may be coming from different and sometimes contradictory perspectives. And so our ability to allow users to search and slice and dice information from various perspectives allows them to establish that situational ground truth, understand what they want to believe and what they don't want to believe, and really help feed into that common operating picture that they're all trying to establish. At the end of the day, what we're trying to do is extract signal from noise. And right now, we have an increasingly noisy information environment. So by providing users these quick, verifiable, tied-to-source summaries that derive the insights that are important for the analyst or operator to understand, they can start feeding that data back into that common operating picture to then make the decisions that they want to make for that next action.
>> And another example of that is being
very quick to surface contested claims.
Right? So you've got two completely different points of view. It could be with respect to asset strength. It could be, you know, with respect to a troop movement. And that is a very clear signal for the user to say, "Hey, wait a second. I've got two contested claims here. I'm going to dig deeper. I'm going to go to what I believe is the best source of information for this, because I can't rely on either of these claims. Even though I think claim A might be more accurate, clearly there's some conflict and some noise here."
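A minimal sketch of how that kind of conflict can be surfaced, assuming structured claims have already been extracted by upstream NLP: group claims by subject and attribute, and flag any grouping where sources disagree on the value.

    from collections import defaultdict

    # Claims assumed already extracted by upstream NLP:
    # (source, subject, attribute, value)
    claims = [
        ("Source A", "unit-42", "troop_strength", "5000"),
        ("Source B", "unit-42", "troop_strength", "3000"),
        ("Source A", "unit-42", "location", "grid NK1234"),
    ]

    def contested(claims):
        """Group claims by subject+attribute; flag any with conflicting values."""
        values = defaultdict(set)
        sources = defaultdict(list)
        for src, subj, attr, val in claims:
            values[(subj, attr)].add(val)
            sources[(subj, attr)].append((src, val))
        return {k: v for k, v in sources.items() if len(values[k]) > 1}

    for (subj, attr), views in contested(claims).items():
        # The clear signal described above: stop, dig deeper, find a better source.
        print(f"CONTESTED {subj}.{attr}: {views}")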
>> So, can you provide some real world
examples to make it concrete for our listeners?
This isn't a contested claims example,
but I think it is an example of giving
someone the ability to make a decision
quickly that has life and death
ramifications. We were doing some work with a combatant command in an area of the globe where we don't have a lot of infrastructure or boots on the ground. They were monitoring the movements of a rebel group using our tooling, and what they were able to see was that the group had actually been moving closer and closer to an area where there were aid workers. As a consequence, they were able to get to that group very quickly and tell them, effectively, to move out of harm's way. And in talking with the folks using Primer, what they basically said was that in a matter of hours they were able to ascertain this; if they had relied on their prior, conventional means of doing this, it would have taken them, I think it was, on the order of 50 days. This was something that was initiated very shortly after understanding that, so, several hours. Very clearly, you couldn't have gotten to that outcome of getting people out of harm's way using conventional means, because you couldn't plow through all that information and you would have missed it.
>> That's music to my ears.
>> Manage your money on your time with 24/7 digital banking on the PenFed mobile app. Easily make payments, transfer funds, and deposit checks through mobile deposit. Learn more at penfed.org. Federally insured by NCUA.
>> Sean, thank you for that, because
these operational examples are so
important. They help us as war fighters
understand how AI can help us be better
at our jobs and how we can execute our
missions more successfully. And one term
I've kept hearing from both of you is
practical AI. And we've had these conversations prior to this podcast.
That's what I'm drawing from. What do
you mean by practical AI?
>> Alex Karp, probably a year and a half ago, when I think he was probably weary of all of the noise and hype around AI, said publicly, look, AI is just software that works. And I think what he was really pointing at is, you know, the fact that we live in a world where people need to use software to be effective in their jobs. And, you know, natural language processing, machine learning, the large language models: these are tools in the toolbox to produce better software that has much greater impact on mission outcome. And so when we talk about practical AI, we really look at it through the lens of people being able to do their jobs better than they could before. And, you know, there's this idea of AI for AI's sake, which, by the way, is very common at the beginning of any hype cycle, right? If you're a leader right now responsible for technical implementation, it's expected that you have an AI strategy. Now the challenge, of course, with emerging technology is: what's going to have the most impact on my mission? Because saying I have an AI strategy that's effectively divorced from significant impact on mission is just box checking. And so with practical AI, for us, you know, the proof is in the pudding. If people are able to be much more effective in their jobs than they were before, that's practical AI. Implementing something that's not moving the needle, either on time spent or mission outcome, allowing you, particularly in this environment, to do more with less, is not terribly practical, right? And the other thing I'd say is, you know, within that realm of practicality, cost effectiveness matters an awful lot.
>> I was just going to say that, because I think a lot of people underestimate what the real cost of developing AI tools actually is, as well as, and Sean, you had mentioned this previously, when you're looking to right-size the compute power necessary to execute those tools, the level of SWaP-C necessary to do that. And so if we embark on overly optimistic expectations of what AI can deliver, divorced from the current maturity of the AI or the algorithms or what we're asking them to do, devoid of, as Sean said, does it improve mission outcomes, does it make it faster, does it make me leaner, is it cost effective, I think it really just sets both the humans and the machines up for failure.
>> The other thing I'd, you know, call out on that, and I do believe this, is that when the foundational elements of AI we're talking about are well implemented in an organization, consistent with mission need and with a real focus on benchmarking, here's what we were able to do before, here's what we're able to do now, there is extraordinary benefit from intelligent implementation, from a cost perspective and from an effectiveness perspective. And, you know, the other thing that I'd call out is that the innovation in the United States with respect to software is profoundly good. And it's, you know, the best deal ever, if we get it right, for the US government to be a very smart consumer and implementer of these capabilities. Because the other thing I'll call out is that all of these capabilities are funded by venture investment, to the tune of now hundreds of billions of dollars, and then these capabilities are made available to the government. And if you think about what has been done historically: a ton of custom development, highly fragmented, and, I would say, a much slower pace than the commercial sector can move, and certainly much slower than we can move in this AI age. And so we've got an extraordinary opportunity that we've never really had before to evolve our capabilities effectively without breaking the bank, because again, so much of this innovation is private-sector funded, and with increased focus on delivering these capabilities to government. If you went back 15 years ago, 10 years ago, even five, there was nowhere near the focus on the government as a market for cutting-edge tech with respect to software.
>> Yeah. Yeah.
>> So, I want to go back to the contested claims conversation, because this is really important. A lot of the information that you're drawing on for your unstructured data is open source. What happens when we see adversaries trying to poison that open-source data lake with bad data? How do we go about filtering out the bad narratives, the bad documentation, the bad data that adversaries might be using to take us in the wrong direction, corrupt our AI algorithms, or lead us to poor decisions? So, we've got a way to flag that, hey, there are some contested claims going on here, but how do we grade them, and how will a user know what they should trust and what they should weight?
>> Yeah, you know, that's a complicated problem set, and there are multiple approaches to it. So obviously, one, source provenance matters an awful lot. Two, you know, making sure, particularly in sensitive environments, that your models are effectively sequestered and encapsulated, right? So they are not vulnerable to outside interference or being pumped with bad information. But I think it also speaks to the importance of the quality of proprietary data. Right? So if you go back, for example, to the Russian invasion of Ukraine: assessing Russian capability can be a challenging thing, and understanding how well equipped they are in their various movements, approaches, and attacks. They're certainly going to say one thing; we may know another. How are we informed, and how does that shape our risk calculus? How much do we want to weight what they're saying, or what we've seen in their exercises, versus proprietary intelligence that we may have about them? What's the difference, and where do we want to land on that risk continuum? Certainly no one wants to underestimate an adversary, but overestimating an adversary is not without consequence either.
>> That's definitely true. And that's, I think, one reason why the approach that you take toward your products is really important: understanding the tradecraft, getting the workflows, really understanding the mission need and the context of that mission, and then being able to provide those benchmarks. That, to me, seems like a very rational and very useful way to apply AI. But we've got to get it into the hands of the war fighter, in operational exercises and in the real world more broadly. Can you talk to us about how you're deploying some of your tools for war fighters today?
>> Sure. So, you know, we will move as quickly as a customer is capable of moving, whether that's spinning up a prototype for them quickly or working in any environment that they're operating in, I would say somewhat independent of acquisition vehicles and contracting and the complexity there. We want to be able to deploy as quickly as possible. If we get into the right environment, we can get up and running in a matter of weeks rather than months. And that's also core to the way we think about building our products, deploying our products, and doing a lot of tooling, because you're also talking about having to comport with security policy and governance. You can't just have a great capability that is, by itself, not capable of being deployed into these environments. So a lot of the tooling that would be much less interesting to the end user is a significant area of investment for us.
>> Yeah, that authority to connect, that authority to operate, the fact that you understand the limitations that government users, military users, and DoD users have, is really important. And you've developed that knowledge through decades of experience with the IC. So you're talking about deploying these tools in weeks. Do you have basic platforms that you then adapt and layer and integrate, or are you developing everything from scratch?
>> No. So we are a software product company, and, you know, our Primer Enterprise platform is the basis for that. What we build out on top of that, you can
think of as distinct capabilities. We refer to them internally as assets. And those assets, it could be maps, it could be events, are some end-user capability that is built on the platform. The customer, based on their needs, has the ability to pick and choose amongst those assets, and we can make sure that they're getting them delivered in a workflow consistent with their needs. So again, you know, we're a products company that will provide as high-touch a service as necessary to accomplish the mission, but that product mindset is at our core. So there's a high degree of repeatability in the deployment of these
capabilities, you know, the idea that we can ship Primer Enterprise to a hundred customers that may have different needs.
They have the ability to pick and choose
amongst the assets they need to do their
job. But also keep in mind, you know, at
the foundation, we're talking about
ingesting this unstructured data from
any source and doing all the work we
need to do so the user can interact
meaningfully with that. The foundational
capability is search, which is part of
the enterprise platform. And again, you know, the key to effectively unlocking insights all starts with a natural language search or a Boolean search. It's really up to that user, and then they can go on that journey consistent with what their job is. That asset library is what allows them to have that robust experience, and, you know, as we layer in these capabilities, we can just cover more surface area of user need over time. So something that we may develop specific to a single customer in most cases becomes part of that asset library and is available to others. There
are certainly some cases where we will
do something specific to a customer need, which will remain with that customer, but mostly the work we're doing on assets has broad application.
>> Leonard, I'd really love to hear, oh, sorry, I wanted to bring you into the conversation, so that jumping in was perfect. Go ahead.
>> Yeah, the one thing I would add to what Sean just said, and Sean used the word: modularity. Right? Modularity is one of the keys to unlocking our ability to deploy effective solutions for our customers very rapidly.
And I do want to dig in a little bit to that product orientation that
Sean talked about because we have a very
deep belief that we can drive the best
outcomes for our customers by investing
continuously in a core product versus
building bespoke solutions for every
customer that we have. And so
reinvesting and continuously investing
in a core set of capabilities and assets
like Sean talked about is really
important for us to build the best
capabilities for our end users.
>> That's sort of the rising-tide-lifts-all-boats type of analogy, and I hate to use a naval analogy on an air power podcast, but hopefully our listeners will forgive me. So, I understand why
that makes sense to invest in the
enterprise. Are you doing some kind of on-the-job training for your AI, if you will? How are you continuing to improve
those products, Leonard?
>> Yeah. So we spend a lot of time on that. As much as we are a product company, a very important element of our business is our forward-deployed engineers and our staff of field engineers who are out working closely with customers to understand their data, integrate with their data sets, and make sure that our models are fine-tuned appropriately to provide the best results for the data they have at hand. So absolutely, we do spend quite a bit of time on it. It's not turnkey; you know, you don't insert a CD into a computer and expect it to install. This is absolutely enterprise software that requires close collaboration between Primer and its customers to make sure that we have optimized it for their specific mission needs.
>> Yeah, and that's part of understanding
the tradecraft and the workflow and
what they're really needing to get out
of that. I'd like to go back to the environment. Sean, you had mentioned this, and Leonard, you did as well: really understanding the end user and the environment they're actually living in. Do they prioritize speed? Do they have compute constraints? Do they have power generation constraints? What kind of reach-back do they have? How do you make that AI practical in those environments?
>> Yeah. So, that discovery process is really done with our sales engineers, you know, as we're sitting and talking to a customer about the realities of their environment. We can stand up on bare metal. We can plug into existing platforms. We can certainly run in their cloud environment. But that's part of the discovery process with the customer: what are the constraints of the environment? Is it compute? Is it that you're operating in a harsh environment with sporadic connectivity? And on that basis we will figure out with the customer the right approach for them, so we're hitting that sweet spot given the constraints and the mission needs, and what we believe, in working with the customer, is going to give them the best mission results. And that goes all the way down to the models that they choose to use, certainly with our input. The other thing I'd call out is that we have an applied research team, and the applied part is the most important part of that. It's not pie-in-the-sky theoretical research. In a world where innovation in AI is happening almost by the day, these are really sharp engineers who have a deep understanding of what we're solving for customers, what their needs are, and where things are going in the future. So, we have these really tight iteration loops where they're learning about something that they've been researching, and that capability is going to be hitting in a matter of weeks or months, and it's something that we're going to be on early. We can test the heck out of it in the lab, and we can see if it's going to eke out performance benefit for real customer need, right? And so, I mean, it's our job to do two things: one, leverage our deep expertise as technologists so that we can bring our customers into the future; but two, always work from a standpoint of a deep understanding of how they work every day and what they need to be successful. That's really the art of it. But, you know, again, there are things we can do now that couldn't be done three months ago, and that's going to persist for the next several years at least.
>> So, gentlemen, you know, you're working in the DoD environment, and with In-Q-Tel you've been there for a long time. But everyone knows that the government is incredibly onerous to work with, and that the timelines of development and acquisition move at a geologic pace, especially when you compare that to what VCs, venture capitalists, are expecting and what Silicon Valley and the commercial civil market are capable of doing. But we have also seen this administration make significant changes and strides in how they're approaching software acquisition, and frankly, we've been seeing this for several administrations previously. Can you comment on the broader trend lines you're seeing and how that's impacting the AI market space as well as your company?
>> Yeah. So I think the administration is very clearly not only on the right side of this particular issue, but has been very clear in articulating the criticality of adopting best-of-breed commercial software against mission. And, you know, we've also seen that manifest over the course of the past several years with the rise of DIU and so many different programs, where the government outreach and the invitation to the commercial sector is probably an order of magnitude greater than it's ever been before. Now, culture changes slowly, but if I think about it from a standpoint of direction from the administration and leadership and attitude, all the signs are there that we're moving in the right direction. At the same time, again, you're dealing with a huge bureaucracy that has done things a certain way, and the way acquisitions were designed was for a world that was substantially hardware-heavy, where programs took forever; software can be deployed very quickly and improved very quickly. You know, it's going to take a bit of time, but at the same time there's more momentum. There's certainly more buy-in, and the administration has also been very clear about the criticality of moving in this direction, and we're certainly seeing that in customer interactions. Although, obviously, you know, I'd love us to be going much, much faster than we're going today.
>> Leonard, any thoughts on that?
>> No, I think Sean's conclusion is exactly
where my head is at, which is really
just we see great signs, but we'd like
to see it accelerate.
>> Okay, this next question is about the notion of complete cost. I think you
both addressed this when you talked
about your research and development arm
and how you are investing in that to
take advantage of the most recent
developments that have come out of the
broader AI community to deliver that to
your customers as well as investing in
the enterprise platform as opposed to
just becoming overly tailored and
customized to one particular end user
and then having to reinvent that every
time you have another end user. What
about the notion of complete cost?
Because we've seen from other AI
companies that they have concerns that
the US government doesn't fully
understand the cost of developing AI for
particular mission sets.
>> Yeah, look, I think it's important to address. I'd say, on one hand, I do believe there's truth in that. On the other, I think that's more our problem than the customer's problem, and one of the beauties of innovation is the
forcing function of customer demands, not just in capabilities and requirements but in cost effectiveness. And, you know, it's our job to deliver the greatest capability at the best possible price we can for the customer, and in so doing we also need a sustainable business model. I think one
of the things, though, that's hidden in this: certainly, understanding the total cost of ownership is something we're very focused on. It's part of our
discovery process. We don't want the
customer to be surprised. We certainly
don't want to be surprised. And so we're
very mindful of that going in. But I think probably a greater problem is, you know, we have got a real legacy technology challenge across the federal government, which is, you know, you've got systems that are decades old, that are mediocre at best, and that are really, really expensive to maintain. And the transition is not easy, right? Because people need to get their work done as they're transitioning to new capability. And I think that's probably more of a gating issue: effectively, new-start cost at a time of transition. Because while we can deploy quickly if an environment is effectively ripe for that, someone may have the ongoing carrying cost of an inferior solution that they are tethered to for much longer than they'd like, and very few systems can come in and wholesale replace without interruption of service. So that transition time, and the carrying cost of, you know, inferior legacy systems, I think is actually the greater part of the problem.
>> Yeah. I mean, the only thing I would add to that is this is why it's so important now for the government to be focused on product acquisition as opposed to services acquisition, to Sean's point. It is incumbent on us as product builders and vendors to continue to keep pace with the market. You know, right now AI is doubling in capability every few months, and it's also halving in cost every few months. In order for products to stay afloat and to take advantage of those things, we need to invest continuously in our product. And that would just not be the case in a services-oriented AI deployment.
>> Yeah, that bridging exercise is going to be an important one, you know, for the government to be mindful of and get very good at figuring out in working with the private sector. Because Leonard calls out a very important point, which is, the beauty of technology is that it gets more powerful and cheaper on a regular basis, and you want to be on the right side of that curve. And so: are we willing to incur the near-term cost to accelerate transition and transformation? Because the faster we go, the more money we'll save. But early in that transition, you're going to have some incremental spend. And will we be able to find that funding for those most critical mission sets to actually go faster? That's, you know, what we're certainly going to see. Obviously, you've got a flat DoD budget for FY26, but you've got about $150 billion, on a one-time basis, of reconciliation money. And I think if those funds are thoughtfully deployed, you can significantly accelerate pace.
>> It's really interesting, that bridging exercise that you talked about, Sean. I think of that often, especially with respect to the Air Force's need to recapitalize. It's, I think, a very apt analogy: having old, outdated hardware and software, where the Department of Defense is holding on to orphan languages and outmoded processors and so forth, and needing to jump on board to new systems, new hardware, new software. But there's going to have to be that overlap, because guess what? You know, the sun never sets.
>> I do think that one thing every
leader in this area on the government side needs to be thinking about is: as I'm bringing on new capabilities, what am I going to be sunsetting? That should actually be part of their thinking. If something new is just direct incremental cost, unless there is clear mission advantage to bringing it on even though it can't replace something else, we've got to be very mindful that we don't allow the legacy environment to persist any longer and then end up with even further proliferation of tech, because the environments are already unwieldy and unmanageable, and, you know, if not done right, you can make that worse.
>> I think I see everybody nodding their
noggin, including all of our listeners.
So what would you want war fighters and senior decision makers in the Department of the Air Force and the DoD to know about these AI tools and the small
companies that are often at the bleeding
edge of technology? What words of
wisdom would you pass to them so they
can optimize and scale AI in their
mission execution?
>> I think two things. One, AI done right, which is really software done right, is increasingly powerful, and the innovation going on in the private sector is extraordinary. And two, spend ample time really thinking about what could be transformative to mission outcome, and seek out and challenge companies like ours, like Primer, that are doing this work. Be very clear about what you need for mission outcome, and challenge us, because I know the private sector is ready, willing, and able to rise to the occasion, and I certainly know we are.
>> I echo what Sean said. You know, I think it's important to find AI partners that are driven to support mission outcomes, and to remember that, you know, companies like Primer are there to support the war fighter and help supercharge their abilities, as opposed to trying to replace them. And the other thing I would call out, right, is to remember that our adversaries are adopting new technologies and AI rapidly. And so it's incumbent on us to stay abreast of the advancements in capabilities that will help empower, you know, our domestic force.
>> Thank you. So Sean, what you said about AI done right, software done right, and needing to be transformative, I think is really key there, because we can't just do AI for AI's sake. So in the name of doing it right and being transformative, I'd like to hear from each of you where we should be in five and then ten years. Paint a picture to help us understand how we should grade the homework of that progression. You know, our listeners read the news; in the coming years, how should they assess what good looks like when it comes to AI progress?
>> It's always a great question to ask where we want to be out into the future, because you need to be able to work towards that, and you have to have a goal if you have any hope in heck of getting there. You know, I would say one way to think about it, and I think benchmarking is really important, and I think the federal government can do a much better job of benchmarking the quality and efficacy of systems, but if you think about it right now, I think it's widely accepted in the DoD that somewhat less than 5% of all unstructured data sitting in DoD repositories broadly is accessible and actionable. I don't know what the right number should be when you look out 5 to 10 years, but we should have an aggressive goal to get to 50% of that data being actionable within five years, and darn close to all of it within ten. Because if you think about what's happening: you've got increasingly powerful software that has the ability to ingest and make sense of that data, companies like ours; hardware is getting cheaper; AI is getting more powerful and cheaper. And, you know, we should look and say, if we truly believe data is a profoundly valuable information asset, as long as it's not inert, just sitting on a file system, then we should have aggressive goals to make, you know, every bit useful to us. And it may be that x% of it is junk, and that's fine, but we should be able to know that rather than guess at it, because of our ability to interact with that data.
>> Yeah. The second thing I'd speak to is that we should assess baseline capabilities against these functions. So, how long does it take for an analyst to complete a piece of finished intelligence, and how many inputs go into that? And can we cut that in half over the next five years, with double or triple the number of inputs? If we look at, you know, the common operating picture and situational awareness, can we assess baseline robustness against a variety of scenarios today, recognize the deficiencies in that, and then benchmark something materially better that could be achieved by the deployment of high-quality software on top of really robust IT infrastructure?
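That benchmark framing reduces to simple arithmetic; a sketch of grading progress against those targets, with illustrative numbers only, might look like this:

    # Illustrative numbers only: baseline vs. target for finished intelligence.
    baseline_hours = 40.0   # hours to complete one finished-intelligence product
    baseline_inputs = 25    # inputs considered per product
    target_hours = baseline_hours / 2     # "cut that in half over the next five years"
    target_inputs = baseline_inputs * 2   # "with double or triple the number of inputs"

    def on_track(measured_hours: float, measured_inputs: int) -> bool:
        """True if a measured workflow beats both benchmark targets."""
        return measured_hours <= target_hours and measured_inputs >= target_inputs

    print(on_track(18.0, 80))   # True: faster products built on far more inputs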
>> So, Leonard, I'm going to ask you to answer this question slightly differently. What warning signs would signal that we're missing the mark, especially as we look to keep an edge over our adversaries?
>> So, I think there are a couple of things there, right? One thing is, when we look at the future, and I tell this to my team all the time, it's a fool's errand to try to predict the future more than six months out. This AI space is evolving so rapidly that it's almost impossible to know what's to come next. I do think, though, that looking at where AI can be effective is critically important. And, you know, we've talked a little bit about the OODA loop before, and I would be worried if, in the future, we got to a world of complete autonomy, with the AI running the full loop. I think it's important for the human to always be in and on the loop. However, I do see a world where the AI can increasingly start supporting other parts of that loop. Right? Supporting decision-making, not replacing decision-making, but supporting it. So how can we get to a point where we're not just sensing and making sense, but also helping the war fighter decide what the best courses of action are that he or she should take, and how to evaluate those against the data that's available and the insights they become aware of, and so on?
>> Well, gentlemen, thank you so much. I know you both have super busy schedules, and we definitely appreciate the time that you took to share your industry perspective with us. It's been a really fascinating conversation. I only wish that I had your AI tools when I was in the NMCC. Again, thank you.
>> Thank you so much. It's a pleasure and a
privilege to be here. We've enjoyed the
time. Thank you so much.
>> And with that, I'd like to extend a big
thank you to our guests for joining in
today's conversation. I'd also like to
extend a big thank you to you, our
listeners, for your continued support
and for tuning into today's show. If you
like what you heard today, don't forget
to hit that like button or follow or
subscribe to the Aerospace Advantage.
You can also leave a comment to let us
know what you think about our show or
areas that you would like us to explore
further. As always, you can join in on
the conversation by following the
Mitchell Institute on X, Instagram, Facebook, or LinkedIn. And you can always find us at mitchellaerospace.org. Thanks again for joining us, and have a great aerospace power kind of day.