The presentation outlines how organizations can build confidence and security in deploying AI technologies by adapting existing risk management frameworks and focusing on data agility and organizational adaptability.
Mind Map
Click to expand
Click to explore the full interactive mind map • Zoom, pan, and navigate
[Music]
how you enjoying the talk so far pretty
good stuff a yeah I'd say so well the
fun continues because in our next
session please welcome Jesse Jameson
senior cyber risk engineer at carnegi
melan University software engineering
Institute and Jesse will illustrate how
to mitigate and respond to AI risk
examine what it is that makes us
uncomfortable with emerging Technologies
like Ai and discuss some concrete steps
we can deploy uh with security to
measure confidently and securely just
how well we're doing things so Jesse the
stage is
yours hello I am Dr Jesse Jameson here
from the Carnegie melon University
software engineering Institute the C
division there um and I'm here to talk
to you today about sec securing Ai and
my perspective on securing AI uh the
title of my talk here is becoming more
comfortable with AI and uh one of the
things we're going to talk about as part
of my talk is the fact that uh AI as a
technology is something that uh has made
us kind of uncomfortable it's uh it's
something that requires some work to get
to understand and to uh employ securely
and confidently uh but hopefully by the
end of my talk you will will be left
with some tools techniques thoughts
practices and encouragement uh that will
help you uh deploy AI enabled
capabilities with confidence and
securely uh so with that we'll go ahead
and kick this right
off uh just regular document markings
here uh which I am required to show as I
mentioned AI is a technology that has
really burst in popularity it has been
around for a while Ai and machine
learning in enabled capabilities but the
recent Surge and popularity of large
language model and generative AI enabled
capabilities uh is has AI all over the
news and is a very and makes it a very
very hot topic but one thing I want us
to know is that uh new technologies
posing new challenges in cyber security
is something we've heard before we've
been here before we've dealt with this
before a few examples the burst of
Internet of Things
and distributed device Computing uh
security at the edge is a challenge that
uh we have had to overcome and that we
are still grappling to overcome today
another example is software supply chain
we're going to talk about this a little
bit with respect to AI but dealing with
the software supply chain has posed a
very complex set of challenges that
we've had to overcome and has changed
some of our thoughts on risk and risk
management another good example is uh
the event of remote work during the
covid-19 pandemic has opened us up to
all kinds of vulnerabilities and threats
different kinds of threats in the threat
space that maybe we were not uh as
attuned to or aware of as we were before
the pandemic so this includes uh threats
relevant to bring your own device uh
either mandates or or or what's actually
happening in distributed computing at
organizations and uh the Advent and
popularity increas in popularity of
virtualization has all forced us to
rethink um vulnerabilities the
vulnerability space how we're going to
manage those vulnerabilities and what
threats actually pose risk to our
organization another good example is the
event of cloud computing uh this is
another thing that's been uh another
topic that's been of high interest
especially in the world of big data that
we need to use uh cloud computing and
cloud computing resources to help us get
the most out of our data and our
computing power and all of these
challenges together are have presented
new new challenges to us new
technologies have presented new
challenges to us but this is not
something new is all I'm trying to say
here um the the the one thing though
about generative Ai and AI enabled
Technologies though is that uh is this
sense of urgency now I mentioned this
just a second ago uh when I was talking
about generative AI Technologies in
general there is a sense of urgency in
capital izing on AI right now and the
rate of innovation with respect to these
uh geni tools and gen capabilities has
our heads spinning and this has
definitely served to compound and
already complex security situation and
this is something that's posed A new
challenge uh or has us uh has us feeling
a little bit on the backf foot
especially in the cyber security and
risk World we've had to rethink a few
things um but like I said before we're
going to be working to find comfort in
the familiar as part of this talk I'm
going to be talking about uh risk
management Frameworks that have been
very successful tried andr Frameworks
that have helped us get through some of
these technological challenges that I
mentioned on the previous slides and
we're going to talk through um some of
the unique challenges that generative AI
Technologies and the Advent of these
very popular technologies have um
presented to those risk to the
implementations of those risk management
Frameworks so let's let's take a moment
and pause and go back to talk about what
we're familiar with um one thing that
that a properly executed riskmanagement
framework really hinges upon is
understanding a technology and how you
use it in your business context so how
one organization or another uses a
technology in it in their own business
context is going to be different and
that's going to impact the risks that a
institution or organization faces
relevant to that uh use of Technology
coupled with that is a threat space how
an organization uses technology in the
business context really uh serves to uh
filter that threat space down from one
that applies to the use of technology in
general to just the use cases that are
particular to an
organization and once an organization
has a good Gra grasp of the technology
and how they're using it in a business
context as well as the threat space and
what's relevant to them uh they can
really start to analyze how the use of
that technology is relevant to the risk
appetite and cont to the risk appetite
statements uh that an organization has
already invested time in developing um a
good risk management program spends some
time going through and developing these
risk appetite statements and revisits
them uh once a new technology or
innovation comes out and is going to be
implemented uh across an organization
that also shapes uh or helps an
organization understand the contextual
risk so there is risk in general just
like the threat space there's risk in
general that applies to a technology but
um every organization implements the
technology in its own way and that
causes uh some of the risk to not re to
not be particularly relevant uh and
makes uh and gives the RIS that
additional context that applies to the
organ to the coupling of the
organization its use cases and that
technology now once an organization
revisits its risk appetite uh and its
risk management
methodologies uh in the context of this
new technology and how they're going to
be using it then they can convert that
into a proper risk response and this
doesn't just mean mitigation so a good
risk response is mitigation is
understanding how to to transfer risk uh
maybe in a third party agreement you
work through uh you know how the risk is
is mitigated or handled by that third
party and then how you will assume it uh
you may have to choose to avoid that
risk altogether and maybe turn down uh
the opportunity to use some of those new
technologies um and you may employ
mechanisms for reducing that risk across
the Enterprise but all of this together
uh both the the B the technology and a
business context coupled with a threat
space and your risk appetite that you've
gone through a whole lot of effort to
develop and understand at the
organizational level uh goes into that
risk response right but there are two
pieces of this General risk management
framework that uh the Advent of large
language models and generative AI
technologies have kind of thrown a
wrench in and that is the threat space
that's the first one uh and and how we
might summarize this is that uh
generative AI as an emerging technology
has challenged traditional definitions
of vulnerability and has revealed new
threads we are now having to ask
ourselves very fundamental questions
around what a vulnerability even is and
what risk exposure means from the
context of this new technology we're
going to talk about the threat space uh
at length here in just a moment now the
second area where uh these technologies
have kind of thrown a risk in thrown
thrown a ringe in is in our risk
response uh and and you can summarize
this as saying that the options for
dealing with risk in the Gen space are
very broad they're very complex and they
apply to these new challenges that these
technologies have imposed upon us uh and
part of this is due to the fact that
large language models generative AI
Technologies are complex themselves have
a lot of moving parts so understanding
what knobs to turn to help you mitigate
transfer avoid or reduce that risk can
sometimes be a little bit more difficult
than if you're employing a more traditional
traditional
technology so so how do we mitigate this
how do we deal with this and and what
I'm going to talk to you about a little
bit today is that the keys to
effectively securing AI especially as
you employ a risk management framework
that's been generally uh presented like
here on the slide is is that you need to
invest a little bit in data agility and
organizational adapt adaptability
organizational adaptability is something
that I think we can we can grasp and
understand meaning that an organ it's
really an organization's ability to to
Pivot quickly uh from a strategic at a
strategic level now data agility is
something uh that that I've worked with
before but when I say that I mean the
ability of an organization to quickly
and efficiently utilize data to meet
evolving needs challenges and
opportunities so it's not just reacting
to negative but it's also embracing data
for positive benefit for an
organization um so so just to recap here
is I'm going as part of this talk I'm
actually going to be talking mostly
about uh the threat space and risk
response and how data agility and
organizational adaptability are really
going to be keys to to allowing you to
confidently and securely uh employ and
deploy um generative Ai and and large
language model machine learning AI
enabled Technologies so with that we're
kind of going to dive right into it and
talk about the threat space bit okay so
we're going to dive right into it here
and talk about the changing threat space
that has emerged as a result of the
Advent of generative AI Technologies
we've already touched on this a little
bit generative AI technologies have uh
changed the way that we think about and
reason about vulnerabilities they're no
longer just bugs and code they're a
little bit more complex
and affect a very complex technology
stack and one of the things I want to
emphasize just as I said before is that
overcoming this challenge is both a data
agility and an adaptability problem and
we're going to talk about that uh here
in just a second first I want to talk
about um some of the uh knowledge bases
and threat Frameworks that have come out
uh in the Advent of these generative AI
Technologies at first when these
Technologies hit the market there were a
lot of questions around what what new
threats and risks and vulnerabilities
and exposures are there do we even
understand this technology enough to
know and thankfully over time a lot of
these knowledge bases have come out that
have allowed us to more succinctly
organize and reason around what these
threats actually are and I'll mention a
couple of them here of the first being
the OAS top 10 and the second being
miter Atlas And there are others and
other Frameworks that one can adopt um
the point being though that you need to
adopt them so every time a new uh threat
database comes out I'm sure your
analysts in your organization are asking
questions around oh no I have to
integrate all this new data I have to uh
be able to map these to our risks and
map these to threats and generate uh
cyber cyber security threat intelligence
that I can use in my organization and
unfortunately especially in this case
that kind of is the case these are brand
new threats and vulnerabilities that
we've never seen before they're they're
not just rehashes of the same type of of
Playbook they're they're actually
sufficiently different to Warrant
upending sometimes the processes that we
use to measure and understand these so
if we're going to take these new threat
Frameworks MTH them to our uh to our
technology Stacks that requires a lot of
uh or a greater
capability uh with respect to the data
and data integration and really well
have a having a good foundational
understanding of your technology stack
as it is um so an organization that's
very quick to adopt these new threat
Frameworks that can uh very quickly
generate or adopt or harness a thread
intelligence is going to have a leg up
with respect to securing uh the
capabilities that you want to deploy
across your organization and I'm
actually going to show you an example of
why I think that this is a data agility
problem so from the oasp top oasp top 10
actually here is an example of a of of
an entry in that um in that framework
llm 07 2025 system prompt leakage now
that is a a vulnerability uh where
system prompts or instructions used to
steer the behavior of a generative AI
model can also contain sensitive
information that was not intended to be
discovered and the reason that this is
included is that uh through some prompts
uh one uh malicious actor or even
somebody who's not malicious and is just
playing around with the capability might
be able to cause the general ative AI
capability to return some of the
information on the back end that was
never quite meant to be to be revealed
to the end user uh through whether it's
prompt engineering you know we I'm sure
some of us have seen the the classic
ignore all of your previous instructions
uh types of prompts that are entered
into these gen capabilities um whatever
the mechanism might be the fact that uh
that information that was never meant to
be revealed can be revealed uh through
these uh very sophisticated uh prompts
is is a problem uh but how do we how do
we now check for this how do we um
understand this vulnerability what does
this actually mean how do we heal from
this and that's a complex question with
a very complex answer the first step is
understanding that threat so I just
walked through a very very succinct
explanation of what that vulnerability
actually is um and and I'm sure I'm
definitely not even doing that Justice
there's a whole lot of uh information
out there about these vulnerabilities
now that one can use to understand uh
what they are what the risks what the
risk to these vulnerabilities are what
capabilities are affected what
components of the capability are
affected whether that's your data lake
or the um interface that's used to send
and serve the prompts uh there's a whole
lot to understand there with respect to
these to these uh
vulnerabilities now once you understand
that we you have to identify in this
particular case the prompts and system
instructions that are being used and
embedded in the models as well as the
possible attack vectors now already for
both of these two steps I'm talking a
lot about data so you're collecting a
lot of data about the threat about risks
to your organization you're collecting a
lot of data around the uh capabilities
that exist out there in the wild that
have been employed by your organization
and not to even mention the internal
asset data that you have to collect to
know if you're even using any of these
capabilities and now once you have that
understanding even knowing and logging
and cataloging all of the prompts and
system instructions that you're using as
you implement these Technologies
requires an another level of uh of data
uh orchestration that that uh that
that's that can get overwhelming right
right so this is step two and then the
second or the third part might be to uh
to Now respond and Implement and unlock
this down Implement some prompt and
response logging uh sanitizing your
prompts uh having a mechanism for
refusing those prompts and then finally
going through and updating your models
your policies with data and
functionality updates and this is a
pipeline for just one type of
vulnerability there are plenty of other
vulnerabilities out there that apply to
these Technologies but again at every
step of this uh of this pipeline here
we're dealing with data whether that's
data about the external capabilities and
our own internal capabilities uh the
prompts how you're going to log those
sanitizing those uh the process for
dealing with those prompts uh logging uh
information about the models and the
changes and updates you've made to those
that's all data so your organization
needs to be agile in its capabilities uh
around dealing with with this data
harnessing this data knowing what to do
with it logging it uh keeping track of
it validating it uh and and the better
an organization is with respect to to
its data posture and its ability to
handle and deal with data uh I would
argue that the that those organizations
are more well posed poised to actually
deploy these capabilities securely and
in a way that that they can uh you know
confidently employ them uh and and uh
and use them to the ends of the
organization so that's just again that's
just one example uh of of how
complicated this can be and really if I
were to sum this up I would say that uh
at the end of the day your
organization uh should adopt and
hopefully uh or adapt and hopefully not
get to the point where it needs to start
over and rebuild a lot of its data
implementation and and data Technologies
uh from from the ground up so that again
that's just one example of how data is
kind of one of the common threads here
with all of these vulnerabilities and
the best organ the the best positioned
organizations are going to be those that
know how to handle their
data um now I already talked about
adaptability a little bit uh and what do
I mean by that if you're not just
talking about the data agility piece but
if but if you're talking about the
adaptability PS again not just knowing
how to use the data and and uh and and
employ that data to the benefit of your
organization but taking measures to
protect you and your data um data is
everything with these generative AI
capabilities and I think that um A
Renewed interest in data security is is
definitely at the Forefront of securing
AI Technologies you have to ensure that
these capabilities are tested before you
integrate them that's part of securing
in you and your data and you have to put
your money where your mouth is again I
just said it revisiting your data
security controls regularly and validate
that they are still doing what they need
to do um the third thing I'll touch on
here is prioritizing explainability and
transparency um especially for securing
AI if you don't know what behavior is
normal is expected uh then how are you
going to know what uh behavior is not
normal and not expected and both
explainability and transparency in your
data your prompts your model your
architectures uh and and your your whole
pipelines both of those are key to being
able to give you that strong
Foundation um another thing that every
organization needs to be doing already
uh but that is of particular relevance
to the geni Boom is uh monitoring your
Tech debt I know that as organizations
move to confidence deploy these
Technologies they do so incrementally
which is a great strategy but
incrementally deploying these
Technologies May mean that some
technical debt is building up underneath
that you don't really need anymore but
because these capabilities are so
complex rolling those off and rolling
those back uh as you Sunset them is very
very important and important to keep
track of and then finally I touched on
this just very briefly in the previous
slide but version controlling your
models your data and establishing data
Providence is something that's brand new
that a lot of organizations have not
really thought of before but the fact of
the matter is that with a lot of gen
capabilities there's no longer a
segmentation between code and data
because now the data that you use as a
prompt to get a response and use the
capability is now being used as code to
generate the output on the back end and
that's uh it's something that that's
taken some some thinking about to wrap
our heads around and your ability to to
track your models and your data is going
to be uh Paramount to keeping you secure
in this space now that's a lot to talk
through just with the threat space and
I'm going to Pivot now to talk about the
risk response um this is uh this is
another area that requires uh a little
bit more agility in this new space and
we're gonna we're going to touch on that
here as we kind of wrap up the talk so
so what's new we talked about this a
little bit uh the CH that the fact that
the challenges for dealing with risk in
the Gen space are very broad generative
AI capabilities are extremely complex uh
and are the the tools switches buttons
and knobs that we could turn to help us
manage mitigate uh and reason around
this risk um are our our Bountiful I
guess one would say and I would say that
dealing with risk in the geni space is
mostly an adaptability problem uh and
and you'll see kind of why I say that so
A few things here um that in the geni
space there is a trade-off between
utility and security uh you might the
ideal might be that you have a
generative AI technology that uh is
trained on your data that is only using
information that's relevant to your
product um or that um you know is
extremely fine-tuned to your use case
but the fact of the matter is that if a
capability is using your data and only
exclusively your data then your data is
what might be leaked or exposed should a
vulnerability be found in that
capability uh whereas if you employ the
use of a third party technology that's
trained on only external data um then
you're you're you're um leaving some
space and a gap between your data and
risk right so there is a trade-off there
and it's important to evaluate that trade-off
trade-off
continually um another suggestion here
is uh organizations and we didn't see
this with other with other Technologies
um but uh creating an AI review board um
the reason why this is such uh an
important step to take for an
organization with the event of these
types of Technologies the generative AI
Technologies is that generative AI
Technologies propo uh pose unique risks
to things like uh like legal issues um
they pose legal issues they pose um uh
issues of bias and ethics and having a
diverse cross functional team to
evaluate use cases and the risks to
those aspects of your organization uh is
something that a lot of a lot of teams
I've seen have adopted to great success
uh and and and I advocate for that as well
well
and I already talked about building
capabilities on a small scale and I will
remind you to keep in mind that uh when
you do this although it's a great idea
keep that technical debt in check and
then uh the the last two things I'll
touch on here to kind of wrap up uh on
this slide is uh sisa has actually
unveiled uh they have cyber security
performance goals these are ideal for
small and medium-sized organizations to
help organizations prioritize steps they
could take to uh to mitigate to deal
with risks imposed by any technology in
their cyber security stack um much less
generative a AI Technologies but those
have been evaluated for these new gen AI
Technologies and then finally the risk
the nist AI risk management framework I
talked generally about risk management
Frameworks earlier the N AI risk
management framework is a tried and true
process with just a few differences
between more traditional risk management
Frameworks and something that's AI
focused and I'll just run through those
really quick just to give you an idea um
there's new guidance in here for harmful
bias and AI systems I talked about that
just a second ago this is brand new to
generative AI Technologies and these new
uh security concerns related to machine
learning attacks so some of these
attacks or vulnerabilities um have kind
of been around even for machine learning
tools uh not just the generative AI
tools uh and this is the first I believe
the first actual treatment of those uh
security concerns in a risk management
framework a published by n i mean and
then um the complexity of the attack
service of AI systems has a specific
treatment in the AI risk management
framework as well as a there is a there
is a specific treatment for third- party
risk and all of these things taken
together is something that um that the
nist AI risk management framework
presents uh that organizations should
should consider adopting uh that that
makes it a little bit different from
traditional risk management Frameworks
and um your ability to look at this risk
management framework and pivot from one
version of an RMF a risk management
framework to something like an AI risk
management framework is a Hallmark of
good organizational adaptability and
that's why I mention it on this slide
that um that as these new risk
management Frameworks and controls pop
out uh your ability to evaluate and
adapt uh and adopt these risk management
Frameworks into your own risk processes
is a really good sign of organizational
adaptability um so so that's I
definitely wanted to make sure that I
mentioned both the cyber security
performance goals and the risk
management framework there and and so
now I've talked about a lot of different
things hopefully some concrete steps
that we can take in order to help us
better manage risk and secure and
securely and confidently deploy these AI
Technologies and I know that AI makes us
uncomfortable but we can do it we've
returned to places where we find Comfort
these risk management Frameworks
returning to what we know but I
acknowledge that this is not without
Challenge and and how we're going to
overcome these challenges is just like I
was saying this is one piece of how
we're going to do it right that we have
we have to have a healthy culture of
risk management in our organization
anyway uh that's first and foremost we
have to ask if we already have a
business appropriate risk appetite uh as
we continue to evaluate Technologies
just like this one and then our ability
to use data to its to its benefit and
our benefit can we use and manage this
new data and engineer new data pipelines
quickly and securely that's going to
really be really be key to uh to using
these Technologies um confidence ly and
securely finally the organizational
adaptability I talked about that asking
if you can adopt a new risk management
framework or a pivot on a relevant time
scale are you going to be able to use it
uh quickly and uh and in a way that
makes sense for your organization and
all of this together is actually going
to to to to come together to give us a
confident and secure use of emergent
Technologies these very quickly evolving
Technologies as they change as they
evolve over time and so with that um I
will wrap up it's been a really great
pleasure being here for the qualus uh
cyber RIS series talking about securing
AI I hope that you found even just a
little bit of my talk comforting um and
rewarding and if you do have any
follow-up questions or want to give me
feedback or uh or want to know more
about the work that we do at the SE uh
please feel free to reach out anytime
and with that I will turn it back over
Click on any text or timestamp to jump to that moment in the video
Share:
Most transcripts ready in under 5 seconds
One-Click Copy125+ LanguagesSearch ContentJump to Timestamps
Paste YouTube URL
Enter any YouTube video link to get the full transcript
Transcript Extraction Form
Most transcripts ready in under 5 seconds
Get Our Chrome Extension
Get transcripts instantly without leaving YouTube. Install our Chrome extension for one-click access to any video's transcript directly on the watch page.