This podcast episode explores the critical role of ethics in the development and deployment of artificial intelligence, emphasizing Intel's commitment to responsible AI and its future direction.
Mind Map
Click to expand
Click to explore the full interactive mind map • Zoom, pan, and navigate
hello my name is grahe class and I'm
your host for this season of technically
speaking an Intel podcast while Intel is
at the Forefront of so many cuttingedge
Technologies this season is all about
artificial intelligence and that's why
I've been tapped as your host having a
background in Tech as a software
engineer I was always interested in
merging the advanc of artificial
intelligence with my love for media this
culminated in one of my other projects
daily dad jokes an AI powered podcast
churning out jokes and humor for
listeners worldwide but artificial
intelligence can do a lot more than help
whip up a corny joke this technology has
been revolutionizing the way we engage
with the world with Innovations across
Healthcare agriculture business and even
the public sector another way that
artificial intelligence is changing the
world is through philosophy the term
ethical AI is a framework on how to use
AI what systems should be in place to
govern its use with business and
consumers in this episode we'll dive
into the ethics of artificial
intelligence with one of the Pioneers in the
the
field joining me for today's
conversation is Intel's Ria chervu Ria
can perhaps be described as the moral
compass of the company's AI as an AI
software architect and generative AI
evangelist she is charged with finding
responsible trustworthy solutions for
Intel's Internet of Things Engineering
Group her role exists at the
intersection of hardware and software
product design and effective consumer
use having studied extensively at
Harvard in the subjects of computer
science and data science her domains of
expertise are solutions for security and
privacy in machine learning fairness
explainable and responsible AI systems
uncertain AI reinforcement learning and
computational models of intelligence she
is a reoccurring keynote speaker on
issues in data science and responsible
AI we are very very excited to have her
on the podcast to share her expertise on
Intel's ethics in their AI [Music]
[Music]
development Ria welcome to the show
thank you C it's awesome to be here I've
had a look at your bio and would like to
know how did you come about to join the
Intel family sure I joined Intel in 2018
when I was 14 years old as an intern I
had yes I had an amazing Mentor who went
through all of the legal pages and the
review needed to get me to that position
so initially I interviewed with three
teams on three different areas in the AI
space one of them was around Ai and
Healthcare very theoretical and
mathematical implications in pathf
finding the other two were on software
development and profiling and the next
was on deep learning optimization
specifically so I did have the
opportunity to pick the one on
optimization for deep learning for
hardware and that is how I started off
my journey at Intel and got introduced
to it the interplay between hardware and
software is something that always Drew
my attention so when I was able to work
on that as part of my first role as an
intern I was really excited okay great
so now I understand that you're a
software AI architect can you just give
an overview of what that entails as a
software architect today I have a couple
of roles and responsibilities
corresponding to the latest and greatest
which is very exciting to me in my
day-to-day the first is generative AI so
looking at and taking into account the
different software optimizations that
we're planning for generative AI how the
workloads are shaping changes in the
algorithms over time as well as also the
associated mechanisms that we see that
are in touch with them as an evangelist
I also get to work on top of my software
architect role as a marketer and an
advocate for these Technologies so
creating very short demos and tutorials
for users to quickly grasp what exactly
is going on with this model how can I
use it in my day-to-day how can I Port
it to my use case so a lot of the focus
today for me is on gender of AI I also
look into ethical and explainable AI
tools and Technologies as part of my
pathfinding yeah I've been using
generative AI apps to do research
creating podcast artwork and
experimented with creating music so this
leads me into asking you what's your
definition of artificial intelligence
and maybe some examples of where we're
seeing it as a central Topic in the tech
world the way that I like to Define it
is something I I copied over actually
from a recent regulations on AI around
how AI models are agents or systems that
are capable of consuming and producing
data in an environment and also taking
actions that can in turn influence our
decisions there's a lot of use cases for
them everywhere Healthcare retail Etc
yeah when I talked with uh people even
in the tech world there's a lot of
confusion around okay you've got
algorithms you've got AI you've got
machine learning perhaps if you could
start with maybe some of the difference between
between
algorithms versus say AI what do you see
as the difference between the two
typical algorithms I'd say are based off
of certain schemes that we're already
aware of with machine learning you have
these new paradigms that are coming in
and completely spinning the narrative
things like continual learning very
large models different types of State
machines altoe depending on the
application you integrate it into okay
so I I would say there there are some
fundamental differences that are coming
in between algorithms and machine
learning models on that front when it
comes to use cases applic and of course
implementation as well and where I see
the power is sort of combining the
traditional sort of if then else
algorithms uh with AI and I'm just
wondering if you've seen any sort of
practical applications merging of all these
these
techniques yes and I'm very interested
in composite AI it's something that I'm
getting to work on a lot more in my
day-to-day and something that we're
actually doing a demo for at Intel
Innovation where we are chaining
multiple large language mod models
together the way I see composite AI is
being able to tie together multiple
models as part of a interface or an
application with chaining models I see
it as a subset of composite AI where you
have models that are linked to each
other and have dependencies on their
inputs and outputs it can be sometimes a
nightmare to get the dependencies all
together because you have cascading
models one after the other dependent on
each's output but it is possible and it
does give you a lot of applications and
opens up the possibilities where you can
get to a very nice user interface that
users can interact with developers can
build upon businesses and other
communities can just leverage and adopt
that is giving you a lot of capabilities
at once with ease of deployment oh
that's good now turning to the ethics
side of there which you've done quite a
lot of thinking and working how would
you define ethics in
AI with ethical AI the definition that I
like to adopt is soot technical
development of AI systems and that
involves societal and Technical aspects
but really focus on the implications and
the intentions with these algorithms in
terms of when you're talking with your
peers and colleagues has been a lot of
discussion and talk about trying to have
an a uniform ethical framework that at
least gives a common language into you
know when you're discussing these sorts
of things related to ethics in
AI there are common Frameworks that are
in place most of them are centered
around implications and intention and
how we structure that around certain
Technologies right now it's very popular
for applications to generative AI where
we see these Frameworks being put into
place around let's look at the inputs
the outputs and then the overall model
or framework and this may seem
simplistic but it really is boil down to
these very simple elements similarly for
other AI domains that are outside of
generative AI like object detection it's
very much focused on what is the
particular use case for example is it
something that is of high risk like
healthcare applications or surveillance
or is it something that's a bit lower
risk like content creation and then
seeing how exactly our user experience
and our development of those models is
echoing ethical principles so I would
say like to summarize there are
different Frameworks and summaries that
we apply but of course the templates
need to be flexible when we're talking
about ethical AI for these new AI models
how do you go about ensuring that your
staff and your engineers and your product
product
managers actually embed that ethical
framework into its AI development sure
it it's such a challenging problem even
to describe as well um as you're
mentioning it you know there's so many
different things that you can L do right
like as you mentioned policies
assessments Etc so at Intel we take a
multiple approaches towards it the one
thing that we very heavily emphasize on
is internal governance and um Lama
knockman who's my mentor and also
leading the responsible AI efforts at
Intel very neatly and concisely
describes them as guard rails that we
have internally in place and these are
really guidelines that are designed to
help our developers Engineers managers
and you know our communities and
marketers Etc understand the
implications again of what exactly are
we producing in terms of the content
what are some Technical Solutions that
we can instill mid pipeline or early on
before starting the effort when we're
getting started with AI development
efforts and I would say that that's the
core process that we focus on we're also
very heavily invested in technological
development whether that's through the
Deep fake detection work that LK deir
and team are taking on um explainable AI
tools Etc so really trying to approach
this from a governance perspective
internally from a tooling perspective
what we can provide to the developer
community and our customers and to
Partners and from a third perspective
regulations how do we influence the
industry at large and help contribute to
discussions that's really good and you
mentioned the work of llama nman and
we're actually going to be talking with
her in an upcoming episode this season
so I'm looking forward to asking her
about this as well but I think you've
said the key phrase deep fake so I might
switch to to that side of things so in
terms of the society and and culture in
general um there are some people that
are hes about AI particularly around AI
limiting jobs You've Got Deep fakes I've
actually created a clone of my voice
what do you try and do to reassure
people who have hesitations I'm
definitely not I would say not directly
enthusiastic about technologies that are
allowing for passing off as another
person for you know copying and pasting
essentially in certain cases we see the
development of those Technologies for a
certain use case and then it does start
to stray away from that into some of
these newer kind of applications that
are scary as you shared so when it comes
to reassuring individuals my family my
community as well and the industry at
large I think that it's definitely a
problem to see in a straightforward way
honestly yeah without the hype
surrounding it there is a levity
associated with the disadvantages of the
technology that we do need to consider
we also do see the benefits of them for
different things whether that's
improving your ease of using it just
being able to communicate with others
from my perspective what I try to do in
my space is to look at an honest
assessment of the technology which is
very common in the ethical AI domain and
to see what exactly is it really
contributing to the problem statement
and if it isn't contributing to it then
do we need it and in terms of Intel's I
guess method or communication with the
society and people at large are they
working on things to help
people feel a little bit more
comfortable about this new world we're
moving into yes and we we tackle it from
a couple of different fronts we've got
um some amazing teams working on
different parts of the puzzle one of
them is democratization where one of the
challenging things about AI from an
ethical AI perspective but also in
general from a development perspective
is being able to give communities access
to the technology so that they can test
it and validate it I've been speaking
about ethical AI for about two years now
or so last year we really didn't have
the same amount of tools and techniques
that we have this year and also the
popularity of testing and validating AI
systems right we always understand and I
think many compan and organizations
understand it's not a one-size fits-all
solution for ethical AI um you know many
companies and organizations are trying
to do their best so I would say that
again that that push back that community
that we're trying to create around
ethical AI is critical for us going
forward to be able to better build
Solutions has there been any case
studies within Intel that you could
share that maybe there was a real
challenging ethical
conundrum uh for producing AI software
and you know how how was it resolved how
did you work through it generative AI is
definitely a very big one so we're
always actively cautious about the types
of implications of our technology
whether or not we can incorporate
disclaimers or clarify on the intent of
it as well and um Graham one of my
favorite parts of ethical AI from a
technical perspective in terms of
solutions is something called Model
cards model cards clarify a very simple
theme around ethic AI which is you know
figure out what exactly is the intention
the core assumptions and the development
that went behind a model and what you're
going to use it for as part of
deployment and I think that for me
personally I see that that theme is
conveyed as part of our efforts in
generative AI there's a lot of
challenging things out there when it
comes to image generation copyright Etc
or even you know object detection
related Technologies for retail if you
have Solutions like intelligent Q
management or automated self-checkout it
makes sense but you know how do we keep
it for from proliferating otherwise and
what sort of work is going on with
inclusive AI diversity of stakeholders
is critical for the AI models that we're
building today whether that's detection
of skin agnostic of skin tone or being
able to adapt to different folks with
different accents so at Intel and again
across the industry I think a lot of the
efforts are really about making sure we
have the right people on board the right
experts with different backgrounds were
able to contribute to the Technologies
one thing when I was started um looking
into machine learning very quickly I got
a sense of you know being a traditional
engineer you kind of go okay input
output and you kind of know what's in
the in the black box to transform it
when I started working with AI and some
machine learning code I couldn't get a
sense of that onetoone kind of mapping
of what the output is the input and that
comes to the to transparency and
explainability of AI algorithms what are
you seeing and also what is Intel seeing
around trying to make that
understandable to the end users it's a
really interesting question because
explainability is one of the the first
topics that we think about when we think
about responsibly I and I agree the
blackbox metaphor has been used so many
times um because it's true but the key
idea is about demystifying what exactly
is going on within the model whether
that is the internal representation
again the data that it's pulling from
how the data is being leveraged feature
importance Etc there's also an added
consideration to explain ability around
surfacing that to an end user for them
to understand why the model made a
decision I would say with Intel we're
approaching it in a couple different
ways and I'm just I'm very excited to
see how again different experts approach
our problems we have a dedicated Suite
of Technologies for explainability I led
a team that was developing one of these
for Intel open Veno where again you're
getting that internal representation
analysis saleny maps and other
Technologies for explainability we also
incorporate transparency and
explainability into our algorithm so
whether that's being a to visualize
what's going on again saleny Maps or you
know really good user experience user
interface to figure out why am I being
surfaced this particular prediction or
decision from a model I'd say that's a
couple of the ways that we're
integrating and thinking about
explainability at Intel one of the
obviously the big things is around the
privacy and security of data perhaps you
could outline some of the new techniques
and new initiatives out in the industry
to try and use the power of AI but still
protect compan information and and data
I would say there's mechanisms like
differential privacy and many others
homomorphic encryption these were
incredibly popular two years ago you
kind of don't hear them a lot now so
again the hype is it it depends on the
technology of the day but yes
localization is a key thing it's
actually something I have the
opportunity to look at now as part of my
role around hyperd AI Edge versus Cloud
Edge and Cloud so there's a number of
different parameters and assumptions
that we can start to make at the edge
around localization PR privacy of data
not necessarily having to communicate it
back to the cloud that are changing the
way that we think about data privacy and
security for AI models Federated
learning is another Paradigm like this
so to put it shortly I'd say there are
mechanisms that are coming up in place
but there is still more needed emphasis
on security and privacy more development
for Technologies
Etc okay so just to extend that just a
little bit more so say if you're meeting
with an executive saying I've been
hearing all about large language models
and I was talking to my colleague
in another company and they're starting
to use chat Bots with within their
organization and using the power of that
is that related to large language models
but fine-tuning it to their own
corporate data in their own servers if
you like am I sort of on the right track
yes that is a perfect use case and thank
you for bringing that up you know
centralization of data on your server
there's also red teaming um gram that's
worth mentioning where you're testing
your model or your system thoroughly
with the generative AI space there's
come to life a lot of different types of
red teaming approaches including prompt
injection and many others which is
really around being able to test and
mock the kinds of inputs that
adversaries would provide to your model
and figure out how the model is going to
behave what are its strengths and
weaknesses Etc of course the compute
needed for that is another story but in
addition to that there's also again the
testing and validation approaches so red
teaming is really critical towards that
validating how susceptible your model is
to potential attacks whether it's biased
Etc so lot lots of lots of cool and
interesting approaches coming up but
exactly as you noted that's a key
example so going back on the ethics side
of things what are some of the arguments
for a corporation an organization to
have a clear set of code of ethics and
is Intel helping companies establish
those sorts of guidelines and
Frameworks there is a number of
different best practices that
organizations can incorporate today for
responsible eii one of them is the
internal governance assessments that we
talked about which is a step-by-step
process to checking where AI is used in
your organization how is it being
shipped outside what's your goto Market
strategy what's your change management
strategy Etc so in terms of Intel's
contributions we're very excited and
passionate about communication with
customers and partners and communities
in general around what exactly can we do
to help with the ethical AI development
that can include you know potential
compute platforms that help with running
this type of solutions pre-processing
post-processing what exactly do you need
towards that or if we have developers
working with Intel open voo and I work
in the open Veno team right now we want
to know what makes it easier for
developers to be able to run these
models and deploy them their feedback in
terms of you know hey you know is this
challenging to use I don't know how this
is working um is something that I do as
part of my evangelism team is again
helping contribute to that so I would
say that as part of the practices
there's a number of different things
that we do today with Solutions with
guard rails with assessments and at
Intel we're trying to help with the
communication the establishment of these
elements as well as the Technical
Solutions and um how we can help build
foundations that our partners customers
the community and Industry can take from
there you mentioned that you're part of
the Intel open voo group perhaps you
could spend a bit of time just
explaining what that group does and what
your role in it is sure the Intel open
Veno group is a team dedicated to
helping provide capabilities and
developing our open Vino toolkit the
toolkit is centered around computer
vision related applications and it's
recently expanded over five years to
generative Ai and it is really centered
around taking models in many different
Frameworks like pytorch tensorflow Caris
Etc and converting and optimizing them
to an intermediate representation format
that you can deploy on different
Hardware including Intel CPUs gpus and
other types of hardware and have you
seen any I guess impact on on Innovation
to to put it bluntly does having a code
of ethics put a break on Innovation and
for individual Engineers does it leave
them feeling oh maybe I shouldn't try
these things is it a hindrance the big
question yes I've encountered this
question before but my my answer um to
it is no it is not because um what again
my personal opinion and what I've also
seen at Intel and through my colleagues
mentors and Industry Academia and other
circles at the core of innovation is
certain themes like improving quality of
life Etc and as a part of if that human
rights responsible AI adoption of
Technologies and understanding why
you're using Technologies with awareness
those are all key attributes so I would
say if we're able to design the process
in a way that's efficient that is
incorporating the minimum requirements
and has the flexibility to grow with the
technology then we're doing it right and
it is not a hindrance time to go to
market is a key item however responsible
AI processes while they may take time
they don't necessarily have to hinder
that goal if they're streamlined and
done efficiently the onus is on all of
us to be able to contribute to that kind
of strategy or development of that
strategy and in terms of the AI evolving
over the next five years you know where
do you see it going human- centered AI
that is my personal opinion on it I've
done a lot of research on it I also had
the opportunity to author publication on
it technology that's centered around The
Human Experience that is contributing to
the way that we think that we act and
that we interact with others I would say
is the key thing and for me that's the
most exciting applications whether
that's smart care robots for the elderly
using generative AI for Health Care
applications identifying new protein
folding related techniques or something
similar but centered around The Human
Experience I would say so human-
centered AI is a good theme for that
overarching Journey yeah the human
centered AI is a very interesting
concept and have you seen any examples
either in the startup Community or
within Intel or in the industry where
you've given some examples but is any
that are actually like kind of in production
production
today so we have some accessibility
research that we've done with Intel you
know Lama knockman also leads the human
computer interaction lab and we see a
lot of I see a lot of great research
coming out of that around accessibility
hearing related initiatives Etc I would
say that they're in the process of being
researched right now to my knowledge
across the industry of technologies that
we can actively put in place but there
are blueprints in place for human
centered AI Technologies so it will be
exciting to see how they evolve how you
know we take into consideration newer
models like generative AI that again
popularity just kind of popped up but
they've been around for a while so we
need to see how the technology adapts
but I think it will stay true to like
the test of time um in a 5 years time
and then we will be able to see and
interact with AI applications that are
centered around our experiences around
nature Etc how do you differentiate the
two between the ethical Ai and
responsible AI um because in my mind
it's kind of a little bit in a little
bit jumbled sure I use the term actually
in overlap uh just my personal bias to
the but I I have seen that there are
differences there's been multiple
efforts to establish a nomenclature in
the ethical AI domain so responsible AI
is seen more as the internal governance
the processes and practices that we put
towards AI whereas ethical AI is seen as
really maybe kind of a combination of
the societal and Technical aspects as I
shared earlier so responsibly I in a
sense is the accountability and
responsibility part of it I talked
earlier about the future of AI how is
Intel going to be part of that wave in
terms of its programs and solutions for
customers AI is a a key inflection point
for us we are excited to ride the new
wave collaborate with our again Partners
customers communities and um see what we
can do next what's the next great big
thing uh generative AI is definitely a
key Focus for us it's what our customers
want it's what Dev Vel opers want and
it's what users want as well for their
content creation and many many other
needs so we're very focused on that
we're also incredibly focused on the
compute I see a lot of and get to work
with a lot of wonderful Engineers that
are very passionate about solving these
problems at hand specifically these um
because there's you know so much that
you can do a lot of problems in the llm
and generative AI space around you know
large models large footprint changing
outputs not a lot of predictability uh
challenging to Benchmark Etc so I think
that Intel is working on and actively
positioned to help our customers
developers provide these types of
optimizations the right kind of compute
etc for for the new wave of AI but
outside of generative AI also there's a
lot of other AI applications that we're
aware of human- centered AI Etc that
we're also actively working on so we're
ready oh that's that's good to hear I've
definitely learned quite a lot so thank
you very much for your time thank you gr
I would like to thank my guest Ria
chervu for joining me today on this
special episode of technically speaking
an Intel
podcast ethics and artificial
intelligence is so important right now
and what I've learned from today's
discussion with Ria having a code of
ethics can be an important standard
especially when it comes to deep fakes
companies in the media industry should
have a rule about never impersonating
someone without their knowledge in my
experience I've been able to clone my
own voice within a day and it's a pretty
good quality for me as an engineer and
technologist I think that's really
interesting however it does throw up a
lot of questions around ethics and
whether we should do these things the
other thing Ria touched on is human-
centered Ai and that's really
interesting from my perspective I think
technology has moved towards trying to
be human- centered and it's good to see
that AI wave that is coming is still
trying to keep humans as the center of
product and Technology design and
talking with r really did hit home to me
that it is artificial intelligence but I
am looking at the way that it can
actually augment us I think that it'll
augment our jobs I don't think on
balance that it will take away jobs you
only have to look back in history from
the printing press to the loom the AI
wave that we're going through now is
just another evolution of us as a
species and I love discussion around the
ethics and the philosophy of AI I hope
it will
continue and that's all for our first
episode thanks so much for joining me
today please join us on Tuesday October
17th for the next episode where we speak
with experts on the way AI is innovating
Agra Business Solutions you can follow
me on LinkedIn and Twitter or x with the
handle gr class or check the show notes
page for links this has been technically
technically speaking was produced by
Ruby Studios from iHeart Radio in
partnership with Intel and hosted by me
Graham class our executive producer is
Molly Soha our EP of post production is
James Foster and our supervising
producer is Nikia Swinton this episode
was edited by Siara spren and written
and produced by Tyree [Music]
Click on any text or timestamp to jump to that moment in the video
Share:
Most transcripts ready in under 5 seconds
One-Click Copy125+ LanguagesSearch ContentJump to Timestamps
Paste YouTube URL
Enter any YouTube video link to get the full transcript
Transcript Extraction Form
Most transcripts ready in under 5 seconds
Get Our Chrome Extension
Get transcripts instantly without leaving YouTube. Install our Chrome extension for one-click access to any video's transcript directly on the watch page.