0:14 hello my name is grahe class and I'm
0:15 your host for this season of technically
0:18 speaking an Intel podcast while Intel is
0:20 at the Forefront of so many cuttingedge
0:22 Technologies this season is all about
0:24 artificial intelligence and that's why
0:26 I've been tapped as your host having a
0:27 background in Tech as a software
0:29 engineer I was always interested in
0:31 merging the advanc of artificial
0:33 intelligence with my love for media this
0:35 culminated in one of my other projects
0:37 daily dad jokes an AI powered podcast
0:39 churning out jokes and humor for
0:41 listeners worldwide but artificial
0:43 intelligence can do a lot more than help
0:45 whip up a corny joke this technology has
0:47 been revolutionizing the way we engage
0:49 with the world with Innovations across
0:52 Healthcare agriculture business and even
0:54 the public sector another way that
0:56 artificial intelligence is changing the
0:58 world is through philosophy the term
1:00 ethical AI is a framework on how to use
1:02 AI what systems should be in place to
1:04 govern its use with business and
1:07 consumers in this episode we'll dive
1:08 into the ethics of artificial
1:10 intelligence with one of the Pioneers in the
1:11 the
1:13 field joining me for today's
1:16 conversation is Intel's Ria chervu Ria
1:18 can perhaps be described as the moral
1:21 compass of the company's AI as an AI
1:23 software architect and generative AI
1:25 evangelist she is charged with finding
1:27 responsible trustworthy solutions for
1:29 Intel's Internet of Things Engineering
1:31 Group her role exists at the
1:33 intersection of hardware and software
1:35 product design and effective consumer
1:38 use having studied extensively at
1:39 Harvard in the subjects of computer
1:41 science and data science her domains of
1:43 expertise are solutions for security and
1:46 privacy in machine learning fairness
1:48 explainable and responsible AI systems
1:51 uncertain AI reinforcement learning and
1:54 computational models of intelligence she
1:56 is a reoccurring keynote speaker on
1:58 issues in data science and responsible
2:00 AI we are very very excited to have her
2:02 on the podcast to share her expertise on
2:04 Intel's ethics in their AI [Music]
2:07 [Music]
2:10 development Ria welcome to the show
2:13 thank you C it's awesome to be here I've
2:16 had a look at your bio and would like to
2:18 know how did you come about to join the
2:22 Intel family sure I joined Intel in 2018
2:25 when I was 14 years old as an intern I
2:28 had yes I had an amazing Mentor who went
2:30 through all of the legal pages and the
2:32 review needed to get me to that position
2:34 so initially I interviewed with three
2:36 teams on three different areas in the AI
2:38 space one of them was around Ai and
2:40 Healthcare very theoretical and
2:42 mathematical implications in pathf
2:44 finding the other two were on software
2:46 development and profiling and the next
2:48 was on deep learning optimization
2:50 specifically so I did have the
2:51 opportunity to pick the one on
2:53 optimization for deep learning for
2:55 hardware and that is how I started off
2:57 my journey at Intel and got introduced
2:59 to it the interplay between hardware and
3:01 software is something that always Drew
3:03 my attention so when I was able to work
3:05 on that as part of my first role as an
3:08 intern I was really excited okay great
3:11 so now I understand that you're a
3:14 software AI architect can you just give
3:16 an overview of what that entails as a
3:18 software architect today I have a couple
3:19 of roles and responsibilities
3:21 corresponding to the latest and greatest
3:23 which is very exciting to me in my
3:26 day-to-day the first is generative AI so
3:28 looking at and taking into account the
3:29 different software optimizations that
3:31 we're planning for generative AI how the
3:33 workloads are shaping changes in the
3:35 algorithms over time as well as also the
3:38 associated mechanisms that we see that
3:40 are in touch with them as an evangelist
3:43 I also get to work on top of my software
3:46 architect role as a marketer and an
3:47 advocate for these Technologies so
3:49 creating very short demos and tutorials
3:52 for users to quickly grasp what exactly
3:53 is going on with this model how can I
3:55 use it in my day-to-day how can I Port
3:58 it to my use case so a lot of the focus
4:00 today for me is on gender of AI I also
4:02 look into ethical and explainable AI
4:04 tools and Technologies as part of my
4:06 pathfinding yeah I've been using
4:08 generative AI apps to do research
4:10 creating podcast artwork and
4:12 experimented with creating music so this
4:15 leads me into asking you what's your
4:18 definition of artificial intelligence
4:21 and maybe some examples of where we're
4:24 seeing it as a central Topic in the tech
4:26 world the way that I like to Define it
4:28 is something I I copied over actually
4:31 from a recent regulations on AI around
4:35 how AI models are agents or systems that
4:37 are capable of consuming and producing
4:39 data in an environment and also taking
4:41 actions that can in turn influence our
4:43 decisions there's a lot of use cases for
4:47 them everywhere Healthcare retail Etc
4:49 yeah when I talked with uh people even
4:52 in the tech world there's a lot of
4:54 confusion around okay you've got
4:55 algorithms you've got AI you've got
4:57 machine learning perhaps if you could
4:59 start with maybe some of the difference between
5:00 between
5:02 algorithms versus say AI what do you see
5:04 as the difference between the two
5:07 typical algorithms I'd say are based off
5:08 of certain schemes that we're already
5:10 aware of with machine learning you have
5:12 these new paradigms that are coming in
5:14 and completely spinning the narrative
5:16 things like continual learning very
5:18 large models different types of State
5:19 machines altoe depending on the
5:21 application you integrate it into okay
5:23 so I I would say there there are some
5:24 fundamental differences that are coming
5:26 in between algorithms and machine
5:28 learning models on that front when it
5:30 comes to use cases applic and of course
5:33 implementation as well and where I see
5:36 the power is sort of combining the
5:38 traditional sort of if then else
5:41 algorithms uh with AI and I'm just
5:43 wondering if you've seen any sort of
5:45 practical applications merging of all these
5:47 these
5:49 techniques yes and I'm very interested
5:51 in composite AI it's something that I'm
5:53 getting to work on a lot more in my
5:54 day-to-day and something that we're
5:56 actually doing a demo for at Intel
5:58 Innovation where we are chaining
6:00 multiple large language mod models
6:02 together the way I see composite AI is
6:04 being able to tie together multiple
6:06 models as part of a interface or an
6:09 application with chaining models I see
6:11 it as a subset of composite AI where you
6:13 have models that are linked to each
6:14 other and have dependencies on their
6:17 inputs and outputs it can be sometimes a
6:18 nightmare to get the dependencies all
6:20 together because you have cascading
6:22 models one after the other dependent on
6:24 each's output but it is possible and it
6:26 does give you a lot of applications and
6:28 opens up the possibilities where you can
6:30 get to a very nice user interface that
6:33 users can interact with developers can
6:34 build upon businesses and other
6:36 communities can just leverage and adopt
6:38 that is giving you a lot of capabilities
6:40 at once with ease of deployment oh
6:43 that's good now turning to the ethics
6:45 side of there which you've done quite a
6:47 lot of thinking and working how would
6:50 you define ethics in
6:53 AI with ethical AI the definition that I
6:54 like to adopt is soot technical
6:56 development of AI systems and that
6:58 involves societal and Technical aspects
7:00 but really focus on the implications and
7:02 the intentions with these algorithms in
7:04 terms of when you're talking with your
7:07 peers and colleagues has been a lot of
7:08 discussion and talk about trying to have
7:11 an a uniform ethical framework that at
7:13 least gives a common language into you
7:14 know when you're discussing these sorts
7:16 of things related to ethics in
7:19 AI there are common Frameworks that are
7:21 in place most of them are centered
7:23 around implications and intention and
7:24 how we structure that around certain
7:26 Technologies right now it's very popular
7:28 for applications to generative AI where
7:30 we see these Frameworks being put into
7:32 place around let's look at the inputs
7:34 the outputs and then the overall model
7:36 or framework and this may seem
7:38 simplistic but it really is boil down to
7:40 these very simple elements similarly for
7:42 other AI domains that are outside of
7:44 generative AI like object detection it's
7:45 very much focused on what is the
7:47 particular use case for example is it
7:49 something that is of high risk like
7:51 healthcare applications or surveillance
7:53 or is it something that's a bit lower
7:55 risk like content creation and then
7:57 seeing how exactly our user experience
7:58 and our development of those models is
8:01 echoing ethical principles so I would
8:03 say like to summarize there are
8:05 different Frameworks and summaries that
8:06 we apply but of course the templates
8:08 need to be flexible when we're talking
8:11 about ethical AI for these new AI models
8:13 how do you go about ensuring that your
8:15 staff and your engineers and your product
8:16 product
8:20 managers actually embed that ethical
8:22 framework into its AI development sure
8:24 it it's such a challenging problem even
8:26 to describe as well um as you're
8:28 mentioning it you know there's so many
8:30 different things that you can L do right
8:32 like as you mentioned policies
8:35 assessments Etc so at Intel we take a
8:37 multiple approaches towards it the one
8:39 thing that we very heavily emphasize on
8:41 is internal governance and um Lama
8:43 knockman who's my mentor and also
8:45 leading the responsible AI efforts at
8:48 Intel very neatly and concisely
8:50 describes them as guard rails that we
8:52 have internally in place and these are
8:53 really guidelines that are designed to
8:56 help our developers Engineers managers
8:57 and you know our communities and
8:59 marketers Etc understand the
9:01 implications again of what exactly are
9:03 we producing in terms of the content
9:05 what are some Technical Solutions that
9:07 we can instill mid pipeline or early on
9:09 before starting the effort when we're
9:10 getting started with AI development
9:12 efforts and I would say that that's the
9:14 core process that we focus on we're also
9:16 very heavily invested in technological
9:18 development whether that's through the
9:20 Deep fake detection work that LK deir
9:22 and team are taking on um explainable AI
9:25 tools Etc so really trying to approach
9:26 this from a governance perspective
9:28 internally from a tooling perspective
9:30 what we can provide to the developer
9:32 community and our customers and to
9:34 Partners and from a third perspective
9:36 regulations how do we influence the
9:38 industry at large and help contribute to
9:41 discussions that's really good and you
9:43 mentioned the work of llama nman and
9:44 we're actually going to be talking with
9:46 her in an upcoming episode this season
9:47 so I'm looking forward to asking her
9:49 about this as well but I think you've
9:52 said the key phrase deep fake so I might
9:54 switch to to that side of things so in
9:56 terms of the society and and culture in
9:58 general um there are some people that
10:02 are hes about AI particularly around AI
10:05 limiting jobs You've Got Deep fakes I've
10:08 actually created a clone of my voice
10:10 what do you try and do to reassure
10:12 people who have hesitations I'm
10:16 definitely not I would say not directly
10:18 enthusiastic about technologies that are
10:21 allowing for passing off as another
10:23 person for you know copying and pasting
10:26 essentially in certain cases we see the
10:27 development of those Technologies for a
10:29 certain use case and then it does start
10:31 to stray away from that into some of
10:32 these newer kind of applications that
10:34 are scary as you shared so when it comes
10:38 to reassuring individuals my family my
10:40 community as well and the industry at
10:41 large I think that it's definitely a
10:43 problem to see in a straightforward way
10:45 honestly yeah without the hype
10:47 surrounding it there is a levity
10:49 associated with the disadvantages of the
10:51 technology that we do need to consider
10:53 we also do see the benefits of them for
10:54 different things whether that's
10:56 improving your ease of using it just
10:58 being able to communicate with others
11:00 from my perspective what I try to do in
11:03 my space is to look at an honest
11:04 assessment of the technology which is
11:06 very common in the ethical AI domain and
11:08 to see what exactly is it really
11:10 contributing to the problem statement
11:11 and if it isn't contributing to it then
11:15 do we need it and in terms of Intel's I
11:19 guess method or communication with the
11:22 society and people at large are they
11:24 working on things to help
11:26 people feel a little bit more
11:27 comfortable about this new world we're
11:31 moving into yes and we we tackle it from
11:32 a couple of different fronts we've got
11:34 um some amazing teams working on
11:36 different parts of the puzzle one of
11:39 them is democratization where one of the
11:41 challenging things about AI from an
11:43 ethical AI perspective but also in
11:44 general from a development perspective
11:47 is being able to give communities access
11:48 to the technology so that they can test
11:50 it and validate it I've been speaking
11:53 about ethical AI for about two years now
11:55 or so last year we really didn't have
11:56 the same amount of tools and techniques
11:58 that we have this year and also the
12:00 popularity of testing and validating AI
12:04 systems right we always understand and I
12:06 think many compan and organizations
12:08 understand it's not a one-size fits-all
12:11 solution for ethical AI um you know many
12:12 companies and organizations are trying
12:15 to do their best so I would say that
12:17 again that that push back that community
12:18 that we're trying to create around
12:20 ethical AI is critical for us going
12:22 forward to be able to better build
12:24 Solutions has there been any case
12:25 studies within Intel that you could
12:27 share that maybe there was a real
12:30 challenging ethical
12:33 conundrum uh for producing AI software
12:36 and you know how how was it resolved how
12:38 did you work through it generative AI is
12:39 definitely a very big one so we're
12:42 always actively cautious about the types
12:44 of implications of our technology
12:45 whether or not we can incorporate
12:48 disclaimers or clarify on the intent of
12:50 it as well and um Graham one of my
12:52 favorite parts of ethical AI from a
12:53 technical perspective in terms of
12:55 solutions is something called Model
12:58 cards model cards clarify a very simple
13:00 theme around ethic AI which is you know
13:02 figure out what exactly is the intention
13:04 the core assumptions and the development
13:06 that went behind a model and what you're
13:07 going to use it for as part of
13:09 deployment and I think that for me
13:12 personally I see that that theme is
13:13 conveyed as part of our efforts in
13:14 generative AI there's a lot of
13:16 challenging things out there when it
13:18 comes to image generation copyright Etc
13:20 or even you know object detection
13:22 related Technologies for retail if you
13:24 have Solutions like intelligent Q
13:26 management or automated self-checkout it
13:28 makes sense but you know how do we keep
13:30 it for from proliferating otherwise and
13:32 what sort of work is going on with
13:35 inclusive AI diversity of stakeholders
13:37 is critical for the AI models that we're
13:39 building today whether that's detection
13:42 of skin agnostic of skin tone or being
13:44 able to adapt to different folks with
13:46 different accents so at Intel and again
13:48 across the industry I think a lot of the
13:50 efforts are really about making sure we
13:51 have the right people on board the right
13:53 experts with different backgrounds were
13:55 able to contribute to the Technologies
13:57 one thing when I was started um looking
14:00 into machine learning very quickly I got
14:02 a sense of you know being a traditional
14:05 engineer you kind of go okay input
14:06 output and you kind of know what's in
14:10 the in the black box to transform it
14:12 when I started working with AI and some
14:15 machine learning code I couldn't get a
14:17 sense of that onetoone kind of mapping
14:19 of what the output is the input and that
14:21 comes to the to transparency and
14:25 explainability of AI algorithms what are
14:27 you seeing and also what is Intel seeing
14:29 around trying to make that
14:31 understandable to the end users it's a
14:33 really interesting question because
14:34 explainability is one of the the first
14:36 topics that we think about when we think
14:38 about responsibly I and I agree the
14:41 blackbox metaphor has been used so many
14:45 times um because it's true but the key
14:47 idea is about demystifying what exactly
14:49 is going on within the model whether
14:51 that is the internal representation
14:53 again the data that it's pulling from
14:55 how the data is being leveraged feature
14:57 importance Etc there's also an added
14:59 consideration to explain ability around
15:01 surfacing that to an end user for them
15:03 to understand why the model made a
15:05 decision I would say with Intel we're
15:07 approaching it in a couple different
15:09 ways and I'm just I'm very excited to
15:11 see how again different experts approach
15:13 our problems we have a dedicated Suite
15:15 of Technologies for explainability I led
15:17 a team that was developing one of these
15:19 for Intel open Veno where again you're
15:20 getting that internal representation
15:22 analysis saleny maps and other
15:25 Technologies for explainability we also
15:26 incorporate transparency and
15:28 explainability into our algorithm so
15:29 whether that's being a to visualize
15:32 what's going on again saleny Maps or you
15:34 know really good user experience user
15:36 interface to figure out why am I being
15:38 surfaced this particular prediction or
15:40 decision from a model I'd say that's a
15:41 couple of the ways that we're
15:42 integrating and thinking about
15:45 explainability at Intel one of the
15:46 obviously the big things is around the
15:49 privacy and security of data perhaps you
15:52 could outline some of the new techniques
15:55 and new initiatives out in the industry
15:57 to try and use the power of AI but still
16:01 protect compan information and and data
16:02 I would say there's mechanisms like
16:04 differential privacy and many others
16:06 homomorphic encryption these were
16:07 incredibly popular two years ago you
16:09 kind of don't hear them a lot now so
16:11 again the hype is it it depends on the
16:13 technology of the day but yes
16:15 localization is a key thing it's
16:16 actually something I have the
16:18 opportunity to look at now as part of my
16:21 role around hyperd AI Edge versus Cloud
16:23 Edge and Cloud so there's a number of
16:25 different parameters and assumptions
16:27 that we can start to make at the edge
16:30 around localization PR privacy of data
16:31 not necessarily having to communicate it
16:33 back to the cloud that are changing the
16:35 way that we think about data privacy and
16:37 security for AI models Federated
16:39 learning is another Paradigm like this
16:41 so to put it shortly I'd say there are
16:42 mechanisms that are coming up in place
16:45 but there is still more needed emphasis
16:47 on security and privacy more development
16:49 for Technologies
16:52 Etc okay so just to extend that just a
16:54 little bit more so say if you're meeting
16:55 with an executive saying I've been
16:57 hearing all about large language models
16:59 and I was talking to my colleague
17:00 in another company and they're starting
17:02 to use chat Bots with within their
17:05 organization and using the power of that
17:07 is that related to large language models
17:09 but fine-tuning it to their own
17:12 corporate data in their own servers if
17:15 you like am I sort of on the right track
17:17 yes that is a perfect use case and thank
17:18 you for bringing that up you know
17:20 centralization of data on your server
17:22 there's also red teaming um gram that's
17:24 worth mentioning where you're testing
17:26 your model or your system thoroughly
17:28 with the generative AI space there's
17:30 come to life a lot of different types of
17:32 red teaming approaches including prompt
17:34 injection and many others which is
17:36 really around being able to test and
17:38 mock the kinds of inputs that
17:39 adversaries would provide to your model
17:41 and figure out how the model is going to
17:42 behave what are its strengths and
17:44 weaknesses Etc of course the compute
17:47 needed for that is another story but in
17:48 addition to that there's also again the
17:50 testing and validation approaches so red
17:53 teaming is really critical towards that
17:55 validating how susceptible your model is
17:57 to potential attacks whether it's biased
17:59 Etc so lot lots of lots of cool and
18:01 interesting approaches coming up but
18:02 exactly as you noted that's a key
18:05 example so going back on the ethics side
18:08 of things what are some of the arguments
18:11 for a corporation an organization to
18:14 have a clear set of code of ethics and
18:17 is Intel helping companies establish
18:19 those sorts of guidelines and
18:21 Frameworks there is a number of
18:22 different best practices that
18:24 organizations can incorporate today for
18:26 responsible eii one of them is the
18:28 internal governance assessments that we
18:30 talked about which is a step-by-step
18:32 process to checking where AI is used in
18:34 your organization how is it being
18:36 shipped outside what's your goto Market
18:38 strategy what's your change management
18:40 strategy Etc so in terms of Intel's
18:43 contributions we're very excited and
18:45 passionate about communication with
18:47 customers and partners and communities
18:50 in general around what exactly can we do
18:52 to help with the ethical AI development
18:54 that can include you know potential
18:55 compute platforms that help with running
18:58 this type of solutions pre-processing
19:00 post-processing what exactly do you need
19:02 towards that or if we have developers
19:04 working with Intel open voo and I work
19:05 in the open Veno team right now we want
19:07 to know what makes it easier for
19:09 developers to be able to run these
19:11 models and deploy them their feedback in
19:13 terms of you know hey you know is this
19:14 challenging to use I don't know how this
19:16 is working um is something that I do as
19:18 part of my evangelism team is again
19:19 helping contribute to that so I would
19:21 say that as part of the practices
19:23 there's a number of different things
19:25 that we do today with Solutions with
19:27 guard rails with assessments and at
19:29 Intel we're trying to help with the
19:31 communication the establishment of these
19:33 elements as well as the Technical
19:35 Solutions and um how we can help build
19:38 foundations that our partners customers
19:39 the community and Industry can take from
19:41 there you mentioned that you're part of
19:44 the Intel open voo group perhaps you
19:45 could spend a bit of time just
19:48 explaining what that group does and what
19:51 your role in it is sure the Intel open
19:54 Veno group is a team dedicated to
19:55 helping provide capabilities and
19:57 developing our open Vino toolkit the
19:59 toolkit is centered around computer
20:00 vision related applications and it's
20:02 recently expanded over five years to
20:05 generative Ai and it is really centered
20:07 around taking models in many different
20:10 Frameworks like pytorch tensorflow Caris
20:12 Etc and converting and optimizing them
20:14 to an intermediate representation format
20:16 that you can deploy on different
20:18 Hardware including Intel CPUs gpus and
20:21 other types of hardware and have you
20:25 seen any I guess impact on on Innovation
20:27 to to put it bluntly does having a code
20:31 of ethics put a break on Innovation and
20:33 for individual Engineers does it leave
20:35 them feeling oh maybe I shouldn't try
20:37 these things is it a hindrance the big
20:39 question yes I've encountered this
20:42 question before but my my answer um to
20:45 it is no it is not because um what again
20:46 my personal opinion and what I've also
20:49 seen at Intel and through my colleagues
20:51 mentors and Industry Academia and other
20:54 circles at the core of innovation is
20:56 certain themes like improving quality of
20:59 life Etc and as a part of if that human
21:02 rights responsible AI adoption of
21:03 Technologies and understanding why
21:05 you're using Technologies with awareness
21:08 those are all key attributes so I would
21:10 say if we're able to design the process
21:12 in a way that's efficient that is
21:14 incorporating the minimum requirements
21:16 and has the flexibility to grow with the
21:18 technology then we're doing it right and
21:20 it is not a hindrance time to go to
21:22 market is a key item however responsible
21:25 AI processes while they may take time
21:26 they don't necessarily have to hinder
21:28 that goal if they're streamlined and
21:30 done efficiently the onus is on all of
21:32 us to be able to contribute to that kind
21:34 of strategy or development of that
21:38 strategy and in terms of the AI evolving
21:40 over the next five years you know where
21:42 do you see it going human- centered AI
21:44 that is my personal opinion on it I've
21:46 done a lot of research on it I also had
21:48 the opportunity to author publication on
21:50 it technology that's centered around The
21:52 Human Experience that is contributing to
21:55 the way that we think that we act and
21:57 that we interact with others I would say
21:59 is the key thing and for me that's the
22:00 most exciting applications whether
22:03 that's smart care robots for the elderly
22:05 using generative AI for Health Care
22:08 applications identifying new protein
22:09 folding related techniques or something
22:11 similar but centered around The Human
22:13 Experience I would say so human-
22:15 centered AI is a good theme for that
22:18 overarching Journey yeah the human
22:20 centered AI is a very interesting
22:23 concept and have you seen any examples
22:25 either in the startup Community or
22:28 within Intel or in the industry where
22:31 you've given some examples but is any
22:32 that are actually like kind of in production
22:34 production
22:37 today so we have some accessibility
22:38 research that we've done with Intel you
22:40 know Lama knockman also leads the human
22:42 computer interaction lab and we see a
22:43 lot of I see a lot of great research
22:46 coming out of that around accessibility
22:48 hearing related initiatives Etc I would
22:50 say that they're in the process of being
22:52 researched right now to my knowledge
22:54 across the industry of technologies that
22:56 we can actively put in place but there
22:58 are blueprints in place for human
23:00 centered AI Technologies so it will be
23:02 exciting to see how they evolve how you
23:04 know we take into consideration newer
23:06 models like generative AI that again
23:07 popularity just kind of popped up but
23:09 they've been around for a while so we
23:11 need to see how the technology adapts
23:13 but I think it will stay true to like
23:15 the test of time um in a 5 years time
23:17 and then we will be able to see and
23:19 interact with AI applications that are
23:21 centered around our experiences around
23:23 nature Etc how do you differentiate the
23:26 two between the ethical Ai and
23:29 responsible AI um because in my mind
23:30 it's kind of a little bit in a little
23:33 bit jumbled sure I use the term actually
23:36 in overlap uh just my personal bias to
23:39 the but I I have seen that there are
23:41 differences there's been multiple
23:43 efforts to establish a nomenclature in
23:45 the ethical AI domain so responsible AI
23:47 is seen more as the internal governance
23:49 the processes and practices that we put
23:52 towards AI whereas ethical AI is seen as
23:54 really maybe kind of a combination of
23:56 the societal and Technical aspects as I
23:58 shared earlier so responsibly I in a
23:59 sense is the accountability and
24:02 responsibility part of it I talked
24:05 earlier about the future of AI how is
24:07 Intel going to be part of that wave in
24:10 terms of its programs and solutions for
24:13 customers AI is a a key inflection point
24:15 for us we are excited to ride the new
24:18 wave collaborate with our again Partners
24:20 customers communities and um see what we
24:22 can do next what's the next great big
24:25 thing uh generative AI is definitely a
24:27 key Focus for us it's what our customers
24:29 want it's what Dev Vel opers want and
24:30 it's what users want as well for their
24:32 content creation and many many other
24:34 needs so we're very focused on that
24:36 we're also incredibly focused on the
24:39 compute I see a lot of and get to work
24:40 with a lot of wonderful Engineers that
24:43 are very passionate about solving these
24:45 problems at hand specifically these um
24:46 because there's you know so much that
24:49 you can do a lot of problems in the llm
24:51 and generative AI space around you know
24:53 large models large footprint changing
24:55 outputs not a lot of predictability uh
24:58 challenging to Benchmark Etc so I think
25:00 that Intel is working on and actively
25:03 positioned to help our customers
25:05 developers provide these types of
25:07 optimizations the right kind of compute
25:09 etc for for the new wave of AI but
25:11 outside of generative AI also there's a
25:12 lot of other AI applications that we're
25:15 aware of human- centered AI Etc that
25:18 we're also actively working on so we're
25:21 ready oh that's that's good to hear I've
25:23 definitely learned quite a lot so thank
25:25 you very much for your time thank you gr
25:32 I would like to thank my guest Ria
25:33 chervu for joining me today on this
25:35 special episode of technically speaking
25:37 an Intel
25:39 podcast ethics and artificial
25:41 intelligence is so important right now
25:43 and what I've learned from today's
25:44 discussion with Ria having a code of
25:46 ethics can be an important standard
25:49 especially when it comes to deep fakes
25:51 companies in the media industry should
25:53 have a rule about never impersonating
25:55 someone without their knowledge in my
25:57 experience I've been able to clone my
26:00 own voice within a day and it's a pretty
26:02 good quality for me as an engineer and
26:04 technologist I think that's really
26:07 interesting however it does throw up a
26:08 lot of questions around ethics and
26:10 whether we should do these things the
26:12 other thing Ria touched on is human-
26:14 centered Ai and that's really
26:17 interesting from my perspective I think
26:19 technology has moved towards trying to
26:22 be human- centered and it's good to see
26:25 that AI wave that is coming is still
26:28 trying to keep humans as the center of
26:31 product and Technology design and
26:34 talking with r really did hit home to me
26:36 that it is artificial intelligence but I
26:38 am looking at the way that it can
26:41 actually augment us I think that it'll
26:44 augment our jobs I don't think on
26:46 balance that it will take away jobs you
26:48 only have to look back in history from
26:52 the printing press to the loom the AI
26:54 wave that we're going through now is
26:56 just another evolution of us as a
26:58 species and I love discussion around the
27:01 ethics and the philosophy of AI I hope
27:03 it will
27:05 continue and that's all for our first
27:07 episode thanks so much for joining me
27:09 today please join us on Tuesday October
27:11 17th for the next episode where we speak
27:14 with experts on the way AI is innovating
27:16 Agra Business Solutions you can follow
27:20 me on LinkedIn and Twitter or x with the
27:22 handle gr class or check the show notes
27:25 page for links this has been technically
27:31 technically speaking was produced by
27:33 Ruby Studios from iHeart Radio in
27:35 partnership with Intel and hosted by me
27:38 Graham class our executive producer is
27:40 Molly Soha our EP of post production is
27:42 James Foster and our supervising
27:45 producer is Nikia Swinton this episode
27:48 was edited by Siara spren and written
27:50 and produced by Tyree [Music]