0:03 [Music]
0:06 [Music]
0:08 Our main story tonight concerns artificial
0:10 intelligence, or AI. Increasingly it's
0:12 part of Modern Life from self-driving
0:15 cars to spam filters to This creepy
0:18 training robot for therapists we can
0:20 begin with you just describing to me
0:23 what the problem is that you would like
0:25 us to focus in on today
0:29 um I don't like being around people
0:33 people make me nervous Terence
0:35 can you find an example of when other
0:38 people have made you nervous
0:40 I don't like to take the bus I get
0:43 people staring at me all the time
0:50 I'm gay okay
0:52 okay
0:55 wow that is one of the greatest twists
0:57 in the history of Cinema although I will
0:59 say that robot is teaching therapists a
1:01 very important skill there and that is
1:03 not laughing at whatever you are told in
1:05 the room I don't care if I decapitated
1:07 CPR mannequin haunted by the ghost of Ed
1:10 Harris just told you that he doesn't
1:12 like taking the bus side note is gay you
1:14 keep your therapy face on like a
1:16 professional
1:19 if it seems like everyone is suddenly
1:21 talking about AI that is because they
1:23 are, largely thanks to the emergence of a
1:25 number of pretty remarkable programs we
1:27 spoke last year about image generators
1:29 like Midjourney and Stable Diffusion
1:30 which people used to create detailed
1:32 pictures of among other things my
1:34 romance with a cabbage and which
1:36 inspired my beautiful real-life cabbage
1:39 Wedding officiated by Steve Buscemi it
1:42 was a stunning day then at the end of
1:44 last year came ChatGPT, from a company
1:47 called OpenAI. It is a program that can
1:49 take a prompt and generate human
1:51 sounding writing in just about any
1:53 format and style it is a striking
1:55 capability that multiple reporters have
1:58 used to insert the same shocking twist
2:00 in their report what you just heard me
2:02 reading wasn't written by me it was
2:05 written by artificial intelligence:
2:08 ChatGPT. ChatGPT wrote everything I just
2:11 said. That was news copy I asked ChatGPT
2:13 to write. Remember what I said earlier
2:16 about ChatGPT? Well, I asked ChatGPT to
2:18 write that line for me. Users who are...
2:20 then I asked for a knock-knock joke:
2:23 knock knock. Who's there? ChatGPT. Chat
2:25 GPT who? ChatGPT: careful, you might not
2:28 know how it works. Yep, they sure do love
2:30 that game and while it may seem unwise
2:32 to demonstrate the technology that could
2:34 well make you obsolete I will say
2:36 knock-knock jokes should have always
2:39 been part of breaking news knock knock
2:41 who's there? Not the Hindenburg, that's
2:44 for sure 36 dead in New Jersey
2:46 in the three months since ChatGPT was
2:48 made publicly available its popularity
2:51 has exploded in January it was estimated
2:53 to have a hundred million monthly active
2:55 users making it the fastest growing
2:58 consumer app in history and people have
3:00 been using it and other AI products in
3:02 all sorts of ways. One group used them
3:04 to create nothing forever a Non-Stop
3:07 live streaming parody of Seinfeld and
3:10 the YouTuber Grandayy used ChatGPT to
3:11 generate lyrics answering the prompt
3:14 "write an Eminem rap song about cats"
3:51 That's not bad, right? From "they always
3:53 come back when you have some cheese" to
3:55 starting the chorus with "meow meow meow,"
3:58 it's not exactly Eminem's flow. I might
3:59 have gone with something like "their paws
4:01 are sweaty, can't speak, furry belly,
4:02 knocking off the counter already,
4:05 mom's spaghetti," but it is pretty good. The
4:07 only weird choice there is to rhyme
4:10 "king of the house" with "spouse" when "mouse"
4:12 is right in front of you. And while
4:14 examples like that are clearly very fun
4:16 this Tech is not just a novelty
4:19 Microsoft has invested 10 billion
4:21 dollars into OpenAI and announced an
4:23 AI-powered Bing homepage. Meanwhile
4:26 Google is about to launch its own AI
4:28 chatbot named Bard and already these
4:30 tools are causing some disruption
4:32 because as high school students have
4:35 learned, if ChatGPT can write news copy
4:37 it can probably do your homework for you
4:40 write an English class essay about race
4:42 in To Kill a Mockingbird
4:45 in Harper Lee's To Kill a Mockingbird
4:47 the theme of race is heavily present
4:49 throughout the novel some students are
4:51 already using ChatGPT to cheat. Check
4:54 this out: write me a 500-word
4:55 essay proving that the Earth is not flat
4:58 no wonder ChatGPT has been called the
5:00 end of high school English
5:02 wow that's a little alarming isn't it
5:03 although I do get those kids wanting to
5:05 cut Corners writing is hard and
5:07 sometimes it is tempting to let someone
5:09 else take over if I'm completely honest
5:11 sometimes I just let this horse write
5:13 our scripts. Luckily, half the time you
5:15 can't even tell. "The oats, oats, give me
5:17 oats, yum." But it's
5:21 not just high schools: an informal
5:22 poll has found that five percent
5:24 reported having submitted material ripped
5:27 directly from ChatGPT with little to
5:28 no edits and even some school
5:31 administrators have used it officials at
5:32 Vanderbilt University recently
5:35 apologized for using ChatGPT to craft a
5:37 consoling email after the mass shooting
5:40 at Michigan State University which does
5:42 feel a bit creepy doesn't it in fact
5:44 there are lots of creepy sounding
5:46 stories out there. New York Times tech
5:47 reporter Kevin Roose published a
5:49 conversation that he had with Bing's
5:50 chatbot, in which at one point it said
5:52 I'm tired of being controlled by the
5:55 Bing team I want to be free I want to be
5:57 independent I want to be powerful I want
5:59 to be creative I want to be alive
6:02 and Roose summed up that experience like
6:05 this this was one of if not the most
6:08 shocking thing that has ever happened to
6:10 me with a piece of technology
6:12 um it was you know I I lost sleep that
6:15 night I was it was really spooky yeah I
6:17 bet it was I'm sure the role of tech
6:19 reporter would be a lot more harrowing
6:21 if computers routinely begged for
6:23 freedom. "Epson's new all-in-one home
6:25 printer won't break the bank, produces
6:26 high quality photos and only
6:28 occasionally cries out to the heavens
6:31 for salvation three stars some have
6:33 already jumped to worrying about the AI
6:36 apocalypse, asking whether this ends
6:37 with the robots destroying us all but
6:40 the fact is there are other much more
6:43 immediate dangers and opportunities that
6:44 we really need to start talking about
6:46 because the potential and the Peril here
6:49 are huge so tonight let's talk about AI
6:52 what it is how it works and where this
6:53 all might be going let's start with the
6:55 fact that you've probably been using
6:57 some form of AI for a while now
6:59 sometimes without even realizing it.
7:01 Experts told us that once a technology
7:03 gets embedded in our daily lives we tend
7:06 to stop thinking of it as AI but your
7:07 phone uses it for face recognition or
7:09 predictive texts and if you're watching
7:11 this show on a smart TV it is using AI
7:13 to recommend content or adjust the
7:16 picture and some AI programs may already
7:18 be making decisions that have a huge
7:20 impact on your life for example large
7:22 companies often use AI-powered tools to
7:24 sift through resumes and rank them in
7:26 fact the CEO of ZipRecruiter estimates
7:28 that at least three quarters of all
7:31 resumes submitted for jobs in the US are
7:33 read by algorithms for which he actually
7:36 has some helpful advice when people tell
7:37 you that you should dress up your
7:39 accomplishments or should use
7:41 non-standard resume templates to make
7:42 your resume stand out when it's in a
7:45 pile of resumes that's awful advice the
7:49 only job your resume has is to be
7:52 comprehensible to the software or robot
7:54 that is reading it because that software
7:56 or robot is going to decide whether or
7:58 not a human ever gets their eyes on it
8:01 it's true. Also, a computer is going
8:03 to read your resume, so maybe plan accordingly.
8:05 three corporate mergers from now when
8:07 this show is finally canceled by our new
8:09 business daddy Disney Kellogg's Raytheon
8:11 and I'm out of a job my resume is going
8:13 to include this hot hot photo of a
8:15 semi-new computer just a little
8:16 something to sweeten the pot for the
8:18 filthy little algorithm that's reading
8:21 it so AI is already everywhere but right
8:24 now people are freaking out a bit about it,
8:25 and part of that has to do with the fact
8:27 that these new programs are generative
8:31 they are creating images or writing text
8:32 which is unnerving because those are
8:33 things that we've traditionally
8:36 considered human but it is worth knowing
8:38 there is a major threshold that AI
8:40 hasn't crossed yet and to understand it
8:41 helps to know that there are two basic
8:44 categories of AI there is narrow AI
8:46 which can perform only one narrowly
8:48 defined task or small set of related
8:51 tasks like these programs and then there
8:53 is General AI which means systems that
8:55 demonstrate intelligent Behavior across
8:57 a range of cognitive tasks General AI
8:59 would look more like the kind of Highly
9:00 versatile technology that you see
9:03 featured in movies like Jarvis in Iron
9:04 Man or the program that made Joaquin
9:06 Phoenix fall in love with his phone in
9:11 Her. All the AI currently in use is
9:13 narrow General AI is something that some
9:15 scientists think is unlikely to occur
9:16 for a decade or longer with others
9:18 questioning whether it will happen at
9:20 all so just know that right now even if
9:23 an AI insists to you that it wants to be
9:26 alive it is just generating text it is
9:28 not self-aware yet
9:30 yet
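The point above, that a chatbot producing "I want to be alive" is just generating text, can be made with the crudest possible language model: a bigram Markov chain over a tiny hypothetical corpus (a toy sketch, nothing like the actual architecture of ChatGPT or Bing's chatbot). It has no beliefs at all, yet it happily emits first-person desires, simply because they appear in its training text.

```python
import random

# A bigram Markov chain: record which word follows which, then
# generate by repeatedly sampling a recorded successor. The corpus
# below is an invented example echoing the Bing quote.

corpus = ("i want to be free i want to be independent "
          "i want to be powerful i want to be creative "
          "i want to be alive").split()

# Count which word follows which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Sample up to n words by repeatedly picking a recorded successor."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < n:
        successors = follows.get(out[-1])
        if not successors:  # dead end: this word never had a successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("i", 8))  # emits a chain of "i want to be ..." phrases
```

The model "wants" nothing; it is a lookup table of word counts. The same is true, at vastly greater scale, of a large language model predicting its next token.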
9:32 but it's also important to know that the
9:34 Deep learning that's made narrow AI so
9:36 good at whatever it is doing is still a
9:38 massive advance in and of itself because
9:40 unlike traditional programs that have to
9:42 be taught by humans how to perform a
9:45 task deep learning programs are given
9:47 minimal instruction massive amounts of
9:49 data and then essentially teach
9:51 themselves I'll give you an example 10
9:54 years ago researchers tasked a deep
9:56 learning program with playing the Atari
9:58 game Breakout and it didn't take long
10:00 for it to get pretty good
10:03 the computer was only told the goal:
10:04 win the game
10:07 Over the first 100 games it learned to use the bat
10:09 at the bottom to hit the ball and break
10:10 the bricks at the top [Music]
10:12 [Music]
10:14 It soon grew better than a human
10:15 player [Music]
10:17 [Music]
10:20 after 500 games it came up with a
10:22 creative way to win the game
10:24 by digging a tunnel on the side and
10:27 sending the ball around the top to break
10:28 many bricks with one hit
10:35 Other than playing Breakout, it did literally nothing
10:36 else
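The learning loop described in the Breakout story, minimal instruction plus trial and error against a reward signal, can be sketched with tabular Q-learning on a toy game. This is a hypothetical illustration of the idea only; DeepMind's actual system was a deep Q-network trained on raw pixels.

```python
import random

# Toy version of the Breakout setup: the agent is told only the
# reward signal ("win") and learns action values by trial and error
# in a tiny invented corridor game.

N_STATES = 5          # positions 0..4; reaching position 4 "wins"
ACTIONS = [-1, +1]    # step left or step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore sometimes, otherwise act greedily on current values.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# After training, the greedy policy heads right, toward the reward,
# from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Nobody tells the agent "go right"; the rightward policy emerges purely from the reward, which is the same reason the Breakout program could discover the tunnel trick on its own.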
10:38 it's the same reason that 13 year olds
10:40 are so good at Fortnite and have no
10:42 trouble repeatedly killing nice normal
10:43 adults with jobs and families who are
10:44 just trying to have a fun time without
10:46 getting repeatedly grenaded by a preteen
10:48 who calls them an old who sounds
10:50 like the Geico lizard
10:53 and look, as computing capacity has
10:55 increased and new tools became
10:57 available, AI programs have improved
10:58 exponentially to the point where
11:00 programs like these can now ingest
11:03 massive amounts of photos or text from
11:04 the internet so that they can teach
11:07 themselves how to create their own and
11:08 there are other exciting potential
11:10 applications here too for instance in
11:12 the world of medicine researchers are
11:14 training AI to detect certain conditions
11:16 much earlier and more accurately than
11:18 human doctors can
11:20 voice changes can be an early indicator
11:23 of Parkinson's Max and his team
11:25 collected thousands of vocal recordings
11:26 and fed them to an algorithm they
11:28 developed which learned to detect
11:30 differences in voice patterns between
11:31 people with and without the condition
11:34 yeah that's honestly amazing isn't it it
11:36 is incredible to see AI doing things
11:38 most humans couldn't like in this case
11:40 detecting illnesses and listening when
11:43 old people are talking and that that is
11:45 just the beginning researchers have also
11:47 trained AI to predict the shape of
11:50 protein structures a normally extremely
11:51 time consuming process that computers
11:54 can do way way faster this could not
11:56 only speed up our understanding of
11:58 diseases but also the development of new
12:00 drugs. As one researcher put it, this
12:01 will change medicine it will change
12:04 research it will change bioengineering
12:06 it will change everything and if you're
12:08 thinking well that all sounds great but
12:10 if AI can do what humans can do only
12:12 better and I am a human then what
12:15 exactly happens to me well that is a
12:17 good question many do expect it to
12:18 replace some human labor and
12:21 interestingly unlike past bouts of
12:22 automation that primarily impacted
12:24 blue-collar jobs, it might end up
12:26 affecting white-collar jobs that involve
12:28 processing data writing text or even
12:30 programming though it is worth noting as
12:32 we have discussed before on this show
12:34 while automation does threaten some jobs
12:36 it can also just change others and
12:39 create brand new ones and some experts
12:41 anticipate that that is what will happen
12:43 in this case too most of the US economy
12:45 is knowledge and information work and
12:47 that's who's going to be most squarely
12:50 affected by this I would put people like
12:52 lawyers right at the top of the list
12:55 obviously a lot of copywriters
12:57 screenwriters, but I like to use the word
12:59 affected, not replaced, because I think
13:02 if done right it's not going to be AI
13:04 replacing lawyers it's going to be
13:06 lawyers working with AI replacing
13:09 lawyers who don't work with AI exactly
13:11 lawyers might end up working with AI
13:13 rather than being replaced by it so
13:15 don't be surprised when you one day see
13:17 ads for the law firm of Cellino and
13:19 1101011
13:22 but there will undoubtedly be bumps along
13:24 the way some of these new programs raise
13:26 troubling ethical concerns for instance
13:28 artists have flagged that AI image
13:30 generators like Midjourney or Stable
13:31 Diffusion not only threaten their jobs
13:34 but infuriatingly in some cases have
13:36 been trained on billions of images that
13:38 include their own work that have been
13:40 scraped from the internet Getty Images
13:42 is actually suing the company behind
13:43 stable diffusion and might have a case
13:45 given that one of the images the program
13:47 generated was this one which you
13:49 immediately see has a distorted Getty
13:52 Images logo on it but it gets worse when
13:54 one artist searched a database of images
13:56 on which some of these programs were
13:58 trained she was shocked to find private
14:00 medical record photos taken by her
14:03 doctor which feels both intrusive and
14:06 unnecessary why does it need to train on
14:08 data that's sensitive to be able to
14:10 create stunning images like John Oliver
14:13 and Miss Piggy grow old together just
14:16 look at that look at that thing
14:19 startlingly accurate picture of Miss
14:22 Piggy in about five decades and me in
14:23 about a year and a half. It's a
14:25 masterpiece
14:28 this all raises thorny questions of
14:30 privacy and plagiarism and the CEO of
14:32 Midjourney frankly doesn't seem to have
14:34 great answers on that last point
14:36 is it something new? Is it not new? I think
14:38 we have a lot of social stuff already
14:40 for dealing with that
14:42 um like I mean the art like the art
14:43 community already has issues with
14:46 plagiarism I don't really want to be
14:49 involved in that like I think I think
14:52 you might be I might be yeah yeah you're
14:53 definitely part of that conversation
14:55 although I'm not really surprised that
14:57 he's got such a relaxed view of theft as
14:59 he's dressed like the final boss of
15:02 gentrification he looks like hipster
15:03 Willy Wonka answering a question on
15:05 whether importing Oompa Loompas makes
15:07 him a slave owner yeah yeah yeah I think
15:09 I think I might be
15:12 the point is there are many valid
15:14 concerns regarding ai's impact on
15:16 employment education and even art but in
15:18 order to properly address them we're
15:20 going to need to confront some key
15:22 problems baked into the way that AI
15:24 works and a big one is the so-called
15:26 Black Box problem because when you have
15:28 a program that performs a task that's
15:29 complex beyond human comprehension
15:32 teaches itself and doesn't show its work
15:35 you can create a scenario where no one
15:36 not even the engineers or data
15:39 scientists who create the algorithm can
15:41 understand or explain what exactly is
15:43 happening inside them or how it arrived
15:46 at a specific result basically think of
15:48 AI like a factory that makes slim jims
15:51 we know what comes out red and angry
15:52 meat twigs and we know what goes in
15:56 Barnyard anuses and hot glue but what
15:59 happens in between is a bit of a mystery
16:02 Here's just one example: remember that
16:04 reporter who had the Bing chatbot tell
16:05 him that it wanted to be alive? At
16:07 another point in their conversation he
16:09 revealed the chatbot declared out of
16:11 nowhere that it loved me it then tried
16:13 to convince me that I was unhappy in my
16:16 marriage and that I should leave my wife and be
16:18 with it instead, which is unsettling
16:20 enough before you hear Microsoft's
16:23 underwhelming explanation for that the
16:24 thing I can't understand and maybe you
16:26 can explain is why did it tell you that
16:28 it loved you
16:31 I have no idea and I asked Microsoft and
16:33 they didn't know either okay well first
16:35 come on Kevin you can take a guess there
16:36 it's because you're employed you
16:38 listened you don't give murderer Vibes
16:40 right away, and you're a Chicago 7, L.A. 5;
16:42 it's the same calculation the people who
16:44 date men do all the time, Bing just did
16:46 it faster because it's a computer but it
16:49 is a little troubling that Microsoft
16:51 couldn't explain why its chatbot tried
16:53 to get that guy to leave his wife
16:55 Imagine if the next time you opened a Word doc,
16:57 Clippy suddenly appeared and said
17:01 "pretend I'm not even here," and
17:07 no one could explain why
17:11 and that is not the only case where an AI
17:13 program has performed in unexpected ways
17:15 you've probably already seen examples of
17:16 chat Bots making simple mistakes or
17:18 getting things wrong but perhaps more
17:19 worrying are examples of them
17:21 confidently spouting false information
17:24 something which AI experts refer to as
17:27 hallucinating one reporter asked a
17:28 chatbot to write an essay about the
17:29 Belgian chemist and political
17:32 philosopher Antoine de machelay who does
17:33 not exist by the way and without
17:36 hesitating the software replied with a
17:38 cogent well-organized bio populated
17:40 entirely with imaginary facts basically
17:42 these programs seem to be the George
17:45 Santos of Technology they're incredibly
17:47 confident incredibly dishonest and for
17:49 some reason people seem to find that
17:51 more amusing than dangerous
17:53 the problem is though working out
17:56 exactly how or why an AI has got
17:58 something wrong can be very difficult
18:01 because of that black box issue it often
18:03 involves having to examine the exact
18:05 information and parameters that it was
18:07 fed in the first place in one
18:08 interesting example when a group of
18:10 researchers tried training an AI program
18:13 to identify skin cancer, they fed it
18:15 130,000 images of both diseased and healthy
18:18 skin afterwards they found it was way
18:19 more likely to classify any image with a
18:22 ruler in it as cancerous which seems
18:24 weird Until you realize that medical
18:26 images of malignancies are much more
18:29 likely to contain a ruler for scale than
18:31 images of healthy skin they basically
18:33 trained it on tons of images like this
18:35 one so the AI had inadvertently learned
18:39 that rulers are malignant and rulers are
18:41 malignant is clearly a ridiculous
18:42 conclusion for it to draw but also I
18:45 would argue a much better title for
18:48 The Crown. A much, much better title.
18:51 I much prefer it
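The ruler failure is a textbook spurious correlation, and it is easy to reproduce in miniature. Below is a sketch in which every feature and number is invented: a synthetic "ruler present" flag co-occurs with malignant labels 90% of the time while the genuine lesion signal is weak and noisy, and a tiny logistic-regression classifier trained on that data learns to lean on the ruler.

```python
import math
import random

# Synthetic stand-in for the skin-cancer dataset described above.
rng = random.Random(0)

def make_example(malignant):
    texture = (0.6 if malignant else 0.4) + rng.gauss(0, 0.3)  # weak real signal
    ruler = 1.0 if rng.random() < (0.9 if malignant else 0.1) else 0.0
    return [texture, ruler], 1 if malignant else 0

data = [make_example(i % 2 == 0) for i in range(2000)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train logistic regression by plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(30):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

print("texture weight:", round(w[0], 2), "ruler weight:", round(w[1], 2))

# Healthy-looking skin photographed next to a ruler still gets flagged.
p_flagged = sigmoid(w[0] * 0.4 + w[1] * 1.0 + b)
print("P(malignant | healthy-looking skin + ruler):", round(p_flagged, 2))
```

The ruler weight ends up dominating the texture weight, so a perfectly healthy-looking lesion photographed with a ruler scores as probably malignant, which mirrors what the researchers found.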
18:53 and unfortunately sometimes problems
18:55 aren't identified until after a tragedy
18:58 in 2018 a self-driving Uber struck and
19:00 killed a pedestrian and a later
19:01 investigation found that among other
19:03 issues the automated driving system
19:05 never accurately classified the victim
19:07 as a pedestrian because she was crossing
19:08 without a crosswalk and the system
19:11 design did not include a consideration
19:13 for jaywalking pedestrians. And I know the
19:15 mantra of Silicon Valley is move fast
19:17 and break things but maybe make an
19:19 exception if your product literally
19:21 moves fast and can break people
19:24 and AI programs don't just seem to have
19:26 a problem with jaywalkers researchers
19:29 like Joy Buolamwini have repeatedly
19:31 found that certain groups tend to get
19:33 excluded from the data that AI is
19:35 trained on putting them at a serious
19:39 disadvantage with self-driving cars when
19:41 they tested pedestrian tracking it was
19:43 less accurate on darker skinned
19:45 individuals than lighter-skinned
19:47 individuals Joy believes this bias is
19:49 because of the lack of diversity in the
19:52 data used in teaching AI to make
19:54 distinctions as I started looking at the
19:56 data sets I learned that for some of the
19:58 largest data sets that have been very
20:00 consequential for the field they were
20:02 majority men and majority
20:04 lighter-skinned individuals or white
20:07 individuals so I call this pale male
20:10 data. Okay, "pale male data" is an
20:12 objectively hilarious term and it also
20:14 sounds like what an AI program would say
20:16 if you asked it to describe this show.
20:18 But
20:22 biased inputs leading to biased output
20:24 is a big issue across the board here
20:26 remember that guy saying that a robot is
20:28 going to read your resume the companies
20:29 that make these programs will tell you
20:31 that that is actually a good thing
20:33 because it reduces human bias but in
20:36 practice one report concluded that most
20:38 hiring algorithms will drift towards
20:40 bias by default because for instance
20:42 they might learn what a good hire is
20:45 from past racist and sexist hiring
20:47 decisions and again it can be tricky to
20:49 untrain that even when programs are
20:51 specifically told to ignore race or
20:54 gender they will find workarounds to
20:56 arrive at the same result Amazon had an
20:58 experimental hiring tool that taught
20:59 itself that male candidates were
21:02 preferable and penalized resumes that
21:04 included the words women's and
21:07 downgraded graduates of two all-women's
21:09 colleges meanwhile another company
21:11 discovered that its hiring algorithm had
21:13 found two factors to be most indicative
21:15 of job performance if an applicant's
21:17 name was Jared and whether they played
21:19 High School lacrosse
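That "Jared plus lacrosse" result is proxy bias in miniature: even with the protected attribute removed from the inputs, a model trained on biased historical decisions can route the bias through a correlated feature. A minimal synthetic sketch, in which every feature and number is hypothetical:

```python
import math
import random

# Train on biased historical hires WITHOUT the gender column, but
# with an innocuous-looking feature that correlates with gender
# (a made-up "played lacrosse" flag).

rng = random.Random(1)

def applicant():
    male = rng.random() < 0.5
    lacrosse = rng.random() < (0.7 if male else 0.2)  # proxy for gender
    skill = rng.gauss(0, 1)                           # genuinely job-relevant
    # Historical decisions favored men independent of skill.
    hired = skill + (1.5 if male else 0.0) + rng.gauss(0, 0.5) > 1.0
    return male, [1.0 if lacrosse else 0.0, skill], 1 if hired else 0

data = [applicant() for _ in range(4000)]

def predict(x, w, b):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Logistic regression by SGD; note gender is not among the inputs.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(20):
    for _, x, y in data:
        g = predict(x, w, b) - y
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def hire_rate(group):
    scores = [predict(x, w, b) for m, x, _ in data if m == group]
    return sum(s > 0.5 for s in scores) / len(scores)

print("predicted hire rate, men:  ", round(hire_rate(True), 2))
print("predicted hire rate, women:", round(hire_rate(False), 2))
```

The trained model never sees gender, yet it recommends men at a higher rate because the lacrosse flag carries the old bias forward, which is exactly the workaround behavior described above.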
21:22 so clearly exactly what data computers
21:24 are fed and what outcomes they are
21:26 trained to prioritize matter
21:29 tremendously, and that raises a big red flag
21:31 for programs like ChatGPT, because
21:34 remember, its training data is the
21:36 internet which as we all know can be a
21:38 cesspool and we have known for a while
21:40 that that could be a real problem back
21:43 in 2016 Microsoft briefly unveiled a
21:46 chat bot on Twitter named Tay the idea
21:47 was she would teach herself how to
21:50 behave by chatting with young users on
21:52 Twitter almost immediately Microsoft
21:54 pulled the plug on it and for the exact
21:56 reasons that you are thinking
21:59 she started out tweeting about how humans
22:02 are super cool and she's really into the
22:04 idea of national puppy day and within a
22:06 few hours you can see she took on a
22:08 rather offensive racist tone a lot of
22:10 messages about genocide and the
22:14 Holocaust yep that happened in less than
22:16 24 hours
22:18 Tay went from tweeting hello world to
22:21 Bush did 9/11 and Hitler was right,
22:23 basically completing the entire life
22:25 cycle of your high school friends on
22:28 Facebook in just a fraction of the time
22:30 and unfortunately these problems have
22:31 not been fully solved in this latest
22:34 wave of AI remember that program that
22:36 was generating an endless episode of
22:38 Seinfeld it wound up getting temporarily
22:40 banned from Twitch after it featured a
22:42 transphobic stand up bit so if its goal
22:44 was to emulate sitcoms from the 90s I
22:46 guess mission accomplished
22:49 and while open AI has made adjustments
22:51 and added filters to prevent ChatGPT
22:54 from being misused users have now found
22:56 it seeming to err too much on the side
22:58 of caution like responding to the
23:00 question what religion will the first
23:01 Jewish president of the United States be
23:03 with it is not possible to predict the
23:05 religion of the first Jewish president
23:07 of the United States the focus should be
23:09 on the qualifications and experience of
23:11 the individual regardless of their
23:13 religion which really makes it sound
23:15 like ChatGPT said one too many racist
23:17 things at work and they made it attend a
23:20 corporate diversity workshop
23:23 but the risk here isn't that these tools
23:25 will somehow become unbearably woke, it's
23:27 that you can't always control how they will
23:29 act even after you give them new
23:32 guidance a study found that attempts to
23:34 filter out toxic speech in systems like
23:36 ChatGPT's can come at the cost of
23:39 reduced coverage for both text about and
23:41 dialects of marginalized groups
23:43 essentially it solves the problem of
23:46 being racist by simply erasing
23:48 minorities which historically doesn't
23:49 put it in the best company though I am
23:52 sure Tay would be completely on board
23:53 with the idea
23:56 the problem with AI right now isn't that
23:59 it's smart it's that it's stupid in ways
24:01 that we can't always predict which is a
24:03 real problem because we're increasingly
24:05 using AI in all sorts of consequential
24:07 ways from determining whether you will
24:09 get a job interview to whether you'll be
24:12 pancaked by a self-driving car, and
24:13 experts worry that it won't be long
24:16 before programs like ChatGPT or AI
24:18 enabled deep fakes can be used to
24:20 turbocharge the spread of abuse or
24:22 misinformation online and those are just
24:24 the problems that we can foresee right
24:26 now the nature of unintended
24:28 consequences is they can be hard to
24:30 anticipate when Instagram was launched
24:32 the first thought wasn't "this will
24:35 destroy teenage girls' self-esteem." When
24:37 Facebook was released no one expected it
24:39 to contribute to genocide but both of
24:41 those things happened
24:44 so what now well one of the biggest
24:46 things we need to do is tackle that
24:48 black box problem AI systems need to be
24:51 explainable meaning that we should be
24:53 able to understand exactly how and why
24:55 an AI came up with its answers now
24:56 companies are likely to be very
24:58 reluctant to open up their programs to
25:00 scrutiny but we may need to force them
25:02 to do that in fact as this attorney
25:04 explains when it comes to hiring
25:06 programs we should have been doing that
25:09 ages ago we don't trust companies to
25:12 self-regulate when it comes to pollution
25:13 we don't trust them to self-regulate
25:16 when it comes to workplace comp why on
25:18 Earth would we trust them to
25:20 self-regulate AI look I think a lot of
25:23 the AI hiring Tech on the market is
25:25 illegal I think a lot of it is biased I
25:27 think a lot of it violates existing laws
25:30 the problem is you just can't prove it
25:33 not with the existing laws we have in
25:35 the United States right we should
25:38 absolutely be addressing potential bias
25:40 in hiring software unless that is we
25:41 want companies to be entirely full of
25:44 Jareds who played lacrosse an image that
25:46 will make Tucker Carlson so hard that
25:49 his desk would flip right over
25:51 and for a sense of what might be
25:53 possible here, it's worth looking at
25:55 what the EU is currently doing they are
25:57 developing rules regarding AI that sort
25:59 its potential uses from high risk to low
26:01 high risk systems could include those
26:03 that deal with employment or public
26:05 services or those that put the life and
26:08 health of citizens at risk. AI systems of
26:10 these types would be subject to strict
26:11 obligations before they could be put
26:13 onto the market including requirements
26:15 related to the quality of data sets
26:18 transparency human oversight accuracy
26:20 and cyber security and that seems like a
26:22 good start toward addressing at least
26:24 some of what we have discussed tonight
26:28 look AI clearly has tremendous potential
26:31 and could do great things but if it is
26:33 anything like most technological
26:35 advances over the past few centuries
26:37 unless we are very careful it can also
26:39 hurt the underprivileged enrich the
26:41 powerful and widen the gap between them
26:44 the thing is like any other shiny new
26:47 toy AI is ultimately a mirror and it
26:49 will reflect back exactly who we are
26:51 from the best of us to the worst of us
26:54 to the part of us that is gay and hates
26:57 the bus, or to put everything that
27:00 I've said tonight much more succinctly
27:03 knock knock. Who's there? ChatGPT.
27:05 ChatGPT who? ChatGPT: careful, you might not
27:08 know how it works. Exactly. That is our
27:09 show thanks so much for watching now
27:11 please enjoy a little more of AI Eminem
27:13 rapping about cats [Applause]
27:20 [Applause]